The US has said it is setting up its own institute to police artificial intelligence, on the day that UK prime minister Rishi Sunak hosted a summit to help shape global rules and scrutiny for AI.
Wednesday’s announcement, by Gina Raimondo, US commerce secretary, came despite the UK’s own plans to set up an international AI Safety Institute — a move she said she “welcomed and applauded”.
Raimondo said the US institute would “develop best-in-class standards . . . for safety, security and testing” and “evaluate known risks and emerging risks of AI at the frontier”.
The two-day summit at Bletchley Park in England, attended by tech leaders including Elon Musk and OpenAI’s Sam Altman, is part of a UK bid to shape global rules and scrutiny for AI.
While British officials played down any divergence with Washington, one tech chief executive said the US stance meant that the country, home to some of tech’s biggest titans, did not “want to lose our commercial control to the UK”.
The summit — billed as a legacy-defining event for Sunak, a year after he took office — is focused on extreme risks such as AI’s possible scope to develop biological and chemical weapons.
But US vice-president Kamala Harris told a separate event miles away in London that AI models already in operation today also posed “existential” dangers.
“When a senior is kicked off their healthcare plan because of a faulty algorithm, is that not existential for him?” she asked.
Harris met Sunak at Downing Street later in the day.
US president Joe Biden had already issued an executive order this week that his administration terms “the strongest set of actions any government in the world has ever taken on AI safety, security and trust”.
The measure will force some groups to share information on how they ensure the safety of their AI tools, while mobilising agencies throughout the US administration.
The 28 countries at the summit — including the US, UK and China — agreed what they said was the first global commitment of its kind. In a communique they pledged to work together to ensure artificial intelligence is used in a “human-centric, trustworthy and responsible” way.
However, the event also exposed divergences over the use of open-source AI models between large companies and start-ups as well as governments around the world.
“On one hand, [smaller, open-source models] enable open innovation, academic experimentation, small start-ups to get ahead, all things that we should encourage and embrace,” said Mustafa Suleyman, chief executive of Inflection, an AI start-up, and a co-founder of Google DeepMind. “And at the same time, they also give a garage tinkerer the capability to have a one-to-many impact in the world, potentially, unlike anything we’ve ever seen.”
Additional reporting from John Thornhill and Yuan Yang in London