The UK’s new AI tsar has warned that artificial intelligence could be used by malicious actors to hack the NHS, causing disruption to rival the Covid-19 pandemic, as he set out his priorities for his £100mn task force this week.

Ian Hogarth, chair of the UK government’s “Frontier AI” task force, said that weaponising the technology to hobble the National Health Service or to perpetrate a “biological attack” were among the biggest AI risks his team was looking to tackle.

AI systems could be used to supercharge a cyber attack on the UK health service or to design pathogens or toxins, he suggested.

Hogarth stressed the need for international collaboration, including with China, to address such issues.

“These are fundamentally global risks. And in the same way we collaborate with China in aspects of biosecurity and cyber security, I think there is a real value in international collaboration around the larger scale risks,” he said.

“It’s just like pandemics. It’s the sort of thing where you can’t go it alone in terms of trying to contain these threats.”

Following the task force’s creation in June, Hogarth has appointed AI pioneer Yoshua Bengio and GCHQ director Anne Keast-Butler to its external advisory board, among others set to be announced on Thursday.

The group has received £100mn in initial funding from the government to conduct independent AI safety research that would enable the development of safe and reliable “frontier” AI models, the underlying technology behind AI systems such as ChatGPT. Hogarth said it was the largest amount any nation-state had committed to frontier AI safety.

Hogarth likened the scale of the threat to the NHS to that of the Covid pandemic, which caused years of disruption to the UK’s public health service, and the WannaCry ransomware attack in 2017, which cost the NHS an estimated £92mn and led to the cancellation of 19,000 patient appointments.

“The kind of risks that we are paying most attention to are augmented national security risks,” said Hogarth, a former tech entrepreneur and venture capital investor, in an interview with the Financial Times.

He added: “A huge number of people in technology right now are trying to develop AI systems that are superhuman at writing code . . . That technology is getting better and better by the day. And fundamentally, what that does is it lowers the barriers to perpetrating some kind of cyber attack or cyber crime.”

Hogarth said the UK needed to develop the “state capacity to understand . . . and hopefully moderate the risks so that we can then understand how to put guardrails around this technology and get the best out of it.”

He has been closely involved in planning the UK’s first global AI safety summit at Bletchley Park at the beginning of November. The event aims to bring state leaders together with tech companies, academics and civil society to discuss AI.

Modelled on the Covid vaccine task force, Hogarth’s team has recently recruited several independent academics, including David Krueger from the University of Cambridge and Yarin Gal from the University of Oxford.

“If you want great regulation — if you want the state to be an active partner and understand the risks of the frontier, not just leaving AI companies to mark their own homework — then what you have to do is bring that expertise into government fast,” he said.
