AI Safety Summit: What have we learned?

The first global gathering on AI safety has concluded at Bletchley Park.

Martyn Landi
Thursday 02 November 2023 21:10 GMT
Rishi Sunak at the AI Safety Summit (Leon Neal/PA)

The first AI Safety Summit has come to an end with Rishi Sunak hailing “landmark” agreements and progress on global collaboration around artificial intelligence.

But what did we learn during the two-day summit at Bletchley Park?

– Rishi Sunak wants to make the UK a ‘global hub’ for AI safety

As the summit closed, the Prime Minister made a notable announcement around the safe testing and rollout of AI.

The UK’s new AI Safety Institute would be allowed to test new AI models developed by major firms in the sector before they are released.

The agreement, backed by a number of governments from around the world as well as major AI firms including OpenAI and Google DeepMind, will see external safety testing of new AI models against a range of potentially harmful capabilities, including critical national security and societal harms.

The UK institute will work closely with its newly announced US counterpart.

In addition, a UN-backed global panel will put together a report on the state of the science of AI, looking at existing research and raising any areas that need prioritising.

Then there is the Bletchley Declaration, signed by all attendees on day one of the summit – including the US and China – which acknowledged the risks of AI and pledged to develop safe and responsible models.

It all left the Prime Minister able to say at the end of the summit that the AI Safety Institute, and the UK, would act as a “global hub” on AI safety.

– Elon Musk thinks AI is one of the biggest threats facing humanity

The outspoken billionaire’s visit to the summit was seen by the UK Government as a major endorsement of its aims, and while at Bletchley Park the Tesla and SpaceX boss reiterated his long-held concerns about the rise of AI.

Having suggested a developmental pause earlier this year, he called the technology “one of the biggest threats” to the modern world because “we have for the first time the situation where we have something that is going to be far smarter than the smartest human”.

He said the summit was “timely” given the nature of the threat, and suggested a “third-party referee” in the sector to oversee the work of AI companies.

– Governments from around the world have acknowledged the risks too

Another key moment of the summit came early on day one with the announcement of the Bletchley Declaration, signed by all the nations in attendance, affirming their efforts to work together on the issue.

The declaration says “particular safety risks” lie around frontier AI – the general-purpose models likely to exceed the capabilities of the AI systems we know today.

It warns that substantial risks may arise from “potential intentional misuse” or from losing control of such systems, and names cybersecurity, biotechnology and disinformation as particular areas of concern.

To respond to these risks it says countries will “resolve to work together in an inclusive manner to ensure human-centric, trustworthy and responsible AI that is safe and supports the good of all through existing international fora and other relevant initiatives”.

Many experts have noted that this is only the start of the conversation on AI, but a promising one.

– A network of global safety institutes could be the first step towards wider AI regulation

Mr Sunak laid out plans for the UK’s AI Safety Institute at the close of the summit, and how it will evaluate and test new AI models before and after they are released.

This week, the US also confirmed plans to create its own institute, and both countries have pledged that the organisations will work in partnership.

Collaboration was a key theme of the summit, both in the Bletchley Declaration and in the “state of the science” report on AI, for which all 28 countries at the event will each recommend an expert to join the report’s global panel.

With more countries expected to create their own institutes, a wider network of safety expert groups collaborating on and examining advances in AI could help pave the way for a framework of more binding rules on AI development, applied around the world.

– There are more safety summits planned

Before the Bletchley Park summit, the Government said it wanted to start a global conversation to continue over the coming years given the speed of AI’s development.

That aim appears to have been achieved, with two more summits confirmed for next year: a virtual mini-summit hosted by South Korea in around six months’ time and a full summit in France a year from now.

– Some unanswered questions remain

Getting the US, the EU and China to all sign the Bletchley Declaration was a “massive deal”, Technology Secretary Michelle Donelan said at the summit.

But some commentators have already questioned whether political tensions between nations can be truly put aside to collaborate over AI.

China was not included in some of the discussions on the second day of the summit, which saw “like-minded governments” discuss AI safety testing.

Questions also remain over plans to combat the impact AI is already having on daily life, notably on jobs.

Critics have questioned why the summit focused only on longer-term AI technologies, and not on the generative AI apps which some believe are already threatening industries including publishing and administrative work, as well as creative sectors.

Even by the end of the summit, discussion on the topic had been sparse.

It remains unclear how much power the UK’s AI Safety Institute will have when it comes to stopping the release of AI models it believes could be unsafe.

The new agreement around safety testing is voluntary, and the Prime Minister admitted that “binding requirements” are likely to be needed to regulate the technology, but said the priority for now was to move quickly rather than legislate.

But the true power of the institute and the agreements made during the summit will not be known until an AI model appears that raises concerns among the new safety bodies.
