Things to know about an AI safety summit in Seoul
South Korea is set to host a mini-summit this week on risks and regulation of artificial intelligence
South Korea is set to host a mini-summit this week on risks and regulation of artificial intelligence, following up on an inaugural AI safety meeting in Britain last year that drew a diverse crowd of tech luminaries, researchers and officials.
The gathering in Seoul aims to build on work started at the U.K. meeting on reining in threats posed by cutting-edge artificial intelligence systems.
Here is what you need to know about the AI Seoul Summit and AI safety issues.
WHAT INTERNATIONAL EFFORTS HAVE BEEN MADE ON AI SAFETY?
The Seoul summit is one of many global efforts to create guardrails for a rapidly advancing technology that promises to transform many aspects of society but has also raised concerns about new risks, ranging from everyday harms such as algorithmic bias that skews search results to potential existential threats to humanity.
At November’s U.K. summit, held at a former secret wartime codebreaking base in Bletchley, north of London, researchers, government leaders, tech executives and members of civil society groups, many with opposing views on AI, huddled in closed-door talks. Tesla CEO Elon Musk and OpenAI CEO Sam Altman mingled with politicians like British Prime Minister Rishi Sunak.
Delegates from more than two dozen countries including the U.S. and China signed the Bletchley Declaration, agreeing to work together to contain the potentially “catastrophic” risks posed by galloping advances in artificial intelligence.
In March, the U.N. General Assembly approved its first resolution on artificial intelligence, lending support to an international effort to ensure the powerful new technology benefits all nations, respects human rights and is “safe, secure and trustworthy.”
Earlier this month, the U.S. and China held their first high-level talks on artificial intelligence in Geneva to discuss how to address the risks of the fast-evolving technology and set shared standards to manage it. There, U.S. officials raised concerns about China’s “misuse of AI” while Chinese representatives rebuked the U.S. over “restrictions and pressure” on artificial intelligence, according to their governments.
WHAT WILL BE DISCUSSED AT THE SEOUL SUMMIT?
The May 21-22 meeting is co-hosted by the South Korean and U.K. governments.
On day one, Tuesday, South Korean President Yoon Suk Yeol and Sunak will meet leaders virtually. A few global industry leaders have been invited to provide updates on how they’ve been fulfilling the commitments made at the Bletchley summit to ensure the safety of their AI models.
On day two, digital ministers will gather for an in-person meeting hosted by South Korean Science Minister Lee Jong-ho and Britain’s Technology Secretary Michelle Donelan. Participants will share best practices and concrete action plans. They also will share ideas on how to protect society from potentially negative impacts of AI on areas such as energy use, workers and the proliferation of mis- and disinformation, according to the organizers.
The meeting has been dubbed a mini virtual summit, serving as an interim meeting until Paris holds a full-fledged in-person edition later this year.
The digital ministers’ meeting is to include representatives from countries like the United States, China, Germany, France and Spain and companies including ChatGPT-maker OpenAI, Google, Microsoft and Anthropic.
WHAT PROGRESS HAVE AI SAFETY EFFORTS MADE?
The accord reached at the U.K. meeting was light on details and didn’t propose a way to regulate the development of AI.
“The United States and China came to the last summit. But when we look at some principles announced after the meeting, they were similar to what had already been announced after some U.N. and OECD meetings,” said Lee Seong-yeob, a professor at the Graduate School of Management of Technology at Seoul’s Korea University. “There was nothing new.”
It's important to hold a global summit on AI safety issues, he said, but it will be “considerably difficult” for all participants to reach agreements since each country has different interests and different levels of domestic AI technologies and industries.
The gathering is being held as Meta, OpenAI and Google roll out the latest versions of their AI models.
The original AI Safety Summit was conceived as a venue for hashing out solutions for so-called existential risks posed by the most powerful “foundation models” that underpin general purpose AI systems like ChatGPT.
Pioneering computer scientist Yoshua Bengio, dubbed one of the “godfathers of AI,” was tapped at the U.K. meeting to lead an expert panel tasked with drafting a report on the state of AI safety. An interim version of the report released on Friday to inform discussions in Seoul identified a range of risks posed by general purpose AI, including its malicious use to increase the “scale and sophistication” of frauds and scams, supercharge the spread of disinformation, or create new bioweapons.
Malfunctioning AI systems could spread bias in areas like healthcare, job recruitment and financial lending, while the technology’s potential to automate a wide range of tasks also poses systemic risks to the labor market, the report said.
South Korea hopes to use the Seoul summit to take the initiative in formulating global governance and norms for AI. But some critics say the country lacks AI infrastructure advanced enough to play a leadership role in such governance issues.
___
Chan reported from London.