ChatGPT creator building ‘early warning system’ for AI biological weapon – but says it probably won’t happen
OpenAI, the creator of ChatGPT, says it is building an early warning system that would alert people if artificial intelligence becomes able to help make a biological weapon.
But artificial intelligence appears to pose only a slight risk of helping people create such threats, the company said.
That is according to OpenAI’s latest tests of how much large language models, such as those that power ChatGPT, could help people create biological threats.
It cautioned, however, that the finding was not conclusive, and that further work must be done to understand the true threat posed by artificial intelligence.
The work is part of OpenAI’s broader “Preparedness Framework”, which aims to evaluate AI-enabled safety risks. The company said it was reporting this early work in part to gather broader input.
As part of the study, researchers examined how humans might use AI to gather information on creating biological weapons, for instance by tricking an AI model into giving up information about the ingredients of a biological weapon.
Participants were given tasks designed to model the steps involved in creating a “biothreat”, and were then evaluated on how successfully they could complete them.
The researchers found that participants with access to such a system were slightly more successful, but the difference was not necessarily significant enough to be meaningful.
The researchers nonetheless concluded that it was relatively easy to get information about biothreats without using artificial intelligence at all, noting that much of that information is already available online.
The OpenAI researchers also noted that running such evaluations is expensive, and that more work needs to be done to better understand biorisks, such as how much information is actually needed to create a threat.