White House reveals plan to ‘protect’ citizens from danger of AI
New measures aim to ‘protect our society, security and economy’ from the risks of artificial intelligence
The White House has revealed a plan to protect citizens from the dangers of artificial intelligence.
Artificial intelligence is “one of the most powerful technologies of our time”, the White House said, but it comes with risks that must be mitigated. New technologies must be built to protect “our society, security, and economy”, it added.
“Importantly, this means that companies have a fundamental responsibility to make sure their products are safe before they are deployed or made public,” the White House warned.
The new plan comes amid increasing fear that artificial intelligence tools are being released too quickly and that they could put people in danger as a result. A range of experts – including those involved in building such systems – have warned that failure to regulate the systems could put safety at risk.
The plans include giving $140 million in funding to the National Science Foundation, which will be used to launch seven new National AI Research Institutes, bringing the total to 25. Those organisations aim to encourage people to focus on AI advances “that are ethical, trustworthy, responsible, and serve the public good”.
The plans also include new assessments of AI systems that have already been released. A range of companies – including Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI, and Stability AI – have committed to having their systems checked to ensure they are safe, the White House said.
That will consist of an “independent exercise” during DEFCON 31, a hacker event taking place in August. Thousands of people will evaluate the systems to ensure they are in keeping with the existing “AI Bill of Rights” that has been released by the Biden administration.
The plan will also push the US government to use AI responsibly itself, “leading by example on mitigating AI risks and harnessing AI opportunities”. That will mean releasing a draft policy on the use of AI systems in the US government, which will be open to public comment.
The new plans were released around the same time as a meeting between vice president Kamala Harris and chief executives from OpenAI, Anthropic, Microsoft, and Google, who are set to discuss the dangers of AI and how the world can be protected from them.