AI companies will need to start reporting their safety tests to the US government
The Biden administration will start implementing a new requirement for the developers of major artificial intelligence systems to disclose their safety test results to the government
The White House AI Council is scheduled to meet Monday to review progress made on the executive order that President Joe Biden signed three months ago to manage the fast-evolving technology.
Chief among the 90-day goals from the order was a mandate under the Defense Production Act that AI companies share vital information with the Commerce Department, including safety tests.
Ben Buchanan, the White House special adviser on AI, said in an interview that the government wants "to know AI systems are safe before they’re released to the public — the president has been very clear that companies need to meet that bar.”
The software companies have committed to a set of categories for the safety tests, but they do not yet have to comply with a common testing standard. As part of the order Biden signed in October, the government's National Institute of Standards and Technology will develop a uniform framework for assessing safety.
AI has emerged as a leading economic and national security consideration for the federal government, given the investments and uncertainties caused by the launch of new AI tools such as ChatGPT that can generate text, images and sounds. The Biden administration also is looking at congressional legislation and working with other countries and the European Union on rules for managing the technology.
The Commerce Department has developed a draft rule on U.S. cloud companies that provide servers to foreign AI developers.
Nine federal agencies, including the departments of Defense, Transportation, Treasury and Health and Human Services, have completed risk assessments regarding AI's use in critical national infrastructure such as the electric grid.
The government also has scaled up the hiring of AI experts and data scientists at federal agencies.
“We know that AI has transformative effects and potential,” Buchanan said. “We’re not trying to upend the apple cart there, but we are trying to make sure the regulators are prepared to manage this technology.”