Why Biden is so concerned about AI
President is concerned about fake images, audio and videos, amid fears they could be used to ruin reputations and perpetrate scams
President Joe Biden is addressing concerns about artificial intelligence as the administration attempts to guide the development of the rapidly evolving technology.
The White House said on Monday (30 October) that a sweeping executive order will address concerns about safety and security, privacy, equity and civil rights, the rights of consumers, patients, and students, and supporting workers.
The order will also hand a list of tasks to federal agencies to oversee the development of the technology.
‘We have to move as fast, if not faster than the technology itself’
“We can’t move at a normal government pace,” White House Chief of Staff Jeff Zients quoted Mr Biden as telling his staff, according to the AP. “We have to move as fast, if not faster than the technology itself.”
Mr Biden believes the US government was too slow to take into account the risks of social media, which he links to the mental health problems now seen among American youth.
While AI could drastically advance cancer research, help forecast the impacts of the climate crisis, and improve the economy and public services, it could also spread fake images, audio and video with potentially widespread political consequences. Other harms include the worsening of racial and social inequality and the technology's potential use in crimes such as fraud.
The president of the Center for Democracy & Technology, Alexandra Reeve Givens, told the AP that the Biden administration is using the tools at their disposal to issue “guidance and standards to shape private sector behaviour and leading by example in the federal government’s own use of AI”.
Mr Biden’s executive order comes after technology companies have already made voluntary commitments, and the aim is that congressional legislation and international action will follow.
The White House got commitments earlier this year from Google, Meta, Microsoft, and OpenAI to put in place safety standards when building new AI tools and models.
Monday’s executive order employs the Defense Production Act to require AI developers to share safety test results and other data with the government. The National Institute of Standards and Technology is also set to establish standards governing the development and use of AI.
Similarly, the Department of Commerce will publish guidance outlining the labelling and watermarking of content created using AI.
An administration official told the press on Sunday that the order is intended to be implemented over a period of 90 days to one year, with safety and security measures facing the tightest deadlines.
Mr Biden met with staff last Thursday to put the finishing touches on the order, in a meeting scheduled for half an hour that stretched to an hour and 10 minutes.
Biden ‘impressed and alarmed’ by AI
The president was engaged in meetings about the technology in the months that preceded Monday’s order signing, meeting twice with the Science Advisory Council to discuss AI and bringing up the technology during two cabinet meetings.
At several gatherings, Mr Biden also pushed tech industry leaders and advocates regarding what the technology is capable of.
Deputy White House Chief of Staff Bruce Reed told the AP that Mr Biden “was as impressed and alarmed as anyone”.
“He saw fake AI images of himself, of his dog,” he added. “He saw how it can make bad poetry. And he’s seen and heard the incredible and terrifying technology of voice cloning, which can take three seconds of your voice and turn it into an entire fake conversation.”
The AI-created images and audio prompted Mr Biden to push for the labelling of AI-created content. He was also concerned about older people getting a phone call from an AI tool using a fake voice sounding like a family member or other loved one for the purpose of committing a scam.
Meetings on AI often went long, with the president once telling advocates: “This is important. Take as long as you need.”
Mr Biden also spoke to scientists about the possible positive impacts of the technology, such as explaining the origins of the universe and modelling extreme weather events such as floods, where older data has become unreliable because of changes caused by the climate crisis.
‘When the hell did I say that?’
On Monday at the White House, Mr Biden addressed the concerns about “deepfakes” during a speech in connection with the signing of the order.
“With AI, fraudsters can take a three-second recording of your voice, I have watched one of me on a couple of occasions. I said, ‘When the hell did I say that?’” Mr Biden said to laughter from the audience.
Mr Reed added that he watched Mission: Impossible — Dead Reckoning Part One with Mr Biden one weekend at Camp David. At the beginning of the film, the antagonist, an AI called “the Entity”, sinks a submarine, killing its crew.
“If he hadn’t already been concerned about what could go wrong with AI before that movie, he saw plenty more to worry about,” Mr Reed told the news agency.
The White House has faced pressure from a number of allied groups to address possible harmful effects of AI.
The director of the racial justice programme at the American Civil Liberties Union, ReNika Moore, told the AP that the organisation met with the administration to make sure "we're holding the tech industry and tech billionaires accountable" so that the new tools will "work for all of us and not just a few".
Ex-Biden official Suresh Venkatasubramanian told the news agency that law enforcement’s use of AI, such as at border checkpoints, is one of the top challenges.
“These are all places where we know that the use of automation is very problematic, with facial recognition, drone technology,” the computer scientist said.