AI Congress hearing: Sam Altman testifies before Congress saying there is ‘urgent’ need for regulation
OpenAI CEO Sam Altman addresses ‘urgent’ need for AI rules to avert disaster
OpenAI chief executive Sam Altman appeared before Congress on Tuesday morning to testify about the dangers posed by emerging artificial intelligence technologies, including his company’s ChatGPT AI chatbot.
The hearing before the Senate Judiciary Subcommittee on Privacy, Technology and the Law offered congressional members the chance to question Mr Altman and other tech leaders about the “urgent” need to create regulations around AI.
Senators questioned Mr Altman and the other witnesses, Gary Marcus, a professor emeritus at New York University, and Christina Montgomery, chief privacy and trust officer at IBM, about the need for AI regulations.
Mr Altman spoke to the dangers of artificial intelligence, including threats to the integrity of future elections, the manipulation of individuals’ opinions, restricted access to certain information and copyright infringement, among other things.
The OpenAI CEO offered possible solutions, such as creating an international regulatory committee or agency led by the US.
“My worst fears are that [the AI industry] cause significant harm to the world,” Mr Altman said.
Ahead of the hearing, Committee Chairman Richard Blumenthal (D-CT), said, “Artificial intelligence urgently needs rules and safeguards to address its immense promise and pitfalls.”
VOICES: AI isn’t falling into the wrong hands – it’s being built by them
The loudest voices in the AI discourse tend to exhibit a startling level of misplaced certainty about what AI is capable of and how the technology can be controlled.
These are essential, urgent questions that will require input from a wide diversity of voices, especially of those who are most likely to be imperilled by the use of AI.
So far, the earliest victims of malign forms of AI, such as inaccurate facial recognition tools or deepfake porn generators, have been members of historically marginalised groups, including women and people of colour. But these groups are woefully underrepresented among the loudest voices in the AI debate.
Arthur Holland Michel reports:
Witnesses encourage lawmakers to think of the future
Sam Altman, Christina Montgomery and Gary Marcus all agreed that artificial intelligence is nowhere near as advanced as it will become, so lawmakers need to write legislation that can keep pace as the technology develops.
Mr Altman and Mr Marcus agreed that the best starting point is a regulatory agency or independent commission that can examine the complicated world of AI and create a set of regulations for the companies building the technology.
Ms Montgomery suggested a better approach would be rules that start from the risks AI poses rather than regulation of the technology itself.
But all three witnesses believe Congress should craft guidelines that can be applied to the future of automatically generated content and more.
AI could harm local news
Senator Amy Klobuchar (D-MN) drew attention to how generative AI could harm local news by encouraging users to turn to ChatGPT rather than read news from a newspaper.
Ms Klobuchar asked Sam Altman, the CEO of OpenAI, how the platform will compensate the smaller news organisations whose reporting ChatGPT pulls from to generate answers.
“Do you understand that this could be exponentially worse in terms of local news content if they’re not compensated?” Ms Klobuchar asked.
“Because what they need is to be compensated for their content and not have it stolen,” she added.
Mr Altman said OpenAI would be willing to take steps to help local news.
“We would certainly like to,” he said.
AI expert says medical advice through AI could be harmful
Professor Gary Marcus, an expert in artificial intelligence, said lawmakers should be concerned about medical advice given through AI systems like ChatGPT.
Mr Marcus said there needs to be “tight regulations” around what medical advice AI may or may not generate, as it could lead users to believe they have a medical condition or to rely on the technology for medical guidance.
Senator Kennedy asks witnesses for guidance on rules
Senator John Kennedy (R-LA) asked the witnesses what rules they would impose if they were “kings and queens” for a day, under the “hypothesis” that Congress does not understand AI and could do harm by regulating it.
AI can change jobs of the future
Lawmakers and witnesses returned again and again during Tuesday’s hearing to the impact AI will have on the jobs of the future.
Asked if he thought AI could harm most jobs, OpenAI CEO Sam Altman said he felt optimistic about how AI will change jobs.
“I believe there are far greater jobs on the other side of this,” Mr Altman told members of the Senate Judiciary Subcommittee.
Christina Montgomery, chief privacy and trust officer at IBM, agreed with Mr Altman.
“The most important thing that we can be doing and should be doing now is prepare the workforce of today and the workforce of tomorrow of partnering with the AI technologies,” Ms Montgomery said.
Gary Marcus, a professor at New York University and an expert in AI, slightly disagreed, saying that far down the line AI could “replace” most jobs.
AI could take over much of ‘heavy lifting’ involved in teaching, says Keegan
‘AI could have the power to transform a teacher’s day-to-day work’
Sam Altman says AI can go ‘quite wrong’ without regulations
Speaking about the harms artificial intelligence can cause, Sam Altman told Congress: “If this technology goes wrong, it can go quite wrong.”
Sam Altman says OpenAI could use advertisers in the future
When asked if OpenAI would use advertisers in the future, CEO Sam Altman said it was not completely out of the question.
Earlier in Tuesday’s hearing, Mr Altman said OpenAI was not interested in collecting personal data to use in its technology models because the company is not advertising-focused.
However, when asked if advertising could ever be an option Mr Altman said, “I wouldn’t say never.”
Mr Altman said he prefers to make money through a “subscription-based model”.
Lawmakers compare the need for legislation to the mistakes of social media
From the beginning of the hearing, many lawmakers voiced a similar sentiment: now is the time to act.
Senator Richard Blumenthal (D-CT) opened the hearing by saying lawmakers cannot repeat the mistake they made with social media, which went unregulated until it had become harmful to children and young people.