AI Congress hearing: Sam Altman testifies before Congress saying there is ‘urgent’ need for regulation
OpenAI CEO Sam Altman addresses ‘urgent’ need for AI rules to avert disaster
OpenAI chief executive Sam Altman appeared before Congress on Tuesday morning to testify about the dangers posed by emerging artificial intelligence technologies, including his company’s ChatGPT AI chatbot.
The hearing before the Senate Judiciary Subcommittee on Privacy, Technology and the Law offered congressional members the chance to question Mr Altman and other tech leaders about the “urgent” need to create regulations around AI.
Senators questioned Mr Altman and the other witnesses, Gary Marcus, a professor emeritus at New York University, and Christina Montgomery, the chief privacy and trust officer at IBM, about the need for AI regulations.
Mr Altman spoke to the dangers of artificial intelligence, including harming the integrity of future elections, manipulating individuals’ opinions, limiting access to certain information and infringing copyright, among other things.
The OpenAI CEO offered possible solutions, such as creating an international regulatory committee or agency led by the US.
“My worst fears are that [the AI industry] cause significant harm to the world,” Mr Altman said.
Ahead of the hearing, Committee Chairman Richard Blumenthal (D-CT), said, “Artificial intelligence urgently needs rules and safeguards to address its immense promise and pitfalls.”
AI Congress hearing live: ‘Testing labs’ suggested
Sam Altman faces the first questions, with Senator Blumenthal suggesting to him that “independent testing labs” for AI tools could be one way to help regulate the technology.
He also asks for Mr Altman’s thoughts on mass job displacement. Altman agrees that there will be significant disruption, but adds that he is “very optimistic about how great the jobs of the future will be.”
AI Congress hearing live: ChatGPT boss says technology ‘can go quite wrong'
The OpenAI boss is asked about the potential risks of AI.
“My worst fears are that we cause significant harms to the world,” he responds. “If this technology goes wrong, it can go quite wrong.”
It’s worth noting that Altman has admitted in the past to being a doomsday prepper - specifically prepping for an AI apocalypse.
AI expert says we need a ‘cabinet-level’ position or more to regulate
Gary Marcus, a leading voice in AI, author and professor, advised the Senate Judiciary Committee that the US, or the world, may need a completely new agency to regulate the technology.
“My view is that we probably need a cabinet-level organisation within the US to address this,” Mr Marcus told the committee on Tuesday.
Mr Marcus explained that AI will likely be a massive part of the future but that, given how fast-moving and complicated it is, the government cannot rely on current legislation and agencies to understand and thoughtfully regulate it.
Sam Altman says his worst fear is causing ‘significant harm to the world'
“My worst fear is that we cause significant harm to the world,” Sam Altman, the CEO of OpenAI told the Senate Judiciary Subcommittee on Privacy, Technology and the Law.
OpenAI is one of the leading companies in creating new artificial intelligence. The company has created DALL-E and ChatGPT.
Sam Altman says he is concerned about election misinformation
Multiple members of the Senate Judiciary Subcommittee raised concerns about artificial intelligence fuelling election misinformation in the future, citing foreign interference on social media in the 2016 election.
Sam Altman said he is “quite concerned about the impact [AI] can have on elections” due to the technology’s limitations.
Several reports have shown how ChatGPT can generate false information and cite misinformation in articles when asked by users, something known as “hallucinations”.
Mr Altman said ChatGPT has been developed to refuse to answer harmful requests and is monitored to ensure false information is not consistently presented as truth.
AI expert says the key to regulating AI is transparency
Gary Marcus, an AI expert, said the key to understanding AI systems and regulating them is to ask companies to be more transparent about how they train the models.
“What [AI] is trained on has biases for the system,” Mr Marcus said during the Senate Judiciary Subcommittee hearing on Tuesday.
Mr Marcus encouraged companies with AI tools to provide transparent information that would explain what their system is trained on to help people determine whether the information it is providing is biased.
OpenAI CEO says they are working on copyright model
Sam Altman, the CEO of OpenAI, said that the company was working on how to handle copyright with its systems.
“Creators deserve control over how their creations are used,” Mr Altman said on Tuesday.
AI tools like ChatGPT have been accused of stealing artists’ work and repurposing it as original content.
Mr Altman said the company was working to create a new copyright model to ensure artists receive credit and compensation and can give consent.
AI leaders and experts agree Section 230 does not apply to them
Sam Altman, Christina Montgomery and Gary Marcus all agreed that Section 230 does not apply to their platforms and indicated new regulatory legislation is needed.
Section 230 shields online computer services from liability for content, including harmful content, uploaded to their platforms by users.
Sam Altman speaks to AI-generated images
The CEO of OpenAI spoke about AI-generated images, such as the fake images of Donald Trump being arrested that circulated around the time of his indictment.
Sam Altman said an easy way to curb misinformation spreading online from images like these is to label them as “generated”.
OpenAI wants fewer people to use it
OpenAI CEO Sam Altman said multiple times throughout Tuesday’s hearing that he would prefer fewer people use ChatGPT because the company does not have enough GPUs to serve everyone.
Mr Altman made it very clear to the Senate Judiciary Subcommittee that OpenAI is not an advertising-based platform and therefore does not benefit from having more users.
Mr Altman’s comments came as lawmakers raised concerns about AI using personal data to capture people’s attention and hold it for as long as possible.
Though he recognised that AI is being used in marketing, he clarified that this was not OpenAI’s goal.