Talk of AI dangers has ‘run ahead of the technology’, says Nick Clegg
Meta announced the release of its new open-source large language model, Llama 2, on Tuesday.
Talk of artificial intelligence (AI) models posing a threat to humanity has “run ahead of the technology”, according to Sir Nick Clegg.
The former Liberal Democrat leader and deputy prime minister said concerns around “open-source” models, which are made freely available and can be modified by the public, were exaggerated, and the technology could offer solutions to problems such as hate speech.
It comes after Facebook’s parent company Meta said on Tuesday that it was opening access to its new large language model, Llama 2, which will be free for research and commercial use.
Generative AI tools such as ChatGPT, a chatbot that can provide detailed prose responses and engage in human-like conversations, have become widely used in the public domain in the last year.
Speaking on BBC Radio 4’s Today programme on Wednesday, Sir Nick, president of global affairs at Meta, said: “My view is that the hype has somewhat run ahead of the technology.
“I think a lot of the existential warnings relate to models that don’t currently exist, so-called super-intelligent, super-powerful AI models – the vision where AI develops an autonomy and agency on its own, where it can think for itself and reproduce itself.
“The models that we’re open-sourcing are far, far, far short of that. In fact, in many ways they’re quite stupid.”
Sir Nick said a claim by Dame Wendy Hall, co-chair of the Government’s AI Review, that Meta’s model could not be regulated and was akin to “giving people a template to build a nuclear bomb” was “complete hyperbole”, adding: “It’s not as if we’re at a T-junction where firms can choose to open source or not. Models are being open-sourced all the time already.”
He said Meta had 350 people “stress-testing” its models over several months to check for potential issues, and that Llama 2 was safer than any other large language model currently available on the internet.
Meta has previously faced questions around security and trust, with the company fined 1.2 billion euros (£1 billion) in May over the transfer of data from European users to US servers.