Llama 2: How Mark Zuckerberg’s new ChatGPT rival could lead to ‘obscene’ AI
Critics warn Meta’s open approach will lead to ‘spam, fraud, malware, privacy violations, harassment... and dangerous abuse’
Meta has become the first major tech firm to release its flagship artificial intelligence chatbot as a free and open source product.
The launch of Llama 2 with an open commercial licence gives researchers access to the large language model (LLM) AI tool, while also allowing companies and startups to integrate it into their products.
Meta boss Mark Zuckerberg said the move would “drive progress across the industry”, while the firm’s chief AI scientist, Yann LeCun, claimed the release of Llama 2 will “change the landscape of the LLM market”. However, some fear the open approach may lead to misuse of the technology.
Without safeguards that restrict user access when rules are broken – as with OpenAI’s ChatGPT and Google’s Bard – open source AI models could be used to generate limitless spam or disinformation.
“By releasing Llama 2 open source, Meta has either ignored the potential for misuse, or wagered that allowing misuse in the short term will contribute to AI safety in the long term,” the Center for AI Safety noted in a blog post following the release of the AI tool on Tuesday.
Meta claimed that its approach will help democratise the technology, which costs vast amounts of money and resources to produce.
The company said in a statement that it supports an “open innovation approach to AI,” adding that “responsible and open innovation gives us all a stake in the AI development process, bringing visibility, scrutiny and trust to these technologies.”
The openness will help mitigate the bias inherent in AI systems, Meta claimed, as it will allow researchers to see the training data and code used to build it.
“Open source drives innovation because it enables many more developers to build with new technology,” Mr Zuckerberg said in a Facebook post on Tuesday.
“It also improves safety and security because when software is open, more people can scrutinise it to identify and fix potential issues. I believe it would unlock more progress if the ecosystem were more open, which is why we’re open sourcing Llama 2.”
Last month, US senators Josh Hawley and Richard Blumenthal wrote in a letter to Mr Zuckerberg that the firm’s technology could lead to a rise in “spam, fraud, malware, privacy violations and harassment”, including the creation of “obscene content” involving children.
“Even in the short time that generative AI tools have been available to the public, they have been dangerously abused — a risk that is further exacerbated with open source models,” they wrote.
“At least at this stage of the technology’s development, centralised AI models can be more effectively updated and controlled to prevent and respond to abuse compared to open source AI models.”