Google throttles Gemini AI to prevent election interference
Tech giant says restrictions placed on artificial intelligence are ‘out of an abundance of caution’
Google has announced new blocks on its Gemini artificial intelligence tool to prevent it from answering certain questions about elections.
The tech giant said the move to throttle Gemini in India and the US was part of its plan to take a “responsible approach” to generative AI products.
“Out of an abundance of caution on such an important topic, we have begun to roll out restrictions on the types of election-related queries for which Gemini will return responses,” the company wrote in a blog post on Tuesday.
“We take our responsibility for providing high-quality information for these types of queries seriously, and are continuously working to improve our protections.”
2024 is being touted as the biggest ever year for elections, with more than 4 billion people across 50 countries heading to the polls. For the majority of those voters, it is also the first election held since generative AI became widely available to the public.
Experts have warned about generative AI’s potential to mislead voters by creating convincing misinformation in the form of text, as well as deepfake images and videos.
A recent survey conducted by OnePoll found that AI-generated content was among the top concerns for both Democrats and Republicans ahead of the US elections later this year.
Commissioned by security firm Yubico and the non-profit group Defending Digital Campaigns, the survey of 2,000 Americans found 42 per cent of Democrats and 49 per cent of Republicans believed AI would have a negative impact on the outcome of the elections.
“We found it interesting that over 78 per cent of respondents are concerned about AI-generated content being used to impersonate a political candidate or create inauthentic content, with Democrats at 79 per cent and Republicans at 80 per cent,” said David Treece, vice president of solutions architecture at Yubico.
“Perhaps even more telling is that they believe AI will have a negative effect on this year’s election outcomes.”
Google follows other companies in adding limits to AI products, with ChatGPT creator OpenAI laying out a plan earlier this year to prevent its technology from being misused.
OpenAI said it was drawing together members of its engineering, legal, policy, safety and threat intelligence teams in order to investigate and address potential abuse of its ChatGPT and Dall-E tools.
“As we prepare for elections in 2024 across the world’s largest democracies, our approach is to continue our platform safety work by elevating accurate voting information, enforcing measured policies, and improving transparency,” a company blog post stated in January.
“Like any new technology, these tools come with benefits and challenges. They are also unprecedented, and we will keep evolving our approach as we learn more about how our tools are used.”