Google boss ‘cautiously optimistic’ about protecting elections from deepfakes

Sundar Pichai also said AI regulation would be needed, but should strike a balance to protect innovation.

Martyn Landi
Thursday 16 May 2024 02:03 BST
Google CEO Sundar Pichai during a visit to Argyle Primary School, in London, alongside Minister for Digital Policy Matt Hancock, as Google announced plans to bring VR technology to one million schoolchildren in the UK as part of a new learning initiative (PA)

Google boss Sundar Pichai has said he believes the tech giant, and the wider industry, are well placed to combat AI-generated misinformation around elections.

Concerns have been raised about the potential for AI-generated audio and visual content to be used by bad actors to interfere with elections around the world – in a year when billions of people in many of the world’s major democracies are due to go to the polls.

Speaking during the technology giant’s annual developer conference, Google I/O, Mr Pichai said he was “cautiously optimistic” about his own company’s ability, and that of the wider industry, to handle the threat.

He said Google had invested heavily in projects to monitor technological threats, including artificial intelligence, while also looking to develop its own AI tools in a safe and responsible fashion to reduce the potential for them to be used nefariously.

“I think we all have come a long way as an industry over the past few years. I think as Google, we have invested in elections integrity as one of our highest priorities as a company, particularly in our products like search and YouTube and as we deploy AI, part of the reason we’re doing work like AI-assisted red teaming is to stay ahead of these problems,” Mr Pichai said.

He added that the company had internal projects which identified threats to society and built technology to counter them, and also worked with governments on safety issues.

He said: “We undertake a lot of research through projects like Jigsaw as well so we can understand patterns in the world and report on them. And we share information where appropriate with the right governments and so on. So, I think we’ve made a lot of progress.

“Having said that, I think given the pace of the progress with the technology we’ve all been worried about deepfakes.”

A number of senior politicians, including Prime Minister Rishi Sunak, Labour leader Sir Keir Starmer and the Mayor of London, Sadiq Khan, have been the victims of AI-generated deepfakes, and Mr Pichai acknowledged that they posed a threat, but said it would become more of an issue in the years to come.

“So far, I think we’re in a fortunate place where I still think we are in a moment where as a society we are able to more easily adjudicate what is real versus what is not, and in combination with all the work we’re doing, I’m cautiously optimistic we’ll be able to do our part handling all of this well,” he said.

“I think the stakes get higher in the future, but for this year, I am cautiously optimistic.”

Speaking to reporters at the Google conference, where the company unveiled an array of new AI tools and features, the tech giant’s chief executive also spoke about AI regulation, another key issue in the sector, with many countries currently debating how best to monitor the rapidly evolving technology.

He said he believed regulation would be vital to the growth of AI, but it needed to “strike a balance” with allowing freedom to innovate.

“It makes sense to me that countries are thinking about this extraordinarily important topic, I think, as governments, when you’re thinking about the impact that AI will have on society, it seems right to me that they’re debating these topics,” Mr Pichai said.

“I think regulation will play an important role. I think it’s important to strike a balance.

“AI is going to drive a lot of opportunities – economic opportunities – which could pretty much affect industries across the board.

“So allowing for AI innovation to happen in your country is going to be important, otherwise you risk getting left behind. So getting regulation in a way you can promote and you can embrace innovation, while mitigating the harms is the balance that countries are grappling with.”

He added that global standards and a united approach to the technology, much like how the internet works today, would be a positive model for AI to follow.

“I think over time, we’re going to need more global frameworks,” he said.

“Part of what makes the internet the force it is today is it’s a global good – we all agree on common standards and a way of working on it together, and hopefully the same thing applies to AI as well.”
