Google chief’s ominous warning about AI’s threat to humanity: ‘Keeps me up at night’

‘Development of this needs to include not just engineers but social scientists, ethicists, philosophers’

Vishwam Sankaran
Wednesday 19 April 2023 07:50 BST


Google chief Sundar Pichai has warned that rapidly advancing artificial intelligence technology could “cause a lot of harm” on a “societal level” if deployed wrongly.

“It can be very harmful if deployed wrongly and we don’t have all the answers there yet – and the technology is moving fast ... So does that keep me up at night? Absolutely,” the tech giant’s Chief Executive Officer (CEO) said.

In an interview with CBS’s “60 Minutes”, Mr Pichai voiced concerns that society needs to adapt to the use of AI technology.

He warned that AI is set to “impact every product across every company”, adding that “knowledge workers” such as writers, accountants and software engineers are likely to be particularly affected.

“For example, you could be a radiologist, if you think about five to 10 years from now, you’re going to have an AI collaborator with you,” he said.

“You come in the morning, let’s say you have a hundred things to go through, it may say, ‘these are the most serious cases you need to look at first’,” Mr Pichai added.

The tech giant boss cautioned about the ease with which fake media reports can be generated using AI.

On a “societal scale”, he said, such messages that can be easily made with AI “can cause a lot of harm”.

However, he suggested that instead of abandoning AI, its use could be regulated with laws that “align with human values including morality”.

“This is why I think the development of this needs to include not just engineers but social scientists, ethicists, philosophers, and so on,” he said.

In line with Mr Pichai’s statements, Google also recently released a 20-page document listing its “recommendations for regulating AI”.

The Silicon Valley giant also launched its new AI chatbot “Bard” in February, in what appeared to be an effort to compete with the now-famous AI system ChatGPT.

Bard is an “experimental conversational AI service” that can be used to simplify complex topics, such as “explaining new discoveries from Nasa’s James Webb Space Telescope to a 9-year-old”, Google notes on its website.

Asked in the interview whether he thinks Bard is safe, Mr Pichai said: “The way we have launched it today, as an experiment in a limited way, I think [it is]. But we all have to be responsible in each step along the way.”
