In focus

How fear could stop us from solving some of the world’s biggest problems

Artificial intelligence is coming to take our jobs – and even our lives, with Rishi Sunak warning this week about the danger of humanity completely losing control of AI. But, asks Andrew Griffin, might our anxiety about the machine age itself be the real threat?

Thursday 26 October 2023 16:42 BST
Ai-Da Robot, the world’s first ultra-realistic humanoid robot artist, appears at a photo call in a committee room in the House of Lords in October 2022 (Getty Images)

You could be forgiven for thinking the apocalypse was here already. All summer the headlines came, often looking like the ignored warnings in a disaster film: artificial intelligence is coming for us all, to take our jobs and perhaps even our lives. The suggestion was that AI is both totally powerful and totally evil.

This week, for instance, the prime minister, Rishi Sunak, warned that AI could be used to build chemical weapons or commit crimes, and might even escape human control entirely. He said that the “easy speech” to give would be one pointing to the exciting possibilities of AI. But the risk of an all-powerful, malevolent artificial intelligence was too strong to ignore, he suggested.

But other experts are warning that the biggest danger posed by artificial intelligence is in the fear that surrounds it. The panic over the threat from AI, they say, could become self-fulfilling: we could be in danger of creating what we fear.

Far from being purely evil, artificial intelligence has already contributed to a host of breakthroughs that have changed the world for the better – and there will be more to come. The technology is still in its early stages, at a point where its future can still be decided and its actual utility is limited. If we give in to fear now, say the experts, the danger is that we will seriously limit the development of the kind of technology that could improve our world. There are risks to humanity, yes, but those risks could be greater if we stopped AI now.

How AI can help feed the world

Andrew Nelson is the fifth in a long line of farmers. Today, he oversees a farm in Washington State that produces beans, peas and much more besides. He is the inheritor of an industry that has worked for thousands of years to solve perhaps the most ancient problem: how to ensure that people have enough to eat. The problem may be old, but it has never gone away: a UN report indicated that in 2021, as many as 828 million people did not have enough to eat, and that number is rising.

However, this ancient industry and the age-old problem it grew to address have been the focus of the newest technology in the world. Nelson has been working with Microsoft, integrating technology into all stages of his farming work. And it is paying off: thanks to artificial intelligence, he has been able to dramatically reduce the amount of chemicals and water needed on his farm. That is the result of closely monitoring data about the farm, and analysing what it might mean: understanding the potential consequences of planting a certain crop or using a certain herbicide, for instance. Some farmers work by tasting the soil to understand how it is doing; AI is not replacing that deep knowledge about agriculture, but augmenting it.

The world is in desperate need of this sort of help. We do not have enough food: to feed the growing population, the world will need to produce 50 per cent more food by 2050 than it does today.

A Kenya Airways unmanned aerial vehicle spreads fertiliser over a tea farm, the Kipkebe Tea Estate in Musereita, in 2022 (AFP/Getty)

“And it is not just about growing food; we need to grow good nutritious food, and we need to do this without harming the planet,” says Ranveer Chandra, Microsoft’s managing director of research for industry and chief technology officer for agri-food. “There’s not more arable land, the soils are not getting any richer, there’s climate change. So how do you get to this increased good food production? And that’s a fundamental problem.

“Using AI, you can enable techniques like precision agriculture: in the same piece of land, you can grow more food, you can reduce the emissions, you can sequester more carbon.” Chandra’s team at Microsoft have been working to do exactly that.

Nelson happens to live in Microsoft’s home state, but the food problem is global. And fixing it might mean relying on technologies that don’t seem obvious: the kind of large language model that powers ChatGPT, for instance, could bring information to farmers who cannot read or write but can speak to an AI helper – telling them what subsidies are available, alerting them to pests or disease, and reporting the weather.

The AI health revolution

Agriculture is just one potentially transformative use for artificial intelligence. Another key focus is health. Cancer doctors, for instance, do not have time to look through every scan – and even when they do, they might miss important but very faint indications that there is a problem. Researchers, including those at Google, are already applying deep learning techniques to recognise breast, skin and other cancers in ways that were previously thought impossible. Again, the intention here isn’t to replace clinicians, but to augment their work by alerting them to problems sooner, and to anything they may have missed.

“I think we lose sight of the fact that there are many benefits of AI now – not necessarily generative AI, GPT and so on – that we all take for granted already,” says Andrew Rogoyski, director of innovation and partnerships at the Surrey Institute for People-Centred AI. “Things like drug and materials discoveries, enabled by AI; things like AlphaFold, which has given insights into biological proteins that would have taken years and years to do without that technology; things like medical imaging, detection of cancers and so on. I was talking to a consultant radiologist friend of mine the other day, and he was saying that he now thinks they’re so good that he considers it medically negligent not to use them.”

AI in your pocket

Manufacturing, logistics and productivity software already use all sorts of AI. Every search engine you use to find information on the internet is enabled by this technology, which also organises and identifies the photographs on your phone. In fact, AI is already affecting your life in hundreds of ways every day – and in ways you wouldn’t want to change.

A robot at the Robotics Innovation Center of the German Research Center for Artificial Intelligence (DFKI) in Bremen, Germany (Getty Images)

And all these things are rooted in decades of AI research that has mostly happened quietly, away from the headlines that warn that our systems will go haywire and take over the world. Experts interviewed for this piece commonly expressed a kind of weary bemusement about the way in which the dangers of AI are presented, because they have heard it all so many times before.

Those in the know about AI are keen to stress that what changed this year wasn’t so much the technology itself as access to it: ChatGPT’s big innovation was that it was easy to use, not that it was especially clever. That ease meant ChatGPT very quickly became the fastest-growing website in the world, reaching 100 million users in just two months, and it suddenly allowed everyone to experience something that had in fact been decades in the making. Much of the fear around artificial intelligence may stem from a sense that it came from nowhere and will continue on that trajectory; in reality, AI moves slowly, and every breakthrough is the result of fastidious and largely unnoticed research.

Much of the good work that is happening takes place as a kind of philanthropy, such as that carried out by companies like Google DeepMind or Microsoft, which spend some of their resources on research to help the world. DeepMind, for instance, has shared a range of health breakthroughs for free: last month, it released a new system called AlphaMissense, which can spot problematic genetic variants with a view to finding out how they lead to disease. The system has categorised 89 per cent of all 71 million possible variants – compared with just 0.1 per cent that have been confirmed by human experts.

Microsoft’s work has largely come from its helpfully named AI for Good lab, which now has about 40 people and offices at the company’s bases in Seattle, Nairobi and elsewhere. It works by borrowing from the same technology that powers Microsoft’s more expected and obvious artificial intelligence work. “There’s a quote from [data scientist] Jeff Hammerbacher – ‘The best minds of my generation are thinking about how to get people to click on ads’ – and it’s unfortunately true,” says Juan Lavista Ferres, vice-president and chief data scientist at the AI for Good lab at Microsoft.

“But it’s also true that the problem of detecting which children have a higher chance of infant mortality, and the problem of predicting which person will click on your ad – even though the two could not be further apart from a societal point of view – are, from purely the science and AI perspective, basically the same problem.”

To do its more important work, Microsoft has brought those experts together and pointed them at humanity’s problems, explains Lavista Ferres. Every technology faces the question of whether it will be a tool or a weapon – but only the uses it is put to, and the people who put it to those uses, decide that.

The fear about bringing in more regulation of AI is that it ends up penalising only those who follow the rules. Any attempt to limit or shut down AI research would presumably be complied with by those working on positive applications of artificial intelligence, but not by those with amoral or immoral intentions. Money talks, and it also makes it easier to move around the world, dodging whatever regulations individual countries try to put into effect.

Be afraid, but not too afraid

The danger, however, is that an overabundance of discussion about the threats could obscure the real and important uses of artificial intelligence. Much of the challenge of bringing AI into healthcare, for example, lies in resistance from some in the medical profession: they might not be aware of the technology at all, and if they are, they might only have heard of it as a terrifying danger that is coming to wipe us all out. Artificial intelligence will only be able to save us if we let it.

The panic over artificial intelligence stealing our jobs and changing the world is often based, counterintuitively, on a view of the technology that is too favourable: in fact, even the most advanced systems can’t be trusted on their own, since they are given, for example, to “hallucinations” – making mistakes, and being incapable of realising they are wrong. For now at least, the technology will still need humans; the cancer-scanning systems are intended to work alongside oncologists, for instance, not make them redundant.

The original T-800 Endoskeleton robot used in the movie ‘Terminator Salvation’ displayed as part of the ‘Robots’ exhibition at the Science Museum in 2017 (Getty)

But many experts are relatively relaxed about the recent concerns, having seen panic about AI appear and then disappear from view as, behind the scenes, the actual development of it continues.

“I do think it’s interesting that the media representation of AI is very far removed from my day-to-day and the conversations I have with my colleagues... there’s a lot of excitement about the good [things] it can do,” says Catherine Breslin, an AI expert who runs Kingfisher Labs and previously worked on technology such as Amazon’s Alexa. “And there’s not a big consensus about the risks, and especially those existential risks; they’re still somewhat hypothetical in some people’s minds.

“Day to day, people are very excited about building technology that can help people. That’s why people go into this field, I think, because they see the real impact that it can have on people’s lives. And so on the flip side, I then hear from people who don’t necessarily know too much about AI, that they are scared about it.”

The really important thing, experts say, is to make sure that those fears are discussed in the right terms. The biggest danger is that artificial intelligence is talked about as something magical or mystical – rather than as a technological tool that can be used to create jobs or steal them, to save lives or destroy them.

“The field of AI has always had hype cycles, and narratives of AI have always oscillated from those of doom to those of infinite opportunity,” says Nello Cristianini, a professor of artificial intelligence at the University of Bath and the author of the recent book The Shortcut, which attempts to address anxieties about artificial intelligence.

“It is true that AI can create problems – we have been working on those for many years – but talking about extinction without specifics is really not scientific. Similarly, the idea that every problem can be solved by AI is rather childish. I do not think that we settle scientific debates by public petitions.”

Cristianini argues that the anxiety around artificial intelligence arises because “we are devolving control to intelligent machines, and we do not know how to trust them”. That is largely because people do not understand how they work – but the “recipe” behind AI is fairly simple, he says. AI depends on three things: swapping explicit theories for statistical patterns in data; swapping high-quality data for data harvested from the web; and replacing the idea of understanding users with observing them and guessing what made them click.

Those ideas underpin everything, and they are not calming: “It is clear that we cannot fully see what the machine believes or knows, or if it harbours some biases; and we cannot predict if it will have unintended effects on society, such as polarisation, addiction, misinformation” – all of which leads to anxiety.

“The cure for that is more understanding, more research, less hype and less generalised fear,” says Cristianini. “Instead, let us channel that energy into spelling out clearly what could go wrong, and why, and how things can be addressed.”
