The seductive, sci-fi terror about artificial intelligence risks blinding us to its benefits

A major health breakthrough this week is an important reminder that AI can keep us alive, not just wipe us out, writes Andrew Griffin

Monday 29 May 2023 00:02 BST
For years, this was the exact kind of outcome that those working in AI were so eager to highlight (AFP via Getty)

It was far from the first time that AI had appeared alongside the word “kill” in a headline this year. But it was unusual in that artificial intelligence was being used not to kill us, in some dystopian sci-fi scenario, but rather to help create a new antibiotic that could wipe out a deadly species of superbug.

The occasion was the announcement this week by researchers in Canada and the US that they had developed a powerful new antibiotic called abaucin. An AI system had been used to narrow down the candidate compounds, dramatically speeding up the drug’s development.

It was a rare piece of positive news about AI, which has spent much of the last year being demonised as a potent and malevolent force that is coming for us all. But in reality, it is just one of a number of thrilling breakthroughs around AI, which are in danger of being forgotten amid the panicked shouting about how we are all going to die – or at least have our jobs stolen by robots.

Last year, for example, DeepMind – probably best known for creating the “AlphaGo” bot that finally beat the world’s best human players at the ancient strategy game Go – said that it had used AI to “reveal the structure of the protein universe”. In short, that meant it had catalogued the structures of almost all proteins known to science – an endeavour breathtaking in its scale, with consequences for everything from healthcare to breaking down plastic pollution.

Medical and biological applications might seem a long way from the digital world of ones and zeroes, but AI has proved itself particularly adept at this kind of work, and there have been a host of encouraging breakthroughs. The work that artificial intelligence is good at is, after all, a bit like that of a doctor – spotting patterns, making predictions from them, and flagging when something is out of the ordinary – and AI is already being used widely in disease detection, with the hope that it could free up human doctors for more productive work and ease the demand on healthcare systems’ resources.

For years, this was the exact kind of outcome that those working in AI were so eager to highlight. Even major for-profit companies, such as Google, tended to include work on health breakthroughs in their presentations, showing how artificial intelligence could one day be useful for everything from diagnosis to treatment. That focus helps to explain why Google was caught off-guard when ChatGPT was revealed late last year and quickly became the only thing anyone wanted to talk about – and why, at least initially, Google seemed reluctant to release its own work in the space, despite repeated suggestions that this meant it was lagging behind.

It was with the release of generative AI systems that the panic really set in. ChatGPT was only one of them, and it actually followed image-based systems such as Midjourney. These generative systems – which create images and text in response to prompts – represent only a small sliver of the work being done in artificial intelligence, but they are now a major part of the conversation. And that conversation is rapidly becoming more negative.

DeepMind has its own chatbot, named Sparrow. Demis Hassabis, DeepMind’s chief executive, has said that the system has capabilities that are lacking in ChatGPT – but that it is not being released yet, as part of the company’s commitment to responsible and careful use of its AI.

“When it comes to very powerful technologies – and obviously AI is going to be one of the most powerful ever – we need to be careful,” he said in an interview earlier this year. “Not everybody is thinking about those things. It’s like experimentalists, many of whom don’t realise they’re holding dangerous material.”

It is easy to see why scepticism and fear about AI have become the dominant mode of thinking. The technology industry in recent years has done very little to engender excitement or trust: developments from NFTs to rampant surveillance have left people braced for the negative consequences of each new breakthrough from Silicon Valley. Even before people knew how AI worked, they were worried about it; there was immediate speculation that it would steal jobs, fuel disinformation, and generally create chaos around the world.

Some of that distrust has even been sown by competitors within the industry. Obviously, AI boosterism serves in part as marketing for the companies with technology to promote; less expected is that AI fearmongering is also, in part, a marketing campaign. Everyone from Google to OpenAI has stressed that the possible consequences of runaway AI are so terrible that they, as the responsible ones, should be trusted to keep it safe – and that the rest of the industry should be regulated.

Add to that distrust of the tech industry the fact that sci-fi authors have been writing about runaway AI and its dangers for years, and for good reason: it’s fun to read about the end of the world, and people want to imagine it. From AI villains such as 2001: A Space Odyssey’s HAL 9000 and Alien’s Ash onwards, culture has prepared us to worry about artificial intelligence, and has served our desire to do so. Those stories might scare us, but we also love them.

What’s more, all of those stories have been fed into the corpus of words that systems such as ChatGPT have been trained on, and so they come back out in the way these programmes answer our questions. As AI reporter James Vincent has written, this is something like the “mirror test” given to animals to see whether they recognise themselves in a mirror or think they are seeing another being; in chatbots, we are seeing ourselves reflected back – complete with all our worst fears about artificial intelligence – but mistaking the reflection for someone else.

Even the good news has its bad parts. For AI’s potentially revolutionary effect on healthcare to actually work, its systems need to be trained on people’s data, and there have been a number of controversies about whether that is happening properly. DeepMind itself, for instance, has been involved in long-running legal disputes over whether it wrongly used confidential NHS data in its systems.

Talking up AI has its own dangers, of course. It can be a convenient way of brushing over today’s problems: why spend time fixing issues when AI is coming to change the world and might take them away? It is just as easy to make meaningless promises about the future as it is to make empty threats, and AI makes it possible to get away with both.

But this week’s news about the development of abaucin – together with the regular drumbeat of encouraging news that preceded it – is a welcome reminder that not all positivity is boosterism. Just as bacteria can both help us and hurt us, so can AI. The latest news, in which helpful AI beats harmful bacteria, should be treasured on both fronts.
