The Independent's journalism is supported by our readers. When you purchase through links on our site, we may earn commission.
AI worm that infects computers and reads emails created by researchers
Morris II represents new breed of ‘zero-click malware’, researchers warn
Your support helps us to tell the story
From reproductive rights to climate change to Big Tech, The Independent is on the ground when the story is developing. Whether it's investigating the financials of Elon Musk's pro-Trump PAC or producing our latest documentary, 'The A Word', which shines a light on the American women fighting for reproductive rights, we know how important it is to parse out the facts from the messaging.
At such a critical moment in US history, we need reporters on the ground. Your donation allows us to keep sending journalists to speak to both sides of the story.
The Independent is trusted by Americans across the entire political spectrum. And unlike many other quality news outlets, we choose not to lock Americans out of our reporting and analysis with paywalls. We believe quality journalism should be available to everyone, paid for by those who can afford it.
Your support makes all the difference.Security researchers have developed a self-replicating AI worm that can infiltrate people’s emails in order to spread malware and steal data.
Dubbed Morris II, after the first ever computer worm from 1988, the computer worm was created by an international team from the US and Israel in an effort to highlight the risks associated with generative artificial intelligence (GenAI).
The worm is designed to target AI-powered apps that use popular tools like OpenAI’s ChatGPT and Google’s Gemini. It has already been demonstrated against GenAI-powered email assistants to steal personal data and launch spamming campaigns.
The researchers warned that the worm represented a new breed of “zero-click malware”, as the victim does not have to click on anything to trigger the malicious activity or even propagate it. Instead, it is carried out by the automatic action of the generative AI tool.
“The study demonstrates that attackers can insert such prompts into inputs that, when processed by GenAI models, prompt the model to replicate the input as output (replication) and engage in malicious activities (payload),” the researchers wrote.
“Additionally, these inputs compel the agent to deliver them (propagate) to new agents by exploiting the connectivity within the GenAI ecosystem.”
The research was detailed in a study, titled ‘ComPromptMized: Unleashing zero-click worms that target GenAI-powered applications’.
Since the launch of ChatGPT in 2022, security researchers have noted the potential for hackers and cyber criminals to use some element of generative AI in order to carry out attacks.
The technology’s ability to realistically imitate human-generated text means non-native speakers could use it to generate convincing fraudulent emails and texts.
Cyber security firm CrowdStrike warned in its annual Global Threat Report, published last month, that its researchers had observed nation-state actors and hactivists experimenting with tools like ChatGPT.
“Generative AI [can] democratise attacks and lower the barrier of entry for more sophisticated operations,” a company representative wrote in an email to The Independent. “Generative AI will likely be used for cyber activities in 2024 as the technology continues to gain popularity.”
Join our commenting forum
Join thought-provoking conversations, follow other Independent readers and see their replies
Comments