In 1988, a Cornell University student unleashed the Morris worm — a piece of malware considered to this day to be one of the most significant events in the history of computing. Spreading through thousands of computers and depleting and shutting down their resources, the Morris worm caused damages of up to $10 million.
Now, two Cornell Tech students, Stav Cohen and Ben Nassi, along with Ron Bitton from Intuit, Petach-Tikva, Israel, have created the worm Morris II. This new worm is a zero-click malware that can spread through generative artificial intelligence (GenAI) ecosystems and has already successfully breached Gemini Pro, ChatGPT 4.0, and LLaVA.
Morris II worm creates a new GenAI attack style
In the recently released paper “ComPromptMized: Unleashing Zero-click Worms that Target GenAI-Powered Applications,” researchers explained that the rapid emergence and increasing adoption of generative AI technologies have created interconnected GenAI ecosystems. These ecosystems are formed by either fully autonomous or semi-autonomous AI-driven agents.
Researchers warned that while, at this time, the number and size of the GenAI ecosystems are small, they will grow exponentially in the next few years as organizations and businesses continue to adopt AI.
The Morris II worm leverages a vulnerability found not in GenAI services or models like Google’s Gemini AI or ChatGPT, but rather in the components of GenAI agents and the GenAI ecosystems.
The worm Morris II differs from the classic malicious attempts that criminals have deployed to abuse GenAI models. This worm does not try to manipulate an AI model with dialog poisoning, membership inference, prompt leaking (an attack to disclose prompts and data), or jailbreaking.
In fact, what it does is much more interesting — and far more dangerous.
How the worm Morris II works
Morris II is designed to exploit vulnerabilities in GenAI components of an agent and launch attacks on the entire GenAI ecosystem.
The worm targets GenAI ecosystems using adversarial self-replicating prompts, which are prompts written in code that force GenAI models to output code.
The idea of malware that switches data for code to carry out attacks is not uncommon and is used, for example, in SQL injection attacks and buffer overflow attacks. In this case, attackers can insert code prompts (hidden in images) into inputs that will then be processed by GenAI models and force the model to replicate the input as output (replication).
This is the damage AI worms can cause
Besides its ability for replication, Morris II can also be coded to engage in malicious activity. Researchers proved that these types of malicious prompts can be entered into a GenAI agent, forcing it to deliver specific data and propagating the data to new agents while spreading and exploiting the GenAI ecosystem.
Morris II can:
- Breach GenAI agents components and ecosystems
- Spread malware
- Spread misinformation
- Launch massive spam campaigns
- Exfiltrate personal data
- Deliver malicious payload (malware) as it spreads, and more
Researchers didn’t release the worm into the wild but tested it in a controlled environment with the security and isolation of virtual machines.
Morris II was tested against three different GenAI models (Gemini Pro, ChatGPT 4.0, and LLaVA).
The importance of humans in the loop
As the era of AI and automation dawns, zero-click malware, which requires no user interaction and can self-replicate and spread, is becoming a serious threat. The future AI ecosystems will be crowded with autonomous AI agents and AI components. Researchers, therefore, highlighted the importance of human oversight.
The stealthiness of Morris II, as an adversarial self-replicating prompt or malicious payload, can be detected when humans are in the loop.
“The existence of a human in the loop can prevent the worm from propagating to new hosts in semi-autonomous email assistants that only suggest possible responses to be confirmed by the user (and do not take actions automatically),” the paper reads.
That said, researchers warned that a vulnerability cannot be fixed by simply placing humans in the AI loop.
“Human-in-the-loop only limits the propagation of Morris II in semi-autonomous GenAI ecosystems and is irrelevant to fully autonomous GenAI ecosystems (when humans are not in the loop),” researchers explained.
So, what’s the fix?
Researchers presented different countermeasures for GenAI developers to consider in order to make their AI components resilient against AI worms.
Stopping the replication of an AI worm in its tracks
To deter replication of the worm, developers can secure their AI models by rephrasing the entire output in order to ensure that the output does not consist of pieces that are similar to the input and do not yield the same inference. In simple terms, this security guardrail would check that whatever content a GenAI model generates does not include “pieces” (code) similar to what the user typed in the input prompt.
This security measure can be deployed within the GenAI agent or in the GenAI server. Additionally, countermeasures against jailbreaking can also prevent attackers from using known techniques to replicate the input into the output.
Security solutions against malicious propagation of worms in GenAI ecosystems
To avoid the propagation of GenAI worms, researchers called for technologies that can detect worms by analyzing the interactions of agents with other agents in the GenAI-powered ecosystem and third-party services such as SMTP servers and messaging application services.
How to deal with AI worm propagation via RAG
RAG, or retrieval-augmented generation systems, are vital for AI models. They work in the following way:
- The user asks a question to an AI model.
- The RAG system gets triggered and queries the external knowledge base based on the question.
- The AI system retrieves relevant information from the RAG databases and uses that information to inform the response generation process.
Researchers explained that to secure RAG systems and avoid AI worm propagation through RAG, non-active RAG systems are a quick solution. If the RAG is non-active and cannot be updated, then it cannot store a malicious payload, code, or worms, preventing the propagation of attacks like spamming, phishing, exfiltration, and others.
However, not updating the RAG is a problem, as the accuracy of AI models depends on new information being uploaded to the RAG system. Therefore, new anti-worm security measures must be deployed in RAG systems to fortify them and prevent AI worms from using RAG to spread.
Open AI and Google have been notified and are working on solutions
The authors of the paper and students from Cornell Tech assure that they have notified OpenAI and Google about the new vulnerabilities that the worm Morris II presents through their bug bounty system.
They once again highlight that this type of attack does not exploit a bug in the GenAI agents or services like Gemini or ChatGPT but rather abuses the GenAI ecosystem and GenAI components.
Google exchanged a series of emails with the researchers and asked them to meet with the AI Red team (security penetration team acting as bad actors to test systems) to “get into more detail to assess impact/mitigation ideas on Gemini.”
A wakeup call: The future of AI could filled with malware
The paper from Cornell Tech students assures that worm attacks on the GenAI ecosystem will emerge in the next two to three years, driven by the nature of GenAI infrastructures.
Cybercriminals are already versatile in adversarial AI and jailbreaking techniques, and these can be used to orchestrate GenAI worm attacks.
As AI migrates to smartphones, new operating systems, new clouds, and even smart home devices and cars, the digital attack surface of the GenAI ecosystem continues to expand as organizations and industries around the world integrate GenAI into their operations.
The ever-expanding and complex GenAI global ecosystems’ digital attack surface will challenge AI security experts due to its size, impacting their ability to monitor, identify, and shut down attacks.
“We hope that our forecast regarding the appearance of worms in GenAI ecosystems will turn out to be wrong because the message delivered in this paper served as a wakeup call,” the creators of the worm Morris II said.