In the ever-evolving landscape of cybersecurity threats, a new and insidious adversary has emerged: AI worms attacks your system. Unlike traditional worms that exploit vulnerabilities in software or network protocols, these AI-powered worms leverage generative AI models to infiltrate systems, steal data, and propagate themselves.
One such worm, known as “Morris II,” has recently come to light, posing a significant risk to organizations and individuals alike. Named after the infamous “Morris” worm that wreaked havoc on the early internet in 1988, Morris II represents a new breed of malware.
Its creators—Ben Nassi from Cornell Tech, Stav Cohen from the Israel Institute of Technology, and Ron Bitton from Intuit—designed it specifically to exploit generative AI applications and email assistants.
These tools, powered by models like ChatGPT 4.0, Gemini Pro, and LLaVA, generate text and images, making them ideal targets for an AI worm. The worm initiates an attack by crafting an adversarial self-replicating prompt.
This prompt targets generative AI email assistants, such as those used by businesses and individuals. By exploiting the language model (LLM) of the email assistant, Morris II extracts data from outside its system.
The stolen data is then fed to more advanced AI models (such as GPT-4 or Gemini Pro) to create convincing text content. The worm effectively “jailbreaks” the GenAI service, gaining unauthorized access and stealing sensitive information.
Morris II encodes its self-replicating prompt within an image file. When the email assistant encounters this image, it forwards messages containing propaganda and abuse to other recipients. The infected email spreads the worm to new clients.
Throughout this process, confidential data including credit card details and social security numbers is mined. Morris II’s successful operation in controlled environments underscores the urgency of addressing such threats.
Responsible researchers have reported their findings to Google and OpenAI, urging the development of effective deterrents. Organizations must remain vigilant, simulate attacks, and verify security measures to prevent AI worms from compromising their systems.
This comes at a crucial time when AI and NPUs are integrated into GPUs and CPUs for various devices such as PCs, smartphones, cars, and email services. While AI-infused SSDs demonstrate the ability to detect and eradicate ransomware, there is a flip side, with worms and custom LLMs capable of generating malware.
In navigating this landscape, the industry must exercise caution, ensuring the implementation of countermeasures to combat or deploy effective solutions for every generative AI-based product introduced to the public.
As AI continues to advance, so do the risks associated with its misuse. Morris II serves as a stark reminder that even cutting-edge technology can be weaponized. Vigilance, collaboration, and robust defenses are essential to safeguarding our digital ecosystems.
Leave your Reply