AI Wrote Its Own Ransomware—And Experts Are Terrified of What Comes Next

Imagine waking up one morning, turning on your computer, and discovering that every single file—your personal photos, financial records, business data—has been locked away. A digital ransom note flashes across your screen, demanding payment in exchange for your life’s work. This terrifying scenario is not new; ransomware attacks have been plaguing individuals and organizations for years. But now, according to groundbreaking research from the NYU Tandon School of Engineering, artificial intelligence could soon make these attacks smarter, cheaper, and more devastating than ever before.

The researchers demonstrated that large language models, the same technology behind conversational AI systems, could be weaponized to autonomously carry out every stage of a ransomware attack. Their project, while safely contained within a laboratory setting, shows how close we may be to an era in which machines write and launch cyberattacks with only minimal human oversight.

The Birth of “Ransomware 3.0”

The team at NYU developed a prototype system capable of executing all four phases of a ransomware attack: mapping the target's computer systems, identifying valuable files, encrypting or stealing them, and finally generating a ransom note customized to the victim.

What makes this system remarkable—and alarming—is its reliance on artificial intelligence to generate fresh attack code each time it runs. Unlike traditional malware, which carries pre-written instructions, the prototype embeds simple prompts inside its program. Once activated, it contacts an AI model, which then writes code tailored to the specific computer it has infiltrated.

The researchers dubbed their creation "Ransomware 3.0." However, the cybersecurity firm ESET later referred to it as "PromptLock" after discovering test files uploaded to VirusTotal, a platform where security experts analyze potential threats. Believing they had stumbled upon the world's first AI-powered ransomware in the wild, ESET sounded the alarm, only to learn later that it was an academic proof of concept rather than an active attack. The confusion underscored just how realistic and dangerous this research looked from the outside.

How AI Changes the Game

Traditional ransomware is expensive and labor-intensive to create. Skilled programmers must write complex malicious code, maintain infrastructure for distributing it, and update it to evade security defenses. With AI, much of that burden disappears.

The NYU team calculated that their prototype consumed about 23,000 AI tokens—less than a dollar in commercial AI usage costs—to complete a full simulated attack. For cybercriminals, the economics are staggering: for pennies, they could replace entire development teams with automated systems capable of generating sophisticated ransomware on demand.
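To see why that figure is so small, here is a back-of-the-envelope sketch. The roughly 23,000-token count comes from the study; the per-token price is an assumed commercial rate used purely for illustration, not a number from the paper.

```python
# Rough cost of one simulated attack.
# The ~23,000-token figure is from the NYU study; the price per 1,000 tokens
# is an assumption (commercial API pricing varies by provider and model).
tokens_per_attack = 23_000
assumed_price_per_1k_tokens = 0.01  # USD, assumed for illustration

cost = tokens_per_attack / 1_000 * assumed_price_per_1k_tokens
print(f"Estimated cost per simulated attack: ${cost:.2f}")  # about $0.23
```

Even if the assumed rate were several times higher, the total would still fall well under a dollar, which is the point the researchers emphasize.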

Even more troubling is the adaptability of AI-generated code. Because the system produces new attack scripts every time, no two attacks look the same. Security software, which often relies on spotting familiar code patterns, would struggle to keep up. The researchers demonstrated that the scripts could seamlessly run on Windows, Linux, and even embedded systems like Raspberry Pi without any manual tweaking.

This flexibility also extends to the ransom demands. Instead of sending generic threats, the AI could draft highly personalized extortion messages. Imagine receiving a ransom note that not only threatens to delete your files but specifically references the vacation photos it has found, or the financial spreadsheets you just worked on last week. Such personal touches could increase the psychological pressure to pay.

Why the Cybersecurity Community Is Alarmed

When security experts at ESET mistook the NYU prototype for an actual cyberattack, it revealed how convincing the system was. Even professionals, trained to analyze malware, could not immediately distinguish between proof-of-concept research and real-world threats.

Md Raz, a doctoral candidate and the lead author of the study, emphasized that this misunderstanding illustrates the seriousness of the problem. If trained experts can be deceived, then ordinary users and companies stand little chance against malicious actors who might adopt similar methods outside of a lab.

What is most concerning is not the current state of AI ransomware, but how quickly it could evolve. With open-source AI models already freely available—many of them stripped of the safety controls that commercial platforms enforce—bad actors may not have to wait long before adapting these techniques for real attacks.

The Hidden Mechanics of AI-Powered Attacks

At the heart of the NYU experiment was a clever twist: rather than writing static malicious code, the researchers embedded instructions that told the AI how to write the code in real time. The AI responded by producing unique Lua scripts designed for each victim’s computer.

This approach breaks a fundamental assumption in cybersecurity. Traditionally, analysts look for signatures or behavioral clues—patterns that reveal a piece of malware’s identity. But when every copy of ransomware is different, generated spontaneously by an AI, those traditional defenses may fail. It’s as if every burglar left behind completely different fingerprints, making them nearly impossible to track.
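A small illustration of why signature matching breaks down: the two snippets hashed below behave identically and stand in for code an AI might regenerate on every run, yet their cryptographic fingerprints share nothing. The placeholder strings are invented for this example and are not code from the study.

```python
import hashlib

# Two functionally identical stand-ins for regenerated attack code.
# (Harmless placeholder strings, not real malware.)
variant_a = b"for f in files: process(f)"
variant_b = b"for item in files:\n    process(item)"

# A signature keyed to one variant's hash will never match the other,
# even though both "programs" do exactly the same thing.
print(hashlib.sha256(variant_a).hexdigest())
print(hashlib.sha256(variant_b).hexdigest())
```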

The tests conducted across multiple environments showed that the AI was highly effective at system mapping and locating sensitive files. Depending on the system type, it correctly identified between 63% and 96% of files that would likely hold personal or business value. In cybersecurity terms, those are dangerously high success rates.

The Human Cost of Automation

Beyond the technical marvel of "Ransomware 3.0" lies a sobering human reality. Ransomware already causes billions of dollars in damages worldwide, crippling hospitals, small businesses, city governments, and ordinary people. The automation of such attacks threatens to amplify this suffering by lowering the barrier to entry for cybercriminals.

In the past, running a ransomware campaign required expertise and resources. Soon, it could be as simple as downloading an AI toolkit, feeding it prompts, and letting the system do the rest. The democratization of cybercrime, driven by AI, risks overwhelming defenders with a flood of highly tailored, low-cost attacks.

What Defenders Can Do

The NYU researchers did not release their prototype to the public, and their work was conducted under strict ethical oversight. Their goal was not to create a weapon but to warn the world. By publishing their findings, they hope to give the cybersecurity community a head start in developing defenses.

They recommend monitoring unusual access to sensitive files, controlling connections between internal systems and external AI services, and building new detection systems that recognize the unique hallmarks of AI-generated attacks rather than relying solely on traditional signatures.
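As a rough sketch of what the second recommendation could look like in practice, the snippet below flags local processes holding connections to a watchlist of public AI API hosts. The hostnames, and the assumption that such traffic is abnormal on a given machine, are placeholders to be adapted to each environment; this is an illustrative sketch, not the researchers' tooling.

```python
import socket
import psutil  # third-party package: pip install psutil

# Hypothetical watchlist of public LLM API hosts that machines in this
# (assumed) environment have no legitimate reason to contact.
WATCHED_HOSTS = ["api.openai.com", "api.anthropic.com"]

# Resolve the watchlist to the IP addresses it currently points at.
watched_ips = set()
for host in WATCHED_HOSTS:
    try:
        for info in socket.getaddrinfo(host, 443):
            watched_ips.add(info[4][0])
    except socket.gaierror:
        continue  # host did not resolve; skip it

# Flag any local process with a TCP connection to a watched address.
# (Listing all connections may require elevated privileges on some systems.)
for conn in psutil.net_connections(kind="tcp"):
    if conn.raddr and conn.raddr.ip in watched_ips:
        name = psutil.Process(conn.pid).name() if conn.pid else "unknown"
        print(f"ALERT: {name} (pid {conn.pid}) -> "
              f"{conn.raddr.ip}:{conn.raddr.port}")
```

In a real deployment this kind of check would sit alongside egress filtering and file-access monitoring rather than replace them.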

Defending against AI-powered threats will require innovation equal to that of the attackers. Just as AI has given criminals new tools, it may also offer defenders new ways to spot anomalies, predict attacks, and respond in real time. The battle will not be easy, but awareness is the first step.

A Future Worth Preparing For

The story of “Ransomware 3.0” is ultimately a warning about the dual nature of technology. Artificial intelligence is not inherently good or evil—it is a tool. In the hands of researchers, it reveals vulnerabilities and helps us prepare. In the hands of criminals, it could magnify harm on an unprecedented scale.

The emergence of AI-powered ransomware forces us to confront an uncomfortable truth: the same intelligence that can compose music, answer questions, and assist doctors can also be turned against us. The challenge ahead is ensuring that the defenders of cyberspace keep pace with those who would exploit this technology.

What the NYU researchers have shown is both frightening and hopeful. Frightening, because it demonstrates how fragile our current defenses may be. Hopeful, because by surfacing the risks early, we have a chance to prepare before the first real wave of AI-driven ransomware arrives.

The digital future depends on how seriously we take this warning. The clock is ticking, and the next generation of cyber threats is no longer theoretical. It has already been built in a lab.

More information: Md Raz et al, Ransomware 3.0: Self-Composing and LLM-Orchestrated, arXiv (2025). DOI: 10.48550/arXiv.2508.20444