News & Stories

Dark AI tools are gaining popularity with cybercriminals, report says

Ray Fernandez

Apr 3, 2025 · 5 min read


It’s no secret that cybercriminals are using AI. They use the technology to write more engaging phishing messages, create fake websites, code malware, and run deepfake cyberattacks, among other things. 

As part of this trend, bad actors are abusing legitimate AI apps by jailbreaking them, stripping away the safeguards built into AI models. A new report found that dark web chatter about jailbreaking has surged. We looked into the report and found that interest in dark AI is raging not just on the dark web but also on the clear web (the public internet, which is available to everyone).

Dark web jailbreak AI chatter surges by 50% 

A recent report from KELA Cyber Threat Intelligence tracked dark web chatter and cybercriminal activity for a year. The report found that mentions of the word “jailbreaking” in underground forums have spiked by 50%. 

KELA collected dark web screenshots, proving that cybercriminals are using AI for specific attacks. They are also sharing jailbreaking techniques and distributing customized malicious dark AI tools. These are mostly reverse-engineered from legitimate public models. 

KELA shares screenshots of chatter about dark web AI jailbreak techniques. Image: Screenshot, Moonlock.

As mentioned, AI jailbreaking is the process of bypassing safety restrictions programmed into AI systems. While AI companies are constantly releasing new AI models and security patches to prevent jailbreaking, cybercriminals are quick to catch up. Jailbreaking techniques for new models are often easy to find. 

“AI tools lower the cost and skill barrier to enter the cybercrime ecosystem,” KELA said. “This enables inexperienced individuals to execute cyberattacks by equipping them with advanced hacking tools and malicious scripts for sophisticated campaigns.” 

AI tools lower the cost and skill barrier to enter the cybercrime ecosystem.

KELA Cyber Threat Intelligence

Dark AI app mentions increased by a stunning 200% year-over-year

Customized dark AI apps have the potential to transform cybercrime at a global level. AI models, once stripped of legal and ethical restrictions, are a nightmare for cybersecurity specialists and potential victims. 

Dark AI can be used for anything from straightforward harmful content generation designed to trick the average user to sophisticated automated vulnerability scans that probe a digital attack surface for weak points. Not only that, but dark AI significantly lowers the technical bar for running attacks while scaling up the volume of threats.

KELA reports that dark AI tools saw a 200% increase in mentions on the dark web in 2024 compared to the previous year. The most sought-after of these tools include WormGPT, WolfGPT, DarkGPT, and FraudGPT.

WormGPT versions spread due to the tool's popularity

WormGPT is one of the most famous dark AI tools to date. In fact, versions of WormGPT models can be found online without even logging into the dark web. The app has become so widely sought after that different versions exist on different pages, ranging from online AI sites to GitHub and the dark web. 

The most common use of WormGPT is phishing message generation. This AI app enhances the effectiveness of phishing and social engineering campaigns by generating more engaging messages that impersonate trusted sources or brands. The AI app can also be used to hide malicious JavaScript in phishing emails or phishing websites. 

KELA described WormGPT as “an uncensored AI tool that helps cybercriminals launch cyber attacks as business email compromise (BEC) attacks and phishing campaigns.”

The WormGPT official Telegram account is used to distribute this dark AI tool. Image: Screenshot, Moonlock.

AI can also go far beyond phishing. Using AI to inject malicious info-stealing JavaScript into websites is also becoming a concerning trend. Recently, we covered how suspicious JavaScript used on malicious file-converting sites attracted the attention of the FBI. 

“Educational” AI jailbreaking techniques and models

A simple web search for AI jailbreaks and dark AI models shows that they are everywhere.

Surprisingly, some of the content we found is posted as “educational” and found on platforms like YouTube in great volumes. 

YouTube AI Jailbreak techniques tutorials are abundant and accessible to anyone. Image: Screenshot, Moonlock.

Educating users on how these AI apps work is a positive thing. However, sharing code on how to unlock the dark side of an AI on a platform like YouTube opens many doors to potential risks and threats. 

Sophisticated and advanced AI tools on GitHub

Similarly, on GitHub, you can find almost any AI model, its versions, modified versions, and other resources. Once again, these can be hosted on GitHub for “educational” purposes or as resources for cybersecurity teams, red teams, or pentest (penetration test) training. They can also be accessed just as easily by threat actors.

The ReadMe file of a WormGPT version we found on GitHub lists the capabilities of the model. Image: Screenshot, Moonlock.

AI resources found on GitHub and on some dark web forums are more advanced than the usual chatbots that most users are familiar with. With these jailbroken AI tools, more malicious tasks can be performed. For example, skilled black hat hackers can run automated vulnerability scans against their targets in search of an entry point into networks and systems.

WormGPT V3.0 is offered on the clear web (public internet). Image: Screenshot, Moonlock.

Vulnerability scan tools are widely accessible to cybersecurity industry professionals. Now, combined with AI, they have become more powerful. Cybercriminals use these same tools. 

“Attackers leverage AI-driven tools to automate penetration testing, uncovering vulnerabilities in security defenses that can be exploited easily,” KELA said in its report.

AI tools are being used to flood websites with traffic in distributed denial-of-service (DDoS) attacks. Plus, they can run brute force attacks, automate fraud, chat with victims of scams, develop malware, and customize infostealers.

The interest in uncensored AI has also created a market for these kinds of apps online. There are a great number of sites that offer unrestricted, guardrail-free AI.

Supposedly unrestricted and uncensored AI providers and services have skyrocketed online. This is happening on the clear web as well as the dark web. Image: Screenshot, Moonlock.

Most of these sites, like other dark AI content we found online, exist on risky infrastructure. The downloads, results, and use of these sites are unlikely to be safe, with risks of malware and privacy leaks being top concerns.

What’s more, these types of AI services, which are abundant, have been linked to cybercriminal activities. They have also sparked controversy, outraging parents after the tools were used in school environments to generate adult material.

Miniapps.ai offers several dark AI models, including a version of DarkGPT. Image: Screenshot, Moonlock.

Final thoughts 

There is a new, sketchy ecosystem of risk-prone AI apps on the dark web and the clear web. This ecosystem is feeding off the advancements that AI companies are making in generative AI and automation.

Additionally, the ecosystem is leveraging the hype of AI to offer enhanced cybercriminal services, helping them launch bigger, wider, faster, better, and more dangerous cyberattacks. 

As mentioned at the beginning of this report, it is no secret that cybercriminals are using AI. However, the scale at which malicious or jailbroken AI technology has grown in so little time is alarming.

Basically, anyone with an internet connection can get their hands on these powerful AI models. Dark AI is everywhere. 

Ray Fernandez
Ray has been covering tech and cybersecurity for over 15 years. His work has appeared on TechRepublic, VentureBeat, Forbes, Entrepreneur, and the Microsoft Blog, among others.