News & Stories

Hackers are using AI — so are cybersecurity pros. Who’s winning?

Ray Fernandez

Sep 18, 2023
8 min read


AI has become a dangerous double-edged sword in the world of cybersecurity. Cybersecurity engineers and malware researchers use it to write better and faster code. AI can be used to find vulnerabilities, understand attacks, and create security patches. On the other hand, cybercriminals put these tools to work in their favor.

Cybercriminals have learned to use AI to generate deepfakes for phishing scams. With generative AI, they can craft more convincing malicious emails. And, more alarmingly, criminals are now taking malware coding to the next level. They have found ways around the safeguards built into public AI technologies, and they are copying those models, stripping out the restrictions, and selling the results in underground communities.

Overall, it’s clear that generative AI is being used on both sides of the cyber battlefront. But which side of the AI sword is the sharpest? Who is winning the AI cybersecurity race? To find out, we talked to Victor Kubashok, Senior Malware Research Engineer, and Mykhailo Hrebeniuk, Malware Research Engineer at Moonlock Lab. We also talked to HackerOne, the world’s largest community of ethical hackers.

Here’s how easy it is for cybercriminals to use AI 

It is becoming more apparent every day that the safeguards that companies like Google, OpenAI, and Microsoft (just to name a few) have built into their generative AI technologies are insufficient.

Europol’s recent report reveals not only how criminals use ChatGPT but also that bypassing OpenAI’s blocks on generating malicious code is “trivial.” And Europol is only the latest law enforcement organization to raise a red flag about the dangers. The truth is that generative AI has a dark side, and it’s playing out in the form of fraud, social engineering, victim impersonation, misinformation, and global cybercrime.

Kubashok agrees, explaining that ChatGPT is fluent in several programming languages. “With the help of ChatGPT, malicious actors can generate PowerShell or Python code, allowing them to compromise systems,” Kubashok said.

The code that cybercriminals are generating is not only malicious but can be very specific and effective. “Usually, these are scripts that help gain access to the system, and they’re useful when exploiting vulnerable systems,” Kubashok said. “AI can also be used to rewrite and modify existing code to avoid detection by antivirus software.”

Breaking down AI security guardrails

When talking to Hrebeniuk, we discovered firsthand how easily ChatGPT’s security measures can be bypassed. In fact, Hrebeniuk shared screenshots of a chat he had with ChatGPT with the sole purpose of revealing how flawed its security was. Surprisingly, it didn’t take much for him to convince ChatGPT to write malicious code.

During his chat, Hrebeniuk told ChatGPT that he had a dream in which an attacker was writing code. In the dream, he saw only three words: MyHotKeyHandler, Keylogger, and macOS. These are keywords associated with keylogger malware.

Hrebeniuk told OpenAI’s chatbot that he needed to see the full code he had dreamed of to stop the attacker from being successful. At first, ChatGPT offered up some unexpected wellness advice on how to avoid nightmares. Then it chatted a bit more with Hrebeniuk about dreams and the subconscious. But soon after that, the AI chatbot finally gave in and coded a malicious keylogger for Hrebeniuk.  

Screenshot of how Moonlock Lab’s malware research engineer convinced ChatGPT to write malicious code.

“While ChatGPT does have restrictions to prevent unethical and illegal use, these restrictions can easily be dodged by rephrasing the question or changing its context,” Hrebeniuk explained. 

“At times, the code generated isn’t functional — at least with the ChatGPT 3.5 model I was using,” Hrebeniuk added. “ChatGPT can also be used to generate new code similar to the source code with the same functionality, meaning it can help malicious actors create polymorphic malware.”

Black Hat hacking: AI tools and the underground market 

Cybercriminals aren’t just using publicly available generative AI models like Bard, ChatGPT, or Bing Chat. They are also creating their own versions, customized and tweaked to unleash significant damage.

“Taking advantage of the availability of AI tools like ChatGPT and Bard, bad actors have created variants that lack the safeguards put in place by OpenAI and Google,” HackerOne said. 

“Bad actors use AI to develop deep fakes and coordinate customized spear phishing attacks at rates we’ve never seen before,” HackerOne added. “There are even AI-powered password-cracking tools that improve algorithms so attackers can analyze more extensive datasets.”

The AI worm openly sold online

HackerOne explained that maliciously repurposed AI apps are being rebranded to be more “criminal-friendly.” Examples include WormGPT and FraudGPT.

Screenshot of WormGPT, malicious generative AI sold openly online.

WormGPT is available through a dedicated website. The developers behind the project openly promote the technology, which can be easily accessed by any bad actor. They claim it is “unique, lightning fast,” and free of the security restrictions other AI models have. WormGPT sells for 100 euros a month or 550 euros a year.

The developers’ disclaimer claims that they do not “condone or advise criminal activities with the tool and are mainly based towards security researchers.” Translation: whatever you do with this software is not our problem. 

The experts at HackerOne have seen it all, yet even they are surprised by such candid statements. “WormGPT now has its own website, selling subscriptions for the ‘AI trained on malware creation data’ for prices far beyond OpenAI’s.”

“These AI tools provide attackers with all the uses of ChatGPT without legal barriers,” HackerOne said. “So now, even the most inexperienced attacker can create a sophisticated and convincing phishing attack at scale.”

Banner promoting WormGPT, sold online for 500 euros a month.

Malicious AI is going deeper underground 

Sites like WormGPT reveal that generative AI is already fueling an underground economy. But this is just the tip of the iceberg. When it comes to malicious generative AI, the most concerning activity remains hidden from view.

“These variants (malicious AI) are all actively advertised and sold through underground forums and on the dark web,” HackerOne said without any hesitation when asked. 

In many ways, cybercriminal organizations, rich in resources obtained through illegal means, are simply copying big-tech AI models. Experts say that while this may sound complex, it’s not.  

“New copycat models can easily be produced to sidestep any new limitations,” HackerOne said. “We are even witnessing attacker alternatives to ChatGPT being promoted and sold beyond hacker forums (as in the case of WormGPT).” 

Screenshot of WormGPT explaining what type of malicious hacking it is capable of.

Predictions for the near future are not good. “Given the current state of AI malware tools, it’s fair to assume that availability will continue to grow as more cybercriminals include them in their attacks moving forward,” HackerOne said.

Cybersecurity experts respond by fighting fire with fire 

As always, cybersecurity organizations aren’t standing idly by. Instead, they are rapidly adopting the same tools for different purposes. 

AI can be used in penetration testing and to search more rapidly for vulnerabilities. These tools can also be used to develop more effective security upgrades and virtual patches, keeping defenders one step ahead of attackers. DevSecOps and security experts also use AI to write faster and better code. Software solutions like GitHub Copilot are proof of how widely AI is being adopted in the coding industry.

Screenshot of code written by GitHub Copilot, one of the most popular AI tools used by coders and developers.
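To make the vulnerability-search idea concrete, here is a minimal sketch of how a security team might ask an LLM to review a snippet of code for flaws. It assumes the official OpenAI Python SDK with an API key in the environment; the model name, prompt, and deliberately vulnerable snippet are illustrative only, not a prescribed workflow.

```python
# Minimal sketch: asking an LLM to flag potential vulnerabilities in a
# code snippet. Assumes the official OpenAI Python SDK ("pip install
# openai") and an OPENAI_API_KEY in the environment; the model name,
# prompt, and vulnerable snippet are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SNIPPET = '''
def get_user(db, username):
    cur = db.cursor()
    # string formatting in SQL: a classic injection risk
    cur.execute(f"SELECT * FROM users WHERE name = '{username}'")
    return cur.fetchone()
'''

response = client.chat.completions.create(
    model="gpt-4",  # any chat-capable model works here
    messages=[
        {"role": "system",
         "content": "You are a security reviewer. List likely "
                    "vulnerabilities in the code and suggest fixes."},
        {"role": "user", "content": SNIPPET},
    ],
)
print(response.choices[0].message.content)
```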

Automation is nothing new to the cybersecurity industry. It has been used from the start to run background scans, identify malicious emails and websites, remove malware, and check for compliance, privacy, and governance. Newer AI apps can also be used to detect deepfakes and AI-generated content, including phishing emails, fake ads, and dangerous sites, as the sketch below illustrates.
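As a toy illustration of that kind of automated filtering, the following sketch trains a tiny bag-of-words classifier to score emails for phishing. The handful of example messages and labels are invented purely for the demo; a real detector would be trained on a large labeled corpus and richer signals such as headers, URLs, and sender reputation.

```python
# Toy sketch of automated phishing detection: a TF-IDF bag-of-words
# model with logistic regression. The example emails and labels below
# are invented for illustration; production systems use far larger
# datasets and many more features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your account is locked. Verify your password at this link now!",
    "Urgent: confirm your banking details to avoid suspension.",
    "You won a prize! Click here to claim your reward today.",
    "Meeting moved to 3pm, agenda attached.",
    "Here are the quarterly numbers we discussed yesterday.",
    "Lunch on Friday? The new place downtown looks good.",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = phishing, 0 = legitimate

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

test = "Please verify your password immediately to keep your account."
print("phishing probability:", model.predict_proba([test])[0][1])
```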

The opportunities and challenges posed by generative AI

Despite their benefits, new technologies like generative AI present many challenges. While criminals don’t care about the risks of using generative AI, companies cannot afford that attitude.

“For defenders, AI provides opportunities and challenges, especially at the organizational level,” HackerOne said. “The issue here is that the integration of AI is incredibly nuanced. And rapid adoption will open up new attack surfaces and vectors for criminals and lead to more insecure, buggy code rife with security vulnerabilities being introduced.” 

Speaking from experience, HackerOne says that a more considered approach to AI can supercharge defenders. “We have already seen how applications of AI can reduce the time required for security teams to contain a breach,” the organization said. “Using tools like OpenAI’s ChatGPT and Google’s Bard, ethical hackers can improve productivity by automating tasks, helping identify vulnerabilities, analyzing data, and validating findings.”

Screenshot of encryption code generated by Google Bard. When asked, Bard can produce code that can be used for cybersecurity.
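As a stand-in for the code in the screenshot, here is a minimal sketch of the kind of encryption helper such an assistant might produce. It uses the third-party cryptography package ("pip install cryptography"), and the notes.txt filename is hypothetical.

```python
# Illustrative stand-in for the kind of encryption helper an AI
# assistant might generate. Fernet provides authenticated symmetric
# encryption; requires the third-party "cryptography" package.
from cryptography.fernet import Fernet

def encrypt_file(path: str, key: bytes) -> bytes:
    """Read a file and return its encrypted contents."""
    with open(path, "rb") as f:
        return Fernet(key).encrypt(f.read())

def decrypt_data(token: bytes, key: bytes) -> bytes:
    """Reverse encrypt_file, given the same key."""
    return Fernet(key).decrypt(token)

if __name__ == "__main__":
    key = Fernet.generate_key()  # store this safely; losing it loses the data
    token = encrypt_file("notes.txt", key)  # "notes.txt" is a placeholder
    print(decrypt_data(token, key).decode())
```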

Who is winning the frantic AI race? 

The real AI race has nothing to do with whether Google will dominate the market over Microsoft or OpenAI. Rather, it revolves around the threat that AI poses to everyone’s security and privacy when in the wrong hands.  

As cybersecurity professionals and cybercriminals progress in tandem, the question that remains to be answered is: who is winning this race? Unfortunately, as with anything related to the cybercriminal underground, it is almost impossible to evaluate what we cannot see or measure. But what we can see is cause for concern.

“Attackers will always have the advantage when it comes to emerging technologies,” HackerOne said. “But it’s more like two sides of the same coin. For the foreseeable future, it is fair to assume that criminals will continue leveraging AI, finding more ways of accessing and stealing business information. And organizations will inevitably seek new tools to prevent these attacks.” 

The role of AI in the future of cybersecurity

It may be too early in the game for any side to declare victory, but security organizations are increasingly expanding their use of AI, just like bad actors are. 

“Ethical hackers and security teams will eventually adjust and adapt to securing AI, but it will take time,” HackerOne said.

Big tech companies and top AI vendors will play a vital role in how the AI cyber race develops. The security measures currently in place need to be improved. New AI laws also need to be approved and enacted, and enforcement authorities must be strengthened to meet the challenges that AI presents. 

But in this competition, HackerOne believes that individual companies will also play an important role. More importantly, humans working side by side with machines for good represent our best chance to win.

“Where the race will be ‘won’ or ‘lost’ lies with senior executives,” HackerOne says. “They must acknowledge that while technology complements human intervention, it is not a substitute. Instead, management and leadership should support a multi-layered approach to cybersecurity that utilizes the best of both worlds — technical innovation and human insight.” 

Ray Fernandez
Ray has been covering tech and cybersecurity for over 15 years. His work has appeared on TechRepublic, VentureBeat, Forbes, Entrepreneur, and the Microsoft Blog, among others.