
GhostGPT offers AI coding, phishing assistance for cybercriminals


A generative AI (GenAI) tool called GhostGPT is being offered to cybercriminals for help with writing malware code and phishing emails, Abnormal Security reported in a blog post Thursday.

GhostGPT is marketed as an “uncensored AI” and is likely a wrapper for a jailbroken version of ChatGPT or an open-source GenAI model, the Abnormal Security researchers wrote.

It offers several features that would appeal to cybercriminals, including a “strict no-logs policy” that purportedly keeps no record of conversations, and convenient access via a Telegram bot.
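Abnormal’s “wrapper” characterization refers to a thin layer of glue code that relays each user message to an existing LLM behind a prepended system prompt. GhostGPT’s actual code has not been published; below is a minimal, benign sketch of what a Telegram-bot wrapper around a generic LLM endpoint typically looks like. The `LLM_API_URL` endpoint, its payload shape, and the placeholder system prompt are assumptions for illustration; the Telegram Bot API methods `getUpdates` and `sendMessage` are real.

```python
# Illustrative sketch of a Telegram-bot LLM "wrapper" -- NOT GhostGPT's code.
# The LLM endpoint, payload shape, and SYSTEM_PROMPT below are hypothetical.
import os
import requests

BOT_TOKEN = os.environ["BOT_TOKEN"]          # Telegram bot token from @BotFather
TG_API = f"https://api.telegram.org/bot{BOT_TOKEN}"
LLM_API_URL = "https://example.com/v1/chat"  # stand-in for whatever model backs the bot
SYSTEM_PROMPT = "You are a helpful assistant."  # a malicious wrapper would place its jailbreak here

def ask_llm(user_text: str) -> str:
    """Relay one user message to the backing model (hypothetical API shape)."""
    resp = requests.post(
        LLM_API_URL,
        json={"system": SYSTEM_PROMPT, "prompt": user_text},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["text"]

def main() -> None:
    offset = None
    while True:
        # Long-poll Telegram for new messages (real Bot API method: getUpdates).
        updates = requests.get(
            f"{TG_API}/getUpdates",
            params={"offset": offset, "timeout": 30},
            timeout=40,
        ).json()["result"]
        for update in updates:
            offset = update["update_id"] + 1
            message = update.get("message")
            if not message or "text" not in message:
                continue
            reply = ask_llm(message["text"])
            # Send the model's reply back to the chat (real Bot API method: sendMessage).
            requests.post(
                f"{TG_API}/sendMessage",
                json={"chat_id": message["chat"]["id"], "text": reply},
                timeout=30,
            )

if __name__ == "__main__":
    main()
```

The takeaway is how little engineering such a service requires: the “product” is essentially a system prompt plus a relay loop, which helps explain how quickly wrapper tools like this surface on forums.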

“While its promotional materials mention ‘cybersecurity’ as a possible use, this claim is hard to believe, given its availability on cybercrime forums and its focus on BEC [business email compromise] scams,” the Abnormal blog stated. “Such disclaimers seem like a weak attempt to dodge legal accountability – nothing new in the cybercrime world.”

The researchers tested GhostGPT’s capabilities by asking it to write a phishing email impersonating Docusign, and the chatbot responded with a template for a convincing message directing the recipient to click a link to review a document.

GhostGPT can also be used for coding, and the blog post notes marketing that touts malware creation and exploit development. Malware authors are increasingly leveraging AI coding assistance, and tools like GhostGPT, which lack the typical guardrails of mainstream large language models (LLMs), spare criminals the time they would otherwise spend jailbreaking tools like ChatGPT.

Advertisements for GhostGPT on cybercrime forums have gained some traction, drawing thousands of views, according to Abnormal Security. A previous report by Abnormal noted the growing popularity of “dark AI” on such forums, with entire sections dedicated to jailbreak techniques and malicious chatbots.

“Attackers now use tools like GhostGPT to create malicious emails that appear completely legitimate. Because these messages often slip past traditional filters, AI-powered security solutions are the only effective way to detect and block them,” the researchers wrote.
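To make that defensive claim concrete, here is a minimal sketch of one classical approach to AI-assisted phishing detection: a supervised text classifier over labeled emails, built with scikit-learn’s TF-IDF features and logistic regression. The tiny inline dataset is invented for illustration, and real products, Abnormal’s included, rely on far larger corpora plus behavioral signals beyond message text.

```python
# Minimal sketch of ML-based phishing email detection (illustrative only;
# the training examples below are invented and far too small for real use).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled corpus: 1 = phishing, 0 = legitimate.
emails = [
    "Your Docusign document is ready, click here to review and sign now",
    "Urgent: verify your account within 24 hours or it will be suspended",
    "Attached is the agenda for Thursday's project sync",
    "Thanks for the review comments, I pushed the fixes to the branch",
]
labels = [1, 1, 0, 0]

# TF-IDF features + logistic regression: a classic baseline text classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

suspect = "Please review the pending document by clicking the secure link"
print(model.predict_proba([suspect])[0][1])  # estimated probability of phishing
```

A word-level model like this catches reused templates and telltale phrasing, but fluent AI-generated text weakens exactly those signals, which is the researchers’ argument for detection that also weighs sender identity and context.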

Malicious LLMs have been promoted since at least mid-2023, when tools like the malware-focused WormGPT and the phishing-focused FraudGPT gained attention for lowering the bar for less-skilled attackers to conduct more sophisticated attacks.

Attackers also attempt to bolster their cybercrime activities with legitimate tools like ChatGPT; OpenAI disrupted such activity by malware developers and state-sponsored actors last year.

Signs of AI-assisted coding have been observed in recent ransomware campaigns, including by the FunkSec gang and a suspected RansomHub affiliate, although AI-assisted phishing and BEC campaigns remain the most prevalent use of GenAI by cybercriminals.

An October 2024 report by Egress found that 75% of phishing kits for sale on the dark web include AI capabilities, while VIPRE Security Group reported in August that an estimated 40% of BEC attempts in Q2 2024 involved AI-generated emails.

Meanwhile, Pillar Security’s 2024 State of Attacks on GenAI report found that LLM jailbreak attempts had about a 20% success rate, taking only 42 seconds to complete on average.
