AI makes the dark web even darker

COMMENTARY: All day, every day, we hear about the greatness of AI: its promise to revolutionize countless industries, from healthcare to education to scientific research.

What we hear far less about: how it’s quickly becoming a powerful weapon in the arsenal of cybercriminals. AI is increasingly used to accelerate common cybersecurity threats such as phishing and social engineering attacks.

While we always have to watch for new techniques and tools landing in cybercriminals’ hands, what worries me most about this trend is how it’s fueling the dark web.

Old threats, new speed

AI isn't so much creating entirely new types of cyberattacks. Instead, it's taking existing threats and making them faster, more widespread, and harder to stop. Email scams, malware, and password theft—these attacks have existed for years. What's different now are the scale and sophistication that AI brings to them.

Take email scams. Criminals once had to write convincing fake messages themselves or use a clunky prewritten template that was difficult to customize to the victim. Now they can use AI to generate hundreds of personalized scam emails in minutes. These AI-written messages are often more persuasive than human-written ones, making them harder to spot.

The same goes for malware. Traditional malware follows set patterns that security software can detect. But AI-powered malware can change its behavior on the fly, finding new ways to avoid detection. When security teams block one version, the AI creates a new variant almost instantly. While these AI-powered attacks are less sophisticated than the work of a talented hacker, they multiply the efforts of cybercriminals, allowing them to target more victims with less effort.

This acceleration applies to every type of cyberattack. What once took criminals days now takes hours. What once targeted hundreds now targets millions. AI has the potential to make everything faster and bigger.

Newer, faster, more insidious

While AI speeds up existing threats, it's also creating entirely new ones. Criminal versions of popular AI tools are now appearing for sale on the dark web. These AI systems are specifically designed for crime.

Take WormGPT and FraudGPT, for example. These tools are marketed as criminal alternatives to ChatGPT, but with their safety limits removed. They help create malware, design fake websites, and develop new hacking tools. Even more concerning are upcoming tools like DarkBARD and DarkBERT. These advanced AI systems can browse the internet and analyze images, giving criminals even more capabilities.

Even legitimate AI platforms aren’t safe

Criminals aren't just creating their own AI tools – they're also hijacking legitimate ones. Over 200,000 stolen OpenAI account credentials are currently for sale on the dark web. Each of these represents potential access to powerful AI systems that were designed for positive uses.

But stolen accounts are just the start. Criminal forums are filled with discussions about "jailbreaking" – finding ways to remove the safety limits built into AI tools like ChatGPT. When successful, these attempts turn helpful AI assistants into tools for fraud and cybercrime.

On the flip side, defenders can turn AI against these AI-driven threats. Security teams now deploy AI systems that work around the clock, monitoring dark web forums and marketplaces for signs of trouble. These AI tools can read through millions of posts in minutes, understand criminals’ coded language, and spot patterns that humans might miss.

Here's what AI brings to the table:

  • Password protection: AI tools scan the dark web 24/7 for stolen login details. When company passwords appear for sale, security teams can change them before criminals break in (a minimal sketch of this kind of exposure check follows this list).
  • System access monitoring: Criminals often sell "keys to the kingdom": access to company networks and systems. AI watches for these sales and lets companies lock down compromised access points immediately.
  • Security gap detection: By finding company network information on the dark web, AI helps spot weak points in company defenses. This early warning system lets teams patch holes before attackers can use them.
  • Learning from past attacks: AI analyzes old data breaches alongside current threats, connecting dots that humans might miss. This helps companies understand where they're most vulnerable and how to protect themselves better.
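
As a rough illustration of the credential-monitoring idea above, and not a description of any vendor's actual tooling, the sketch below checks whether a password already circulates in known breach dumps using the public Have I Been Pwned "Pwned Passwords" range API. Only the first five characters of a SHA-1 hash are sent, so the password itself never leaves the machine; the sample watchlist and the alert messages are hypothetical placeholders.

```python
import hashlib
import requests  # pip install requests

HIBP_RANGE_API = "https://api.pwnedpasswords.com/range/"

def breach_count(password: str) -> int:
    """Return how many times a password shows up in known breach corpora.

    Uses the k-anonymity range endpoint: only the first five hex characters
    of the SHA-1 hash are sent; the full password never leaves this machine.
    """
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(HIBP_RANGE_API + prefix, timeout=10)
    resp.raise_for_status()
    # Each response line is "<remaining 35 hash characters>:<count>"
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    # Hypothetical watchlist; in practice this would come from an identity provider.
    for pw in ("correct horse battery staple", "P@ssw0rd!"):
        hits = breach_count(pw)
        if hits:
            print(f"Exposed {hits:,} times -- force a reset before attackers try it.")
        else:
            print("Not found in known breach data (so far).")
```

A production service would run this kind of check continuously against newly leaked dumps and feed confirmed hits straight into a forced password reset, which is the "change them before criminals break in" step described above.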

Most importantly, AI works at the speed and scale needed to match today's threats. When criminals use AI to accelerate their attacks, defenders can use AI to accelerate their protection.

This technology race between attackers and defenders isn't slowing down. Each advance in AI creates new opportunities for both protection and exploitation. We need coordinated action from tech companies, law enforcement, and government agencies to ensure AI remains a force for good rather than a criminal tool.

The recent breaches at major companies (remember Ticketmaster, LinkedIn, and AT&T) show what's at stake. If we don't act now to control how AI gets used in cybercrime, the problem will only get worse. We have the technology to fight back; we just need to use it more effectively.

Emma Zaballos, senior researcher, CyCognito

SC Media Perspectives columns are written by a trusted community of SC Media cybersecurity subject matter experts. Each contribution has a goal of bringing a unique voice to important cybersecurity topics. Content strives to be of the highest quality, objective and non-commercial.
