How ‘Agentic AI’ will drive the future of malware

COMMENTARY: AI adoption has grown explosively over the past two years, and AI is now integrated into nearly every business unit, every business function, and every application we use online. Until now, we have mostly encountered ChatGPT-like systems that generate a result when we ask them a question. Now, security teams need to start worrying about fully autonomous AI systems (agentic AI) that can operate independently of human oversight.

[SC Media Perspectives columns are written by a trusted community of SC Media cybersecurity subject matter experts.]

Agentic AI isn't just about answering queries; it's about building things. It doesn't just come up with an idea: it actually creates something. It's a personal assistant that can perceive (gather data), reason (analyze the data to understand what's going on), act (take action based on its understanding), and learn (adapt based on feedback and experience), instead of humans having to do those tasks. Agentic AI may consist of multiple independent agents, each specialized in handling a particular task and working cooperatively toward a common goal. In the hands of threat actors, that same design could lead to self-driven, AI-enabled malware.

Organizations can prepare in several ways:

Train employees to detect AI-powered attacks: Educate staff on the growing risk of bad actors using agentic AI maliciously. Use social engineering and phishing simulation exercises, security awareness tests, and red-teaming to show employees what an AI-powered attack (a deepfake, an AI-drafted phishing email) can look like and why they must report it immediately to IT or the security team.

Fight AI with AI: Agentic AI is not exclusive to attackers. Defenders can also leverage it to improve detection and response, threat intelligence, and defense mechanisms. They can create an army of AI agents that find and fix bugs and misconfigurations, perform proactive patching, run continuous simulation testing on employees, identify weaknesses in security policies and controls, search out and remove malicious programs, and monitor networks and traffic for anomalies.

Deploy strong security controls and authentication: Implement phishing-resistant MFA on critical systems and user accounts to prevent unauthorized access. Use a layered security system that can detect and block adversaries, whether they are automated agents or not, from performing lateral movement. Leverage robust monitoring tools to flag unusual activity, and track user interactions to identify compromised accounts.

It's highly likely that bad actors have already begun weaponizing agentic AI. The sooner organizations build up defenses, train employees, deploy their own AI agents, and invest in stronger security controls, the better equipped they will be to outpace AI-powered adversaries.

Stu Sjouwerman, founder and CEO, KnowBe4