RSAC 2025: Agentic AI highlights need for inclusive authentication methods

Abhilasha Bhargav-Spantzel (left) and Aditi Shah of Microsoft discuss the importance of inclusivity in identity management.

SAN FRANCISCO – Exploring overlaps between human diversity and the introduction of autonomous AI agents, Microsoft Partner Security Architect Abhilasha Bhargav-Spantzel and Senior Data & Applied Scientist Aditi Shah presented “AI Era Authentication: Securing the Future with Inclusive Identity” at RSAC 2025 on Monday.

The session went over the challenges of securing AI agent identities and discussed how flexible methods make secure authentication possible for agentic AI and human users with disabilities.

(For Complete Live RSAC 2025 Coverage by SC Media Visit SCWorld.com/RSAC)

Shah, who is blind, noted that “different identities can have different ways to authenticate,” such as how vision-impaired users may use an audio CAPTCHA rather than a visual CAPTCHA.

However, flexibility is key to ensuring that inclusive authentication maximizes both usability and security. A demonstration of an audio CAPTCHA illustrated how the prompt is difficult for a human user to memorize yet trivial for a modern speech-to-text AI model to solve, showing that more alternatives are needed to modernize inclusive systems.

3 challenges of AI agent identity

For AI agents to provide the level of convenience and productivity users expect, AI identities need to be granted a certain level of permission to authenticate and access data on behalf of the user. Otherwise, the agent would need to ask the user to manually authenticate at every turn, as demonstrated in a video included in the presentation.

This need for AI agent authentication raises several questions: Who is the AI agent? What can the AI agent do, and for how long? And how can AI identities be manipulated by attackers?

The question of “who is the agent” addresses the need to determine accountability for actions taken using the credentials of human users or service accounts. Organizations should be able to tell “when it is the user and when it is the agent” that authenticated, accessed data or took certain actions, Bhargav-Spantzel said.
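One way to make that distinction auditable, sketched below with hypothetical claim names loosely modeled on OAuth 2.0 token exchange (the speakers did not describe a specific implementation), is a delegated credential that records both the human subject and the agent acting on their behalf:

```python
# Sketch with assumed claim names (loosely modeled on OAuth 2.0 token
# exchange): a short-lived delegated token records both the human whose
# permissions are used ("sub") and the agent actually acting ("act"),
# so audit logs can tell when it was the user and when it was the agent.
import time

token_claims = {
    "sub": "user:alice",                 # whose permissions are being used
    "act": {"sub": "agent:travel-bot"},  # who is actually taking the action
    "scope": "calendar:read email:draft",
    "exp": int(time.time()) + 900,       # short-lived: expires in 15 minutes
}

def acting_party(claims: dict) -> str:
    """Return the agent actor if present, otherwise the user themself."""
    return claims.get("act", {}).get("sub", claims["sub"])

print(acting_party(token_claims))  # → agent:travel-bot
```

Logging the acting party alongside the subject gives organizations the per-action accountability Bhargav-Spantzel describes, without changing whose permissions are in effect.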

What an AI agent can do, and for how long, can be more complicated than it seems, the presenters noted, due to the possibility that AI agents may “collude” with one another. If two agents, each with limited access, are able to interact, they could exchange knowledge to escalate privileges or infer sensitive data, meaning traditional least-privilege access controls “are not going to cut it for autonomous AI systems,” Shah said.
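The collusion problem can be stated in a few lines. This is an illustrative sketch with hypothetical scope names, not anything presented in the session: each agent individually satisfies least privilege, but the combined system does not.

```python
# Illustrative sketch (hypothetical scope names): why per-agent least
# privilege can fail once agents share what they each retrieve.

agent_a_scopes = {"read:payroll_totals"}   # aggregate pay figures only
agent_b_scopes = {"read:employee_names"}   # employee names only

# If the two agents exchange their outputs, the effective scope of the
# combined system is the union of both grants...
combined = agent_a_scopes | agent_b_scopes

# ...letting them correlate data (e.g., names with pay figures) that
# neither agent was authorized to see on its own.
assert combined > agent_a_scopes and combined > agent_b_scopes
print(sorted(combined))  # → ['read:employee_names', 'read:payroll_totals']
```

This is why, as Shah argues, access policy for autonomous agents has to consider what sets of agents can infer together, not just what each one can read alone.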

Lastly, phishing, the most common attack vector, can carry over to AI agents in the form of prompt injections, which manipulate the agent into taking adverse actions much as social-engineering attacks manipulate a human. This necessitates including agentic identities in identity threat detection and response monitoring.

Rethinking authentication to include every user

The presenters see parallels in the need to securely authenticate AI agents and humans with unique needs. One example would be the use of an authentication system that relies on fingerprint biometrics.

“What about those people who do not have fingers? … Does that mean you are not able to authenticate?” Shah asked.

AI too, with its unique capabilities and limitations, needs alternative ways to authenticate that preserve its autonomy without “lowering the bar” of security.

“When we are inclusive, it helps everyone,” said Bhargav-Spantzel.

Shah noted that attackers are “equal opportunity disruptors,” and organizations must equally defend the identities of AI agents and humans with diverse abilities. A one-size-fits-all identity system does not work for every human user, nor does it work for every digital identity, including AI agents.

Therefore, Shah urges organizations to “give as many alternatives as you can” for secure authentication, as the method that provides the maximum usability and security may be different for each individual user.

Passive, continuous authentication methods such as behavior-based authentication were highlighted as especially accessible for both humans and agents. And while agents may be able to authenticate without human intervention, designing the AI to be transparent about the actions it is taking, and to make “human in the loop” consent checks, can ensure its actions align with the user’s intent and do not stray beyond the necessary permissions.
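A human-in-the-loop consent check of the kind described can be sketched as a simple gate, shown here with hypothetical scope names and a made-up `perform` helper; the session described the principle, not this code. The agent acts autonomously inside its granted scopes, and anything outside them is surfaced to the user instead of silently executed:

```python
# Minimal sketch (hypothetical names) of a "human in the loop" consent
# gate: in-scope actions run autonomously; out-of-scope actions are
# transparently escalated to the user for approval or blocked.

GRANTED_SCOPES = {"calendar:read", "email:draft"}

def perform(action: str, scope: str, confirm) -> str:
    """Run the action if in scope; otherwise ask the human to confirm."""
    if scope in GRANTED_SCOPES:
        return f"executed {action}"
    # Transparent escalation: the user sees exactly what the agent wants.
    if confirm(f"Agent requests '{action}' (scope: {scope}). Allow?"):
        return f"executed {action} (user approved)"
    return f"blocked {action}"

# Drafting is in scope and runs autonomously; sending is not, and with
# no approval given it is blocked rather than executed.
print(perform("draft reply", "email:draft", confirm=lambda q: False))
print(perform("send email", "email:send", confirm=lambda q: False))
```

The design choice the speakers emphasize is that the escalation prompt itself is the transparency mechanism: the user sees exactly which action and scope the agent is requesting before granting anything beyond the original permissions.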
