Security technologist Bruce Schneier tackled some of cybersecurity’s toughest questions in a candid AMA-style session Thursday. In it he covered AI, NSA surveillance, cryptography and broader societal threats beyond typical cyber concerns.
Responding to “rage fatigue” tied to a drumbeat of recent bad cybersecurity news, from the gutting of CISA and the recent CVE-MITRE scares to the DOJ’s targeting of Chris Krebs, Schneier said:
“We're at the receiving end of a strategy that is deliberately designed to be overwhelming and exhausting. All we can do is help where we can, and stay confident that others are doing the pieces we can't do."
The threat of tariffs on computer components and semiconductors, along with potential restrictions on software exports, is also taking a cyber toll, Schneier said. "The world's economies are deeply, inexorably international, and the current trade war is going to disrupt things in ways we are not anticipating. And it's not going to be good."
The AMA, an RSAC Community Event, was moderated by Cecilia Marinier, Vice President of Innovation and Scholars at RSA Conference. Schneier will speak later this month at the RSAC 2025 cybersecurity conference, delivering a keynote titled AI, Security, and Trust.
At times philosophical, at others blisteringly direct, Schneier shed light on a cybersecurity landscape fraught not only with technical vulnerabilities but also, as he sees it, with deep moral fissures.
Cybersecurity: A ‘rounding error’ amid existential threats
When asked directly about the most pressing digital threats, be it AI misuse or quantum computing, Schneier quipped: "I generally hate ranking threats, but if I had to pick candidates for 'biggest,' it would be one of these: income inequality, late-stage capitalism, or climate change," he wrote. "Compared to those, cybersecurity is a rounding error."
Cryptography’s moral imperative
Schneier downplayed fears around quantum computing. "The engineering challenges are nowhere near solved. And two, the math is way ahead of the physics—we are slowly getting quantum-resistant cryptography algorithms," he explained.
Asked whether cryptography should formalize ethics as rigorously as it does security, he cited Phil Rogaway’s seminal paper, The Moral Character of Cryptographic Work (PDF), arguing that cryptographers can no longer afford to be apolitical technicians. Ethics, he implied, must be engineered.
“I’m intrigued by the idea that multiagent systems (networks of autonomous software agents that interact and make decisions) could be designed to embed certain values,” Schneier said. “It’s not something I’ve thought about much, but it makes sense. And it ties into a broader frustration I have with how people talk about bias in AI… We like some biases. Maybe a bias towards fairness, or justice, or kindness. When we like biases, we call them values. And, yes, we are going to want that.”
AI snake oil and the marketing mirage
Discussing artificial intelligence, Schneier anticipated that SOC analysts and incident responders will use AI as a vital support tool.
"What I expect to happen is for both SOC analysts and incident responders to have a bevy of AI tools at their disposal, and that those tools will make the humans faster and more effective," Schneier wrote.
At the same time, he cautioned about widespread "AI snake oil," noting bluntly that many vendors push AI solutions that are mostly "marketing bulls#!t."
“I think that AI-assisted incident response is more likely to be on the real side,” he said.
Security theater and the psychology of safety
He also reaffirmed the persistent relevance of his well-known critique, "security theater," pointing to the Transportation Security Administration's ongoing airport screening practices as emblematic of measures that give the public a false sense of security. Schneier wrote, "The difference between the feeling of security and the reality of (cybersecurity) will continue to be a pervasive problem in our industry."
It’s an ongoing tension for CISOs, many of whom face boardroom pressure to deliver visible security rather than effective security. Schneier’s advice, implicit throughout the conversation, is to resist that pressure, or at least be aware of its consequences.
NSA Post-Snowden: A decade without reform
Asked directly about NSA reforms post-Snowden, Schneier was skeptical, responding: "Well, they haven't had any leaks of any magnitude since then, so hopefully they did learn something about OPSEC. But near as we can tell, nothing substantive has been reformed."
Schneier further clarified, "We should assume that the NSA has developed far more extensive surveillance technology since then," stressing the importance of vigilance.
He touched on the fusion of AI and democracy, a theme of his upcoming book Rewiring Democracy, noting that he didn't "think that AI as a technology will change how different types of government will operate. It's more that different types of governments will shape AI."
He is pessimistic that countries will harness AI's power to do good and improve quality of life.
"It would be fantastic if governments prioritized these things," he said. "[This] seems unrealistic in a world where countries are imagining some sort of AI 'arms race' and where monopolistic corporations are controlling the technologies. To me, that speaks to the solutions: international cooperation and breaking the tech monopolies. And, yes, those are two things that are not going to happen."
The ethics of healthcare data and the illusion of consent
As the Internet of Things (IoT) weaves itself into everything from heart monitors to insulin pumps, Schneier warned of privacy and integrity risks in healthcare. But the dilemma, he emphasized, isn’t just about safeguarding individual data — it’s about collective ethics.
“We want the data to be private, and we want it to be used collectively,” he said. “Navigating this will be a continual challenge, even more so as AI healthcare systems become prevalent.”
His solution? Not better design or voluntary compliance, but legislation — the one lever, Schneier noted with a hint of resignation, that the market systematically resists. “The market doesn’t reward companies that respect ‘user autonomy and dignity,’” he noted. “If we want those things, we need to compel companies to provide them.”
Rage fatigue in the face of constant chaos
Schneier returned to geopolitical risks and how Trump administration tariffs might ripple through cybersecurity and the broader IT ecosystem, extending his earlier warning about the trade war: "And when it gets better, it won't be in a world where the US is the dominant superpower."
That bleak prognosis came with a sliver of hope. Schneier's final takeaway was the growing awareness of these complex issues within society and the cybersecurity community. By openly addressing challenges and advocating for proactive, informed engagement, Schneier expressed confidence that collective action and intelligent solutions can ultimately enhance resilience and security.
Schneier was hopeful that the development of multiple AI models will lead to a more diverse and distributed AI landscape, something already taking shape as the EU pushes for sovereignty in AI technology.
"I am optimistic about DeepSeek's demonstration that you don't need hundreds of millions of dollars to create a foundation model," he wrote, adding that clever hardware optimizations and collaborative model training could foster robust competition. "Monopolization is the major enemy here," Schneier said. "Robust market competition is the solution."
Bruce Schneier will explore these critical themes further at the upcoming RSA Conference, discussing AI, trust, and data integrity.