AI Security Hype vs Reality: The Year Attackers Got Smarter
For years, AI in cybersecurity has been more theater than substance. Dashboards that promised “self-learning defense.” Vendors who sold Bayesian math as “machine intelligence.” Slide decks louder than their code.
Then came November 30, 2022: ChatGPT launched, language models learned to think in sentences, and the security world quietly crossed a line it still hasn’t processed.
On the latest episode of the Wodan AI Podcast, Richard Stiennon, founder of IT Harvest and one of cybersecurity’s longest-serving analysts, sat down with Bob Dubois, CEO of Wodan AI, to talk about what changed, who’s losing ground, and how fast the next phase is moving.
The End of “Good Enough” Security
For Stiennon, the biggest story isn’t that AI entered cybersecurity. It’s that AI is now good enough to make “good enough” dangerous.
“Before large language models, vendors were faking it,” he says. “They hired statisticians, not data scientists. Most of what they called AI was glorified anomaly detection.”
That pretense collapsed once models could parse machine-generated data, interpret logs, and summarize incidents faster than any analyst. Suddenly, triage—the unglamorous backbone of every Security Operations Center (SOC)—could be automated end to end.
The result: systems that don’t just analyze alerts but act on them, isolating infected hosts or resetting credentials in seconds.
And that, Stiennon warns, changes the calculus.
“You can look at every single log now. You can do something about it. For a human, that’s burnout in 45 minutes. For an AI agent, it’s continuous.”
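The triage loop Stiennon describes can be sketched as an agent that reads every alert and acts on it. This is a minimal illustration, not any specific vendor’s platform; the alert fields, thresholds, and action names are all assumptions:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    severity: int   # 1 (informational) .. 5 (critical) -- illustrative scale
    indicator: str  # e.g. a suspicious process name or file hash

def triage(alert: Alert) -> str:
    """Minimal sketch of an automated triage decision.

    Thresholds and response actions here are hypothetical, chosen only
    to show the shape of an alert-to-action pipeline.
    """
    if alert.severity >= 4:
        return f"isolate_host:{alert.host}"        # contain the infected host
    if alert.severity >= 2:
        return f"reset_credentials:{alert.host}"   # limit lateral movement
    return "log_and_continue"                      # an agent can afford to read every one

# Unlike a human analyst, the loop never tires: every alert gets looked at.
queue = [Alert("web-01", 5, "mimikatz.exe"), Alert("db-02", 1, "cron spike")]
actions = [triage(a) for a in queue]
```

The point of the sketch is the last two lines: the agent processes the full queue, which is exactly the continuity Stiennon contrasts with human burnout.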
Attackers Didn’t Wait
The defenders weren’t the only ones watching.
AI has also become the great equalizer for attackers. Open-source models, stripped of ethical filters, can now build and execute full attack chains—from reading CVE feeds to generating live exploits.
“You can already drop a CVE into an LLM and get an exploit in minutes,” says Stiennon. “That collapses the time from disclosure to weaponization. If the exploit is the fuse, the bomb is already built.”
The implication is brutal: while companies still debate compliance checklists, attackers are automating reconnaissance, lateral movement, and data exfiltration. Entire intrusions can now unfold faster than a SOC shift change.
“Mean time to breach used to be months,” he says. “Now, it’s minutes.”
SOC Automation or Extinction
Stiennon’s advice for CISOs is blunt.
“Engage one of the 39 SOC automation platforms right now. Because next year, you’ll replace your SOC entirely with automation.”
By mid-2026, he predicts, manual SOCs will be legacy infrastructure.
The future isn’t just about alert reduction—it’s about removing the human bottleneck entirely. Teams that adapt will reassign their analysts to higher-order tasks like vulnerability management or DLP redesign. Those that don’t will drown in false positives until something slips through.
The Analyst’s View: Innovation vs Credibility
The conversation turns reflective when Dubois asks how large enterprises should navigate the explosion of AI-labeled vendors.
“Big analyst firms cover about 3% of the industry,” Stiennon notes. “I track over 4,300 vendors. Gartner lists 134. So the innovation pipeline is invisible to most buyers.”
That blind spot, he argues, isn’t neutral—it’s dangerous. Enterprise procurement rewards size over novelty, forcing CISOs to buy “safe” rather than “smart.” And $30,000 pay-to-play gatekeeping keeps younger, better technologies off the radar.
“Every CISO says they want innovation,” he says, “but their RFPs are written to exclude it.”
Data in Use: The Blind Spot No One Talks About
If AI is rewriting defense, encryption remains the part still written in invisible ink.
“Companies think SSL saves them,” Stiennon says. “They see the lock icon and assume their data is safe. But SSL ends at the server. After that, it’s naked.”
In other words, most organizations encrypt data in motion and at rest—but not in use, the very moment it’s most vulnerable.
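The gap is easy to see in code: with conventional processing, even well-protected data must be decrypted the moment anything computes on it. In the toy sketch below, a XOR “cipher” stands in for real encryption (it is not secure) purely to mark the state transitions:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy XOR cipher for illustration only -- NOT real encryption.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

record = b"patient_id=4711;diagnosis=flu"
key = b"demo-key"

ciphertext = xor_cipher(record, key)   # protected in motion (TLS) and at rest (disk)
in_use = xor_cipher(ciphertext, key)   # decrypted in memory so it can be computed on

assert in_use == record  # the plaintext moment of exposure Stiennon describes
```

Encrypted-in-use computation aims to remove that last step, so the data never appears as plaintext even while it is being analyzed.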
For Wodan AI, that’s where the industry must move next: keeping data encrypted while it’s being analyzed or computed on.
“Zero Trust means you hold the keys,” Stiennon adds. “No one touches your data unless you say so. That’s the real definition.”
Guardrails, DLP, and the New Arms Race
The market’s response to AI risk is already visible.
What began as “AI guardrails” has evolved into AI-powered DLP, designed to stop sensitive data from slipping into models or training pipelines.
It’s also becoming lucrative. Stiennon’s tracking shows 173 AI security startups and 12 acquisitions in the past year—representing $2.8 billion in returns on $2.5 billion invested.
The math tells its own story: security may be late to AI, but capital is catching up.
The Final Metric: Speed
Asked which metric every CISO should add to their dashboard tomorrow, Stiennon answers instantly.
“Mean Time to Detect. And most don’t even know it.”
He believes that as automation compresses attack timelines, detection speed becomes the defining indicator of survival.
“Attackers have minutes. You need seconds,” he says. “That’s where we’re headed.”
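Mean Time to Detect is straightforward to put on a dashboard once compromise and detection timestamps are recorded per incident. A minimal sketch, with made-up sample data:

```python
from datetime import datetime

# (time of initial compromise, time the SOC detected it) -- illustrative data
incidents = [
    (datetime(2024, 3, 1, 9, 0),   datetime(2024, 3, 1, 9, 12)),   # 12 minutes
    (datetime(2024, 3, 5, 14, 30), datetime(2024, 3, 5, 14, 33)),  # 3 minutes
]

def mean_time_to_detect_minutes(pairs) -> float:
    """Average gap between compromise and detection, in minutes."""
    gaps = [(detected - compromised).total_seconds() / 60
            for compromised, detected in pairs]
    return sum(gaps) / len(gaps)

mttd = mean_time_to_detect_minutes(incidents)  # 7.5 for the sample above
```

The hard part in practice is not the arithmetic but establishing the compromise timestamp, which usually only emerges from incident forensics.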
Beyond the Hype
For all the talk of hype, Stiennon ends with optimism. The tools are finally good enough to change the equation—if leaders move now.
The era of good enough is over. The new question isn’t whether AI belongs in security. It’s whether security can exist without it.
Listen to the full episode:
AI Security Hype vs Reality | Wodan AI Podcast
About the Guest:
Richard Stiennon is the Founder and Chief Research Analyst at IT Harvest and author of Security Yearbook.
About the Host:
Bob Dubois is the CEO of Wodan AI, enabling privacy-preserving computation through encrypted-in-use technology for data-driven industries.

