Will deception become cool again in the AI age?
How detection engineering and AI SOCs evolve
Disclaimer: Opinions expressed are solely my own and do not express the views or opinions of my employer or any other entities with which I am affiliated.

I’ve intentionally made all of my posts free and without a paywall so that my content is more accessible. If you enjoy my content and would like to support me, please consider buying a paid subscription:
I’ve spent the last few weeks writing about the “Efficiency Reckoning,” i.e., the reality that security teams can no longer scale by simply adding more people. As AI-driven development expands the threat surface at machine speed, we are reaching a breaking point. Human teams cannot write enough rules to stay ahead of nondeterministic threats.
This brings us back to a category that was once the “cool kid” of the security conference circuit: deception.
I hadn’t thought about deception in a while, not since the 2010s, until I had coffee with Andy Smith, founder of Tracebit, which is building a cool canary product that I’m eager to try out. Back in the 2010s, startups like Illusive Networks, Attivo Networks, and TrapX dominated the scene. They promised high-fidelity alerts that would end the era of "guessing." But while the promise was high, adoption hit a wall of operational friction. Research into the first generation of deception reveals "three deadly sins": scalability, maintenance, and integration friction. In short, it was an administrative headache. Keeping decoys "believable" in a changing environment was a full-time engineering task that most understaffed security teams couldn't afford.
But as we look toward 2026, we are seeing a "Strategic Rebirth" of deception. It’s moving from a niche "trap" to a core infrastructure primitive.
Redefining the “incident”
We’ve gotten too obsessed with the idea of “zero incidents.”
In general, the term “incident” is far too broad. A “zero incident” goal is actually counterproductive; it encourages teams to hide small security events and prevents the organization from building resilience. Much like in modern infrastructure or Site Reliability Engineering (SRE), we need to focus on learning from events rather than just suppressing them. I’ve written about this in the past.
Most major breaches are not sudden explosions; they are an escalation of a small, undetected “lurking” phase. Attackers, or malicious AI agents, will snoop around, exploring the environment to figure out what security systems are in place without actually triggering a high-level alarm. They are looking for the boundaries.
If your only strategy is prevention, you are blind to this lurking phase. But if you have a deceptive infrastructure, this snooping becomes your best source of intelligence. It is okay to have an “incident” if the blast radius is small and the detection is immediate. In fact, catching an attacker in a decoy provides better training data for your defense than a blocked connection ever could.
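To make that concrete, here is a minimal sketch of a honeytoken check, assuming you plant fake credentials (an unused "backup_admin" account, say) in the places an intruder snoops. The names CANARY_USERS and on_auth_attempt are illustrative, not any vendor’s API; the point is that a single touch of a decoy is an unambiguous, immediately actionable event.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)

# Credentials that no legitimate user or service should ever present.
CANARY_USERS = {"backup_admin", "svc_legacy_sync", "terraform_ci_old"}

def on_auth_attempt(username: str, source_ip: str) -> None:
    """Called from the auth path. A canary hit is malicious by definition,
    so it skips triage entirely and pages immediately."""
    if username in CANARY_USERS:
        event = {
            "severity": "critical",
            "kind": "canary_credential_use",
            "username": username,
            "source_ip": source_ip,
            "observed_at": datetime.now(timezone.utc).isoformat(),
        }
        # In production this would page on-call and snapshot the session
        # for analysis; here we just emit the structured event.
        logging.critical(json.dumps(event))

on_auth_attempt("backup_admin", "203.0.113.7")  # attacker probing -> alert
on_auth_attempt("alice", "198.51.100.2")        # normal login -> silence
```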
The deterrence model: law enforcement for the AI era
Historically, we built security like a high-security prison: fixed gates and high walls. But AI agents navigate “latent space” — they don’t use gates. In this nondeterministic world, being “reactive” is actually the only way to move fast.
Think of it like modern law enforcement. The police don’t stand at every corner to prevent every crime; they create a system of deterrence. They make the likelihood of being caught so high that the “crime” isn’t worthwhile. Deception is the deterrent that makes exploitation too expensive and too noisy for the attacker.
We need to move from a “Mean Time to Detect” (MTTD) metric to a “Mean Time to Deterrence.” The goal isn’t just to see the attacker; it’s to make their mission impossible by polluting their reconnaissance with fake data.
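As a toy illustration of polluting reconnaissance, the sketch below generates plausible but fake internal hostnames pointing at monitored sinkhole addresses. The naming scheme and the corp.internal domain are invented for this example; in practice these records would be seeded into internal DNS so that an attacker’s network map is mostly tripwires.

```python
import itertools
import random

random.seed(7)  # deterministic output for the example

ROLES = ["db", "vault", "jenkins", "backup", "k8s-master"]
ENVS = ["prod", "staging", "dr"]

def fake_host_records(count: int, domain: str = "corp.internal") -> list[tuple[str, str]]:
    """Return (hostname, decoy_ip) pairs that point at monitored sinkholes."""
    combos = list(itertools.product(ROLES, ENVS))
    records = []
    for i in range(count):
        role, env = random.choice(combos)
        hostname = f"{role}-{env}-{i:02d}.{domain}"
        decoy_ip = f"10.99.{random.randint(0, 255)}.{random.randint(1, 254)}"
        records.append((hostname, decoy_ip))
    return records

for host, ip in fake_host_records(5):
    print(f"{host} -> {ip}")  # feed these into internal DNS as tripwires
```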
Why AI SOCs are a faster horse (and why they fail)
I’ve talked in the past about why “AI SOCs” are not the future. The current trend is fundamentally a “Copilot for a broken process.” These companies take the legacy SOC workflow (collecting millions of logs, generating thousands of alerts, and using an LLM to help a human triage them) and simply try to make it 20% faster.
But we don’t need a faster horse; we need a fundamental change in how “work” is done. The AI SOC model fails because of three primary flaws:
Garbage In, Garbage Out: They rely on traditional logs (SIEM, EDR) that are inherently noisy. Feeding noise into an LLM just results in “hallucinated triage.”
The Multiplier Effect: Using AI to write more rules just generates more alerts. You haven’t fixed the “tool babysitter” problem; you’ve just given the babysitter more children to watch.
Lack of Intent: Traditional detection looks for patterns. Deception looks for intent.
Interaction with a decoy is a high-fidelity signal that bypasses the need for the complex, “janky” triage layers that AI SOCs are trying to automate.
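Here is a small sketch of what that means in routing terms, with an invented Alert type: conventional alerts carry a confidence score and wait in a triage queue, while a decoy interaction has no benign explanation and goes straight to response.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str        # "siem" or "decoy"
    description: str
    confidence: float  # 0.0 to 1.0; meaningful only for SIEM-style alerts

def route(alert: Alert) -> str:
    if alert.source == "decoy":
        # No legitimate workload ever touches a decoy: act immediately.
        return "page_oncall_and_isolate"
    if alert.confidence >= 0.9:
        return "triage_queue_high"
    return "triage_queue_low"  # the noise an AI SOC tries to summarize

print(route(Alert("siem", "odd PowerShell flags", 0.42)))   # triage_queue_low
print(route(Alert("decoy", "read of honey API key", 1.0)))  # page_oncall_and_isolate
```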
The proper evolution: The Detection-as-Code engine
Instead of an AI SOC, the proper evolution is a deception-led detection engine. This moves the “work” from monitoring to engineering.
In the legacy world, tools like Google Chronicle have been criticized for steep learning curves and proprietary languages like YARA-L. If it’s “painful” to write a rule, a team won’t have the bandwidth to manage a complex deception web. AI finally eliminates this friction by automating the tuning loop.
Dynamic Decoy Generation: LLMs can now generate “Honey-Logic,” e.g., API keys, database columns, and file systems, directly into your CI/CD pipeline.
Autonomous Iteration: When an attacker touches a lure, the system doesn’t just alert; it iterates. It automatically writes and tests a new detection rule to block that specific behavior across the production fleet, as sketched below.
The “Security PM” Role: The security engineer moves from being a “tool babysitter” to a product manager for risk. They design the threat scenarios and audit the AI’s rule-writing to ensure coverage.
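Here’s a hedged sketch of what that loop might look like in a detection-as-code repo, where rules live as files under version control. generate_rule() stands in for an LLM call (any provider) and run_tests() for your CI harness; none of these names come from a real product.

```python
from pathlib import Path

RULES_DIR = Path("detections/rules")

def generate_rule(decoy_event: dict) -> str:
    """Stand-in for an LLM call that drafts a Sigma-style rule from the
    observed decoy interaction."""
    return (
        f"title: Access to decoy {decoy_event['resource']}\n"
        "detection:\n"
        "  selection:\n"
        f"    resource: '{decoy_event['resource']}'\n"
        f"    source_ip: '{decoy_event['source_ip']}'\n"
        "  condition: selection\n"
        "level: critical\n"
    )

def run_tests() -> bool:
    # Stand-in: in practice, run your CI harness (e.g., replay decoy
    # traffic against the new rule) before anything ships.
    return True

def on_decoy_hit(decoy_event: dict) -> None:
    rule = generate_rule(decoy_event)
    RULES_DIR.mkdir(parents=True, exist_ok=True)
    path = RULES_DIR / f"decoy_{decoy_event['resource']}.yml"
    path.write_text(rule)
    if run_tests():
        print(f"{path} ready for human review and fleet-wide deploy")
    else:
        path.unlink()  # never ship an untested rule
        print("generated rule failed tests; escalating to a human")

on_decoy_hit({"resource": "honey_s3_bucket", "source_ip": "203.0.113.7"})
```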
This is the “managed detection” that could finally beat incumbents like Expel. It’s not about having better analysts; it’s about having a better deterrence infrastructure.
This is particularly relevant for players like Google and Wiz. By integrating deception with frontline intelligence from Mandiant, they can use actual APT tactics to automate the creation of deceptive scenarios. However, to truly win, Google still needs to bridge the “prominence gap” of its SIEM to make it the foundational infrastructure for this automated deterrence.
The SRE parallel: Redefining risk in the AI era
The biggest hurdle to this shift isn’t the technology. It’s the philosophy. We have to accept that events and incidents will happen.
This is very similar to how we think about infrastructure incidents in Site Reliability Engineering (SRE). We don’t aim for zero downtime; we aim for an “error budget” and a culture of blameless post-mortems. In the AI era, your deception budget is your error budget. Every time a decoy is hit, you’ve bought yourself the intelligence needed to secure the rest of your infrastructure.
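If you want to make that analogy operational, here is a loose sketch, with made-up numbers, of tracking decoy engagements the way SRE tracks an error budget.

```python
from dataclasses import dataclass, field

@dataclass
class DeceptionBudget:
    quarterly_budget: int  # contained decoy engagements we plan for
    hits: list[str] = field(default_factory=list)

    def record_hit(self, summary: str) -> None:
        self.hits.append(summary)  # each hit feeds a blameless review

    @property
    def remaining(self) -> int:
        return self.quarterly_budget - len(self.hits)

    def status(self) -> str:
        if self.remaining < 0:
            # Overspent: lurking activity exceeds plan; revisit prevention.
            return "over budget: revisit prevention, not just detection"
        return f"{len(self.hits)} hits recorded, {self.remaining} budget left"

budget = DeceptionBudget(quarterly_budget=10)
budget.record_hit("decoy IAM key read from a CI runner")
print(budget.status())  # -> 1 hits recorded, 9 budget left
```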
The job of the security team that understands AI is to redefine how we think about risk. If you allow small “lurking” events to happen in a controlled sandbox, your AI models can learn the attacker’s intent and strengthen production guardrails.
In this vision of an automated loop, where does the human sit? They are no longer triaging. They are designing. The human decides the “capture” logic based on the risk the company wants to tolerate. Is it an automated account lockout? Or a deceptive “rabbit hole” that keeps the attacker busy while the team investigates?
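A minimal sketch of that design decision as code the security team owns, with an invented RESPONSES policy table: the loop runs unattended, but the risk tolerance per decoy is set by a human.

```python
from enum import Enum

class Capture(Enum):
    LOCKOUT = "lockout"          # cut access immediately; smallest blast radius
    RABBIT_HOLE = "rabbit_hole"  # route into a slow, instrumented decoy env

# Risk tolerance per decoy is a human, strategic decision.
RESPONSES: dict[str, Capture] = {
    "prod_payment_db_decoy": Capture.LOCKOUT,   # zero tolerance near crown jewels
    "staging_wiki_decoy": Capture.RABBIT_HOLE,  # buy time to observe TTPs
}

def on_capture(decoy: str, session_id: str) -> str:
    action = RESPONSES.get(decoy, Capture.LOCKOUT)  # default to containment
    if action is Capture.LOCKOUT:
        return f"terminate session {session_id} and rotate nearby credentials"
    return f"redirect session {session_id} into the instrumented decoy network"

print(on_capture("staging_wiki_decoy", "sess-42"))
```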
This is where the human moves from “operations” to “strategy,” designing the nightmare scenarios that the AI uses to harden the system.
A call to builders
The “Software Apocalypse” is a reckoning for point products, but it is an opportunity for those who understand developer-first security.
If you are a security engineer, stop buying “turnkey” tools that you have to spend months learning. Start building your own deterrents using available LLMs. The goal is a parallel, thriving security business, similar to the one Microsoft built after it committed to the cloud, but optimized for the builder-centric, agentic world.
Deception isn’t a gimmick anymore; it’s the only way we scale. We are moving toward a world where the highest ROI security move isn’t building a better wall, but building a better mirror.