What security categories will stay relevant
They need a moat against AI
Disclaimer: Opinions expressed are solely my own and do not express the views or opinions of my employer or any other entities with which I am affiliated.

I’ve intentionally made all of my posts free and without a paywall so that my content is more accessible. If you enjoy my content and would like to support me, please consider buying a paid subscription:
I needed a brief hiatus from the talk about how AI is going to kill all software companies. Well… not really. There’s just been a lot going on, and I needed some time to breathe. I was hoping to come up with a topic that didn’t bring up AI or the impending structural changes in our industry. Sadly, AI will likely be a regular topic of this blog; given the popularity of the [un]prompted conference, I think that’s true everywhere. It’s simply the reality of where we are today.
I’m frankly not satisfied with most of the writing around this. A lot of the content on LinkedIn lacks the nuance that only exists in longer-form writing. Most of the people simplifying this haven’t spent enough time actually being security professionals, navigating the complexities of day-to-day operations where "turnkey" is a lie and "integration" is a four-letter word. This newsletter is meant to provide a practical evaluation of these shifts. Anyway, I digress.
The Catalyst: Claude Code Security
Where to start? It really all began when Anthropic released Claude Code Security about two weeks ago. This didn’t surprise me; I’ve been saying "AppSec is dead" since 2022. AI just accelerated the timeline.
What was surprising is that Anthropic, a foundational lab, chose to build this directly. For a third-party startup, building a reasoning-based scanner of this caliber would require a level of capital investment and computing resources that most simply can’t access.
Anthropic’s focus on a specialized coding application makes strategic sense. There is a growing belief (which I share) that LLM infrastructure and raw models will eventually become a commoditized, competitive business; the real value, and the "moat," will move to the application layer. That’s a longer discussion, but it’s an assumption I’ll carry throughout this post, and I believe it’s especially true for security. Most security companies have historically made their money at the application layer, even "infrastructure" players like Cloudflare.
Do I think a lot of current security companies will go away? Yes, if they don’t adapt or create a real moat. In the Cloud/SaaS age, the moat was the sheer effort of development. You needed a small army of engineers to build a product, and a competitor had to raise massive capital to catch up.
But that moat was always thin. Switching costs in security are surprisingly low if the new tool is better at "finding stuff." Most incumbents survived on GTM spend and FUD (Fear, Uncertainty, and Doubt). With AI, that barrier is gone. It is much easier to build a high-functioning product in a weekend now than it was in 2020.
Pramod Gosavi recently released a matrix on LinkedIn categorizing the risk levels for different security sectors. He argued that anything "hardware" or network-based (SASE, Endpoint, Firewalls) is safe, while workflows like IGA, ASPM, and SAST are essentially "dead" or at high risk of replacement by LLMs.
Here is the post describing it:
1) Anything "hardware" or sensor or agents or network is safe. This includes SASE, CWPP, Endpoint Agents, vuln/patching agents, firewall, data collection, etc.
2) Controls like Identity directory, Zero-Trust, PAM, CIEM, Data access governance are probably safe as well.
3) I think LLM based malware analysis will enhance antivirus. CSPM is mostly posture management and can be replaced with LLM based workflows. SAST is near dead with code assistants. Next will be SCA if LLMs can patch OSS similar to chainguard, etc. Supply chain/lineage can also be implemented with LLMs.
4) Workflows like IGA, ASPM, SOAR, IR can be replaced by LLMs.
5) Most detection and response will be enhanced with LLMs. You need some ML for faster, pointed detection for historic patterns.
6) In data security, classification and discovery can be done more elegantly without regex. With AI, DLP is a bigger problem across context and will need AI to solve it. Privacy workflows can be automated with LLMs as well.
7) Email/Human/Collaboration/Training: This will be a big area of attack with LLMs and need an AI first approach.
8) I struggle with exposure/vuln mgmt. LLMs are being used for remediation workflows and knowledge graphs can offer more context but still prioritizing what to fix remains an AGI problem. Pen testing becomes really cheap and should be used more often as defense than compliance.
I think Pramod is directionally correct, but there’s a lot more nuance here. In my opinion, a category is only “at risk” if one of two things happens:
1) The foundational labs choose to build it themselves.
2) Security teams choose to build it themselves with LLMs.
The lab risk: can they build everything?
There is always the risk that Anthropic, OpenAI, or Google will decide to eat the entire stack. They have the “brain” (the model) and the data. However, they can’t build every application and do it well. Security is, comparatively, a small market for a company with a $380 billion valuation.
Claude Code Security was a natural extension of a coding assistant. But I find it hard to believe these labs will push into traditional “gritty” security areas like Detection and Response (MDR/SOC) anytime soon. That requires a completely different GTM motion and a level of “boots-on-the-ground” service that labs aren’t built for.
That said, never say never. We used to ask why Google or Meta didn’t just build every startup idea. The answer was “focus.” There’s always some risk here, but I don’t think this is a worthwhile one to harp on.
The real risk: The “build” renaissance
The bigger, more subtle risk to the security industry is that teams will simply stop buying products.
Nothing would make me happier. Security needs to “build” again. We need more security generalists solving problems rather than “tool babysitters” triaging alerts from a dashboard they don’t control. The barrier to entry for building your own custom CSPM, automated remediation agent, or data classifier is now near zero.
If I can prompt Claude to write a custom script that audits my AWS IAM policies and auto-remediates over-privileged accounts, why am I paying an external vendor six figures for a tool that does the same thing (often with more “jank”)?
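To make that concrete, here’s a minimal sketch of the kind of script I mean, in Python with boto3. To be clear, this is an illustration, not a product: the function names are mine, the policy check is deliberately crude (it only flags `*` on `*`), and remediation is left as a human-reviewed step.

```python
# A sketch of the "weekend IAM auditor" idea above. Read-only on purpose:
# it flags customer-managed policies that allow "*" on "*" and leaves
# remediation behind a human review gate. Assumes AWS credentials are
# already configured (env vars, profile, or instance role).
import json
import urllib.parse

import boto3


def is_overly_permissive(document: dict) -> bool:
    """True if any statement allows all actions on all resources."""
    statements = document.get("Statement", [])
    if isinstance(statements, dict):  # single-statement policies
        statements = [statements]
    for stmt in statements:
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = [resources] if isinstance(resources, str) else resources
        if "*" in actions and "*" in resources:
            return True
    return False


def main() -> None:
    iam = boto3.client("iam")
    paginator = iam.get_paginator("list_policies")
    for page in paginator.paginate(Scope="Local"):  # customer-managed only
        for policy in page["Policies"]:
            version = iam.get_policy_version(
                PolicyArn=policy["Arn"],
                VersionId=policy["DefaultVersionId"],
            )
            document = version["PolicyVersion"]["Document"]
            if isinstance(document, str):  # older SDKs return URL-encoded JSON
                document = json.loads(urllib.parse.unquote(document))
            if is_overly_permissive(document):
                print(f"[FLAG] {policy['PolicyName']} allows *:* ({policy['Arn']})")
                # Auto-remediation (create_policy_version, detach_role_policy)
                # is exactly where I'd want a human in the loop, so it's
                # intentionally omitted here.


if __name__ == "__main__":
    main()
```

A real version would need to handle inline policies, wildcard prefixes like `s3:*`, and org-level context, but that’s exactly the point: each of those is another prompt, not another procurement cycle.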
The companies that are truly safe are the ones that have three things: data, infrastructure, and a network. Think of it like DoorDash. The app itself is easy to replicate with an LLM. But the driver network, the restaurant partnerships, and the customer feedback loop are almost impossible to recreate overnight.
In security, this is the "Cloudflare/CrowdStrike/Zscaler" moat. They have the network effects. They see threats across millions of endpoints and billions of packets in real time. An LLM might be able to analyze a threat, but it doesn’t have the "sensor" to see it first. I’d argue that detection and response (MDR) and email security are also safer than people think, because they rely on this cross-customer network effect to identify emerging patterns.
People often push back on the “build” renaissance by citing technical debt. They argue that if every security team builds its own tools, we’ll end up with a fragmented mess of unmaintained code.
I think this is an outdated way of looking at debt. In the past, "homegrown" tools died when the person who wrote them left the company. But with LLMs, the cost of understanding, maintaining, and refactoring legacy code has plummeted. We might be entering an era where technical debt is actually okay because the "interest rate" on fixing it is near zero. If an LLM can explain a 5-year-old script to a new hire in seconds, is it really "debt" anymore? That said, the long-term effects of this are yet to be seen.
The New Talent War
We are seeing a massive shift. Newcomers will inevitably disrupt IDPs (Identity Providers), data security, and compliance. The winners will be the ones who leverage AI-native operations to bridge the gap between “talent” and “scale.”
This is why there is such a fierce talent war right now. The gap between companies that can leverage AI properly and those that can’t is widening. We are going to see a new breed of companies (AI-enabled access management, LLM-driven data protection) that don’t just "have AI" as a feature but are built on AI as the engine.
It feels daunting. Security teams are being forced to deal with entirely new threat vectors (agentic exfiltration, prompt injection) while simultaneously being pressured to become more efficient by adopting the very same technology.
Claude Code Security is a warning shot. It’s not the end of cybersecurity, but it is the end of “security as a dashboard.” The future belongs to the builders, the ones who can leverage the models to build their own moats, rather than just buying someone else’s wrapper.