AI-enabled product security (part 1)
My initial thoughts on the space
Disclaimer: Opinions expressed are solely my own and do not express the views or opinions of my employer or any other entities with which I am affiliated.
I’ve intentionally made all of my posts free and without a paywall so that my content is more accessible. If you enjoy my content and would like to support me, please consider buying a paid subscription:
Last week, I wrote about how AI is breaking security categories and how current research firms fail to provide guidance that's useful for modern companies. The first category I want to dive into is one I'm calling AI-enabled product security. Yes, I just made up that name, but I'll explain why it's a necessary distinction.
I want to preface this by saying this is the first analysis of its kind for this newsletter. I don’t think this even qualifies as a research report or a deep dive, given that it’s somewhat high-level. The format might change as I figure out how to write something substantive with the limited time I have. A long report isn’t practical, but I do want to make sure my thoughts are consumable.
So, I'm going to start by defining the space and the market in this post. I think a newsletter that defines the space and what makes a compelling product is its own substantive piece of work. Readers will get a feel for the format as it develops.
What is AI-enabled product security?
This is a timely topic given the release of Project Glasswing from Anthropic. It’s a move that is redefining a core part of application and product security: finding and remediating vulnerabilities.
Product security has gone through some complicated changes lately. These days, it's been reduced to mostly operational work, e.g., bug bounties, scanning, and security reviews. But it's actually much more than that. It's about finding issues in application business logic and helping with secure design. Much of that is being lost because there's so much operational noise. Given the velocity of engineering today, there's no time for deep work. The bigger problem is that vulnerability scanning tools like Snyk and Semgrep are actually making this harder by generating alerts that leave security teams stuck negotiating priorities with engineering. Aren't tools supposed to reduce work?
This is where AI-enabled product security comes in. These are tools that handle the operational load by automating triage and coordination. In the past, to meet engineering velocity, you had to hire more people. But with engineering teams accelerated by AI, hiring more people isn't feasible or scalable. You have to combat AI velocity with AI velocity.
I didn’t think things would change this fast. Foundational models like Claude are outpacing the specialized application security companies. About 1.5 years ago, I thought companies like Semgrep could use AI to help with remediation. I didn’t realize Claude would be able to do this so quickly and effectively on its own.
Segmenting the customer: Who is this for?
Working at an AI company has given me a lot of perspective on how nuanced this market is. To understand what these products should look like, we need to segment the customer personas. This is an oversimplification, but it’s a good starting point.
I see three main customer types:
The AI-native company: They are heavy users of AI and have successfully integrated LLMs into most of their workflows without outside tooling. They aren’t likely to buy these products because they’ll just build them themselves using Claude or OpenAI.
The legacy company: They are slow to change. They haven't figured out their AI strategy and will likely stick with traditional appsec companies that have "AI features." This is a shrinking and increasingly competitive market.
The AI-progressive company: This is the most interesting group. They want to use AI and have a strategy, but they haven’t figured out the execution yet.
I believe the AI-progressive companies are the real growing market. Their engineering teams are actively adopting AI and increasing feature velocity, which puts immense pressure on product security. These teams aren't going to undergo a massive organizational change overnight; they are looking to AI as an augmentor. This could also include smaller companies that don't have the resources to hire a dedicated security engineer, so software engineers are forced to do most of the security work themselves.
Two paths for successful products
There are two types of AI-enabled products that will be useful here.
The first is the "service-as-software" model that Sequoia describes as the future of AI software. This is software that helps companies use Claude more effectively by providing data and abstraction layers. I believe companies like Snyk or Endor Labs can pivot into this as Claude takes over their core "scanning" business. These companies have insight into private codebases that aren't available to Claude. They can apply reinforcement learning or fine-tune models to help companies become more AI-enabled. I know of some services helping with Claude Code usage, but almost no one is doing it at scale to improve product security specifically. It's hard to do without business or code context.
The second type focuses on operational velocity. These tools improve processes around threat modeling and security reviews. They do two things: they streamline the process to obviate the need for a manager to handle triage, and they provide context so a security engineer can do more reviews faster. They do a “first pass” to detect serious vulnerabilities before they are deployed.
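To make the "first pass" idea concrete, here is a minimal sketch of what automated triage might look like. Everything here is an assumption for illustration: the alert fields (`severity`, `reachable`, `exploit_available`), the scoring weights, and the threshold are all hypothetical, not any vendor's actual schema or logic. A real tool would enrich alerts with code and business context, likely via an LLM, before anything reaches a human.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    """A hypothetical scanner finding; field names are illustrative only."""
    id: str
    severity: str          # "critical" | "high" | "medium" | "low"
    reachable: bool        # is the vulnerable code path actually invoked?
    exploit_available: bool

# Made-up weights for the sketch.
SEVERITY_SCORE = {"critical": 4, "high": 3, "medium": 2, "low": 1}

def first_pass(alerts: list[Alert], threshold: int = 5) -> tuple[list[Alert], list[Alert]]:
    """Split alerts into 'needs a human now' vs. 'auto-deprioritized'.

    Score = severity weight, doubled if the vulnerable code is reachable,
    plus 2 if a public exploit exists. The threshold is arbitrary here.
    """
    escalate, backlog = [], []
    for alert in alerts:
        score = SEVERITY_SCORE[alert.severity]
        if alert.reachable:
            score *= 2
        if alert.exploit_available:
            score += 2
        (escalate if score >= threshold else backlog).append(alert)
    return escalate, backlog
```

The point is not the scoring heuristic, which an LLM with code context would replace, but the workflow: the tool decides what a security engineer sees first, so the engineer spends time on actual review instead of queue management.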
There are many new companies here: Clover Security, Clearly AI, DevArmor, Prime Security, Seezo, and IriusRisk. Their marketing all sounds similar right now, which is actually clever because it allows organizations to buy into the promise of being AI-progressive.
Why this is good for the industry
What I find most exciting about this shift is that it democratizes talent. Historically, elite security expertise was concentrated in a few top-tier firms or specialized research teams. If you didn’t have the budget to hire a “rockstar” team, your product security suffered.
AI-enabled product security changes that. By spreading expertise through software, we are effectively decentralizing talent. A software engineer at a 10-person startup can now access the same level of threat modeling and secure design guidance that was once reserved for big tech companies. This leads to fewer vulnerabilities across the entire ecosystem over time. It’s not about replacing humans; it’s about making sure that high-level security knowledge is available to everyone, regardless of their headcount.
The roadmap to stickiness
The key for these startups, especially the ones in the second category focused on operational velocity, is to guide customers on their AI journey. This can't be the final product; it's too easy to churn once a team learns to use Claude Code and OpenAI Codex directly. To stay sticky, these companies must become the ramp that allows AI-progressive companies to look like AI-native ones. They provide security services enabled by AI, using the data they gather to tailor solutions to a company's specific product security needs.
The era of winning through GTM spend is over. Product superiority is everything now. In my next post, I’ll talk about my thought process as I lightly evaluate a few of these products. I plan to watch demo videos or sit through 15-minute demos with companies that reach out.
The key is helping a company guide their own unique journey to becoming AI-native.