How the AI security space will evolve
LLMs are the reason that most AI security companies won't last
Disclaimer: Opinions expressed are solely my own and do not express the views or opinions of my employer or any other entities with which I am affiliated.

We’re hiring for several roles at Headway in our Trust organization, led by the fearless Susan Chiang. Specifically, we’re hiring a software engineering manager, a product manager, and a product security engineer.
This post is coming a bit late, and it might be considered a 1.5 post. I was at our company hackathon last week, where I finally had some free time to try out many cutting-edge AI tools, such as Anthropic's Claude. I also saw AI developer tools like Cursor and Copilot in action. I have to say, it's a different feeling being hands-on than watching a demo. This is what I wanted to write about. Initially, I was confused by the range of AI security startups, and then I realized that many security people probably haven't tried these tools directly or actually developed in or with them. That lack of hands-on experience might be driving a lot of the noise in AI security.
As many of you know, I’ve been writing about the intersection of security and AI.
It goes without saying that there’s a lot of activity, and it’s frankly hard to keep up. I don’t blame anyone, especially in security. There’s already enough security activity, and now there’s AI activity on top of it.
I wrote that a lot is unknown about AI security, but in this newsletter, I want to revise that statement — a lot of the future market is unknown. That seems like a somewhat obvious statement. Of course, we don’t know what the future markets will be like! If we did, then we would not need experts to predict the market or spend time betting on the market. However, what I mean is that the AI security market is more unpredictable compared to other security markets. We don’t know how AI will mature and how companies will adopt maturing AI tools. In fact, if DeepSeek’s achievement proved anything, it’s that we can’t predict AI’s innovation progress. No one guessed that we could come up with AI models so cheaply so quickly. This likely means that AI models will improve faster than we imagined.
Now that I've stated some probably obvious facts, what does this mean for AI security? (Other than making it harder to build a startup.)
What is the state of AI usage?
There are different types of AI models, and they are used differently. Large language models (LLMs), such as those from Anthropic, DeepSeek, and OpenAI, are prompt-based. This means you enter a prompt to give the LLM "instructions" along with a corresponding message, and it outputs a result. It's pretty straightforward, and it's how most people use AI today. You need very little, if any, machine learning or AI background: you just send the prompt and message to Anthropic or OpenAI and wait for a response based on the "instructions" in the prompt.
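To make the "prompt plus message" workflow concrete, here's a minimal sketch of what a prompt-based API request looks like. It assumes the chat-completions message format used by providers like OpenAI; the model name and the helper function are illustrative, and no network call is made.

```python
# Sketch of a prompt-based LLM request: the "instructions" go in a system
# message, the content to act on goes in a user message. The model name is
# illustrative; build_request is a hypothetical helper, not a provider API.

def build_request(instructions: str, message: str) -> dict:
    """Assemble the payload a chat-completions style API expects."""
    return {
        "model": "gpt-4o-mini",  # placeholder model name
        "messages": [
            {"role": "system", "content": instructions},
            {"role": "user", "content": message},
        ],
    }

payload = build_request(
    "Summarize the text in one sentence.",
    "LLM APIs are prompt-based: you send instructions plus a message.",
)
# Sending this payload to the provider's endpoint returns a completion.
# Note that no ML expertise is needed beyond formatting the request.
```

That's the whole integration surface for most LLM use cases, which is exactly why adoption is so easy.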
Then there's the more "involved" AI, which requires you to build your own models. You need training data to train a model, and after that, you can feed it inputs and get outputs. You also have to build a separate model for each application, which requires some machine learning and AI knowledge. The impressive part about LLMs is that these models are already built and general enough to be used for a wide range of applications without any specialized training!
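For contrast, here's a toy, from-scratch sketch of the "build your own model" path: gather labeled training data, fit a model, then run new inputs through it. A real system would use a framework like scikit-learn or PyTorch; this tiny bag-of-words classifier (an assumption of mine, purely for illustration) just shows the extra workflow an LLM user never has to do.

```python
# Toy sketch of training a task-specific model. Unlike prompting an LLM,
# you must supply labeled data and the result only works for this one task.
from collections import Counter

def train(examples):
    """examples: list of (text, label) pairs. Returns per-label word counts."""
    model = {}
    for text, label in examples:
        model.setdefault(label, Counter()).update(text.lower().split())
    return model

def predict(model, text):
    """Pick the label whose training vocabulary overlaps the input most."""
    words = text.lower().split()
    return max(model, key=lambda lbl: sum(model[lbl][w] for w in words))

# Tiny labeled dataset (illustrative; real models need far more data).
data = [
    ("win a free prize now", "spam"),
    ("claim your free reward", "spam"),
    ("meeting moved to 3pm", "ham"),
    ("see agenda attached", "ham"),
]
model = train(data)
print(predict(model, "free prize inside"))
```

Every new application means repeating this loop with new data, which is why so few companies go this route when a general-purpose LLM is an API call away.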
What does this mean? Since LLMs are substantially easier to use, most companies and developers will use LLMs rather than going out to build and train their own models. Fly.io’s blog post confirms this trend.
But, of course, what does this mean for security?
Most companies building products to monitor models will face a small or uncertain market. It's not clear when companies will start using their own models (if ever!). If we look at Latio Tech's list of AI security startups, it seems like the model-focused ones, such as Noma, Operant, and Aim Security, are unlikely to gain sustainable traction.