Disclaimer: Opinions expressed are solely my own and do not express the views or opinions of my employer or any other entities with which I am affiliated.
Over Thanksgiving, I had a chance to breathe and get through my backlog of LinkedIn posts and articles I’ve wanted to read. Honestly, there are quite a few articles about AI and security, which isn’t surprising because it’s the “hot” topic right now. I don’t think AI security is the biggest risk at the moment, and I want to focus my blog proportionally on the security issues that the data shows are most important. That said, AI security is one of the biggest unknowns right now, and in security, unknown unknowns are terrifying. My articles on this topic have been relatively thin because I think there are bigger issues in security that need to be addressed. However, I admit that I don’t know how big the AI security problem is today or how fast it will grow. In hindsight, my current focus will be seen as either an overestimation or an underestimation. Only time will tell.
Anyway, I do have more thoughts that I want to write about, and it’s helpful for the security community to have more perspectives, especially since many of the existing ones aren’t that substantial. That’s why I’ve decided to continue my series on AI security. In the past few weeks, I’ve written about how to use AI in security and how to secure AI.
This week, I’ll speculate on how AI security startups might fail. Previously, I’ve discussed how companies like Wiz, Snyk, CrowdStrike, and Palo Alto Networks (among many others) will fail.
What is the current market for AI security?
There are plenty of articles about this, but I believe that James Berthoty, who runs Latio Tech, does a good job summarizing many of the current needs in this LinkedIn post.
I want to note that these are existing vendor categories, not an analysis of what they should be. I generally appreciate James’s analysis because, unlike most analysts, he used to be a security engineer, so he has worked inside organizations and actually used these products. As a result, he has more of a security engineering-focused view, which lies between mine (more software engineering-focused) and a traditional analyst’s (more security operations-focused). James also has a good summary of current AI security startups along with brief thoughts on each. As you can see, there’s everything from browser plugins (Prompt Security, Aim, Unbound) to library wrappers (Pillar Security). This is typical for new security areas, i.e., security for new technologies, as companies “throw out” ideas and see what sticks. The same happened with the cloud, but there the problems were better defined: SaaS meant losing direct control of data, and the way infrastructure was managed changed.
Startups just fail
The most likely but also least interesting scenario is that these startups fail because, well… startups fail. This could be because of poor market timing, bad operations, or a product that never found the right GTM motion. Startups also fail because they’re unlucky. I won’t go into this more because it’s true of all startups, not just AI security ones. Of course, there’s more risk here because AI security is a new category.
Market risk leads to failure
The market for dedicated AI security products may simply never appear. If that happens, it doesn’t matter how good the product is: there can’t be product-market fit without a market.