AI SOC Automation isn't the right problem to solve
We should focus on better detection engineering
Disclaimer: Opinions expressed are solely my own and do not express the views or opinions of my employer or any other entities with which I am affiliated.

Another week, another post about AI. For those who are new or haven’t been following recently, I’ve been writing more about the intersection of AI and security. There are several interesting topics to explore, and I believe there aren’t enough substantial conversations about how AI and security can productively coexist.
In a previous newsletter, I mentioned that many companies are working on applying AI to the SOC, and that was a topic for another newsletter. I’m delivering on that promise here!
I’ve spent some time thinking about the SOC and how AI could possibly modernize it. I have also talked to the founders of some AI SOC Analyst companies, such as Prophet Security, Culminate, and Dropzone AI, to learn more about their products and thinking.
My thoughts on security operations centers (SOCs)
Before I dive deeper: I’m still pretty bearish on the concept of a security operations center. I’ve written in the past that most companies should probably have much smaller SOCs (or no SOCs at all!). Most companies are likely fine with an MDR (or MSSP) and some custom rules. Even those that need a more built-out SOC won’t run the traditional model of many analysts looking at and triaging events. It would focus more on detection engineering and have fewer analysts, operating closer to how modern data teams do.
In other words, SOCs are already moving toward having fewer humans doing operational work and focusing more on automated tooling. As a result, in the next few years, we will see two trends for SOCs.
Up-and-coming companies, e.g. tech startups, will likely not build a SOC. A SOC is too much of a risk: it’s a large investment that takes years to mature before you can see measurable ROI. On top of that, SOCs have high variance in effectiveness. The money seems better spent elsewhere!
I’ve predicted that cybersecurity is going to face a reckoning around efficiency, and for companies with existing SOCs, those SOCs are an obvious candidate for efficiency gains. As a result, security leaders will likely outsource tier 1 or even tier 2 support to an MDR or MSSP. We are already seeing this with the rise of Expel and Arctic Wolf, as well as companies using CrowdStrike’s endpoint MDR functionality. The insight here is that these providers are more efficient at weeding out false positives because they have visibility across their other customers. Unless you’re a very large company, you are unlikely to be the first target of a new attack. The in-house SOC will focus on more advanced threats, and over time, I would imagine that work shifts toward detection engineering-focused tools that build custom detections specific to your business and product. As a result, the size of the in-house SOC would shrink.
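To make “custom detections specific to your business and product” concrete, here is a minimal sketch of what one might look like, assuming your audit logs land somewhere you can query programmatically. The event fields, the refund scenario, and the thresholds are invented for illustration, not taken from any particular product.

```python
# Illustrative only: a business-specific detection that flags refunds issued
# after hours by actors who have never issued one before (or for large amounts).
# The field names (actor, action, amount, timestamp) are hypothetical.
from datetime import datetime
from typing import Iterable

def detect_unusual_refunds(events: Iterable[dict], known_refunders: set[str],
                           max_amount: float = 500.0) -> list[dict]:
    """Return refund events worth escalating to a human analyst."""
    suspicious = []
    for event in events:
        if event.get("action") != "refund.issued":
            continue
        ts = datetime.fromisoformat(event["timestamp"])
        after_hours = ts.hour < 8 or ts.hour >= 18
        new_actor = event["actor"] not in known_refunders
        large = event["amount"] > max_amount
        # Escalate only when weak signals line up: the kind of business
        # context a generic vendor rule can't encode for you.
        if after_hours and (new_actor or large):
            suspicious.append(event)
    return suspicious
```

This is the kind of rule a vendor can’t ship out of the box, because it depends on knowing what “normal” looks like for your product.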
What’s the problem with AI SOC agents?
There’s no “problem,” but they are dealing with a complicated and evolving market. As I described above, the traditional SOC market is shrinking. Security leaders are already thinking about how to further optimize their own SOCs. Because SOCs are operationally heavy and tend to run a set of runbooks over and over, applying AI here seems obvious. However, it’s just one of many things that security leaders are going to try.
Since these tools are nascent, security leaders will realize that AI SOC agents can’t handle the more advanced threats and that they are better off just procuring an MDR or MSSP. The reason is that AI SOC agents still require people to maintain and monitor them. MDRs and MSSPs generally have only an initial setup cost and little to no ongoing maintenance, whereas AI SOC agents require someone to regularly configure, tune, and maintain them.
However, it does seem like MSSPs and MDRs could buy AI SOC agents, and it’s more likely to be MSSPs than MDRs. MDRs might buy them in the beginning but then choose to build them in-house for a variety of reasons, including reducing costs and creating a differentiator for themselves. MSSPs are more likely to buy because they typically serve companies that don’t have the resources to build and maintain most security functions themselves. If this plays out, it implies a different go-to-market motion for AI SOC companies. It would affect both the product and how they sell, because MSSPs are different from large enterprises. There are several successful companies selling into this channel, such as Datto and Huntress Labs.
Another issue right now is that the various products don’t have much differentiation. I know the market is early, so there might not have been enough time for customer discovery and requirements gathering: customers don’t quite know what they want, or even what good looks like when they see it. This also suggests that no one has a strong opinion on how AI should apply to the SOC or what problem it really solves.
These companies’ main focus is to provide more context so that SOCs can operate more efficiently by removing tier 1 and tier 2 work. However, this assumes that SOCs should exist in their current form. It feels like solving an incremental problem rather than changing how security teams and organizations operate, or helping them adapt to the changing tech environment.
What do I think will happen?
The market for the SOC, meaning the aggregate budgets of SOCs across all companies including headcount, is large. I Googled around but couldn’t find a good estimate of its size. The best I could find is that the SOC-as-a-service market is about $7 billion, which feels about right. Let’s say the Fortune 500 spends on average $10M each on a SOC, which doesn’t seem unreasonable. (Of course, the companies in the Fortune 50 probably spend more than the ones near the bottom of the Fortune 500.) That alone comes out to about $5B, and there are also companies outside the Fortune 500 that spend money on SOCs at a smaller scale.
Although the market is large, it’s highly fragmented and inefficient since many companies have their own SOCs. Even if the market were to become more efficient and shrink overall, it would still be a large market because companies would still need to spend money to investigate their security events and issues. So, the market is large enough for SOC tools.
So what about these AI SOC agents? I predict they will get initial traction, but eventually these companies will realize that the market isn’t large enough for the agents alone. Security leaders will want to be more aggressive about solving the underlying problem because, as I described above, AI SOC agents still require maintenance.
There are two paths for AI SOC agents. I can see AI SOC agents moving into MDR because security leaders might realize it’s much easier and faster to outsource the SOC than to make their own SOC more productive, especially at the lower tiers. MDR works well for tier 1 and tier 2 alerts, and providers might be able to find more sophisticated threats, but some organizations will want earlier detections with custom rules because they face threats specific to their business. The other direction is that AI SOC agent companies move toward being detection engineers as a service: they either build tools that help engineers write better detection rules, and/or write many of the detections themselves and adapt them without much human intervention.
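As a toy illustration of what “adapting detections without much human intervention” could mean, here is a sketch of a rule that tunes its own alerting threshold from analyst feedback. The feedback format, the target false positive rate, and the step size are assumptions made for the example, not how any vendor actually implements this.

```python
# Illustrative sketch of a "self-tuning" detection: nudge a rule's numeric
# threshold based on how often analysts marked its recent alerts as false positives.
def tune_threshold(current_threshold: float, recent_verdicts: list[str],
                   target_fp_rate: float = 0.2, step: float = 0.1) -> float:
    """Raise the threshold when the rule is too noisy, lower it when it is very quiet."""
    if not recent_verdicts:
        return current_threshold
    fp_rate = recent_verdicts.count("false_positive") / len(recent_verdicts)
    if fp_rate > target_fp_rate:
        return current_threshold * (1 + step)  # too noisy: make the rule stricter
    if fp_rate < target_fp_rate / 2:
        return current_threshold * (1 - step)  # very quiet: loosen it slightly
    return current_threshold
```

The hard part, of course, is everything around this loop: knowing which verdicts to trust and which detections matter to the business in the first place.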
Who will break out?
Regardless of how AI SOC agent platforms evolve, the question is what will allow a specific AI SOC platform to break out, similar to what Wiz did. Wiz wasn’t the first CSPM/CNAPP on the market, and right now most of the AI SOC agents are hard to differentiate, so I predict a similar dynamic. But there are differences between the CNAPP market and this one. Wiz benefited because it was easy to deploy and served as a needed first step rather than trying to jump straight to the finish line. It provided the necessary information and trust so that security teams could convince infrastructure/DevOps teams to accept more monitoring.
However, for AI SOC agents, it’s not fully greenfield: there’s a baseline to compare against. That’s why I believe having strong AI technology will be extremely important, because it’s easy to measure ROI and see the improvement over the status quo.
If technologies end up being similar, then it comes down to product and GTM, which is a tale as old as time in security. The product here needs to help a security leader show improvements in their SOC metrics, whatever they might be. Specifically, if it can show increased productivity and effectiveness for lower cost, that would be a valuable product. For example, for an MDR, it’s cheaper than hiring headcount and helps a company have the foundations for a detection and response program. Sure, advanced capabilities might need to be built in-house, but it helps the company get started and gives the security leader the ability to report capabilities and effectiveness to the executives and board.
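To ground “SOC metrics, whatever they might be,” here is a minimal sketch of the kind of before/after numbers a security leader could report, such as mean time to resolve and false positive rate. The alert record shape and field names are invented for the example.

```python
# Illustrative only: compute simple SOC health metrics from resolved alerts.
# Each alert is assumed to be a dict with created_at/resolved_at ISO timestamps
# and a "verdict" field; real tooling would pull these from a SIEM or case system.
from datetime import datetime

def soc_metrics(alerts: list[dict]) -> dict:
    """Return mean time to resolve (hours) and false positive rate."""
    resolved = [a for a in alerts if a.get("resolved_at")]
    hours = [
        (datetime.fromisoformat(a["resolved_at"]) -
         datetime.fromisoformat(a["created_at"])).total_seconds() / 3600
        for a in resolved
    ]
    false_positives = sum(1 for a in resolved if a.get("verdict") == "false_positive")
    return {
        "mean_time_to_resolve_hours": sum(hours) / len(hours) if hours else 0.0,
        "false_positive_rate": false_positives / len(resolved) if resolved else 0.0,
    }
```

If a product can move numbers like these in the right direction for less money than headcount, that is an ROI story a security leader can take to the board.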
With all that said, I don’t know the future, and no one does. Honestly, I was initially surprised that Wiz gained so much traction in a new space, but now I understand. Improvements to the outdated SOC are long overdue, so I believe some change will happen here. Will AI SOC agents be the solution? Will it be MDRs, or will there be a shift toward detection engineering? We’ll see!