Disclaimer: Opinions expressed are solely my own and do not express the views or opinions of my employer or any other entities with which I am affiliated.
![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fda0e10bc-11da-47ea-bfd4-89f359166f84_6733x4672.jpeg)
As I was celebrating New Year’s and thinking about potential blog topics, I remembered that I do a set of predictions every year. Last year, my predictions came a bit late, so this year, I decided to do them earlier. Last week, I reposted my 2024 predictions.
This year, I decided not to commit to a specific number of predictions. However, there is a theme: efficiency and scaling. Those who read my blog know that I am a big advocate of turning security into an engineering-forward practice rather than an operations-focused one.
Also, for those who are new here: please consider buying a paid subscription to support my writing, especially if you have a professional development budget!
I’ll be the first to admit that security is a heterogeneous place. There are tech companies that can invest in and attract top-tier engineering talent to their security teams, and there are “traditional” companies with lots of legacy software. In the past, the trends that affected a large number of security organizations have related to new technologies, such as the cloud. It’s no surprise that AI will have the same type of effect, but most people are thinking about the first-order effects. My predictions relate to the second-order effects of AI, which I find more interesting. So, here they are!
Security budgets will change and focus on efficiency investments.
If you take a close look at security budgets, there’s a heavy focus on tooling. This makes sense because much of the current security culture comes from IT, which is very infrastructure-heavy and thus tool-heavy. Engineering, by contrast, has historically been a hard sell for vendors because engineering teams tend to build rather than buy, spending more money on headcount than on tooling. The exception is infrastructure/DevOps, which spends a fair amount on hosting infrastructure.
I specifically didn’t say whether security budgets will increase, decrease, or stay the same. I honestly think the overall budget might increase. However, the budget for tooling will likely decrease. The truth is that most security tools don’t actually solve the problem. There are a number of reasons for this that I won’t dive into in this newsletter, but I’ve written about them in the past.
The problem is that most tools don’t capture the nuances of security issues in an organization. It’s not their fault: they have to build a product/platform broad enough to capture a lot of customers. That’s why I see tool budgets decreasing and shifting toward “good enough” platforms, especially in areas such as application and product security. These areas require people to solve the problem, and tools are, well… just tools to do that rather than the solution in and of themselves.
Headcount is always the most expensive line item, but I do think security leaders are willing to invest in more headcount to solve problems in their highest-risk areas. For example, a security leader should invest in headcount to figure out how to apply AI in high-risk areas like product security, where it’s hard to keep up with engineering. Similarly, I can see a security leader hiring someone to find more capital-efficient ways to reduce risk in areas where they are spending a lot of money on tools. Although this might cost more in the short term, it’ll be easier to justify and will likely lead to better results over time. If anything, companies are spending money on tools, and major breaches are still happening. So, it’s definitely time for a different approach.
I believe the days of buying and maintaining tools as the primary way to solve problems are over. Security organizations will have budgets that look closer to engineering, focusing on spending money on headcount to solve problems.
AI will revitalize “outdated” security categories.
There are a lot of categories, such as data and application security, where the products feel “outdated.” They have trouble keeping up with new technologies, such as AI and the cloud. The reason is that the underlying problems are hard, and existing technologies haven’t been able to solve them.
An example is data security. One problem that has always plagued companies in this space, such as BigID, is false positives. Some automation is better than none, but no solution has been compelling. The reason is that proper data visibility and identification require an understanding of context. LLMs are known to be good at this, so hopefully a company will be able to leverage that capability in its product.
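To make the false-positive problem concrete, here is a minimal, hypothetical sketch (not any vendor's actual implementation) of why pattern-only classification struggles: a regex scanner flags every SSN-shaped string, with no way to tell whether the surrounding text actually describes sensitive data.

```python
import re

# Hypothetical pattern-only scanner: it flags anything shaped like an
# SSN, because it has no notion of the surrounding context.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def naive_scan(text: str) -> list[str]:
    """Return every SSN-shaped match, regardless of what surrounds it."""
    return SSN_PATTERN.findall(text)

sensitive = "Employee SSN: 123-45-6789, keep confidential."
benign = "Your shipment tracking code is 123-45-6789."  # not an SSN

print(naive_scan(sensitive))  # ['123-45-6789'] -- true positive
print(naive_scan(benign))     # ['123-45-6789'] -- false positive
```

A context-aware classifier, such as one backed by an LLM, could instead be asked whether the surrounding sentence actually refers to a Social Security number, which is exactly the kind of judgment the regex cannot make.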
Another area is application security. Most solutions are pretty dated and are only good at finding “generic” vulnerabilities. The most critical issues are usually context-specific, requiring an understanding of the business logic. That’s why bug bounty programs and internal red teams are so popular: the people who work on them tend to have the most context about the application. However, LLMs are good at processing large amounts of text to gain context, so it’s possible that some change can happen here. We are already starting to see it with Semgrep’s AI assistant.
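As a hypothetical illustration of a context-specific bug, consider an IDOR (insecure direct object reference). Nothing in the code matches a generic “vulnerable pattern”; spotting the flaw requires knowing the business rule that invoices belong to specific users, which is why pattern-based scanners typically miss it.

```python
# Toy in-memory data store; names and values are illustrative only.
INVOICES = {
    101: {"owner": "alice", "amount": 420},
    102: {"owner": "bob", "amount": 99},
}

def get_invoice(requesting_user: str, invoice_id: int) -> dict:
    # BUG: there is no check that requesting_user owns the invoice.
    # Syntactically this looks like any other lookup, so a generic
    # scanner has nothing to flag.
    return INVOICES[invoice_id]

# Bob can read Alice's invoice.
print(get_invoice("bob", 101))  # {'owner': 'alice', 'amount': 420}
```

The fix is a one-line ownership check, but knowing it is needed depends entirely on application context — which is where an LLM that has read the surrounding code and docs could plausibly help.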
I talk about this in one of my previous newsletters on how to use AI in security. Historically, ramping up security people on new technologies has been difficult, and it’s likely also going to be a problem with AI.
In addition, many of the “legacy” vendors, such as Tanium, Ping, Fortinet, etc., can revitalize their businesses and become more competitive with their larger peers if they figure out how to use AI effectively in their products to reduce the operational burden and costs for their customers.
SOCs will fundamentally change.
I also mentioned in the newsletter above that I don’t think selling a product that provides AI in the SOC is a good idea. There are far deeper and more nuanced problems in the SOC that AI unfortunately can’t solve. I won’t talk too much about that here, but I am a fan of MDRs, which outsource most, if not all, SOC functions. In a sense, “AI for the SOC” already exists in the form of an MDR.
Anyway, AI does start a conversation about the inefficiency of SOCs. They are hard to manage and take months, if not years, to mature and become productive. Most are riddled with inefficiencies, so it’s hard to convince executives and the board to make a large investment with an unclear return. That money is arguably better spent on product, engineering, or other security functions.
I believe AI will force security leaders to address these inefficiencies. Sure, they might buy tools or use AI, but going through this somewhat disruptive innovation process will likely surface fundamental questions about the SOC itself: should I even have a SOC? What are the benefits of an internal one?
As a result, AI will indirectly force change in the SOC. It’s likely they will become smaller and/or transform to focus more on detection and response engineering.
There will be a couple of leading vendors in security for AI.
In a previous blog post, I discussed that the future of security for AI is unknown. In fact, I went so far as to say that I’m not sure dedicated companies should even exist — they should just be features in existing products.
However, the security industry doesn’t always trend the way we think, and I’m the first person to admit that. (I didn’t think we needed a product like Wiz.) I do think that CISOs and security leaders see value in having a single place to monitor all AI-related “posture”: models, data, where AI is being used, etc. Essentially, they are willing to pay for visibility. I have some guesses on which products might pull ahead; ultimately, it will be the one that’s easiest to deploy and has the smoothest GTM function. Latio Tech has a good list of the products that currently exist.
Being a private security company will still be the way to go.
Only one major cybersecurity IPO happened in 2024 (Rubrik), and Rubrik is only questionably a security company. IPOs in general have been sparse since interest rates went up. In security specifically, though, more companies have been either staying private or going private (usually through a private equity buyout), and larger acquisitions have been happening. It seems that being a public cybersecurity company, especially a pure-play one, is challenging.
As a result, we won’t see many, if any, cybersecurity IPOs in 2025; we will likely see more acquisitions and private equity buyouts instead. Cybersecurity is a tough business with a long path to profitability because growth costs are so high. That makes these companies unattractive IPO candidates, especially when the timing of rate cuts is unclear.
It’s easier to stay private for longer now that many private companies, such as Databricks, have offered liquidity to their employees and early investors. That’s why we’re unlikely to see companies rush to IPO or even consider it as an option.
Anyway, 2025 will be an interesting year, but as with every year, I hope it will result in positive change in the cybersecurity community.