Disclaimer: Opinions expressed are solely my own and do not express the views or opinions of my employer or any other entities with which I am affiliated.

This week, I will continue my series by conducting thought experiments on how AI might fundamentally change security.
For those who might have missed it, I’ve written about how AI will make security easier by reducing risk throughout the organization through greater predictability and less human error. I’ve also written about how AI might affect how we develop security products and where it is, and isn’t, best applied.
In this newsletter, I’m going to explore whether AI will lower cybersecurity costs. Before I dive into that, I’m going to discuss the cybersecurity poverty line.
Why has this been on my mind? I recently saw a LinkedIn post by Ross Haleliuk, who writes the Venture in Security Substack, where he brought up the concepts of an AI poverty line and a cybersecurity poverty line.
Ross often shares good, honest takes on cybersecurity, but in this case, I’m not sure what point he’s trying to make. The more I thought about it, the more I realized that this post (perhaps intentionally on Ross’s part) touched on a controversial topic. People don’t quite agree on what these lines are or whether they’re something we should even be thinking about at all!
In my opinion, the impact of the cybersecurity poverty line can’t be quantified, given that progress in cybersecurity is notoriously hard to measure. In addition, hosted LLMs, such as those from OpenAI and Anthropic, have made AI more accessible to everyone. Even companies with few resources can now leverage AI in ways that would have required substantially more resources in the past.
That spurred the question: if hosted LLMs are lowering the barrier to entry for AI, what is their effect on cybersecurity?
What is the cybersecurity poverty line?
Before we dive in, I want to share my opinion on the cybersecurity poverty line. First of all, what is it?
In 2011, Wendy Nather, a cybersecurity expert at Cisco, coined the term, and it has gained wider use in recent years. She defines it as “the line below which an organization cannot be effectively protected.” It marks the threshold between organizations that have the necessary resources (people, budget, etc.) to meet minimum cybersecurity standards and those that don’t.
Of course, this line varies substantially across the industry. Large companies can still fall below the line if they lack the right talent and resources. Similarly, small companies might operate above the line without as many resources by prioritizing properly and being clever with what they have, such as using effective tools and software engineers to help them scale.
I don’t like this term for a few reasons.
First, even if an organization has the necessary resources, such as budget and even talent, it might not be using them effectively. This type of mismanagement is quite common.
Second, the term implies that cybersecurity capability is a function of cost, but the reality is more nuanced. Sure, talent and security products cost money, and prioritizing security on the roadmap also has a cost. But successful cybersecurity programs also require the ability to influence stakeholders, such as engineering, where most of the risk lies for most companies, and a strong, right-sized security strategy. Two companies with similar budgets and resources can have vastly different security programs and outcomes.
Third, the line itself is hard to define; it feels subjective. It’s only obvious when your organization is clearly above or below it. It also doesn’t provide any indication of efficiency or effectiveness, which I believe are the bigger challenges in security.
Finally, it’s easy to “give up” on security once a team believes it falls below the line. That can lead to inaction and the belief that a good security posture isn’t achievable without significantly more resources. It might also lead to requesting more budget, but as stated above, more budget without a clear strategy likely won’t improve results or outcomes.
How AI will lower the costs of cybersecurity
I believe that AI will reduce cybersecurity costs while maintaining, or even improving, effectiveness. Companies will achieve similar cybersecurity outcomes with smaller budgets, fewer tools, and leaner teams. Of course, this won’t happen overnight, but it’s likely going to reverse the recent trend of ever-increasing cybersecurity spending.
One of the biggest challenges in cybersecurity today is cost inefficiency. Over the past five years, cybersecurity spend has nearly doubled, yet the number of breaches hasn’t declined. While security programs have helped reduce the impact of breaches, the additional protection doesn’t seem commensurate with the increased cost.
It’s a common sentiment in the industry that organizations are spending more on security tools without seeing better results. This is partly because cybersecurity vendors price their products in ways that don’t reflect their real value. As Jonathan Price describes well in his Substack post, cybersecurity teams overpay for products because they struggle to quantify the actual return. On top of that, many don’t factor in the cost of operating and maintaining the product. Although these tools promise to reduce risk, it’s not clear the total cost is worth the risk reduction.
How can AI help with this? AI can potentially disrupt the pricing model and make security tools more accessible. This will happen in a few ways:
AI is already accelerating software development, which gives software engineers more time to build security tools, and to build them quickly. Many security teams rely on vendors because they lack the time and/or expertise to develop their own solutions.
AI will make it easier for new tools to enter the market. Low barriers to entry are why SaaS and developer tools are such competitive markets, and AI will make it easier and cheaper to develop security tools that have traditionally been complex and expensive.
One of the biggest barriers in security is that many practitioners have limited engineering knowledge, making them dependent on vendors for solutions. AI will help bridge this gap by enabling security teams to do much of the engineering work themselves, such as automating workflows and reading and writing code (see the sketch below). This trend aligns with the broader industry shift toward engineering-forward security teams, such as the rise of detection engineering over traditional SOC analysis.
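To make that concrete, here’s a minimal sketch of the kind of glue work a security team could now build itself: a short script that asks a hosted LLM to triage a security alert. I’m using the OpenAI Python SDK; the alert schema, model choice, and prompt are illustrative assumptions, not a recommendation.

```python
# Hypothetical example: triage a security alert with a hosted LLM.
# Assumes the OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY environment variable; the alert schema is made up.
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

alert = {
    "source": "aws-guardduty",
    "finding": "UnauthorizedAccess:IAMUser/ConsoleLoginSuccess.B",
    "user": "deploy-bot",
    "source_ip": "203.0.113.42",
    "time": "2025-01-15T03:12:00Z",
}

prompt = (
    "You are a SOC analyst. Classify this alert's severity "
    "(low/medium/high), say whether it is likely a false positive, "
    "and suggest one concrete next step.\n\n" + json.dumps(alert, indent=2)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any hosted model works; this one is an assumption
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

The point isn’t this particular script; it’s that glue code like this, which used to require dedicated engineering time or a vendor contract, is now within reach of most security practitioners.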
As a result, security tools will become more of a commodity, which will reduce prices. I’ll be writing a separate post describing which companies I believe will do well in an AI-driven world.
Beyond tooling, AI will reduce operational costs in cybersecurity. Historically, cybersecurity staffing needs have scaled with company growth because of increased operational workload. Here are some examples of common challenges:
Audit and compliance processes increase as a company becomes larger and its infrastructure becomes more complex.
Application security demands more engineers as software complexity increases.
SOC operations require more analysts to process the growing number of security alerts as a company grows.
AI can automate much of this work:
AI-powered tools can automatically answer security questionnaires, handle audit documentation, and generate reports (see the sketch after this list).
AI-assisted code analysis can help developers identify and fix security issues without involving the security team (or, eventually, even the engineering team!).
We’re already seeing some of this with managed detection and response (MDR) providers for SOC operations, which minimize the need for in-house staff to respond to security alerts.
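As a sketch of the first item above, here’s a hypothetical script that drafts questionnaire answers from an internal policy document. The file name, questions, and model choice are made up for illustration, and every drafted answer would still need human review.

```python
# Hypothetical example: draft security-questionnaire answers from an
# internal policy document. File name and questions are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

policy = open("security_policy.md").read()  # your real source of truth

questions = [
    "Do you encrypt customer data at rest?",
    "How often do you rotate production credentials?",
    "Do you perform annual penetration tests?",
]

for question in questions:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{
            "role": "user",
            "content": (
                "Using only the policy below, draft a short answer to this "
                "vendor questionnaire question. If the policy does not "
                "cover it, reply 'needs human input'.\n\n"
                f"Policy:\n{policy}\n\nQuestion: {question}"
            ),
        }],
    )
    print(f"Q: {question}\nA: {response.choices[0].message.content}\n")
```

Grounding the model in the policy document, rather than letting it answer from general knowledge, keeps the drafts auditable against a real source.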
As AI automates more operational security tasks, security teams will no longer have to scale at the same rate. This mirrors trends in AI-driven companies, where teams are leaner and generate more revenue with fewer employees. It’s possible that companies overall will run leaner teams as a result.
For example, Cursor hit $100M ARR with fewer than 50 employees.
This shift is already happening in software development, where AI tools are increasing developer productivity and reducing the need for junior engineers. The same principle will apply to security: companies will be able to maintain a strong security posture with fewer employees.
Finally, security isn’t just about technical solutions; it requires cross-functional collaboration, which introduces friction and slows down progress. AI will help streamline this in a few key ways:
As I discussed in my last newsletter, AI will lower risk across the organization by improving code quality, identifying security gaps, and reducing human error. This means security teams will have fewer issues to manage in the first place.
Security reviews and remediation work carry significant overhead, especially around product and program management. AI can automate much of this work, reducing the time security teams spend on administrative tasks and allowing them to focus on more strategic initiatives.
One of the largest sources of friction in security work is resource allocation: security fixes often require engineering time, which is limited. AI will ease this tension by increasing engineering bandwidth and by enabling security teams to implement fixes directly in a non-disruptive manner (see the sketch below).
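As a sketch of what “implementing fixes directly” could look like, here’s a hypothetical script that sends a vulnerable snippet to a hosted LLM and asks for a patched version. The snippet (a classic SQL injection) and prompt are illustrative, and the proposed patch would still go through normal code review.

```python
# Hypothetical example: ask a hosted LLM to propose a security fix.
# The vulnerable snippet is illustrative; review any patch before merging.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

vulnerable_snippet = '''
def get_user(db, username):
    # User input is concatenated straight into the query: SQL injection.
    return db.execute(f"SELECT * FROM users WHERE name = '{username}'")
'''

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{
        "role": "user",
        "content": (
            "This Python function has a security flaw. Rewrite it to fix "
            "the flaw and briefly explain the change.\n\n" + vulnerable_snippet
        ),
    }],
)

print(response.choices[0].message.content)
```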
This type of transformation shouldn’t be surprising. We’ve seen similar efficiency gains in other industries as technology advanced:
The internet drastically reduced software distribution costs, making software more accessible and affordable.
Computers became cheaper and more powerful over time as production costs fell.
Even further back in history, the printing press cut the cost of books, improving knowledge distribution and education accessibility.
Each of these technological shifts reduced the barrier to entry for high-quality products, so I wouldn’t be surprised if AI has a similar effect on security.
I’ve discussed different ways to make security teams more efficient elsewhere on my blog, so AI isn’t the only lever. However, it’s a new one with the potential to fundamentally reshape how security teams operate.
AI is making security both cheaper and smarter. Organizations that figure this out will not only have lower costs but also better security posture.