Why security is still hard (and getting harder)
There are too many problems, and many of them we create for ourselves
Disclaimer: Opinions expressed are solely my own and do not express the views or opinions of my employer or any other entities with which I am affiliated.

About a year ago, I wrote two newsletters exploring why I thought security was easy and hard. With the explosion of AI tools and a never-ending wave of cybersecurity startups, I wanted to revisit those thoughts. What still holds true? What no longer does? And how have my own views evolved?
If you know me, you know my perspective is always evolving. I’m happy to admit when I’ve changed my mind. Security isn’t about being right — it’s about getting it right.
In this newsletter, I’m focusing on why security is still hard, and in many ways, getting harder. While I generally believe the industry is trending in the right direction, we have a long way to go.
First, a quick acknowledgment: Security has become easier in some ways.
I’ll write a full post about this soon, but for now, here are a few ways security has made real progress:
AI is reducing the burden of repetitive operational work, with promising applications ahead. (Example: Google’s Gemini for security)
Breach data is more widely available, which helps teams prioritize more effectively. (See: Verizon DBIR)
Security funding has increased, meaning there’s (sometimes too much) tooling available for every problem.
But nothing in security comes for free. In the same ways security has become easier, it has also become harder.
The Two Big Problems
There are two themes that cut across everything I’ve observed recently. First, security teams often understand the desired end state, such as better controls, reduced risk, and stronger posture, but they fail to execute. Second, the field is evolving too slowly to keep up with the businesses and technologies it’s meant to protect.
Why is that? Sometimes it’s due to hiring the wrong types of people. Sometimes it’s the organizational structure. Other times it’s cultural — an obsession with risk that outweighs pragmatism. Regardless of the cause, the result is that security has gotten stuck. And we need to build again.
AI: Helping and Hurting at the Same Time
AI has created real advantages for security teams, but it’s also accelerated risks in ways we weren’t prepared to handle.
Many security organizations responded to the emergence of tools like ChatGPT and GitHub Copilot by trying to limit their use. That might have been the safe option early on, but in practice, it meant security opted out of the learning curve while the rest of the company raced ahead. Now, security teams in many organizations are behind, both technically and organizationally.
The gap is especially visible in developer environments. I’ve seen teams adopt Copilot, automate code generation, and dramatically increase velocity. That’s great for shipping features. But if security isn’t involved early, it leads to what I call AI-induced security debt: new internal tools with poor auth, misconfigured data pipelines, LLM prompts hardcoded with secrets, etc.
Consider this scenario. Developers build an internal tool to automate analytics on customer feedback. They use a hosted LLM and embed API keys directly in the prompt logic, with no auth and no logging. The tool works well, and adoption spreads quickly. But no one realizes until months later that the prompts have been leaking internal identifiers into a third-party system. The damage isn’t catastrophic, but fixing it takes weeks and requires pausing feature development. None of this is far-fetched, and it’s exactly what AI-induced security debt looks like: invisible at first, expensive later.
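To make the scenario concrete, here is a minimal sketch of the anti-pattern and one possible mitigation. The function names, the `cust-` identifier format, and the redaction approach are all illustrative assumptions, not a real codebase or API:

```python
import re

# Anti-pattern: a secret hardcoded in source, shipped with the tool.
API_KEY = "sk-test-123"  # hypothetical placeholder, never do this

def build_prompt(feedback: str, customer_id: str) -> str:
    # Anti-pattern: an internal identifier flows straight into the prompt,
    # and from there into a third party's systems and logs.
    return f"Summarize feedback from customer {customer_id}: {feedback}"

# Mitigation: redact internal identifiers before text crosses the trust
# boundary. Assumes internal IDs follow a known pattern like "cust-1234".
ID_PATTERN = re.compile(r"\bcust-\d+\b")

def redact_internal_ids(text: str) -> str:
    return ID_PATTERN.sub("[REDACTED]", text)

leaky = build_prompt("Great product!", "cust-4821")
safe = redact_internal_ids(leaky)
print(safe)
```

Redaction at the boundary is only one of several fixes (secrets managers, gateway proxies, and logging would also be in scope); the point is that a few lines of review early would have been far cheaper than weeks of remediation later.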
It’s easy to say, “just hire AI-savvy engineers into security.” That would help. But those people are rare. A more realistic step is embedding existing security engineers into AI projects and encouraging them to use AI tools themselves. That firsthand experience is the fastest way to build the intuition needed to secure these systems. Long term, this needs to be backed by executive support. Security can’t keep sitting outside the loop.
More Data, More Breaches
We now have more data than ever about what actually causes breaches. And still, many teams continue to over-index on hypothetical attacks and under-invest in known, repeatable risks.
This misalignment shows up constantly. Consider a common example: a company with a red team spends months simulating a rare lateral movement path that requires a dozen steps and three chained misconfigurations. It’s impressive, but meanwhile, 90% of its endpoints still lack phishing-resistant MFA. It’s not that the red team’s work wasn’t valuable; it just wasn’t foundational.
Executives are often reluctant to fund security work unless the risk is clear, the mitigation is cost-effective, and the delivery plan is credible. Security people don’t always help their case. They state risks without being able to clearly explain tradeoffs, like how a given control might impact performance, customer experience, or developer velocity.
A better approach is to anchor work in real attack data and use risk scenarios and thresholds to guide tradeoffs. I wrote more about this in an earlier post. The industry doesn’t lack data. It lacks focus, and focus is hard.
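One way to operationalize "anchor work in real attack data" is a simple priority score that weighs observed breach frequency and impact against remediation effort. The numbers below are placeholders, not real DBIR figures, and the scoring formula is one illustrative choice among many:

```python
# Each risk carries an observed breach frequency (share of breaches in
# your data source involving this vector), an impact score (1-10), and a
# remediation effort score (1-10). All values here are made up.
risks = [
    {"name": "phishing without MFA",       "frequency": 0.30, "impact": 9, "effort": 3},
    {"name": "chained lateral movement",   "frequency": 0.02, "impact": 8, "effort": 9},
    {"name": "exposed storage bucket",     "frequency": 0.15, "impact": 7, "effort": 2},
]

def priority(risk: dict) -> float:
    # Higher observed frequency and impact raise priority;
    # higher remediation effort lowers it.
    return risk["frequency"] * risk["impact"] / risk["effort"]

for r in sorted(risks, key=priority, reverse=True):
    print(f"{r['name']}: {priority(r):.2f}")
```

Even a crude model like this forces the conversation the paragraph describes: it makes the tradeoff explicit (frequency and impact versus cost) instead of letting the most vivid hypothetical win.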
Too Many Tools, Not Enough Focus
The security market today is flooded with tools. On the surface, this sounds like a good thing. Every problem has a solution. But in practice, the abundance of tools is overwhelming. Many security teams now spend more time evaluating, integrating, and maintaining tools than actually solving problems.
The core issue is that most teams start with a tool, rather than starting with a problem. The more sustainable approach is to diagnose the problem first, figure out what a solution might look like, and then decide whether that solution is best addressed by buying, building, or doing something simpler.
In some cases, building makes more sense. Especially now, with AI-assisted development, it’s easier than ever to build internal tools that are tightly scoped to your environment. People have joked that I build to justify being a staff engineer. The reality is that many commercial tools are overengineered for the problem they claim to solve. They aim for a perfect end state, but real organizations need a ramp: a way to get there incrementally without breaking everything along the way.
That said, there are categories of tools that are worth buying. CrowdStrike, for example, provides consolidated telemetry and threat intel that most companies could never collect on their own. The same is true for many appsec tools like Snyk or Semgrep, which benefit from broad vulnerability data and community-scale analysis.
But tool sprawl is real. And the mental overhead it creates is costly. The best security teams I know are extremely deliberate about their tooling decisions. They understand that every new tool adds friction, overhead, and dependencies. That’s why I keep coming back to this idea: security needs to build again. Not to reinvent the wheel, but to stay sharp, stay relevant, and stay focused on solving the problems that matter.
Execution: Where Most Security Teams Fall Short
All of these points show a bigger issue: security teams know what “good” looks like, but they struggle to deliver on it.
Sometimes it’s because they operate in silos. Other times, it’s a bandwidth issue. Teams are stretched so thin they’re stuck in triage mode: fixing bugs, responding to incidents, and evaluating tools, with no time left to build repeatable systems. This leaves security reactive, not strategic.
And in many places, security has inherited a legacy of risk-aversion that makes it hard to prioritize value creation. Too often, the instinct is to say “no” instead of “let’s figure out how to make this work safely.”
That’s where we need to push harder. Execution is about building relationships, understanding constraints, and delivering value incrementally, not just surfacing problems and walking away.
Takeaway
Security is hard. Not because we lack tools or data, or because the threats are too sophisticated, but because we haven’t adapted fast enough to the world around us.
Too often, we focus on controlling risk instead of understanding it. We fall back on fear instead of leading with solutions. We reach for tools instead of thinking through problems.
It’s not too late to change this. But it will require rethinking how we build, how we prioritize, and how we collaborate. The future of security doesn’t belong to the loudest risk-raiser. It belongs to the best problem-solver.