Software Engineering for Security: Performance Tradeoffs
A primer on how to get context about performance so that you can better communicate risk
Disclaimer: Opinions expressed are solely my own and do not express the views or opinions of my employer or any other entities with which I am affiliated.

I’m trying something a little different with this post. In the past, I’ve broken down security concepts for engineers; this time, I’m breaking down engineering concepts for security practitioners.
Many of you know I’ve historically written about my thoughts on security and various products. One of the major themes of this newsletter has been advocating for more engineering principles in security, and I’ve written before about how security and engineering can work better together.
These were popular, and I’ve grown an audience of software engineers interested in security and security folks trying to work more like engineers. That’s great! However, if we truly want security and engineering to speak the same language, security professionals need to understand the fundamentals of engineering performance trade-offs, especially when our recommendations impose real costs.
Security advice often sounds simple: encrypt this, log that, block here, alert there. But building real systems involves real tradeoffs. When security doesn’t understand those tradeoffs, we lose credibility. Worse, we waste time and effort pushing for controls that either won’t ship or will ship poorly and cause friction later.
This post is meant to help fix that. Let’s walk through some core engineering tradeoffs every security person should understand, and more importantly, learn how to frame them in terms of risk.
Time vs. Space: You Can’t Have Both
This is one of the most basic tradeoffs in systems engineering.
If you want something to be fast, you cache or precompute. If you want to use less memory or disk, you compute everything from scratch. Faster systems tend to use more space. More compact systems tend to be slower.
Security tools bump into this constantly:
Want real-time malware scanning? You’ll probably store hashes or models in memory to avoid recomputation.
Want low-latency encryption? You may need to keep key material close to the app, which carries its own risk.
This is the kind of decision engineers make daily. But security teams often come in with absolutes, e.g., “We need encryption,” “We need to scan all traffic,” without understanding the time/space tradeoff under the hood. If we want our ideas implemented, we need to be able to participate in that tradeoff conversation.
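The time/space tradeoff is easy to see in miniature. Here's a hedged sketch (the function name and cache size are illustrative, not from any real scanner) of why a real-time scanner might hold digests in memory: the cache is the "space" cost, the hashing is the "time" cost, and you pay one to avoid the other.

```python
import hashlib
from functools import lru_cache

# Space cost: up to 4096 computed digests held in memory at once.
@lru_cache(maxsize=4096)
def file_sha256(data: bytes) -> str:
    """Time cost: the hash is computed once per unique input; repeat
    lookups for the same content are served from the cache."""
    return hashlib.sha256(data).hexdigest()
```

Drop the cache and you reclaim the memory, but every lookup recomputes the digest, which is exactly the tradeoff the engineer across the table is weighing.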
Writes vs. Reads: The Indexing Problem
Reads and writes are often in tension. When you optimize for fast queries, you often slow down your inserts. This is especially true in databases, where every index needs to be updated as new data is written.
Security teams are deeply affected by this, whether we realize it or not.
Let’s say you want to log every user action in real time and be able to search across it instantly. That’s a read-optimized use case. But if your system is already ingesting thousands of events per second, adding indexes for fast querying might throttle ingestion or create queuing delays.
This is why so many SIEMs feel sluggish or expensive. They’re trying to serve both write-heavy and read-heavy workloads. And when security teams ask for new queries, new dashboards, or new filters, they’re often unknowingly pushing on that read/write tension.
Knowing this helps us avoid unforced errors, like asking for complex queries in systems not designed for them, or demanding 30-day retention with instant search when that comes at massive storage and compute cost.
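The indexing tension is observable in a few lines of SQLite (table and index names here are made up for illustration): before the index, a user lookup scans the whole table; after it, reads are fast, but every subsequent insert must also update the index.

```python
import sqlite3

# Minimal sketch of the read/write tension using an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (ts INTEGER, user TEXT, action TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [(i, f"user{i % 10}", "login") for i in range(1000)],
)

def plan_for(query: str) -> str:
    """Return SQLite's query-plan detail string for `query`."""
    return conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[-1]

before = plan_for("SELECT * FROM events WHERE user = 'user3'")  # full scan

# The index makes the read fast -- but every future INSERT now also
# has to update idx_user. That is the write-side cost of fast queries.
conn.execute("CREATE INDEX idx_user ON events (user)")
after = plan_for("SELECT * FROM events WHERE user = 'user3'")  # index search
```

Multiply that per-insert cost by thousands of events per second and dozens of indexes, and you have the ingestion pressure a SIEM lives under.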
Synchronous vs. Asynchronous Operations
Security controls often want to sit in the request path to block something if it looks suspicious. But doing that synchronously means adding latency. And in high-throughput systems, every millisecond matters.
Take authentication as an example. Should you call out to an external service to verify every token? Or should you do local checks with embedded metadata? The former gives you more control (you can revoke sessions in real-time), but it slows down every request. The latter is fast, but you lose some enforcement flexibility.
Or consider email scanning. You could scan every attachment before delivering the email (safe, slow) or scan it after delivery with the ability to retract (faster, riskier).
These decisions are about enforcement vs. speed. Security teams often lean toward maximum control, but that’s a luxury. Mature engineering teams think in terms of budgets: how many milliseconds are you willing to spend? How much CPU overhead is acceptable? We should, too.
Simplicity vs. Scalability
Many good security ideas don’t scale.
mTLS, container isolation, proxy-based enforcement: all of these are good security practices, but they also add operational overhead, configuration complexity, and maintenance burden.
In early-stage systems, this complexity may be fine. In distributed systems with hundreds of services and thousands of nodes, that overhead compounds quickly.
Security teams should ask: what’s the simplest solution that scales? Where can we centralize controls instead of distributing them? Where can we reuse primitives already adopted by engineering?
Sometimes, building one great internal tool does more for security than deploying ten separate agents.
Granularity vs. Performance
Least privilege is good in theory. But in practice, fine-grained access control adds overhead.
Evaluating complex policies, especially with third-party engines like OPA, takes time. Every condition you add (user attribute, resource tag, time of day, IP range) increases the cost of enforcement.
Now imagine you’re asking for per-field, per-object, per-service access control. Engineers will push back, and rightly so. They’ve probably already optimized the system with coarse-grained roles or scopes for performance reasons.
Security teams should understand this. If we want high-granularity enforcement, we need to:
Propose smart defaults and fallback behaviors.
Accept that enforcement will come with tradeoffs.
Be ready to invest in performance tuning or propose asynchronous evaluation when synchronous enforcement isn’t viable.
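To make the cost concrete, here's a deliberately simplified sketch (hypothetical names, not OPA or any real policy engine): a coarse role check is one comparison, while a fine-grained policy evaluates every extra condition on every request in the hot path.

```python
from dataclasses import dataclass

@dataclass
class Request:
    role: str
    resource_tag: str
    hour: int  # hour of day, 0-23
    ip: str

def coarse_allow(req: Request) -> bool:
    # One check: cheap to evaluate, but a broad blast radius.
    return req.role == "admin"

def fine_allow(req: Request) -> bool:
    # Four checks: tighter enforcement, more cost per request --
    # and this runs on every call, not just the suspicious ones.
    return (
        req.role == "admin"
        and req.resource_tag == "prod"
        and 9 <= req.hour < 18
        and req.ip.startswith("10.")
    )
```

Four in-process comparisons are still cheap; the real-world analogue is each condition becoming an attribute fetch, a tag lookup, or a policy-engine call, which is where engineers' latency concerns come from.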
Engineering Tradeoffs Are Risk Decisions
Here’s the bigger picture.
All these engineering tradeoffs are ultimately risk decisions. When you cache something, you risk serving stale data. When you delay scanning, you introduce a window of exposure. When you simplify permissioning, you increase blast radius.
Security people should be excellent at weighing risk. That’s what makes understanding engineering tradeoffs so powerful: it turns performance concerns into structured risk conversations.
Too often, security gets stuck playing translator: trying to explain abstract risks to engineers, and then struggling to convince leadership to act. That’s the inefficient mode I’ve written about before.
But when security understands the system and can speak in terms of latency, throughput, memory, and tradeoffs, we stop being translators and start being collaborators. That’s what makes security more efficient. That’s when we start to build again.
On top of that, it shows that we add business value by weighing security risks against engineering risks in service of the business. This is especially useful in high-growth startups, where we might accept some security risk to keep engineering velocity high.
Security Engineering Is an Agent of Change
This is why I keep pushing for more security engineering teams, not just security operations.
When security engineers understand both the technical architecture and the risk landscape, they can do two things:
Prioritize intelligently. They know which risks are real and worth fixing.
Act independently. They don’t have to throw the risk over the fence to engineering — they can fix it themselves.
This eliminates the costly back-and-forth where each side is working with partial context. It removes the friction of context transfer. It makes security efficient.
This is especially powerful in high-risk areas of the stack, such as auth, secrets management, and infrastructure. Even a small, senior security engineering team can drive impact here. And ideally, they’ll build the tooling that lets the rest of the organization move fast without constant oversight.
That’s how you scale security in modern software orgs, not by avoiding tradeoffs but by owning them.
Reader Exercise
Say a company wants to decide between using JWTs and session IDs.
What kinds of engineering vs. security tradeoffs should they be considering?
What works better at scale?
What engineering context do you have to take into consideration when making this decision?
This is the kind of exercise that security teams should be doing regularly. It’s not about finding the “right” answer. It’s about learning to recognize tradeoffs, frame them as risk decisions, and collaborate with engineering to make the best call.