Disclaimer: Opinions expressed are solely my own and do not express the views or opinions of my employer or any other entities with which I am affiliated.

I’ve intentionally made all of my posts free and without a paywall so that my content is more accessible. If you enjoy my content and would like to support me, please consider buying a paid subscription:
I was chatting recently with Brian Joe, the founder of Impart Security, a WAF and API security startup you should definitely check out. But our conversation wasn’t just about his company. We got into a broader discussion about how AI might reshape security work itself.
Unlike most engineering roles, many security jobs have a large operational component. I’ve written about this before: security isn’t just about building, it’s about maintaining: triaging alerts, tuning rules, patching systems, managing exceptions, and reviewing logs. These tasks are often repetitive and rules-based. That’s precisely the kind of work AI is good at.
I started thinking more about how AI might change the delivery model of security itself.
The shift to AI-powered “managed” services
In security today, there’s an emerging pattern: AI is starting to absorb the repetitive operational burden, freeing humans to focus on higher-leverage work. And that changes not just the tools, but the structure of the market.
Historically, security teams had to choose between building in-house automation or outsourcing to MSSPs or MDRs. The outsourcing route usually meant trading depth for scale — external providers rarely had the same visibility or context as internal teams. But AI offers a third path: deeply integrated, AI-managed services that operate inside the product itself. Think of this as the next generation of “managed” where the management is largely done by AI, not humans.
This isn’t theoretical. We’ve already seen hints of it.
In detection and response, MDRs like CrowdStrike’s Falcon Complete were an early move toward this model, and I do think MDRs are going to benefit in the new AI world. They offered a bundled service with the product itself, creating better outcomes and a more consistent experience. And I believe we’ll see many more services head in this direction, now powered by AI under the hood.
Automating the boring, not replacing the critical
Security professionals often worry that AI will replace jobs. But that fear misses the point. AI will mostly replace the work people don’t want to do: the tedious maintenance and glue work that eats up time and drains attention.
A good analogy here is what’s happening in software development with tools like Cursor. Developers call it “vibe coding”: using AI to fill in boilerplate, generate tests, or suggest functions. The criticism is that it might produce sloppy code or introduce security bugs. But in practice, it’s often the opposite: it frees developers from mechanical work and lets them focus on designing better systems or fixing tricky logic bugs.
There’s a deep computer science principle at play here: it’s often easier to verify something than to generate it from scratch. That’s exactly the loop that AI shortens — propose, review, tweak — rather than build from first principles every time.
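To make the propose-review-tweak loop concrete, here’s a minimal Python sketch. The `propose_detection` function is a hypothetical stand-in for an AI model suggesting candidate detection rules; the point is that `verify` is cheap and mechanical, so each proposal costs little to check.

```python
# Sketch of the propose-review-tweak loop: verifying a candidate is cheap,
# so a human (or an automated checker) only accepts or rejects what the AI
# proposes. `propose_detection` is a hypothetical stand-in for an AI model.
import re

def propose_detection(attempt: int) -> str:
    """Hypothetical AI proposer: returns a candidate SQL-injection regex."""
    candidates = [r"select", r"(?i)union\s+select", r"(?i)(union\s+select|or\s+1=1)"]
    return candidates[min(attempt, len(candidates) - 1)]

def verify(pattern: str, malicious: list[str], benign: list[str]) -> bool:
    """Cheap verification: the rule must catch every attack and no clean input."""
    rx = re.compile(pattern)
    return all(rx.search(m) for m in malicious) and not any(rx.search(b) for b in benign)

malicious = ["1 UNION SELECT password FROM users", "' OR 1=1 --"]
benign = ["select a seat", "normal search query"]

for attempt in range(3):
    rule = propose_detection(attempt)
    if verify(rule, malicious, benign):
        break  # accept the first candidate that passes verification
```

Generating a good rule from scratch is hard; checking one against known samples is a few lines. That asymmetry is what makes the loop work.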
Security has similar opportunities.
Take cloud IAM policies. This is high-leverage work, but it’s also painful. Many teams avoid owning it entirely. I’ve argued before that security should step in here — the work is nuanced and critical, and getting it right matters. AI isn’t likely to replace this kind of sensitive design work anytime soon. But it can assist in surfacing bad patterns, suggesting remediations, and even flagging over-permissive policies based on behavioral baselines.
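As a sketch of the “surfacing bad patterns” assist, here’s what flagging over-permissive statements might look like. The policy shape follows the common AWS-style JSON layout; the checks and function name are illustrative, not a real product’s API.

```python
# Minimal sketch: flag over-permissive IAM policy statements
# (wildcard actions or resources). Heuristics are illustrative only.
def flag_over_permissive(policy: dict) -> list[str]:
    """Return human-readable findings for Allow statements that are too broad."""
    findings = []
    for i, stmt in enumerate(policy.get("Statement", [])):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = stmt.get("Resource", [])
        resources = [resources] if isinstance(resources, str) else resources
        if any(a == "*" or a.endswith(":*") for a in actions):
            findings.append(f"Statement {i}: wildcard action {actions}")
        if "*" in resources:
            findings.append(f"Statement {i}: applies to all resources")
    return findings

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::logs/*"},
        {"Effect": "Allow", "Action": "s3:*", "Resource": "*"},
    ],
}
for finding in flag_over_permissive(policy):
    print(finding)
```

The human still decides whether `s3:*` on every resource is intentional; the AI’s job is to make sure that question gets asked.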
On the other hand, consider things like WAF rule tuning, rate limit configuration, or managing agent exceptions. These are maintenance tasks that bog down teams. No one wakes up excited to manually update rate limits or review false positives in an IDS. These are prime candidates for AI to own entirely, with humans just overseeing or approving changes.
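The “AI owns it, humans approve it” pattern for something like rate limits could look roughly like this. The percentile heuristic and the `approve` hook are assumptions for illustration, not how any particular WAF works.

```python
# Sketch of AI-owned maintenance with a human approval gate: a rate limit
# proposed from observed traffic, applied only after sign-off.
import math

def propose_rate_limit(requests_per_minute: list[int], headroom: float = 1.5) -> int:
    """Propose a limit at the 99th-percentile observed rate, plus headroom."""
    ordered = sorted(requests_per_minute)
    p99 = ordered[min(len(ordered) - 1, math.ceil(0.99 * len(ordered)) - 1)]
    return math.ceil(p99 * headroom)

def apply_if_approved(current: int, proposed: int, approve) -> int:
    """Humans stay in the loop: only apply the change when `approve` says yes."""
    return proposed if proposed != current and approve(current, proposed) else current

observed = [40, 55, 48, 60, 52, 47, 58, 300]  # steady traffic plus one burst
proposed = propose_rate_limit(observed)
# `approve` here is a stand-in for a Slack prompt or change-management step.
new_limit = apply_if_approved(current=100, proposed=proposed,
                              approve=lambda old, new: new < 1000)
```

The interesting design question is where the approval gate sits: low-risk changes could auto-apply, while anything touching production-critical policies routes to a human.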
Semgrep’s approach is instructive. It already uses AI to triage findings and suppress false positives, something that would normally fall to an application security engineer. If they continue down this path, we may see them offer automated patch suggestions, dependency upgrades, or even refactoring proposals. That’s not just productivity; it’s redefining what application security means in practice.
A new crop of AI-managed services
We’re already seeing hints of this model beyond AppSec.
CrowdStrike continues to lead with Falcon Complete, and I can easily see AI being used not just to handle alerts, but also to auto-tune agent policies and generate exceptions — tasks that today are often done manually on developer machines.
Tanium is pushing into autonomous endpoint management. I’m not sure how deep their AI stack goes yet, since I haven’t seen a demo or tried their most recent product offerings, but the branding suggests a future where these platforms run without daily human babysitting.
Okta has opportunities, too. Group management, SCIM token handling, and app provisioning are areas where security teams burn countless cycles. An AI-managed layer here could offload routine access management while surfacing anomalies for human review.
In each of these cases, what we’re really talking about is a new kind of managed service, not staffed with SOC analysts in a NOC, but orchestrated by AI behind the scenes. These are services that scale with your company without a corresponding increase in headcount.
What changes next
This trend has two major implications:
New product categories will emerge to wrap existing tools in an AI-managed service layer. Think of tools like Zscaler or Cloudflare, which are widely deployed but still require a lot of manual tuning and operational staff in most cases. Adding an AI co-pilot to handle policy evolution or rule maintenance could be a valuable upsell.
The tools that nail this AI-native managed layer will win. Security buyers are already used to paying for managed services, where quality varies depending on the provider you choose. If AI lets you offer the same thing faster, cheaper, and more consistently, it becomes a no-brainer.
This model also has softer benefits. AI-managed services reduce toil and cognitive overhead. They allow security engineers to focus on the parts of the job that actually move the needle: designing new detections, threat modeling critical systems, and collaborating with product teams. These are the areas where security has the most leverage. Let’s get back to that.
Final thoughts
I’ve always been bullish on AI, especially in security. But I think we’re just now starting to see its real impact, not in the flashy demos, but in the plumbing. In the work that security teams have always had to do, but never wanted to.
I do recognize that this will have ripple effects, including on jobs. Pramod Gosavi recently shared some thoughtful commentary on how AI might reshape staffing models and budgets as well as business structures. Those conversations are important.
But I’m optimistic. Unlike software engineering, where layoffs and flattening orgs are becoming common, security still faces a talent shortage. There’s too much work and not enough trained people. AI doesn’t threaten that dynamic. It augments it. It helps security teams do more with less and spend more of their time on what matters.
This shift won’t happen overnight. But the tools are coming. And when they arrive, they won’t look like GPT wrappers. They’ll look like smarter, more helpful versions of the tools we already use. And we’ll call them “managed” services, even if they’re mostly managed by machines.