Disclaimer: Opinions expressed are solely my own and do not express the views or opinions of my employer or any other entities with which I am affiliated.

As many of you know, I’ve been writing extensively about the different ways AI will change cybersecurity. I believe AI will be the most impactful technology over the next five years (if not longer). It’s become so critical to business success that security teams no longer have the option to sit on the sidelines. We have to enable it.
Personally, I’ve been lucky to have a strong partnership with the AI/ML expert at my company. Our ongoing conversations haven’t just taught me more about AI — they’ve helped me navigate the nuanced balance between enabling innovation and managing risk.
But I realize this dynamic isn’t the norm everywhere. At many companies, security and AI/ML teams are either at odds or don’t talk much at all. That’s a missed opportunity. Just as we’ve learned how to work more effectively with software engineers, security needs to build a similar bridge with AI/ML teams.
I’ve written about how to be a security person that engineers don’t hate, and while some of those lessons carry over, working with AI/ML introduces new dynamics worth addressing on their own.
Learn the AI/ML lifecycle and figure out where security fits in
Let’s start with the obvious: AI/ML teams are slammed. Just like security teams, they’re facing increased ownership and visibility. Many of them went from being research or infrastructure support functions to becoming strategic centers of gravity for the company. Suddenly, every product has an AI roadmap. That wasn’t true even two years ago.
They’re under pressure from the boardroom to the individual contributor level — all while operating with lean teams and limited bandwidth. Meanwhile, the AI talent market is ultra-competitive. In some cases, big companies are paying their AI researchers to do nothing just to keep them from going to rivals.
What does this mean for security?
It means we shouldn’t be one more team adding to their cognitive load.
Security needs to put in the work to understand the AI/ML lifecycle — from model development and training to deployment, prompt tuning, and model updates. Even better: learn how your company is actually using AI. Are teams calling OpenAI and Anthropic APIs? Are they fine-tuning open-source models? Hosting their own infrastructure? Understanding the stack is the first step to being useful.
For example:
Model development is often exploratory: new code, new datasets, and unknown dependencies. It's a good place to start with secure defaults.
Training and fine-tuning involve large-scale access to data that is often sensitive, poorly governed, or copied around informally.
Deployment usually lacks the maturity of traditional CI/CD and may involve user inputs being fed into models with little validation (see the sketch after this list).
Each stage introduces unique risks, but also unique opportunities for security to help.
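To make the deployment point concrete, here is a minimal sketch of pre-inference input validation. Everything in it is illustrative rather than prescriptive: call_model() is a hypothetical stand-in for whatever API or self-hosted model your teams actually use, and the patterns are placeholders for a proper detection library or DLP service.

```python
# A minimal sketch of pre-inference input validation. call_model() is a
# hypothetical stand-in for your actual API or self-hosted model, and the
# patterns below are placeholders for a real detection library or DLP service.
import re

MAX_PROMPT_CHARS = 8_000  # assumption: tune to your context window and cost budget

SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),    # US-SSN-shaped strings
    re.compile(r"\b(?:\d[ -]*?){13,16}\b"),  # credit-card-shaped digit runs
]

def call_model(prompt: str) -> str:
    # Placeholder: swap in your real OpenAI/Anthropic/self-hosted call.
    return f"(model response to {len(prompt)} characters of input)"

def validate_user_input(prompt: str) -> str:
    """Reject oversized or obviously sensitive input before it reaches the model."""
    if len(prompt) > MAX_PROMPT_CHARS:
        raise ValueError("prompt exceeds maximum allowed length")
    for pattern in SENSITIVE_PATTERNS:
        if pattern.search(prompt):
            raise ValueError("prompt appears to contain sensitive data")
    return prompt.strip()

def answer(prompt: str) -> str:
    return call_model(validate_user_input(prompt))

print(answer("Summarize the release notes for our internal tool."))
```

The specifics will differ by stack; the point is that a small, reusable check like this is something security can hand over as a default rather than a demand.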
Tactical advice: Get involved during the deployment of an AI-powered feature. Sit in on model reviews. Run a product security review, but focus on understanding their workflows. You’ll walk away with clearer mental models, and they’ll feel supported rather than policed.
Help them with compliance without sounding like compliance
Most AI/ML practitioners haven’t had to work closely with legal, compliance, or risk teams — until now. As their work becomes more visible, so does the scrutiny: from regulators, internal audit teams, and eventually customers.
This is a great place for security to help.
Security folks are often more experienced in navigating ambiguous policies and regulatory frameworks. We’ve worked through SOC 2, GDPR, HIPAA, and FedRAMP, and we’ve seen how to make these requirements actionable. We’ve built logging and auditing systems. We’ve stood up review processes. We know how to implement controls without killing velocity.
We can bring that experience to AI teams.
This doesn’t mean taking over their work. It means co-owning a solution. For example, help them figure out how to log model inputs/outputs in a way that supports auditability without slowing them down. Help draft threat models for AI features so they’re defensible to legal without overengineering.
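As one illustration of what that logging could look like, here is a minimal sketch, assuming a hypothetical generate() wrapper around the team’s real model client. The idea is to capture enough structured metadata (request ID, model, hashed inputs and outputs, latency) to answer an auditor’s questions without storing raw prompts or adding friction.

```python
# A minimal sketch of audit logging around model calls. generate() is a
# hypothetical stand-in for the team's real model client; hashing is one
# option among several (truncation, full retention) to agree on with legal.
import hashlib
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

def _fingerprint(text: str) -> str:
    """Hash instead of storing raw text when the content may be sensitive."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()[:16]

def generate(prompt: str) -> str:
    # Placeholder: swap in the real OpenAI/Anthropic/self-hosted call.
    return "model output goes here"

def audited_generate(prompt: str, model: str, user_id: str) -> str:
    start = time.time()
    output = generate(prompt)
    audit_log.info(json.dumps({
        "request_id": str(uuid.uuid4()),
        "user_id": user_id,
        "model": model,
        "prompt_hash": _fingerprint(prompt),   # auditable without retaining raw prompts
        "output_hash": _fingerprint(output),
        "latency_ms": round((time.time() - start) * 1000),
    }))
    return output

audited_generate("Summarize this ticket...", model="team-default", user_id="u-123")
```

Whether you hash, truncate, or retain full payloads is a decision to make with legal and the AI team; what matters is that the hook exists from day one.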
You’re not “enforcing policy.” You’re making it easier for them to ship responsibly.
Build responsible AI and better security will come
Security teams and AI/ML teams both care about the same thing: building trustworthy, high-integrity systems. But while security worries about breaches and misconfigurations, AI/ML teams are thinking about model drift, hallucinations, and bias. These might seem like different problems, but they all erode user trust.
That’s why we need to think of “responsible AI” as part of our shared responsibility.
But there’s a catch: with limited resourcing and lots of experimentation, responsibility can slip through the cracks, especially in the maintenance phase.
For instance, many teams are racing to integrate AI into the product, but few are thinking about how to maintain it. Who updates the prompt when models change? Who tracks cost spikes from bad model selection? Who’s testing for regressions when GPT-4 becomes GPT-4.5?
This is where security can help.
Managing prompt logic, model upgrades, and drift is surprisingly similar to managing software dependencies. It’s “invisible work,” and often no one owns it. Security teams can help formalize those processes and ensure they don’t introduce downstream risks.
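As a sketch of what “prompts as dependencies” might look like in practice (the registry, golden cases, and generate() stub below are all hypothetical), pin the exact model version next to the prompt that depends on it, and keep a small set of expected behaviors to re-run whenever either one changes.

```python
# A minimal sketch of treating prompts and model pins like dependencies.
# The registry, golden cases, and generate() stub are hypothetical examples.
PROMPTS = {
    "summarize_ticket_v3": {
        "model": "gpt-4o-2024-08-06",  # pin an exact version, not just "gpt-4"
        "template": "Summarize this support ticket in two sentences:\n{ticket}",
        "owner": "ml-platform",
    },
}

# Tiny regression suite to re-run whenever the model or the prompt changes.
GOLDEN_CASES = [
    {"ticket": "Customer cannot reset their password.", "must_contain": "password"},
]

def generate(model: str, prompt: str) -> str:
    # Placeholder: swap in the real model call.
    return "The customer is unable to reset their password."

def run_regression(prompt_name: str) -> bool:
    spec = PROMPTS[prompt_name]
    for case in GOLDEN_CASES:
        output = generate(spec["model"], spec["template"].format(ticket=case["ticket"]))
        if case["must_contain"].lower() not in output.lower():
            print(f"regression in {prompt_name}: expected '{case['must_contain']}'")
            return False
    return True

print("regression passed:", run_regression("summarize_ticket_v3"))
```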
Another opportunity is data governance. Most AI/ML practitioners understand that “garbage in, garbage out” is real. But in a fast-moving environment, they might not know where sensitive data lives, who can access it, or whether shadow data pipelines exist.
Security teams can provide clarity here. Help inventory where the data is being pulled from. Build access controls around that usage. This not only improves the quality of their models, but it also reduces the risk of exposure, leakage, or misuse.
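A minimal sketch of what that inventory work can look like, assuming a hypothetical approved-source registry and pipeline config: the check simply flags any data source a training or fine-tuning job pulls from that nobody has reviewed.

```python
# A minimal sketch of a data-source inventory check for training pipelines.
# The approved sources and the example pipeline below are hypothetical.
APPROVED_SOURCES = {
    "s3://analytics-curated/events": {"sensitivity": "internal", "owner": "data-platform"},
    "s3://support-tickets-redacted/": {"sensitivity": "confidential", "owner": "support-eng"},
}

def check_pipeline_sources(pipeline_name: str, sources: list[str]) -> list[str]:
    """Flag any data source a training or fine-tuning job uses that isn't inventoried."""
    unapproved = [s for s in sources if s not in APPROVED_SOURCES]
    for source in unapproved:
        print(f"[{pipeline_name}] unapproved data source: {source}")
    return unapproved

# Example: a fine-tuning job pulling from one approved source and one unknown scratch bucket.
check_pipeline_sources(
    "ticket-summarizer-finetune",
    ["s3://support-tickets-redacted/", "s3://tmp-scratch-bucket/export.csv"],
)
```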
Responsible AI isn’t just their job. It’s a shared goal.
Support their experimentation, don’t shut it down
By definition, AI/ML work is experimental. New models, new data, new toolchains — it’s a moving target. Security teams must recognize that the pace of innovation is fast, and perfection isn’t the goal.
We need to resist the instinct to say “no.” Instead, offer secure defaults:
Create sandbox environments where experimentation is safe.
Offer vetted tools and libraries with built-in guardrails.
Maintain a list of approved models, APIs, and datasets along with documented risks.
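That last item can be as lightweight as a registry teams can query before wiring a model into a product. The model names, risk notes, and is_approved() helper below are hypothetical; the point is that “approved” carries documented risks and data-classification limits, not just a name.

```python
# A minimal sketch of an approved-model registry with documented risks.
# Model names, risk notes, and the is_approved() helper are hypothetical.
APPROVED_MODELS = {
    "vendor-chat-large": {
        "use_cases": ["internal summarization", "code review assist"],
        "known_risks": "May hallucinate citations; not for customer-facing legal text.",
        "data_allowed": "internal",  # customer PII requires a separate review
    },
    "in-house-embedding-v2": {
        "use_cases": ["semantic search over public docs"],
        "known_risks": "Trained only on English; quality drops on other languages.",
        "data_allowed": "public",
    },
}

SENSITIVITY_LEVELS = ["public", "internal", "confidential"]

def is_approved(model: str, data_classification: str) -> bool:
    """Allow a model only for data at or below its approved sensitivity level."""
    entry = APPROVED_MODELS.get(model)
    if entry is None:
        return False
    return SENSITIVITY_LEVELS.index(data_classification) <= SENSITIVITY_LEVELS.index(entry["data_allowed"])

print(is_approved("vendor-chat-large", "internal"))      # True
print(is_approved("vendor-chat-large", "confidential"))  # False
print(is_approved("unvetted-model", "public"))           # False
```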
AI/ML teams don’t want their work to fail in production. They don’t want hallucinations, sensitive data leaks, or adversarial inputs. They just need help designing systems that give them freedom with guardrails.
Frame your involvement not as oversight, but as enablement.
Build informal trust, not just formal processes
Everything above is tactical, but none of it works without trust.
AI/ML teams need to believe that security isn’t just another gatekeeper. They need to see you as a partner — someone who understands their pressures, speaks their language, and is willing to roll up their sleeves when things get messy.
Some simple ways to build informal trust:
Attend their standups occasionally, not to report, but to listen.
Join internal Slack channels where AI experimentation is discussed.
Sit in on model reviews and design sessions, especially early in the process.
Security shouldn’t wait to be invited. We need to embed ourselves.
Once you’ve built informal relationships, the formal processes get easier. People start asking for your opinion before launching something risky. You become a sounding board, not an auditor.
Final Thoughts and Takeaway
These aren’t one-size-fits-all suggestions. Every organization is different. But security has years of experience in areas that AI/ML teams are just now encountering: incident response, secure software pipelines, access control, threat modeling, governance.
Rather than waiting for policy or mandates, we should lean in and offer help.
Security teams can be the quiet infrastructure behind responsible AI, not by getting in the way, but by building the bridges that make it easier to move fast and stay safe.
AI/ML teams are under pressure to move fast, experiment, and deliver. Security teams are under pressure to manage risk, ensure compliance, and protect the business. These goals aren’t at odds, but the relationship can feel strained without intentional effort. Security doesn’t need to slow down AI. We can be the partner that helps it scale safely. That starts with empathy, shared goals, and showing up early, not just to assess risk, but to share the load.