Third-party risk management needs to change
The current procurement process no longer makes sense in the AI era.
Disclaimer: Opinions expressed are solely my own and do not express the views or opinions of my employer or any other entities with which I am affiliated.

I was going to write about another security topic until the OpenAI security disclosure about the Mixpanel breach landed in my inbox. The incident was pretty straightforward: an attacker breached Mixpanel and got hold of OpenAI customer data, which I assume OpenAI willingly gave to the vendor. OpenAI investigated, decided to stop using Mixpanel, and is now investigating all its other vendors more closely.
This brings up the whole question of the effectiveness of Third-Party Risk Management (TPRM) programs. There’s a lot to unpack. A good place to start is to look at how a vendor like Mixpanel likely ended up at OpenAI, and how this process no longer makes sense in the current state of high-velocity engineering and AI-enabled startups.
The procurement process is designed for a bygone era
Every vendor at a sizable company starts with a lengthy sales call to scope out a proof-of-concept (PoC) to determine if the vendor is a good fit for the company. This usually involves multiple stakeholders, including the business user (in the case of Mixpanel, it’s likely marketing or growth) and other regular stakeholders, who have to help manage parts of the software after it’s integrated. This typically includes Finance, IT, and Security, because it’s not feasible to have the business owner figure out pricing, IT integration effort/management, and security risk, respectively.
I describe my frustration with this drawn-out initial process, especially from the vendor side, in this heartfelt piece on vendor engagement.
Anyway, it ends up being this long process that requires gathering a bunch of information from these various stakeholders and having an executive sponsor. Finally, with all this data, the executive and the business owner make a call because ultimately, they are paying for the software and “own” it. I say “own” in quotes because ownership is usually poorly defined, leading to accountability issues down the line. That’s only one part of the problem.
The problem of leverage and context
How does security come into play? Security plays the same role as the other stakeholders: it provides a necessary data point, in this case the “riskiness” of the software, so that the executive and business owner can make the best decision for the company. This sounds great in theory, but in reality it’s messy and full of nuance, creating significant organizational friction. This is why procurement periods are so long and sales cycles at big companies take a while. On top of that, every company handles procurement and risk management differently. That’s another part of the problem.
In practice, security doesn’t really understand the vendor software’s actual use case, and the business owner doesn’t quite understand the implications of the security risks. Sure, the business owner is ultimately accountable, but security usually has to deal with the consequences if there’s an incident, as seen in the OpenAI case. It is rare that a vendor is terminated purely because of its security posture or a security incident. We’ve seen this especially with large, business-critical vendors like Okta and Workday that companies have kept around despite security issues.
I find it unfair that security is asked to assess a product whose use case it likely doesn’t understand. Similarly, most security people aren’t super technical, so it’s hard for them to fully understand the vendor’s architecture and its implications on an internal system, which they likely also don’t understand because the engineering team built it. They can always bring in other experts, but this drags the process on and adds more stakeholders. On top of that, most of these products, such as Salesforce, are complex, and the use cases are likely to evolve in ways that aren’t captured in the initial security review.
Most organizations lack the leverage to get an established vendor like AWS, Salesforce, or Snowflake to change its security practices. Even large companies rarely have that leverage. There are few viable alternatives to these products, and where alternatives exist, the switching costs are likely higher than the damage a security risk would cause. Even small and medium-sized startups rarely make changes unless huge contracts are at stake. Even then, in my opinion, it’s not in a vendor’s best interest to tailor its practices to individual customers that won’t make an exception. This is another fundamental problem.
The illusion of compliance
Finally, a big part of security risk management is making sure the vendor has the proper controls. This is typically done through SOC 2, ISO, and an assortment of other security assessments. However, with compliance automation software like Vanta and Drata, these assessments have become easier to obtain, and they only check for the presence of a control, not its operational quality. Getting a SOC 2 is so simple now that if a startup doesn’t have one, it’s a red flag: it means they haven’t bothered to do even the security basics. The certification itself has become regulatory theater, an activity that offers the illusion of security without delivering actual risk reduction. This aligns with my perspective that most security tools are too theoretical and focus on abstract risk instead of concrete effectiveness.
Now, I’ll move on to my larger frustration with the procurement process. As I described above, it’s a long process that involves many parties. It’s no surprise that shadow IT is a problem. IT and security believe shadow IT to be a problem, but they have no control over the fundamental reason: procurement is outdated and takes too long. However, they are also part of the problem.
The velocity mismatch
Let’s take a step back. The procurement process theoretically makes sense, but it’s designed for the waterfall world, where someone could spend three months procuring a product and then another three months deploying it. This made sense when product cycles were long.
Then cloud and agile came along, drastically shortening product cycles. SaaS made it easy to trial a product because it required little deployment and distribution effort, and easy to expand into other parts of the organization. This reduced the overall time of procurement, but certain parts, such as the financial, legal, and security reviews, didn’t shorten substantially. Products became more complex, and rightfully so, because they needed to be “sticky.” It was too easy to switch products, so instead of building a good product experience, companies packed their products with features that would make it hard to move to another vendor. This led to large requirement documents at every renewal, which brought us back to the original procurement process. In some ways, we’ve regressed.
What’s worse is that we are applying this type of procurement in the AI world, where products move fast and change overnight. The biggest difference is that teams are leaner now, so they can make decisions as fast as they can change the product. Product and engineering velocity is unprecedented, but procurement and security assessments have stayed the same. What’s happening is that individuals are signing up for self-service trials, because by the time a normal procurement cycle is done, they might not need the product anymore or will have found an alternative. Or even worse, they would have just built it! This directly contributes to the security industry’s effectiveness problem.
In a lot of these “at scale” AI companies with large amounts of revenue, the procurement process is almost nonexistent. Companies like Ramp have made it easy to track and process these vendors if need be. These companies prioritize moving fast. Sure, this means there will likely be a lot of vendors, but it’s more important to try and decide fast rather than getting stuck in processes.
A path forward: monitoring over auditing
What does this mean for vendors and for security? There are a few trends that need to accelerate:
First, software companies have to make an active decision if they want to match the selling motion of fast-moving AI companies, which have a lot of potential for upside. This involves shorter sales cycles with more self-service and trial. Unfortunately, many current products aren’t set up to work like this.
Next, security has to figure out how to be an enabler in fast-moving companies. If they don’t, they will likely fall behind. This means that they have to revamp how third-party risk management works and adapt it to the new world. Of course, vendors need to make this easier, but what is currently happening doesn’t make sense at the product development velocity.
I don’t have the ultimate answer here, but it seems that the most effective way is to focus on monitoring systems rather than assessing risk upfront. This means:
Do a light, short assessment upfront, focusing only on deal-breaker risks and providing quick guardrails.
Shift the majority of security’s effort to continuous monitoring for issues.
Focus on ways security and engineering can mitigate risk directly, through their own controls and tooling, rather than relying solely on the vendor’s attested compliance.
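To make the shift concrete, here is a minimal sketch of what "light triage upfront, continuous monitoring after" could look like in code. Everything here is hypothetical: the signal fields, the deal-breaker list, and the feed format are illustrative placeholders, not the schema of any real TPRM product.

```python
from dataclasses import dataclass, field

# Hypothetical deal-breaker findings: the only things that block onboarding.
DEAL_BREAKERS = {"plaintext_credentials", "no_encryption_at_rest"}

@dataclass
class VendorSignal:
    name: str
    data_categories: set          # e.g. {"email", "usage_events", "pii"}
    findings: set = field(default_factory=set)  # issues surfaced by monitoring feeds
    sso_enforced: bool = True

def triage(signal: VendorSignal) -> str:
    """Light upfront check: block only on deal-breakers, flag narrow risks,
    and default everything else to onboarding with continuous monitoring."""
    if signal.findings & DEAL_BREAKERS:
        return "block"
    if "pii" in signal.data_categories and not signal.sso_enforced:
        return "review"
    return "monitor"

def monitor(vendors: list, feed: dict) -> list:
    """Continuous loop: fold new findings from a monitoring feed into each
    vendor's state and re-triage, alerting only when the decision changes."""
    alerts = []
    for v in vendors:
        v.findings |= feed.get(v.name, set())
        decision = triage(v)
        if decision != "monitor":
            alerts.append((v.name, decision))
    return alerts

# An analytics vendor sails through the light upfront check...
analytics = VendorSignal("analytics-tool", {"email", "usage_events"})
print(triage(analytics))  # → monitor

# ...but a later monitoring finding escalates it immediately.
print(monitor([analytics], {"analytics-tool": {"plaintext_credentials"}}))
# → [('analytics-tool', 'block')]
```

The design choice this illustrates is the inversion the post argues for: the expensive judgment happens continuously as evidence arrives, not once in a months-long upfront review.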
A lot needs to change in the way we handle procurement, especially third-party risk management. Currently, it feels outdated and, honestly, incredibly inefficient. It doesn’t match what the industry needs right now. It’s clear that things are going to change, and security shouldn’t try to slow the change but instead adapt to it.



