Thoughts on the MOVEit hack
Third-party risk and vulnerability management are back in the spotlight
Disclaimer: Opinions expressed are solely my own and do not express the views or opinions of my employer or any other entities with which I am affiliated.
I finally got around to improving my “About” page! Go check it out. It has more information about my newsletter, content, and myself. Also, there is more information to help expense a paid subscription as part of your professional development budget.
The major cybersecurity news this week is the MOVEit vulnerability. Initially, it affected US banks and universities, but as the week progressed, it was revealed that multiple federal agencies were affected, with hackers listing more victims by the day. A previously little-known piece of software has come into the spotlight, and it has implications for cybersecurity and the broader software community.
What is MOVEit?
MOVEit is managed file transfer software owned by Progress. Like most modern software, it comes in a cloud version, i.e. Progress hosts MOVEit on their cloud, and a version you can host yourself. Interestingly, Progress didn’t originally develop MOVEit. It was released in 2002 by a company called Standard Networks. Ipswitch acquired Standard Networks in 2008, and the cloud version of MOVEit came out in 2012. Then in 2019, Progress acquired Ipswitch. Progress owning MOVEit is the result of multiple acquisitions, and it’s not clear how much of the original team is still around. As acquisitions happen, engineers leave, and context is naturally lost. Overall, MOVEit seems to be an old piece of software that has changed hands multiple times.
What happened?
You can follow the timeline in their official disclosure. On May 31, 2023, Progress disclosed a SQL injection vulnerability that allowed external actors to access MOVEit file transfer transactions. It affected all versions and could be exploited over HTTP or HTTPS. More details on the CVE can be found here. What typically happens in these situations is that once a zero-day vulnerability, i.e. a previously unknown vulnerability, is discovered, researchers uncover additional vulnerabilities, which is what happened here. This led to an advisory to shut down all MOVEit HTTP/HTTPS traffic, which meant Progress also had to shut down HTTP/HTTPS traffic to their cloud as they deployed the patch.
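The disclosure doesn’t include the exact payload, but the class of bug is easy to illustrate. Here is a minimal, generic sketch of SQL injection using Python’s built-in sqlite3; the table, data, and payload are invented for illustration and are not from the actual MOVEit exploit:

```python
import sqlite3

# A toy "file transfer" table (hypothetical, for illustration only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transfers (id INTEGER, owner TEXT, filename TEXT)")
conn.execute("INSERT INTO transfers VALUES (1, 'alice', 'report.pdf')")
conn.execute("INSERT INTO transfers VALUES (2, 'bob', 'secrets.txt')")

def list_files_vulnerable(owner: str) -> list:
    # Vulnerable: user input is concatenated directly into the SQL string,
    # so a crafted value can rewrite the WHERE clause.
    query = f"SELECT filename FROM transfers WHERE owner = '{owner}'"
    return [row[0] for row in conn.execute(query)]

def list_files_safe(owner: str) -> list:
    # Safe: a parameterized query treats the input as data, never as SQL.
    rows = conn.execute(
        "SELECT filename FROM transfers WHERE owner = ?", (owner,))
    return [row[0] for row in rows]

# The classic payload turns the filter into a tautology ('1'='1'),
# so the vulnerable version leaks every row in the table.
payload = "alice' OR '1'='1"
```

With that payload, `list_files_vulnerable` returns every file in the table, while `list_files_safe` returns nothing because no owner is literally named `alice' OR '1'='1`.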
The scramble happened because hackers disclosed this vulnerability, so it wasn’t a responsible disclosure. This means the security team didn’t have a “quiet” period to create a patch before disclosure. As a result, the security team was scrambling to generate a patch while exploitation was potentially ongoing, leading them to ask customers to shut down traffic and deploy mitigations.
My initial thoughts
To start, I feel for the security team at Progress. Nothing causes more stress than a non-responsible disclosure. They did a great job asking people to shut down traffic to avoid exploitation because it was the only safe action. They moved fast and hired an external investigation firm. In cases of non-responsible disclosure, there are very few options because the public and the Progress security team learn about the vulnerability at the same time, and it takes time to create a working patch. I don’t know the details, but I wonder if it was worth spending time creating a web application firewall (WAF) rule that detected and blocked this vulnerability so that some traffic could continue. My guess is that 1. they decided that security team time was best spent focusing on the patch and/or 2. such a WAF rule was hard to create given the nature of the exploit. The troubling issue is that this has been going on since 2019, so I’m not sure we will ever know the full extent of this attack.
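To make the WAF idea concrete, here is a naive signature-based filter sketched in Python. The patterns below are generic SQL injection indicators I invented for illustration; the real exploit’s signature wasn’t public, which is exactly what makes writing such a rule hard:

```python
import re

# Hypothetical stopgap WAF rule: block requests whose parameters match
# common SQL injection indicators. These patterns are illustrative only,
# not the signature of the actual MOVEit exploit.
SQLI_PATTERNS = [
    re.compile(r"('|%27)\s*(or|and)\s+", re.IGNORECASE),   # ' OR ... / ' AND ...
    re.compile(r"union\s+select", re.IGNORECASE),          # UNION SELECT probes
    re.compile(r";\s*(drop|delete|insert|update)\s", re.IGNORECASE),  # stacked queries
]

def looks_like_sqli(value: str) -> bool:
    # True if any known-bad pattern appears in the parameter value.
    return any(p.search(value) for p in SQLI_PATTERNS)

def should_block(params: dict) -> bool:
    # Block the whole request if any parameter looks like an injection attempt.
    return any(looks_like_sqli(v) for v in params.values())
```

The sketch also shows the fragility: a legitimate value like `employees' or contractors' data` trips the first pattern (a false positive), and attackers can often encode their way around fixed signatures (false negatives). That tradeoff may be why shutting down traffic while patching was the safer call.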
Overall, great job to the Progress security team on working with external researchers and communicating.
Implications for security and software
There are a few things to unpack here. As those who know me have heard, this type of hack is uncommon: most hacks happen because of poor access control, while this one was the result of a vulnerability. However, other companies were breached because they were using MOVEit or one of their vendors was. In my mind, this is not a vulnerability management issue. In fact, even for Progress, since this was a zero-day, it isn’t a vulnerability management issue because typical vulnerability management programs deal with known vulnerabilities.
However, given Progress’s customer base, it’s possible that they needed to invest more into their application and infrastructure security, but it’s difficult to uncover these types of issues through penetration tests and code reviews. This gets into the other issue around the software age and acquisitions.
One interesting fact is that it seems that hackers have been testing this exploit since 2019, which is when Progress acquired Ipswitch, which owned MOVEit at the time. It could be that something might have been lost in the mix during the M&A and integration period. My opinion is that security doesn’t have great standardized practices around M&A and integration.
Another issue is the software’s age. MOVEit has been around for over two decades, so there must be a ton of legacy code. It’s not clear to me how we understand, assess, and manage risk in this software, especially since it has changed hands so many times. I don’t know how many of the developers, security operations, and product security engineers Progress retained after the acquisition. However, material security risk is definitely introduced as a result of an acquisition. The security community is aware of this, but it’s hard to measure and rarely discussed because there are relatively few data points. Many of these issues are folded into third-party risk management, which segues into what I believe is the biggest security issue worth discussing for this hack.
Third-party risk management needs to change
To be honest, third-party risk management has always been a somewhat boring topic in security despite being important. It has grown in importance because, compared to the past, more companies are buying SaaS, so they have much less visibility into the software and its traffic. This is good for companies because it takes some of the burden of managing the software off their IT and infrastructure teams, but the tradeoff is that they need to place more trust in the third party/vendor.
Vendors and sometimes even security teams see third-party risk management as just that — risk management. They see it as a requirement for compliance and a series of checkboxes that need to be met. The industry has gone through various iterations, and many companies regularly stand up a version of the program only to eliminate it or fold it into something else. In my opinion, the reason for this is simple. Business needs for a vendor tend to trump the security risks that might exist. As a result, the industry is stuck in a CYA mentality with long questionnaires that one side fills out and the other side doesn’t want to read but sends out in an obligatory manner to keep the vendor “honest” and to create a baseline of corporate security.
Unfortunately, this is a sad state of affairs, partially because the industry sees third-party risk management as a cost rather than a benefit. The MOVEit hack shows that we need to be more vigilant. In fact, especially for software that handles sensitive data, evaluating the security roadmap and alignment should be just as important as evaluating the product roadmap. It’s hard to measure how much a company “cares” about or “aligns” with your values on security through a questionnaire, just as it’s hard to figure out whether a vendor works for you without a PoC.
Security teams should have third-party risk management programs run by strong security leaders to determine whether a vendor has a security culture you are willing to accept. That should be an important consideration. For example, are you OK if a vendor’s security team is mostly compliance people with no dedicated software engineers working on security every day, or do you want a dedicated security engineering team? Will you pay for a product that has a dedicated security team? The answer varies based on the product.
There are numerous ways to achieve this, and honestly, I don’t know the best way. Do vendors need to have security product managers? Do we need to start considering more product- or engineering-focused security leaders? Do we need more security engineers (you know my answer to this!)? The third-party risk management program needs to evaluate whether there is confidence in the vendor’s security program rather than just making sure they answer security questionnaires, which have been heavily gamed.
Takeaway
The MOVEit hack has affected many companies and brought to light many issues surrounding software age, responsible disclosure, and third-party risk management. Overall, the Progress team seems to be doing a good job. However, security needs to change the goal of third-party risk management. It needs to be less about questionnaires and more about figuring out whether you are comfortable with the vendor’s security roadmap and culture. This might require changes in security leadership and organization at both the customer and the vendor.