How AI changes open-source (and its security)
We will use open-source less, which changes the security calculus
Disclaimer: Opinions expressed are solely my own and do not express the views or opinions of my employer or any other entities with which I am affiliated.

This weekend, as I considered topics for my newsletter, AI and its implications for both security and software engineering were top of mind. Those who follow my blog will know my focus so far has mainly been on AI’s cybersecurity impacts: how AI can improve or change cybersecurity strategy or operations.
However, I’ve come to realize that transformative technologies rarely stop at incremental operational shifts.
Let’s take the cloud as an example. It significantly changed infrastructure management: it shifted paradigms from traditional IT to DevOps, from on-premises software to SaaS, and from waterfall to agile processes. These changes shortened development cycles dramatically, allowing new releases within days instead of months. As a result, security had to adapt quickly. The cloud made security harder and easier at the same time: more rapid releases increased the risk of security bugs, yet also made it much easier and faster to deploy security patches.
Downstream effects of AI on software development
As I was thinking about this example, StepSecurity recently reported a compromise of the popular GitHub Action tj-actions/changed-files, which turned out to be a symptom of a supply chain attack on reviewdog/action-setup; the attackers repointed the action’s version tags at a malicious commit that leaked CI secrets into build logs. This made me think: we are catching these problems through research and scanning, but could this have been avoided if we had developed the GitHub Action internally?
Traditional software engineering wisdom discourages this thought. Why spend time and resources on non-strategic software components that someone else is writing and maintaining, effectively for free? Well… this compromise shows why. Although we are starting to think about these risks more, they are honestly still ignored in favor of advancing business-critical software projects. It’s hard to cast blame on leaders at these companies: they face increased competition that forces them to be more efficient and creative, which naturally introduces more risk.
The rise and risks of open source
Over the past decade, rapid release cycles and increasing pressure to deliver new features have pushed companies and software developers toward open-source solutions, especially given limited engineering resources. GitHub reported in 2022 that open-source code underpins around 90% of global software, and the number of open-source developers has grown dramatically, from 2.8 million to 94 million, roughly a 30x increase!
However, many open-source maintainers volunteer their time without significant financial incentives or support. Some companies can afford to pay open-source developers to work on projects because doing so aligns with their strategic goals. This is especially common for open-source projects with an enterprise version, where improving the open-source project provides valuable context, leading to a competitive advantage and community support, e.g. Chromium, HashiCorp, Confluent, dbt Labs, Semgrep, and Databricks.
In the best cases, companies directly support open-source projects as a way to give back to the broader community; the Go programming language is a good example. But most open-source projects don’t have anywhere near that level of support. Developers volunteer to maintain them and aren’t paid or financially incentivized to make improvements. In many cases, only a few people maintain a project, with no clear succession plan. Funding usually comes from donations, which are rarely enough to sustain the work. Notably, the heavily used OpenSSL project received only about $2,000 a year in donations prior to the Heartbleed vulnerability.
Foundations such as the Core Infrastructure Initiative have emerged to invest in security-critical open-source projects, and longer-standing foundations such as the CNCF and Apache also help, but they can’t keep pace with the growing number of open-source projects and the demand placed on them.
With more open-source adoption, there are inherently going to be more vulnerabilities, and increased usage leads to greater overall risk. Heartbleed, Shellshock, and Log4Shell are examples of how much damage a vulnerability in a widely used project can cause. Even outside the security realm, we can see how fragile open-source projects can be: a developer unpublished left-pad, a simple 11-line npm package, and broke builds across much of the web.
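To appreciate just how small that dependency was, here is a minimal TypeScript sketch of the same functionality. This is my own approximation rather than the original package’s code (left-pad was plain JavaScript and handled a few more edge cases), but the core logic really was about this size:

```typescript
// Approximate reimplementation of left-pad: prepend a fill character
// until the string reaches the target length.
function leftPad(str: string, len: number, ch: string = " "): string {
  let padded = String(str);
  while (padded.length < len) {
    padded = ch + padded; // prepend the fill character
  }
  return padded;
}

console.log(leftPad("5", 3, "0")); // "005"
```

Countless projects transitively depended on these few lines, so when they vanished from npm, builds downstream simply failed to install.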
The left-pad incident shows how reliant we have become on open-source software, almost to a fault. I’m sure many engineering and security leaders are questioning whether this reliance has gone too far and whether we should consider going back to developing more software internally. One option is to purchase the enterprise version of the open-source software, but that mostly exists for infrastructure software, e.g. Databricks and Confluent; it rarely exists for basic software components like GitHub Actions or OpenSSL. As mentioned earlier, engineering resources are limited and development velocity is critical, so open source has remained the practical choice, at least until a company reaches a significant scale.
How AI is changing software development
So how does AI change this? I’ve written before about how AI will boost developer velocity, freeing up time to spend on security and other areas where we’ve had to make tradeoffs due to limited engineering talent and resources. Its general use should bring more predictability and, as a result, lower risk to an organization.
What’s surprising is just how much developer velocity has accelerated. The recent buzz around “vibe coding” highlights this shift: small teams are now building products and generating significant revenue at an amazing pace. Of course, this has happened before (Instagram had only 13 employees when Facebook/Meta acquired it for $1B; WhatsApp had only 55 when it was acquired for $19B), but it seems far more prevalent nowadays.
The cloud made it easy to create an application without massive infrastructure investments; AI has made it easy to write the code itself. I tried this firsthand. With ChatGPT and Cursor, a simple prompt let me write hundreds of lines of code in minutes, something that would otherwise have taken me several hours. Most importantly, I felt less tired. The cognitive load was lower: I didn’t have to search and read multiple Stack Overflow threads. Most, but not all, of the code worked immediately, and when it didn’t, debugging was easy (often a follow-up prompt, giving the model more room to reason, solved it).
This changes the calculus for leaders deciding whether to use open-source software. If developers can write high-quality code quickly with the help of AI, they don’t need to rely as heavily on open source, especially given the potential security risks of poorly maintained packages. Writing code in-house becomes a more viable option, particularly for open-source projects that lack active or stable maintainers.
One major advantage of building in-house is easier customization. Modifying open-source projects typically requires forking, which can quickly lead to tech debt as the fork falls behind on upstream updates. That gap can eventually turn into a security risk. It also requires spending time understanding the code’s context to customize it properly.
Maintenance is still a concern, even with AI, but I expect this to improve as more companies allow AI to take a more active role in their software development lifecycle. In fact, we are already seeing more activity in the AI-driven developer-tooling market. I wouldn’t be surprised to see more AI-focused code tools that create pull requests (PRs) to help with maintenance and QA. Because this tends to be unglamorous but necessary work for developers, it’s attractive to purchase tooling so that developers can focus on customer-facing product features.
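The mechanics of such a tool are not far-fetched. Here is a hypothetical TypeScript sketch of just the PR-creation step, using GitHub’s Octokit client; the repository details, branch name, and the elided AI step are all illustrative assumptions, not any vendor’s actual implementation:

```typescript
// Hypothetical sketch of an AI maintenance bot opening a PR via GitHub's API.
// Assumes GITHUB_TOKEN is set and that an earlier (elided) AI step has already
// committed its fixes to the "bot/maintenance-fixes" branch.
import { Octokit } from "@octokit/rest";

const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

async function openMaintenancePR(owner: string, repo: string): Promise<void> {
  // ...an AI step would generate fixes and push them to a branch here...

  // Open a PR from the bot's branch so a human can review before merging.
  const { data: pr } = await octokit.rest.pulls.create({
    owner,
    repo,
    title: "chore: automated maintenance fixes",
    head: "bot/maintenance-fixes", // hypothetical branch holding the AI-generated changes
    base: "main",
    body: "Automated maintenance PR. Please review before merging.",
  });
  console.log(`Opened PR #${pr.number}`);
}

openMaintenancePR("example-org", "example-repo").catch(console.error);
```

The hard part, of course, is the elided step: generating changes that are actually safe to merge, which is why human review stays in the loop. Still, tooling along these lines makes in-house maintenance much cheaper.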
This will likely lead to a reduction in overall open-source usage. It won’t disappear, but it will become more concentrated around a few well-maintained and heavily used projects. I believe this is good for the open-source community — they can focus resources on the most critical projects rather than spreading already limited resources too thin. They might even use AI themselves to help scale!
What does it mean for the security market, especially application security?
Naturally, less open-source usage means reduced risk from third-party dependencies. Instead, the risk moves to the AI generating the code, but that feels like a lower risk, especially with guardrails in place, such as humans or other tooling reviewing its output.
This shift does seem like bad news for software composition analysis (SCA) and supply chain security-focused vendors, such as Black Duck, Mend, and Snyk. As companies rely less on external dependencies, concerns around licensing and dependency vulnerabilities will subside. In contrast, there will be a renewed focus on analysis of the code itself, e.g. static and dynamic analysis, to find vulnerabilities in both human- and AI-generated code.
Companies like Semgrep are well positioned for this change because their product centers on high-quality static analysis (SAST) while still covering the basics of SCA. Their bet on serving the security engineer rather than security operations could turn out to be a smart one!
AI is allowing software developers to write code much more quickly and build more in-house. This will likely reduce reliance on open source and shift focus to the maintenance and security of internal code. As a result, application security engineers will have their work cut out for them: they will likely need to review more code and find tooling that can handle the increased code velocity in order to adapt to this new reality.