Disclaimer: Opinions expressed are solely my own and do not express the views or opinions of my employer or any other entities with which I am affiliated.
I’ve written a lot about application security recently. I believe this sector will be the first casualty of the security engineering shift: more developers will take on application security work, reducing the need for the traditional application security engineer. There’s already momentum behind this shift given security’s desire to “shift left.” In doing so, security teams have ceded control over the function and scaled themselves out of the job. Here are some of my articles on application security and why I believe the landscape is changing.
Another threat to application security is that developer platforms like GitHub and GitLab can already provide the basic features, and they can do so for little to no upcharge. That’s why I believe Snyk’s approach of building a platform for comprehensive code scanning won’t succeed. So how can application security companies and products succeed in this new reality? The answer is AI.
Some relevant realities of application security
I won’t rehash my previous articles, but as described above, the current crop of application security companies faces real threats. Many are outdated or struggling to stay relevant in a world where development practices are changing rapidly. There’s also a fundamental disconnect between developers, who write the code, and security, who audit it, over where the vulnerabilities actually lie.
Recent research has shown that developers know more about where the problems are, which makes sense because they hold context on the application’s nuances. However, they tend to lack the security context that application security engineers have. As a result, it’s easier to enable developers to do security than to teach security engineers the application context, which makes the job of the application security engineer more irrelevant by the day.
Yet, I believe there’s a way to create a better environment for both sides through AI. This is going to be a fundamental paradigm shift.
AI as labor arbitrage, not labor eliminator
Before discussing how AI can help application security specifically, it’s important to see what recent developments in AI provide more broadly. AI and LLMs can understand complex human text and generate “responses” to it. Essentially, this allows us to automate more human tasks that involve some amount of repetition, especially ones that require interpreting large amounts of text, e.g. code. This has raised concern that AI will eliminate operationally focused roles, of which security has traditionally been one.
However, I don’t see AI eliminating security jobs, for a few reasons. First, security teams have traditionally been understaffed relative to the threats they face, and they continue to fall further behind because security doesn’t scale well with application and infrastructure complexity. Second, more generally, people will still have high-paying and meaningful roles in an AI-driven world. Noah Smith recently wrote about this in his Noahpinion Substack:
Noah frames the role of humans in an AI world using the concept of comparative advantage, which he defines as follows:
Comparative advantage actually means “who can do a thing better relative to the other things they can do”. So for example, suppose I’m worse than everyone at everything, but I’m a little less bad at drawing portraits than I am at anything else. I don’t have any competitive advantages at all, but drawing portraits is my comparative advantage.
The key difference here is that everyone — every single person, every single AI, everyone — always has a comparative advantage at something.
To make it more concrete, he uses the example of Marc, a VC, and Marc’s secretary:
Marc is better than his secretary at every single task that the company requires. He’s better at doing VC deals. And he’s also better at typing. But even though Marc is better at everything, he doesn’t end up doing everything himself! He ends up doing the thing that’s his comparative advantage — doing VC deals. And the secretary ends up doing the thing that’s his comparative advantage — typing. Each worker ends up doing the thing they’re best at relative to the other things they could be doing, rather than the thing they’re best at relative to other people.
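To put rough numbers on Noah’s example (all figures here are invented purely for illustration), here is a quick calculation showing why specialization wins even when one party is better at everything:

```python
# Toy arithmetic for comparative advantage. All numbers are invented
# for illustration; "units" are abstract value produced per hour.
HOURS = 8  # hours in a workday

marc = {"deals": 100, "typing": 20}     # Marc is better at both tasks...
secretary = {"deals": 5, "typing": 15}  # ...but typing is the secretary's
                                        # comparative advantage.

# Scenario 1: Marc splits his day and does everything himself.
marc_alone = (HOURS / 2) * marc["deals"] + (HOURS / 2) * marc["typing"]
print(marc_alone)  # 480.0 units, and the secretary produces nothing

# Scenario 2: each specializes in their comparative advantage.
specialized = HOURS * marc["deals"] + HOURS * secretary["typing"]
print(specialized)  # 920.0 units
```

Every hour Marc spends typing trades 100 units of deal-making for 20 units of typing, so even a much weaker typist frees him up profitably. The same logic applies to AI and security engineers: whatever the AI takes over, the engineer’s time shifts to the work where their relative edge is largest.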
This is an important point to establish. I think Noah offers a more nuanced position than the way most security products are approaching AI and LLMs. Most products I’ve seen focus primarily on making security people more efficient and thus reducing cost. The cost part makes sense, but the more important factor is that AI can free up security teams’ cognitive load to focus on other security issues.
Application security engineer as a service
A compelling application of AI and LLMs is to take over much of the core responsibilities of an application security engineer. What are those responsibilities? As an example, let’s look at the application security engineer job description at Anthropic, which seems relevant since they’re in the AI space. (Maybe they should be automating some of this themselves.) Here’s a snippet of the job description:
Conduct secure design reviews and threat modeling. Identify and prioritize risks, attack surfaces, and vulnerabilities.
Perform security code reviews of source code changes and advise developers on remediating vulnerabilities and following secure coding practices.
Manage Anthropic's vulnerability management program. Triage and prioritize vulnerabilities from scans, audits, and bug bounty submissions. Track remediation and validate fixes.
Although difficult (because most LLM datasets exclude security data), all of the above seem theoretically automatable by LLMs, since you can find or readily create datasets containing the information needed to train one. The product has to be high-quality and platform-agnostic. That is, it has to be able to replace a good application security engineer or reduce the headcount needed on the application security team; otherwise, basic features in a development platform can already do some of this work.
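To make this concrete, here is a minimal sketch of what automating the code review responsibility might look like. It uses Anthropic’s Python SDK since their job description is the example at hand; the model name, prompt, and JSON output format are my own illustrative assumptions, not a description of any real product:

```python
# A minimal sketch of LLM-driven security code review. The prompt,
# model choice, and JSON output contract are illustrative assumptions.
import json
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are an application security engineer reviewing a code change. "
    "Respond with only a JSON list of findings; each finding has "
    "'title', 'severity' (critical/high/medium/low), and 'remediation'."
)

def review_diff(diff: str) -> list[dict]:
    """Ask the model for security findings on a code change."""
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # illustrative model choice
        max_tokens=1024,
        system=SYSTEM_PROMPT,
        messages=[{"role": "user", "content": f"Review this diff:\n\n{diff}"}],
    )
    # Real code would validate this; models don't always emit clean JSON.
    return json.loads(response.content[0].text)

# Example: a change that concatenates user input into a SQL query.
findings = review_diff(
    '+ query = "SELECT * FROM users WHERE name = \'" + name + "\'"'
)
for finding in findings:
    print(f"[{finding['severity']}] {finding['title']}")
```

Run in a CI pipeline, something like this covers the security code review bullet above, and feeding the severity-ranked findings into a ticketing queue starts to cover triage. The hard part is quality: the findings have to be accurate and low-noise enough to stand in for a strong engineer.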
In my opinion, building this product is the future of application security. There’s a labor arbitrage: a company no longer needs to spend time finding a top-tier application security engineer or waste cycles trying to keep an application security team fully staffed. The core point is that having top-tier application security talent in-house is an operational advantage, but not a strategic one, for the vast majority of companies. However, application security engineers shouldn’t worry about being fully replaced. They will focus on other tasks where they have a comparative advantage.
Here are some other areas that application security engineers can focus on:
Oversee Anthropic's bug bounty program. Set scope, triage submissions, coordinate disclosure with engineering teams, and reward bounties. Cultivate relationships with the ethical hacker community.
Research and recommend security tools and technologies to strengthen defenses against emerging threats targeting machine learning systems.
Collaborate closely with product engineers and researchers to instill security best practices. Advocate for secure architecture, design, and development.
For the above, either there are no past datasets, or training on past datasets might not be the best use of compute. Similarly, some of these tasks require relationship building, which is currently more difficult for AI and LLMs. Although AI and LLMs might help with them, they aren’t suited to full automation.
With that said, this is a hard product to build, but it is a path forward for application security startups. It will give the security team more bandwidth, making the value proposition strong for a security leader. Overall, given the talent shortage in cybersecurity, this type of product will also be good for the cybersecurity community.
Takeaway
Cybersecurity has always faced a talent shortage, and that isn’t going away. Given the operational nature of security and the pace at which technology is scaling, it will be difficult for security to catch up without some fundamental changes. Noah’s perspective on AI is an apt one: AI will help close security gaps by automating much of the work. I believe we’ll see this usage in application security first (despite what’s going on overall) because much of that work is already shifting to software development teams, who are also strapped for bandwidth. A platform that provides security while freeing up bandwidth can deliver a ton of value for a company. I’m excited to see how this might apply to other parts of security.
"Although difficult (because most LLM datasets exclude security data), all the above seem like LLMs can theoretically automate them because you can find or easily create datasets containing the information to train the LLM." Do they exclude the data or filter it from the public? If filtered, could an argument be made for a trusted partner of LLM owners to gain access for the very specific reason of "protecting the public"?