Disclaimer: Opinions expressed are solely my own and do not express the views or opinions of my employer or any other entities with which I am affiliated.
This is my first paid post in a few weeks, but good news for you! I’ve dropped my prices to $5 a month and $49 yearly. If you want to read this whole article (and many other paid ones), this is the right time to subscribe!
After a week of not talking about AI, I’m back at it. It’s definitely something that’s been on my mind, and I’m trying to learn more about it and the security implications around it. As many of you know, I believe that understanding the technical aspects is essential to fully understanding the risks.
I’ve increased my use of various AI applications over the past few weeks to better understand how AI can affect cybersecurity, positively or negatively. In fact, I used ChatGPT Pro to do some deep research for this article! I’ve also started to use ChatGPT to help edit my newsletter for readability, and I’ve been playing around with the Anthropic APIs and prompt engineering.
Here are some basic learnings:
AI is good at processing large amounts of text, e.g. code, articles, etc., and providing a summary and/or analysis of them.
You can get far with the OpenAI and Anthropic LLMs. Most companies don’t seem to need their own models and should predominantly use the hosted ones, though tuned versions of Llama and DeepSeek make sense if you have proprietary data.
LLMs are somewhat of a black box, but they can be useful if you craft the right prompt (I sketch an example after this list).
LLMs are accelerating many manual and repetitive tasks, including writing code.
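To make those last two points concrete, here is a minimal sketch of the kind of experiment I’ve been running with the Anthropic API: hand the model a long piece of text (code, an article, etc.) and ask for a structured summary. The model name, prompt wording, and file name are placeholders I chose for illustration, not recommendations.

```python
# Minimal sketch: summarizing a long document with the Anthropic Messages API.
# Assumes ANTHROPIC_API_KEY is set in the environment; the model name and
# prompt are illustrative placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment


def summarize(text: str) -> str:
    """Ask the model for a short, structured summary of `text`."""
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model name
        max_tokens=500,
        system="You are a concise technical analyst.",
        messages=[
            {
                "role": "user",
                "content": (
                    "Summarize the following text in 3-5 bullet points, "
                    "then list any security-relevant observations:\n\n" + text
                ),
            }
        ],
    )
    return response.content[0].text


if __name__ == "__main__":
    with open("article.txt") as f:  # placeholder input file
        print(summarize(f.read()))
```

Most of the leverage comes from iterating on the prompt and system instructions, not from anything clever in the surrounding code.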
Anyway, the purpose of my writing this newsletter isn’t to dump my AI/LLM learnings. I had an interesting thought experiment that I ran by a few people in my network. (If you ever want to engage in my thought experiments or propose one, please reach out!)
How will AI affect engineering?
A great starting point is to examine how AI has influenced engineering. Tools like Copilot and Cursor show that AI can substantially boost engineering velocity. Much of a developer’s time is spent manually generating code—a process that often involves understanding existing code, looking at code examples online, and then writing new functionality. Instead, they can leverage AI-generated code as a foundation and adapt it to their own codebase. Cursor and Copilot excel at capturing context from existing code, requiring fewer changes after the initial version.
This shift reduces the cognitive load associated with writing code from scratch, or adding code to the codebase. Instead, a developer can concentrate on creating the logic rather than this seemingly “repetitive” task. This is especially valuable for startups where a developer might switch between multiple parts of the codebase. With AI, this person doesn’t need to keep multiple contexts and remember various nuances of the code. The benefits are magnified if the company uses common frameworks like Flask, FastAPI, etc. because these tend to have more set patterns.
A slight digression. This is similar to my experience with AI and my writing. ChatGPT struggles to generate good blog posts when I give it little information. I have to drive most of the ideas and core content. However, I don’t have to worry too much about the clarity, flow, or diction. To relate this to coding, the functions and features represent the original ideas, and the code is the writing and prose itself.
However, AI is still far from writing large amounts of code by itself. Its strengths are mostly tactical, such as writing individual functions or completing well-scoped tasks, rather than implementing large pieces of business logic.
A friend once told me that the main “innovations” of the big tech companies, such as Google and Meta, are the tools and processes that make it easy to write high-quality code. More explicitly, they have the infrastructure that enables an average engineer to produce high-performing code and features with high velocity. It makes it easy to onboard engineers and have them contribute quickly. Over the years, people have tried to spin out some of these tools with varying degrees of success, but these tools don’t work well without structured development processes, such as a mature software development lifecycle (SDLC) paired with strong ownership practices and organizational management strategies.
How does AI benefit larger tech companies from an engineering standpoint? In my opinion, the benefits are less pronounced, although larger companies have more code and examples to draw on, which provides an advantage. However, AI makes it easier for startups to produce high-quality software, which levels the playing field. This is similar to how cloud computing made it easier to deploy applications without a large investment in infrastructure, lowering the barrier to entry. Although the larger companies will still have better tools and AI, the gap shrinks substantially.
Reduced variance is at the heart of the issue. Of course, better engineers are more likely to figure out how to use AI to their advantage (and also use it more effectively). So, AI might create more variance in that way. However, let’s consider that AI tools become so accessible that even the average engineer can benefit substantially. This isn’t such a crazy thought. We’ve seen this happen with other tools. In many ways, high-level programming languages, such as Python, are good examples. As a result, if all engineers use it and can use it effectively, it will reduce the quality variance between a mediocre engineer and a great engineer, i.e. no more 10x engineers (not that I believe those exist to start), only 0.8-1.2x engineers.
How does this benefit security?
Why am I talking so much about engineering? What about security?
If you’ve been following my blog, you probably know where this is headed. Much of AI and security focuses on making security operations more efficient. In fact, when I asked ChatGPT to do deep research on how AI can enhance cybersecurity, it mostly talked about better threat protection and detection and response.
However, let’s think about where risks come from. Philosophically, risks come from uncertainty — the unpredictability of the future. As humans, we like certainty and predictability, so that’s why we are willing to pay for it. That’s the core thesis around insurance — a predictable loss (or payment) is better than a large, unpredictable loss. Similarly, that’s also why finance is such a good business. People want others to handle and navigate the market uncertainties.
Today, technology is core to most businesses, so most risks at a company are technical — they emerge from the uncertainties and variance in how the engineering organization develops applications and, subsequently, how the company uses technology. Companies have different strategies to manage this variance/risk, which naturally leads to variance in the types of security organizations. I’ve written in the past about how security organizations can differ substantially, and that’s OK as long as you have the right expectations about the outcomes.
Going back to the theme of this blog post, will security become easier in the future because there’s less risk? Of course, today, the risk has merely shifted to AI because there’s a large variance in the quality of AI applications. However, it’s still early days. I’m confident that AI will get better if recent developments are any indication. Let’s say that AI reaches a pretty good plateau. Sure, there’s still risk, but at least most of the risk is concentrated in AI, and concentrated risk is easier to manage.
If engineering (and the broader company) adopts AI, will that reduce variance, and thus result in less security risk? I believe so. Let’s focus on engineering and pick a few examples. If we use AI for software engineering, assuming we get the right patterns, then the likelihood of introducing vulnerabilities decreases. Similarly, we are less likely to write code that will lead to incidents.
Another example I’ve encountered is writing defensive code. It’s more secure, and I want to do it, but it’s time-consuming to implement properly, so many engineers give up or take a shortcut. Engineers want to do the right thing, but deadlines are always a concern. As I’ve discussed in the past, this is why developers don’t care about security even though they want to.
With AI, generating secure code takes about the same time as the less secure alternative, which effectively gives you security by design. Moreover, through security reviews, we’ve seen good designs end up with insecure implementations for similar reasons; AI will substantially reduce that likelihood. Of course, one key assumption is that AI is capable of generating secure code, but I believe it can if given the proper examples.
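As a simplified illustration of what I mean by defensive code versus a shortcut, consider looking up a user by email. The table and column names below are hypothetical, and the validation is deliberately minimal; the point is that the parameterized, validated version is the kind of pattern an AI assistant can emit by default, at no extra cost to the developer.

```python
# Illustrative only: a "shortcut" query versus a more defensive version.
# Table/column names are hypothetical; uses the standard-library sqlite3 module.
import re
import sqlite3


def get_user_shortcut(conn: sqlite3.Connection, email: str):
    # Shortcut: string interpolation invites SQL injection and skips validation.
    return conn.execute(
        f"SELECT id, email FROM users WHERE email = '{email}'"
    ).fetchone()


EMAIL_RE = re.compile(r"[^@\s]+@[^@\s]+\.[^@\s]+")


def get_user_defensive(conn: sqlite3.Connection, email: str):
    # Defensive: validate the input, then use a parameterized query so the
    # driver handles escaping. Fail loudly on bad input instead of guessing.
    if not EMAIL_RE.fullmatch(email):
        raise ValueError("invalid email address")
    return conn.execute(
        "SELECT id, email FROM users WHERE email = ?", (email,)
    ).fetchone()
```

Neither version is much harder to type than the other; the difference is remembering (or being prompted with) the defensive pattern every time.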
Security engineers can focus on the specifications and security logic. Then, they can use AI to help the software engineers generate secure code.
Overall, just by using AI, a company will have less variance and risk in its operations, not just in engineering but throughout the whole company. That’s why I feel like security should embrace AI, not fear it!
What does this mean for the future, and how will security’s role evolve? It’s hard to say. I expect there will be fewer traditional security issues, such as basic operational risks around vulnerabilities and human error, but also new and different risks. Security risks will be more concentrated around AI platforms, where new risks will sit on top of the traditional ones. Similar to how IT security teams shrank with the rise of the cloud, security teams at companies will likely shrink and grow more slowly. And just as the cloud providers took on more security responsibilities, security will be critical to AI companies’ businesses, which could also be a good opportunity.
Honestly, I’m not sure what will happen to the security market. My intuition says that it will contract because it will be less fragmented since AI consolidates many risks. (I do think it’s currently too bloated with too many theoretical products that don’t solve real problems.) However, there’s a lot of opportunity for AI-driven platforms and applications to manage and reduce security risk. They can use this as a selling point to leaders and executives looking for more efficiency. Maybe, AI is finally what we need to solve security’s effectiveness problem!