Disclaimer: Opinions expressed are solely my own and do not express the views or opinions of my employer or any other entities with which I am affiliated.
This is my first paid post in a few weeks, but good news for you! I’ve dropped my prices to $5 a month and $49 yearly. If you want to read this whole article (and many other paid ones), this is the right time to subscribe!
After a week off from AI, I’m back at it. It’s been on my mind, and I’m trying to learn more about the technology and its security implications. As many of you know, I believe that understanding the technical aspects is essential to fully understanding the risks.
I’ve increased my use of various AI applications over the past few weeks to better understand how AI can affect cybersecurity, both positively and negatively. In fact, I used ChatGPT Pro to do some deep research for this article! I’ve also started using ChatGPT to edit my newsletter for readability, and I’ve been playing around with the Anthropic APIs and prompt engineering.
Here are some basic learnings:
AI is good at processing large amounts of text, e.g. code, articles, etc., and providing a summary and/or analysis of it (see the short sketch after this list).
You can get far with the OpenAI and Anthropic LLMs. Most companies don’t seem to need their own models, though fine-tuned versions of Llama and DeepSeek make sense if you have proprietary data. Even then, most companies should predominantly use the hosted models.
LLMs are somewhat of a black box, but they can be useful if you craft the right prompt.
LLMs are accelerating many manual and repetitive tasks, including writing code.
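To make the first and third points concrete, here’s roughly the shape of my Anthropic experiments. This is a minimal sketch using the anthropic Python SDK to summarize a code file; the model name, file name, and prompt wording are illustrative, not recommendations:

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

with open("app.py") as f:  # any code file you want summarized
    code_snippet = f.read()

message = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # illustrative model name
    max_tokens=1024,
    # The prompt engineering lives here: an explicit role and an explicit
    # ask get far better results than a bare "summarize this".
    system="You are a security-minded code reviewer. Be concise.",
    messages=[{
        "role": "user",
        "content": "Summarize what this code does and flag anything risky:\n\n" + code_snippet,
    }],
)

print(message.content[0].text)

In my experience, almost all of the leverage is in the prompt, not the plumbing; the API call itself is a few lines.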
Anyway, the purpose of this newsletter isn’t to dump my AI/LLM learnings. I had an interesting thought experiment that I ran by a few people in my network. (If you ever want to engage with my thought experiments or propose one, please reach out!)
How will AI affect engineering?
A great starting point is to examine how AI has already influenced engineering. Tools like Copilot and Cursor show that AI can substantially boost engineering velocity. Much of a developer’s time is spent manually producing code, a process that often involves understanding existing code, looking at examples online, and then writing new functionality. Instead, developers can use AI-generated code as a foundation and adapt it to their own codebase. Cursor and Copilot excel at capturing context from existing code, so the generated code requires fewer changes after the initial version.
This shift reduces the cognitive load of writing code from scratch or adding code to an existing codebase. A developer can concentrate on creating the logic rather than on this seemingly “repetitive” task. This is especially valuable at startups, where a developer might switch between multiple parts of the codebase; with AI, they don’t need to hold multiple contexts in their head or remember every nuance of the code. The benefits are magnified if the company uses common frameworks like Flask or FastAPI because these tend to have well-established patterns, as the sketch below illustrates.
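To show what I mean by “set patterns,” here’s a hypothetical FastAPI endpoint. The route, model, and handler names are made up for illustration; the point is that the shape is so predictable that an assistant can usually complete it from context:

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float

# The decorator, request model, and return shape are the kind of
# boilerplate an AI assistant fills in reliably after seeing one
# existing endpoint in the codebase.
@app.post("/items")
def create_item(item: Item) -> Item:
    # The business logic inside the handler is the part AI is worse at.
    return item

The pattern is the easy part; the logic inside the handler is still where the human does the work.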
A slight digression. This is similar to my experience with AI and my writing. ChatGPT struggles to generate good blog posts when I give it little information. I have to drive most of the ideas and core content. However, I don’t have to worry too much about the clarity, flow, or diction. To relate this to coding, the functions and features represent the original ideas, and the code is the writing and prose itself.
However, AI is still far from writing large amounts of code by itself. Its strengths are mostly tactical: writing individual functions or handling well-scoped tasks rather than producing large parts of the business logic.
A friend once told me that the main “innovations” of the big tech companies, such as Google and Meta, are the tools and processes that make it easy to write high-quality code. More explicitly, they have the infrastructure that enables an average engineer to produce high-performing code and features with high velocity. It makes it easy to onboard engineers and have them contribute quickly. Over the years, people have tried to spin out some of these tools with varying degrees of success, but they don’t work well without structured development processes, such as a mature software development lifecycle (SDLC) paired with strong ownership processes and organizational management strategies.
How does AI benefit larger tech companies from an engineering standpoint? In my opinion, the benefits are less pronounced, although larger companies have more code and examples to draw on, which provides an advantage. AI, however, makes it easier for startups to produce high-quality software, which levels the playing field. This is similar to how cloud computing made it possible to deploy applications without a large infrastructure investment, lowering the barrier to entry. The larger companies will still have better tools and AI, but the gap shrinks substantially.
Reduced variance is at the heart of the issue. Of course, better engineers are more likely to figure out how to use AI to their advantage (and to use it more effectively), so AI might create more variance in that way. However, let’s consider a world where AI tools become so accessible that even the average engineer benefits substantially. This isn’t such a crazy thought; we’ve seen it happen with other tools, and high-level programming languages like Python are a good example. If all engineers use AI and use it effectively, it will reduce the quality variance between a mediocre engineer and a great one, i.e. no more 10x engineers (not that I believe those exist to begin with), only 0.8-1.2x engineers.
How does this benefit security?
Why am I talking so much about engineering? What about security?