Disclaimer: Opinions expressed are solely my own and do not express the views or opinions of my employer or any other entities with which I am affiliated.
We’re still hiring for a product leader and application security engineer! Come join me and our awesome CISO, Susan Chiang, in building a secure mental healthcare system that everyone can access. If you’re interested in what it’s like to work at Headway, check out the recent interview I did with Built-In.
If this is something you’re interested in, feel free to apply using those links or reach out to me!
Wow, it’s been quite a year, and I appreciate all the new people who have subscribed to my blog as well as those who have stayed subscribed. It has led to many great discussions, which is what keeps me excited about writing more next year.
It’s year-end, and I haven’t run a sale for a while. For those who still have some professional development budget left and want to find somewhere to spend it, I’m giving 50% off my blog for the holidays. Here’s an email template to help justify the expense.
Thanks again, and I appreciate the support!
I struggled for a while to decide whether I should write about using AI in security. It feels kind of clickbaity, but I decided it’s better for the article to come from me than from somewhere less substantial. Our yearly planning, where we discussed both using and enabling AI, and my dinner with Pangea, where we talked about new AI applications, also inspired me to write down my thoughts. I wrote last time about how AI applications could potentially be a good place to do security by design, though the market is small.
To start, it’s important to level-set my beliefs about AI as both a software engineer and a security person. AI is one of the most important inventions of this technological cycle, if not the most important. It’s fundamentally changing the way we operate as a society, much like the internet did. It’s going to be an integral part of every company’s strategy. Just like almost every company has an “internet” strategy, every company will have an AI strategy. Of course, it’ll evolve. Similar to how companies discovered that having a website didn’t mean they had an “internet” strategy, an AI strategy won’t just mean applying AI; there will likely be something more nuanced and deeper behind it.
I bring this up because we’re at the point where, as security professionals, resisting AI adoption is futile, and honestly, it’s the wrong use of energy and resources. Sure, security teams should come up with policies so that teams can safely use AI without leaking sensitive company information, but these policies should focus on enablement rather than restriction. AI is here, and security teams need to accept that. It’s strategic for companies to use it, and security teams will lose leverage if they push too hard to block its usage because they’ll be seen as holding back the business. I’ve written in the past that I believe AI will benefit security, so security should embrace it.
There’s also no better way for security to understand the policies needed than to apply them to the AI they use themselves. It allows them to be empathetic to the needs and uses of others. Too many times, security sets policies without understanding the downstream effects; this is a good opportunity to test their own policies.
A few ideas on how to use AI in security
I generally believe that AI is like a calculator: it’s only as smart and knowledgeable as the person using it. That is, it’s a useful tool for the more tedious parts of math, such as multiplying big numbers, but it doesn’t make a person better at math. That person won’t suddenly be able to do calculus or understand the purpose of finding the zeros of a quadratic.
AI seems to be good at quickly analyzing large amounts of text, and that can have a range of applications, especially for operationally heavy tasks. Of course, these capabilities will evolve over time, but we can only work with what we have now.
One area with potentially many applications is appsec. I’ve written about this in the past. It’s one of the areas of security with a huge talent gap, and the role itself is often operationally heavy.
One appsec application is using AI to assist in triaging and remediating vulnerabilities, including labeling and discovering false positives. Most appsec teams have to gather context to solve the problem or go back and forth with the corresponding engineering team, which is inefficient. This is a good fit because AI is good at back-and-forth conversations and at reading large amounts of code. Appsec tools like Semgrep already have capabilities to do this! The next evolution is to have some customization around those AI capabilities.
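To make this concrete, here’s a minimal sketch in Python of what pre-triaging a finding might look like: give a model the finding plus the surrounding code and ask for a tentative verdict that a human confirms. The `triage_finding` function, the prompt, and the rule name are all invented for illustration; this isn’t how Semgrep or any vendor actually implements it.

```python
# Hypothetical sketch: pre-triage a static-analysis finding by giving an
# LLM the finding plus the surrounding code, and asking for a tentative
# verdict a human appsec engineer can confirm or reject.
import json

from openai import OpenAI  # assumes the official openai Python SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def triage_finding(rule_id: str, message: str, snippet: str) -> dict:
    """Return a tentative verdict plus a short rationale."""
    prompt = (
        "You are helping an application security engineer triage a "
        "static-analysis finding.\n"
        f"Rule: {rule_id}\n"
        f"Finding: {message}\n"
        f"Code under review:\n{snippet}\n\n"
        'Reply as JSON: {"verdict": "true_positive" or "false_positive", '
        '"rationale": "one or two sentences"}'
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable chat model works here
        response_format={"type": "json_object"},
        messages=[{"role": "user", "content": prompt}],
    )
    return json.loads(resp.choices[0].message.content)


# Example: a classic SQL-injection pattern
print(triage_finding(
    "python.lang.security.sql-injection",
    "Possible SQL injection via f-string query construction",
    'db.execute(f"SELECT * FROM users WHERE id = {user_id}")',
))
```

In practice you’d feed in more surrounding code and the verdict would only reorder the queue, not close findings on its own, but the shape of the workflow is the same.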
Another appsec application is assisting with security reviews. Security reviews are typically a lengthy process that requires effort from both the creator and the reviewer. The engineer requesting the review can use AI to help generate the basics of the review, and then only the relevant parts, where risk decisions need to be made, get sent to the appsec engineer. Maybe it’ll even provide some context around those risk decisions to make them easier for the appsec engineer. This would reduce the time both the creator and the reviewer spend on these reviews.
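A sketch of the creator’s side might look like the following: feed the design doc to a model, have it draft the routine sections, and have it isolate the open questions that actually need a human. The function name, prompt, and section labels are all assumptions for illustration.

```python
# Hypothetical sketch: draft the routine parts of a security review from a
# design doc, and separate out the open risk questions for a human.
from openai import OpenAI  # assumes the official openai Python SDK

client = OpenAI()


def draft_security_review(design_doc: str) -> str:
    prompt = (
        "Draft a security review for the design below. Fill in the routine "
        "sections (data flows, authn/authz touchpoints, third-party "
        "dependencies), and end with a section titled 'Open risk decisions' "
        "listing only the questions that need a human appsec engineer.\n\n"
        f"{design_doc}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


doc = """We are adding a webhook endpoint that receives payment events
from a third-party processor and writes them to our billing database."""
print(draft_security_review(doc))
```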
With the increased use of SaaS, access has become more complicated because it’s siloed in each application. Okta has simplified this, but not every tool integrates with Okta and/or SCIM, and some companies can’t afford those integrations because of the SSO tax. They still have to do access reviews somehow, and today the process is very manual. Even if you could automate pulling the user roster from each application, you still have to go through it and figure out whether people have appropriate access. It would reduce operational effort if AI could do most of the review and flag the ambiguous cases.
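As a rough sketch, suppose you’ve already exported a roster from each app (admin API, SCIM, or even a CSV). A model could make the first pass and surface only the entries worth a human’s attention. The roster format and the `review_access` helper are made up for this example.

```python
# Hypothetical sketch: first-pass access review over an exported roster.
# Pulling the roster itself (admin API, SCIM, CSV export) is out of scope.
import json

from openai import OpenAI  # assumes the official openai Python SDK

client = OpenAI()

roster = [
    {"user": "alice@example.com", "title": "Accountant", "app": "NetSuite", "role": "admin"},
    {"user": "bob@example.com", "title": "Sales Rep", "app": "Salesforce", "role": "standard"},
    {"user": "carol@example.com", "title": "Intern", "app": "GitHub", "role": "org owner"},
]


def review_access(entries: list[dict]) -> list[dict]:
    """Ask the model to flag entries where access looks broader than the
    job title suggests; a human reviews only the flagged ones."""
    prompt = (
        "For each entry, decide if the access level plausibly matches the "
        'job title. Reply as JSON: {"flagged": [{"user": "...", '
        '"reason": "..."}]}.\n\n' + json.dumps(entries, indent=2)
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},
        messages=[{"role": "user", "content": prompt}],
    )
    return json.loads(resp.choices[0].message.content)["flagged"]


for item in review_access(roster):
    print(f"Review needed: {item['user']} - {item['reason']}")
```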
Another area is incident management. Tools like FireHydrant and incident.io make incident management more seamless, but they still require a good amount of manual intervention, especially to summarize the incident, page people, and pull together context. The chat and summarization capabilities of AI could help materially here. I know there are a lot of companies focused on security events in the SIEM; although I do think AI can help there, I don’t think it’s the most impactful application, and I’ll explain why below.
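For instance, here’s a minimal sketch of turning a raw incident channel into a stakeholder-ready update. The transcript format is invented; real tools like FireHydrant or incident.io would have their own data models and integrations.

```python
# Hypothetical sketch: turn a raw incident-channel transcript into a
# structured status update for stakeholders.
from openai import OpenAI  # assumes the official openai Python SDK

client = OpenAI()

transcript = """
14:02 alice: seeing 500s from the payments service
14:05 bob: deploy went out at 13:58, rolling back now
14:11 bob: rollback complete, error rate recovering
14:15 alice: confirmed, dashboards back to baseline
"""

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": (
            "Summarize this incident channel into: impact, timeline, "
            "current status, and suggested follow-ups.\n" + transcript
        ),
    }],
)
print(resp.choices[0].message.content)
```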
Finally, AI might be useful in data security. Historically, detecting sensitive data has been difficult and prone to false positives. The hard part is figuring out what data is considered sensitive. Basic techniques like regular expressions were insufficient and led to issues because sensitivity depends on context. For example, numbers in and of themselves are hard to classify, but the surrounding context might show they are a phone number or a Social Security number. Similarly, for PHI, how “identifying” a piece of information is depends on context. LLMs are extremely good at understanding this kind of context, so I’m surprised we’re not seeing more usage here given the size of the market.
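A quick sketch of why context matters: the same digits get classified differently depending on the surrounding text, which is exactly what a regex can’t express. The prompt and labels here are illustrative, not any DLP product’s actual behavior.

```python
# Hypothetical sketch: classify whether text contains sensitive data,
# using surrounding context rather than a bare pattern match.
from openai import OpenAI  # assumes the official openai Python SDK

client = OpenAI()


def classify(text: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": (
                "Does this text contain sensitive personal data (phone "
                "number, SSN, PHI)? Answer 'sensitive: <type>' or "
                "'not sensitive', with a brief reason.\n\n" + text
            ),
        }],
    )
    return resp.choices[0].message.content


# Same ten digits, different meaning once you read the context:
print(classify("Call the patient back at 415-555-0123 about her results."))
print(classify("Shipment tracking number: 4155550123"))
```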
Where AI might not be as useful
I know this might be controversial, but I don’t believe we will substantially benefit from applying AI to security events in the SIEM. That’s not to say there will be no benefit; there will be some, but it’ll only be incremental. The issue is what I described above: AI won’t give people more knowledge or better judgment. It might make them more efficient. For example, most security analysts struggle to get context on the security events that show up, so they have to escalate them a level higher, and there’s an argument that AI-provided context would help lower-level analysts respond more quickly. However, there’s a cost to maintaining an AI tool because AI isn’t free. In fact, AI is quite expensive, and it’s not clear how much productivity gain there is. And if there is a productivity gain, is it meaningfully reducing risk? That is, is the gain actually driving toward a goal?
The question here is why not just outsource the SOC?
Applying AI to the SOC effectively is its own post, so I’ll leave it at this: there are likely better applications of AI, with bigger gains to be had than in the SOC.
Anyway, I’m excited to see more AI usage in security. It won’t be perfect at first, but we need to start somewhere and iterate.