5 thoughts going into RSA
What I plan to observe, different understandings of AI, etc.
Disclaimer: Opinions expressed are solely my own and do not express the views or opinions of my employer or any other entities with which I am affiliated.
I’ve intentionally made all of my posts free and without a paywall so that my content is more accessible. If you enjoy my content and would like to support me, please consider buying a paid subscription:
This is going to be a new type of post. I’m starting to hear the low hum of BSides and RSA chatter in the air. Rather than a single-topic newsletter, I’m going to share a stream of consciousness on what’s on my mind as we head into the Moscone Center. These aren’t all well-formed theories, but they are the ideas I plan to talk through with others on the ground.
Recently, I’ve felt like I’m in a bubble. I go to events where I mostly talk with AI-native companies and people who are super excited about AI. I also live in SF, where AI activity is at its peak. However, I’m aware that this isn’t representative of how most people in the security community are experiencing AI. As the community gathers, I want to use this time to get a better sense of the whole security landscape and where people’s knowledge actually stands.
1. How much do AI security startups actually understand AI?
AI is changing so fast, and knowledge about how to use it compounds with usage. It’ll be interesting to see which types of customers these AI security startups are designed for. Are they designed for large companies that are just starting to use AI, or for AI startups looking to offload some of their security work so they can stay lean?
As many of you know, I believe that designing for startups with advanced AI knowledge is the way to go. I also believe that AI adoption will be much more rapid than cloud adoption, which required significant organizational change. It’s actually quite easy to adopt and start using AI; it’s just hard to use properly, as companies struggle with basics like writing effective prompts and integrating AI into existing workflows.
The security companies that focus on understanding AI and use that knowledge to contextualize security threats will ultimately be the winners. They will be the ones that solve the operational and business problems for the customer rather than propagating fear, which has historically never been a long-term strategy. You have to sell a solution, not fear. Customers want a partner, not a validator. A lot of AI usage is quite nuanced, so any products and teams that can guide companies through the quickly evolving AI landscape will come out ahead.
2. How will security organizations structurally change?
I’ve always thought that applying AI to existing security organizations isn’t a long-term strategy. It’s akin to the invention of the internet: it was about more than just building a website; it forced all companies to become more tech-focused and global. The same goes for AI. That’s why I believe AI SOC companies won’t be sustainable.
I do think organizations will look leaner, and there will be more generalists. The reason is that AI, with the proper context, will fill in for a lot of the current expertise around tools and analysis. Organizations will be more effective, but the question is how this evolution will occur. We are already seeing a lot of it in application security, where AI labs like Anthropic, with Claude Code Security, are making traditional application security tools irrelevant. AI is also empowering software engineers to do more security work.
Like software engineers, security generalists will, I believe, train the AI where it lacks expertise. Security engineers will spend more time training the AI, which will do most of the security work. Smaller organizations spend less effort justifying their existence, so focus will likely shift away from vanity metrics toward actual threat surfaces.
3. Security organizations are already starting to look different.
Related to the above, it’ll be interesting to see how AI-native companies are approaching security. Speaking from personal experience, I know there’s a heavy focus on building with AI.
What’s most interesting is seeing how companies that have pivoted to become AI-native, i.e., offering an AI-native product, have transformed their security teams. This is a fascinating case study that will give insight into how a potential evolution might occur. There will be a lot of value creation for products that help with this transformation, though I’m not sure I’ll see many of them at the booths this year. The gap in security effectiveness will likely grow because the security community tends toward groupthink. AI-native security professionals will share notes and build on each other’s knowledge while everyone else plays catch-up. It’s tough to find time to learn and grow AI knowledge when you have to handle all the overhead of a larger organization.
4. How will the AI frontier labs show up at RSA?
Do they even show up and have a presence? Google will be an interesting case because they actually have several security products, especially with Mandiant and the acquisition of Wiz. How will their AI side show up? Are they just spectators? It’s definitely new territory for them, especially given their footprint in SF and their growing role in the security market.
I think the security community wants them to participate more, but a lot of security vendors have a fear of too much participation, as evidenced by the reaction to Claude Code Security.
So far, I haven’t seen any indication of heavy participation from them; it’s mostly the traditional vendors. I know many labs tend to show up on the builder side, as evidenced by their DEF CON participation last year instead of Black Hat. It’s possible they are shying away from RSA and focusing on BSides, or running a “parallel track,” given their massive presence in San Francisco. They might realize the real buyers are now the builders, not the executives (or rather, the concept of an executive is rapidly changing, but that’s for another post).
5. What will the actual theme for RSA be?
It’ll definitely be something related to AI security. Last year, I felt there was more “AI applied to security,” while “security for AI” wasn’t as prominent. There was a focus on AI SOCs and applying AI to operational aspects. However, a lot has changed in the past year. Agents are becoming more prevalent and better. Developments like Openclaw and Claude Code Security have completely changed the threat landscape.
I want to add a qualification here: it’s not fully clear to me whether AI has expanded the threat landscape or merely changed it. Either way, AI is here to stay and is quickly evolving with agent usage. Will the main theme be broad, e.g., AI and agents wreaking havoc, or will we see more nuanced themes around specific problems that are reasonable to solve? I worry that the security community is often behind the curve, so we might get distracted by the noise without fully understanding the nuances of AI. One thing is for sure: we need to keep finding ways to learn AI, and to learn faster as a community. I want to help, and I genuinely believe AI is a great equalizer for defending against AI-enabled attackers.