Frankly Speaking - AI is a blessing to security
People care more about their personal information
Disclaimer: Opinions expressed are solely my own and do not express the views or opinions of my employer or any other entities with which I am affiliated.
I am actively hiring! I am building a team of strong software engineers at Headway who are excited to securely build a mental healthcare platform that everyone can access. Please consider applying or reach out to me.
Without a doubt, the news has been swirling about AI, ChatGPT, and AI's future in relation to security. There have been two sides to the conversation. First, how does AI help improve security? Second, what are the security implications of AI? In this newsletter, I'll focus on the latter. I'll address the former in a future issue; there is already talk of using AI to improve incident response and offensive security.
Rather than complaining about the difficulty of "securing" AI or how dangerous it is, security should fully embrace the rise of AI, as it will fuel additional security spending and innovation. Of course, it's up to people to capture this. The major corporate hacks of 2013-2014, e.g. Target and Home Depot, and the move toward the cloud and DevOps substantially enlarged the security market. However, this led to an increased focus on security engineering and left much of IT security behind (shrinking that market). I believe something similar will happen with AI: those who embrace it and understand its security implications will benefit, while others are left behind.
Increased focus on privacy
I started my PhD in 2012 focused on data security in large-scale web services (you can read my final PhD thesis here if you’re interested). At the time, people thought privacy was dead. Facebook and Twitter were in full force, and people were sharing all aspects of their lives online. One of the most popular features of Facebook was location sharing and the notion of “checking in.” No one was worried about the privacy and security implications.
In fact, I worked on a location privacy project in 2011. At the time, I didn’t even fully buy into why people would want to keep their locations private, especially since people seemed to embrace sharing every part of their life. Remember when Twitter was just full of tweets about what people ate for breakfast?
My PhD thesis topic was not popular initially either, although it seems highly relevant now. The tides seemed to turn circa 2014, when people's personal information was leaked through major corporate hacks, such as Target and Home Depot. The Cambridge Analytica and Facebook scandal also created more awareness. These events made it clear that tech companies, such as Facebook and Google, hold large amounts of personal information. People were so caught up in how useful and cool the products were that they didn't understand the implications of the information these companies held. I don't blame them. The information seems benign at first, but what made people realize the power of this data was the Facebook psychological experiments: it was possible to influence people by understanding their preferences and targeting them with specific posts. Similarly, this was the crux of Cambridge Analytica, when that data was used to influence the 2016 election. Quickly, it became clear to the general public that it wasn't in their best interest to give these tech companies more data, and that the companies were making huge profits off that data in exchange for free products.
Improved understanding of data-driven tech
After the incidents described above, it became clear, especially during the 2018 Facebook congressional hearings, that many of our policies are outdated and hard to apply to tech. It also became clear how little our policymakers know about how the underlying tech works.
However, over the last 5 years, unlike Congress, the general public has become more aware of the implications of handing over their data. It has even spurred companies like Apple to offer privacy as a feature, through their differential privacy work and the opt-out controls for mobile app tracking. That opt-out has hit social media companies like Snapchat and Facebook hard, because their mobile-focused platforms generate most of their revenue.
Much has changed in the last decade as a result of new technologies, such as improved search and social media. The upside for security is that these platforms have made the public care more about their information and online security. This shift may stem from people's desire to control their lives and what they share.
In short, security is where it is now because of these new technologies, and much of the fear today, especially around AI, has been driven by the last decade of technological innovation.
Embracing AI
There's been a lot of talk in the security community about fear of ChatGPT and AI because of their potential malicious use cases. I think this fear is highly misguided. A decade ago, these types of conversations would have been shut down. Given what I said above, the only reason they are being entertained now is that 1. security is being taken more seriously because of the large number of breaches, and 2. people care more about their personal information and data because they understand the implications of the past decade's technological innovations.
What do I propose? I believe security should embrace AI and find ways to use it securely. We shouldn't be trying to restrict AI usage and saying no. We should work with engineering and the appropriate stakeholders to figure out how we can use tools like ChatGPT safely and within reasonable risk bounds (even if that requires us to rethink or relax our risk tolerance).
I believe AI and tools like ChatGPT are great for security! Like the cloud and social media, they keep us relevant and expand the security market into new segments. This attitude is a win-win for everyone, as security will be seen as an enabler and collaborator rather than a blocker.