Disclaimer: Opinions expressed are solely my own and do not express the views or opinions of my employer or any other entities with which I am affiliated.
Ever since ChatGPT launched near the end of 2022, there's been excitement as well as fear surrounding artificial intelligence. Its capabilities impressed many in the computer science community, which had struggled for decades to build systems that converse this fluently. As with any new technology, though, the excitement comes paired with fear.
Many companies, including security companies such as Socket, have embraced and integrated the technology. Others, and even countries such as Italy, have banned it over vague privacy concerns. Reactions span the whole spectrum, which isn't uncommon for a new technology as society tries to understand the implications of its power and, inevitably, the potential abuses of that power.
In this newsletter, I'm going to discuss why I believe there's nothing to fear and why many concerns, though worth raising, are overblown. I won't cover every reason people fear AI; there are plenty of good articles on that, including one on Noahpinion focused on the economic reasons for this fear.
There are definitely problems
Like any technology, it's not perfect. OpenAI has to balance business and security risks: too much security slows down the business, while taking too many business risks causes people to lose trust. It's always a tricky balance, and even the best security leaders sometimes get it wrong. So far, I think OpenAI is doing a good job, but it's too early to say since ChatGPT hasn't been publicly available for very long.
There have been some vulnerabilities and data breaches, but this is no different from any other software. Thankfully, none of the security issues have been egregious. The immense usage has surfaced issues faster, similar to the speed and volume at which issues surfaced in blockchain/crypto. I believe this is a good thing because it allows for faster iteration before technical debt accumulates.
Moreover, it’s not lost on the founders and investors that cybersecurity is an issue. An important first step is recognizing it’s a risk and taking it seriously.

Why are people worried?
There are many reasons people worry about powerful AI, ranging in severity from concerns about cheating to fears of a robot takeover. I believe the underlying reason is that the new and unknown are generally scary. In cybersecurity specifically, there are concerns that AI could masquerade as humans, leading to more fraud; deepfakes, for example, remain a strong concern. At the same time, many people are putting sensitive information into ChatGPT, such as personal details and proprietary company information. People's concerns are valid: a relatively small group of engineers has built a powerful technology that is already in widespread use. We still haven't figured out all the potential use cases and abuses, and it would be impossible for that group of engineers to have anticipated every possibility.
It's important to remember that this was the reaction when many other technologies came to market. For example, there was a cybersecurity scramble when companies increased their usage of the public cloud, as the threat surface and calculus shifted. Many places are banning the technology not because it's actually dangerous, but because they think it's potentially dangerous. That's the approach Italy is taking.
Why am I not worried?
Simply put, I believe we will figure it out. On the surface, the technology seems net beneficial, and we now have to work on the guardrails. Unfortunately, the only way to figure out the guardrails is often to see how the technology ends up being abused. As with any tool, there will be abuses, but abuse doesn't obviate the tool's benefits. For example, the internet has made scams much easier, but that doesn't mean we should ban the internet.
Another reason I'm not worried is that, compared to past technology shifts, cybersecurity is being flagged as a risk much earlier. When the internet was created, there was little discussion of cybersecurity issues. Similarly, when the public cloud became popular, security leaders scrambled to deal with it after the fact. By contrast, Sam Altman has publicly tweeted that cybersecurity issues are top of mind, which allows OpenAI to consider risk mitigation techniques during design rather than making them an afterthought.
Finally, we managed to figure out how to solve cybersecurity issues for previous new technologies: the internet, the cloud, smartphones, and more. I'm confident we will do the same here. Rather than spending our time complaining about the issues, we should focus our energy on solving them.
The solutions are yet to come
I don't have strong opinions on what the solutions are, other than that we should dedicate effort to understanding potential issues and resolving them pre-emptively. My intuition is that most of the core security issues relate to the data and the algorithm. Since machine learning and artificial intelligence systems make predictions based on their training data, manipulating that data can steer the model toward a selected outcome. There are two problems. First, a group of engineers decides what this data is, with no outside visibility; they also control the algorithm and how it behaves. Second, others can manipulate the data to achieve the outcome they want, as the sketch below illustrates. Therefore, it seems like we need a way to audit the algorithm and data without revealing them, since they tend to be proprietary. Or maybe we decide this information needs to be public and shouldn't be commercialized, which might require companies to change their business models.
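To make the data-manipulation concern concrete, here is a minimal sketch of label-flipping data poisoning. Everything in it is hypothetical: a toy synthetic dataset and an off-the-shelf nearest-neighbor classifier stand in for a real model and its training pipeline.

```python
# Minimal sketch of training-data poisoning via label flipping.
# All data and models here are toy illustrations, not any real system.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Clean training data: two well-separated clusters with labels 0 and 1.
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

# The input whose prediction the attacker wants to control.
target = np.array([[2.0, 2.0]])

clean = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print("clean prediction:", clean.predict(target))  # -> [1]

# The attacker flips the labels of the training points nearest the
# target, steering the model toward the outcome they want for that input.
nearest = np.argsort(np.linalg.norm(X - target, axis=1))[:20]
y_poisoned = y.copy()
y_poisoned[nearest] = 0

poisoned = KNeighborsClassifier(n_neighbors=5).fit(X, y_poisoned)
print("poisoned prediction:", poisoned.predict(target))  # -> [0]
```

A nearest-neighbor model makes the effect easy to see because its predictions depend directly on nearby training labels, but the same principle applies to larger models: whoever controls enough of the training data controls the output, which is exactly why auditing that data matters.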
We are still in the early days, and we have a long way to go. It's good that the creators are aware of these issues and are creating visibility into them. That gives me confidence that we will figure it out!