Interview With Itamar Golan - CEO and Co-Founder at Prompt Security

Updated on: July 3, 2024
Shauli Zacks Shauli Zacks
Updated on: July 3, 2024

In a recent SafetyDetectives interview, we spoke with Itamar Golan, the CEO and co-founder of Prompt Security, who has dedicated 15 years to the field of AI. Starting his career in the Israeli Defense Forces (IDF) and moving on to significant roles in the private sector, including leadership positions at cybersecurity firms like Check Point Technologies and Orca Security, Golan has always been at the intersection of AI and security. His passion for data science, machine learning, and AI, combined with a keen interest in security, led him to co-found Prompt Security with Lior Drihem. Prompt Security aims to help companies adopt AI while ensuring privacy and security, making them competitive without sacrificing essential safeguards.

Can you share a bit about your background and what led you to co-found Prompt Security?

My name is Itamar Golan, and I’m the CEO and co-founder of Prompt Security. I’ve been working with AI for the last 15 years. I got my start in the IDF and then moved to the private sector, working at several cybersecurity companies like Check Point Technologies and Orca Security, where I led the AI and Data Science departments.

I’ve been living at the intersection of security and AI for a long time. My biggest passion is data science, machine learning, and AI, and my favorite market is security. It was about time to found a security company that actually protects AI, especially as this trend became more relevant in this era.

This led me to co-found Prompt Security with my co-founder, Lior Drihem. Essentially, we aim to help companies adopt AI to stay relevant and competitive without sacrificing privacy and security. So, that’s a high-level overview of my background and what brought me here.

How does Prompt Security address the unique security challenges posed by generative AI tools?

I think we address these challenges uniquely through a very straightforward prism, which says you need a holistic solution that gives you the right visibility, control, and protection across different AI modalities and verticals within your company.

Most of the current tools, whether from incumbents or new entrants, are solving pinpoint solutions. We decided that any CISO, security leader, or CIO needs one holistic and comprehensive platform. This platform should address the risks of both the internal usage of AI tools by employees and developers, and also the risk posed by homegrown AI applications when exposed to external users around security and content safety.

Everything will be consolidated in one place, visible to security leaders. On top of that, they can define their organizational policies, which we enforce in real time, either on the security side or around protecting generative AI.

What are the most common threats associated with the use of GenAI in enterprises today?

One of the most prevalent threats today is the rapid and widespread use of AI tools by employees. The speed at which these tools are being adopted makes it challenging to track and manage them effectively. Unlike physical assets, AI tools can multiply and spread within the organization, often without proper oversight. This lack of visibility, sometimes referred to as “shadow AI,” means that enterprises often don’t know which AI tools their employees are using or what data is being shared with these applications. This can lead to significant security risks.

Another major threat is the leakage of confidential data. Intellectual property and sensitive information are increasingly being shared with AI tools because they are so useful and efficient. This isn’t a new problem, but the scale has increased dramatically. Employees are more incentivized to share data with AI, leading to a substantial rise in the volume of data being shared compared to traditional tools. Additionally, AI systems are typically trained on the data they receive, which can further complicate data privacy and security.

A third threat arises when enterprises develop their own GenAI apps using third-party Large Language Models (LLMs), vector databases, and other components. These custom applications, when exposed to employees and customers through chat interfaces, can introduce new vulnerabilities. For instance, there is a risk of prompt injection attacks in chat interfaces. Therefore, these new augmented threats require careful consideration and robust security measures to mitigate potential risks.

What role does real-time monitoring play in securing GenAI applications, and how does your platform implement this?

We recognize that employees are already using various AI tools, and it’s essential to understand this usage comprehensively.

Our platform allows you to monitor AI tool usage in real time, enabling you to detect and respond to any violations of laws or organizational policies immediately. This proactive approach ensures that you can prevent potential security breaches as they happen.

We start by providing real-time visibility into the current usage of AI tools. This involves detecting and flagging any instances where Personally Identifiable Information (PII) or confidential data is being shared with AI applications. If such data is detected, our system can either prevent it from being shared or modify the prompt in real time to mitigate the risk.

Additionally, our platform can detect and block adversarial attacks, such as prompt injections, in real time. This capability ensures that any malicious attempts to manipulate AI outputs are thwarted instantly.

In essence, our real-time monitoring system creates a robust defense against potential threats by continuously analyzing AI interactions and enforcing security policies dynamically. This approach is fundamental to maintaining the integrity and security of your GenAI applications.

What are the biggest challenges enterprises face when integrating GenAI tools, and how can they overcome them?

Answer: I think the biggest challenges that enterprises face today when adopting AI is the fact that it’s a completely new era. It’s a brave new world. Not many of those security leaders really understood AI a few months ago, and suddenly the entire organization—from product development and R&D to legal, marketing, and sales—is using AI. You need to make that leap in education and awareness super fast, much faster than any technology in the past.

In parallel, the board, the CEO, customers, and the market are all demanding AI capabilities immediately, which creates a security gap. Organizations are trying to innovate quickly without adequate, corresponding security solutions to ensure that this rapid adoption doesn’t compromise their security posture and data privacy. This situation is unprecedented. Even with the adoption of the cloud or the internet, the pace wasn’t as intense as what we see today. This is what we’re trying to solve—the problem of securely adopting AI.

In your opinion, how will the landscape of cybersecurity evolve with the increasing adoption of GenAI? 

Regarding the landscape of cybersecurity, I think it can be split into two main areas. The first area is protecting AI usage and development within an enterprise from a wide variety of threats and the expanding attack surface. This includes ensuring that AI systems themselves are secure, which we’ve already touched upon.

The second area involves traditional cybersecurity attacks on your infrastructure, cloud, and identities. These attacks will increasingly utilize AI to discover new zero-day vulnerabilities, reverse-engineer technologies, and more. Essentially, we’ll see the same types of attacks we’ve seen in the past, but at a much faster pace. This means that, correspondingly, we need AI to build better security tools, enhance SOC (Security Operations Center) teams, and develop more efficient workflows. As the velocity and frequency of attacks increase, so must the robustness of your security workflows.

You can look at it as two categories: how you secure your AI usage and how AI itself is creating new kinds of threats. It’s a bit of a vicious circle. While we often focus on the negative aspects, it’s important to remember that AI also creates numerous opportunities. In my opinion, these opportunities outweigh the potential threats. AI can make employees more efficient and productive, deliver more features to customers faster, and even help detect cybersecurity attacks before they happen.

About the Author
Shauli Zacks
Updated on: July 3, 2024

About the Author

Shauli Zacks is a tech enthusiast who has reviewed and compared hundreds of programs in multiple niches, including cybersecurity, office and productivity tools, and parental control apps. He enjoys researching and understanding what features are important to the people using these tools.

Leave a Comment