Interview with Alex Polyakov - Co-Founder and CEO of Adversa AI

Shauli Zacks Shauli Zacks

In a recent interview with SafetyDetectives, Alex Polyakov, Co-Founder and CEO of Adversa AI, discusses the motivation behind founding the company and highlights the importance of securing AI systems in a world increasingly reliant on artificial intelligence. He shares insights into Adversa AI’s flagship services and the evolving landscape of AI security and privacy regulations.

Hi Alex, thank you for taking some time for us today. Can you talk about what motivated you to co-found Adversa AI?

Certainly, it’s a story I’m thrilled to share! Picture this: a world relying more and more on AI, using it as the backbone for our daily decisions. During my tenure at my previous startup, I stumbled upon a groundbreaking research paper, almost like an Aladdin’s lamp, which revealed that just by altering a few pixels, one could completely deceive an AI model. Imagine the shock! It was as if I discovered that the very walls of our digital fortresses could be dismantled by a mere whisper. Such an Achilles’ heel! I was captivated. Such vulnerabilities, in a world quickly getting wrapped in AI-driven technologies, present risks at an unimaginable scale. It wasn’t just about improving our startup’s security anymore; it was about the very future of our species. Just imagine, if fundamental loopholes in AI are not addressed now, we could wake up in a world where tech-security is an afterthought, just like many past technologies. Hence, Adversa AI was born first of all as AI Research lab focused on Security and Safety risks and then became a company providing solutions to all areas of AI Security. Our mission is not just about security; it’s about ensuring a sustainable and safe future of all innovative technologies.

What are Adversa AI’s flagship services?

Adversa AI, at its core, is committed to AI security and safety. Our crown jewel is the end-to-end platform designed meticulously to assess and protect AI systems and perform automated AI Red Teaming. This platform is not just a product but a culmination of relentless research, unparalleled expertise, and a passion to be the beacon in AI security. It not only uncovers vulnerabilities but also offers unmatched defense based on the combinations of the best-of-breed mechanisms selected by our patent. We aim to ensure that our clients’ AI systems aren’t just efficient but also invulnerable to threats.

But let’s not get lost in technical jargon. Imagine your AI as a medieval castle. Our platform doesn’t just put up walls; it foresees where the enemies will attack from and reinforces those areas preemptively. It’s like having a futuristic oracle fused with a master architect at your beck and call. Our AI Governance Module? Think of it as the wise council guiding the rulers, equipping security personnel to lead the charge against potential threats. The AI Validation Module is the ever-vigilant guardian, always on the lookout for weak points. And the AI Hardening Module is akin to the craftsmen and masons that fortify the castle walls. At Adversa AI, we don’t just react; we predict, adapt, and fortify. We’re the guardians of the AI realm, ensuring not a single breach.

As AI is becoming more and more mainstream, what are some of the risks that people may not realize?

Great question. We often skim the surface, missing the intricacies below. For instance, there are Manipulation risks, like jailbreaks for LLMs or adversarial examples, especially in Computer Vision models. Then there’s Infection, where threats range from data poisoning to model trojaning or even prompt injections. And, not to forget, Extraction risks which can reveal precious data through methods like model inversion.

For instance, an AI system trained on biased data can make unfair decisions, impacting lives and businesses. Furthermore, AI models can be tricked by adversaries into making incorrect predictions or revealing confidential data. Without vigilant security, these systems could be manipulated to serve malicious ends, leading to catastrophic outcomes. It’s essential for society to realize that the AI magic comes with its own Pandora’s Box of risks.

For instance, Manipulation isn’t just about tricking an AI to see a cat where there’s a dog; it can influence major systems, from financial markets to security surveillance. And Infection isn’t merely about corrupting an AI’s ‘thought process’, but it can lead to distorted news feeds, biased recommendations, or even malfunctioning automated vehicles. The risk of Extraction? Imagine personal secrets, once locked deep within, being unveiled because an AI’s ‘thought pattern’ was reverse-engineered. It’s a brave new world out there, and with great innovation comes great responsibility. We must be the stewards of this powerful force. Such complexities demand a nuanced understanding and a proactive approach, which is what we champion at Adversa AI.

What are some of the most common cyber threats targeting AI systems today?

AI systems, due to their pervasive nature, are a goldmine for cybercriminals. In this era, where AI is the reigning champion, malicious entities have sharpened their arrows. Most notably, we’re witnessing threats like Prompt Injections in LLMs. This isn’t just a fancy term; picture an AI system being subtly nudged off its course, akin to a ship being steered towards treacherous waters by a hidden hand. In a world where all the decisions are outsourced to LLMs, attacks such as Prompt Injection can manipulate any decision, be it a financial decision or a cybersecurity and identity management system that can be easily bypassed.

What else is there? AI Jailbreaks. Think of it as a mastermind finding a hidden entrance into the AI’s mind. Our very own research, which got spotlighted in the prestigious Gartner report, introduced the world to new Jailbreak methods like RabbitHole. It’s akin to uncovering a hidden trap in a maze, which everyone assumed was safe. Such methods can be used to make LLMs do whatever you want from it, even if it’s restricted by Guardrails, either internal ones developed by LLM vendors or even external ones introduced by a third party. We have already experienced a number of Guardrails implemented by our clients, which turned out to be very easy to bypass. That’s why it’s essential to have a third-party assessment. These are just the tip of the iceberg, and as AI evolves, so will the sophistication of these threats.

What are the best practices for securing AI systems against cyber threats?

One must take an end-to-end approach. It starts with understanding, much like an artist observes their subject. Recognize the risks, the dark corners of the canvas. Then, validation – ensuring every AI Model, every line of code, every algorithm, and Dataset example stands up to scrutiny. But it doesn’t end there. Hardening is the process of refining, perfecting, and ensuring resilience, making sure that the AI isn’t just functional but fortified. Finally, in the dynamic world of AI, one must be ready to detect any anomalies and respond swiftly, and its time to implement monitoring capabilities.

We have a Weekly and Monthly newsletters on the latest Secure AI news that you can follow to learn more and be always up-to-date.

How is the field of AI security being shaped by regulations such as GDPR, CCPA, or other government interventions?

Regulations like GDPR and CCPA are not just guidelines; they’re a strong testament to the world waking up to the gravity of AI security and privacy. These regulations underscore the significance of personal data protection and push organizations to be more transparent and accountable. They’re paving the way for a safer AI ecosystem by ensuring companies adhere to best practices and prioritize user security and privacy. However, there are even more relevant guidelines and recommendations, such as the NIST AI Risk Management Framework, to which we contributed and encourage everyone to take a look.

About the Author

About the Author

Shauli Zacks is a tech enthusiast who has reviewed and compared hundreds of programs in multiple niches, including cybersecurity, office and productivity tools, and parental control apps. He enjoys researching and understanding what features are important to the people using these tools.