Interview With Nabil Hannan - Field CISO at NetSPI

Published on: March 26, 2024
Shauli Zacks

In a recent interview with SafetyDetectives, Nabil Hannan, the Field CISO at NetSPI, shared insightful perspectives on the evolving landscape of cybersecurity, especially in the context of AI/ML technologies. Hannan emphasized the distinct roles of a CISO and a Field CISO, noting that his focus is on advising clients on proactive security, exposure, and vulnerability management. He delved into the integration of AI/ML in security strategies, the importance of educating security leaders on these technologies, and NetSPI's innovative approach to AI/ML security through penetration testing. Hannan also highlighted the trend of AI FOMO (fear of missing out) and its potential risks to security, underscoring the necessity of proactive security in today's rapidly evolving cyber threat landscape.

Can you provide an overview of your role as Field CISO at NetSPI and how it intersects with AI/ML security?

There is often confusion about the differences between the CISO and Field CISO roles. A CISO is the most senior security executive accountable for the overall security posture of an organization. In contrast, a Field CISO is a security program advisor for the organization’s customers and partners. At NetSPI, I help clients solve their proactive security, exposure, and vulnerability management challenges. Regarding AI/ML security, my job is to work closely with our security experts and provide business context around the risks and vulnerabilities uncovered during an AI/ML pentest. More importantly, I help security leaders understand the highest priority vulnerabilities and the pragmatic steps they should take to address the risks to their AI deployments (e.g., LLMs) and the surrounding infrastructure.

How do you see the role of a CISO evolving with the increasing use of AI/ML in cybersecurity?

The role of the CISO should not change drastically with the increasing use of AI/ML. The emergence of new technologies and use cases should not distract security leaders from mastering security fundamentals like risk management, compliance and regulatory requirements, incident response, threat intelligence, and vendor risk management. However, I would recommend security leaders prioritize education on AI/ML technologies, as discussed in a recent webinar I hosted. This includes bringing security leaders into the implementation conversations at the very beginning so that security is top of mind throughout development, deployment, and beyond.

How does NetSPI approach the security of AI/ML models differently from traditional cybersecurity practices?

There is no silver bullet for AI/ML security, but penetration testing is a great opportunity to identify, understand, and mitigate risks and improve overall resiliency to attacks. NetSPI's first-of-its-kind AI/ML Pentesting solution focuses on two core components: identifying, analyzing, and remediating vulnerabilities in machine learning systems such as Large Language Models (LLMs), and providing grounded advice and real-world guidance so that security is considered from ideation to implementation.

During these tests, customers receive a partner through ideation, development, training, implementation, and real-world deployment. They're also equipped with holistic, contextual security testing across their tech stack that leverages NetSPI's application, cloud, and network security testing expertise. If an organization is building or implementing its own models, NetSPI can test those implementations against real-world adversarial attack techniques across the major categories, such as evasion, poisoning, extraction, inference/inversion, and availability attacks. In tandem, customers receive an evaluation of their defenses against major attacks and tailored adversarial examples, guidance on how to build a robust pipeline for development and training, and comprehensive vulnerability reports with remediation instructions delivered via NetSPI's PTaaS platform.
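To make the evasion category concrete, below is a minimal sketch of a Fast Gradient Sign Method (FGSM) style evasion attack against a toy logistic regression classifier. The weights, input, and epsilon are all made-up illustrative values, and this is not NetSPI's tooling; it only shows the core idea of nudging an input just enough to flip a model's decision.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical "trained" model: fixed weights and bias for illustration
w = np.array([1.5, -2.0, 0.7])
b = 0.1

x = np.array([0.2, 0.4, 0.9])  # benign input the model scores toward class 1
y = 1.0                        # true label

# Gradient of the binary cross-entropy loss with respect to the input:
# dL/dx = (p - y) * w, where p = sigmoid(w . x + b)
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

# FGSM evasion: push each feature in the direction that increases the loss,
# with the perturbation bounded by epsilon per feature
epsilon = 0.3
x_adv = x + epsilon * np.sign(grad_x)

print(f"score on benign input:      {sigmoid(w @ x + b):.3f}")     # ~0.557
print(f"score on adversarial input: {sigmoid(w @ x_adv + b):.3f}")  # ~0.263
```

Even with the perturbation capped at 0.3 per feature, the model's confidence flips toward the other class; probing for exactly this kind of fragility, at scale and against real deployments, is what an adversarial evasion test does.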

Can you explain the concept of AI FOMO and its potential impact on security strategies?

AI fear of missing out (FOMO) is a new trend in the tech industry, whereby companies worry that competitors will get a leg up if they don’t incorporate generative AI into their product offerings quickly. In other words, they fear missing out on the AI wave and, therefore, rush into the adoption process.

By rushing solely to keep pace with competitors, teams sometimes cast aside procedures they might otherwise take their time with, like conducting a landscape analysis, educating teams on new vulnerabilities, or conducting proactive security testing. In this rush to adopt AI, organizations often don't take the time and care to determine whether they are truly ready for AI-based technology adoption. An example question to ask might be, "Do we have the proper data classification and inventory in case we want the AI solution to leverage internal data?"

While every organization should adopt new technologies and scale, rushing through these practices can increase security gaps, posing long-term threats to the company and its clients. With security looped into the process, organizations can innovate with confidence.

Why is proactive security more important now than ever for business leaders?

The impact and frequency of cyber attacks are increasing at a profound rate: according to Statista, the global cost of cybercrime is expected to grow from $9.22 trillion in 2024 to $13.82 trillion by 2028. Amidst this bleak outlook, organizations and leaders need the confidence to innovate without fear of security attacks hampering their progress. The key to this is proactive security, which equips teams with more clarity, speed, and scale than ever before, especially as the threat landscape continues to grow and evolve.
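As a quick sanity check on those figures (the projection is Statista's; the arithmetic below is ours), the implied compound annual growth rate works out to roughly 10.6% per year:

```python
# Implied compound annual growth rate (CAGR) behind the Statista projection
# cited above: $9.22T in 2024 growing to $13.82T by 2028 (4 years).
start, end, years = 9.22, 13.82, 4
cagr = (end / start) ** (1 / years) - 1
print(f"Implied annual growth: {cagr:.1%}")  # ~10.6%
```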

Teams with a proactive security program have the time and resources to focus on the entire scope of an organization's security posture, discovering, prioritizing, and remediating vulnerabilities before an attack occurs. So, while penetration testing does support identifying and remediating risk, and is still recommended, a proactive security framework, leveraging technologies like attack surface management and breach and attack simulation, helps keep teams protected during the other 50 weeks of the year when a pentest isn't being conducted.
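As one illustration of the kind of continuous checking an attack surface management tool automates between pentests, here is a minimal sketch that resolves candidate hostnames and probes common ports. The hostnames are placeholders, and real ASM platforms layer discovery, fingerprinting, change detection, and prioritization on top of primitives like this.

```python
import socket

# Hypothetical candidate hostnames for an organization's external footprint
CANDIDATES = ["www.example.com", "vpn.example.com", "staging.example.com"]
PORTS = [80, 443]

for host in CANDIDATES:
    try:
        ip = socket.gethostbyname(host)
    except socket.gaierror:
        continue  # hostname does not resolve -- not part of the live surface
    for port in PORTS:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(2)
            is_open = s.connect_ex((ip, port)) == 0  # 0 means the connect succeeded
            print(f"{host} ({ip}) port {port}: {'open' if is_open else 'closed'}")
```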

What are some of the biggest challenges in securing AI/ML models, and how can they be addressed?

AI is transforming how we work because it reduces the effort and costs of completing tasks. Still, we are only at the beginning of this technology’s potential and, therefore, are only aware of its current challenges. Some of the current challenges to securing AI/ML models include technical debt, insufficient resources and knowledge, and the pressure to implement AI quickly without the proper testing, all of which lead to security vulnerabilities. To address these concerns, teams should implement robust AI vulnerability training to bring a more holistic and proactive approach to safeguarding machine learning model implementations.

About the Author

Shauli Zacks is a tech enthusiast who has reviewed and compared hundreds of programs in multiple niches, including cybersecurity, office and productivity tools, and parental control apps. He enjoys researching and understanding what features are important to the people using these tools.