Interview With Joe Payne - President & CEO of Code42

Published on: February 21, 2024
Shauli Zacks

SafetyDetectives recently had the opportunity to sit down with Joe Payne, the President and CEO of Code42, a leading provider of data loss and insider threat protection solutions. With over two decades of experience in the security sector and a track record of leadership that includes roles at eSecurity and iDefense, Joe brings deep insight into the evolving cybersecurity landscape. In this exclusive interview, he discusses the critical role Code42 plays in helping organizations mitigate insider threats, particularly as AI and automation become embedded in organizational processes, and offers practical advice on striking the delicate balance between innovation and security for businesses navigating data protection in today’s digital age.

Joe, could you please begin by introducing yourself and providing background on your professional career and experience?

Today, I’m the president and CEO of Code42, the leading provider of data loss and insider threat protection. I’ve been in security for over 20 years, having served as CEO of eSecurity and President of iDefense back in the early 2000s. In addition to being a five-time CEO, I have served on the boards of directors of multiple public and private companies, as well as the board of the not-for-profit First Focus Campaign for Children.

I’ve been working in tech for almost three decades and have led high-growth software firms, including Eloqua (NASDAQ: ELOQ), which I took through its IPO and sale to Oracle.

You joined Code42 as CEO in 2015. What industry does Code42 operate in, and what problems does the company solve for customers?

In short, Code42 helps organizations prevent data loss from employees and contractors. Picture this: a sales executive accepts an offer for a new job. Before he gives his notice to his employer, he uses a personal laptop to download customer contact lists from the CRM system so that he can take the data with him to his new gig. How did we know about it? He was a Code42 employee, and our Incydr solution alerted us to the data exfiltration. Had we not caught it, he could have walked away with data critical to our customers and sales pipeline. It’s just one example of the many forms of data loss, leak, and theft that Code42’s technology helps companies protect against.

With a 32% year-over-year increase in the number of insider-driven incidents, insider risk threatens companies of all sizes. This issue is only getting worse given recent shifts in the workforce and industry, like hybrid work models and generative AI tools.

Today, data is highly portable. The same cloud technologies that allow employees to connect, create, and collaborate also make it faster and easier to leak critical data like customer lists, source code, and IP. It’s a huge threat to businesses. According to one of our reports, insider-driven data exposure, loss, leak, and theft events could cost companies $16 million per incident, on average.

At Code42, we approach insider risk by monitoring file movements and scoring each movement based on its risk level. Companies need solutions that allow data traffic to flow so that employees can collaborate. Solutions that constantly throw up barricades or monitor only some traffic aren’t just ineffective; they also keep people from doing their jobs.
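To make that concrete, here is a minimal sketch of what destination- and context-based risk scoring of file events might look like. This is an illustration only, not Code42’s Incydr implementation; every field name, weight, and threshold below is a hypothetical assumption.

```python
from dataclasses import dataclass

# Illustrative weights: riskier destinations and more sensitive file types
# raise the score. None of these names or numbers come from Code42.
DESTINATION_RISK = {
    "corporate_sharepoint": 0,    # sanctioned, trusted destination
    "personal_cloud_drive": 40,   # e.g., a personal Dropbox or Google Drive
    "removable_media": 50,
    "unknown": 60,
}
FILE_TYPE_RISK = {"source_code": 30, "customer_list": 30, "document": 10}

@dataclass
class FileEvent:
    user: str
    file_type: str
    destination: str
    off_hours: bool            # activity outside normal working hours
    recent_resignation: bool   # user has given notice

def score(event: FileEvent) -> int:
    """Return a 0-100 risk score; traffic keeps flowing, only the score changes."""
    total = DESTINATION_RISK.get(event.destination, DESTINATION_RISK["unknown"])
    total += FILE_TYPE_RISK.get(event.file_type, 0)
    if event.off_hours:
        total += 10
    if event.recent_resignation:
        total += 20
    return min(total, 100)

# The departing-sales-exec scenario from earlier in the interview:
event = FileEvent("sales_exec", "customer_list", "personal_cloud_drive",
                  off_hours=True, recent_resignation=True)
print(score(event))  # 100 -> routed to an analyst for review, not blocked
```

The design point is that nothing is barricaded: a high score routes the event to a security analyst rather than blocking the employee outright.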

How has the advent of AI impacted the visibility and control that organizations have over their data?

Our CTO, Rob Juncker, recently tested how employees might inadvertently be leaking data in our own industry. He prompted ChatGPT to generate a competitor’s 2024 product roadmap with enough detail to suggest that their employees had been inputting sensitive corporate information. Bingo! In a matter of seconds, we had competitive intelligence that we could plan against.

Just as our competitors seem to be doing, employees across industries are using AI to streamline their workflows, automate repetitive tasks, and make data-driven decisions.

And this is presenting new challenges. Any sensitive or confidential data that employees share with those tools flies out of employers’ control and can put compliance obligations and IP protections at risk.

Companies need to have solutions in place that can monitor data movement across cloud and AI tools, work across platform and system differences, and provide complete visibility into data sources, types, and destinations. With the right data protection strategy in place, security teams can detect data exfiltration in real time and take appropriate action. Most importantly, data can often be recovered quickly, sometimes before it leaves for good, removing the need to engage forensic investigators or outside lawyers.
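As a rough illustration of that kind of real-time monitoring, the sketch below scans a stream of data-movement events and surfaces anything headed to an unsanctioned or generative AI destination. The event shape, domain lists, and alert format are all assumptions for illustration, not any vendor’s actual API.

```python
from typing import Iterable, Iterator

# Illustrative domain lists; real deployments would manage these centrally.
SANCTIONED_DOMAINS = {"sharepoint.example.com", "drive.example.com"}
GENAI_DOMAINS = {"chat.openai.com", "gemini.google.com"}  # examples only

def detect(events: Iterable[dict]) -> Iterator[dict]:
    """Yield an alert for each movement to an unsanctioned or GenAI destination."""
    for event in events:
        dest = event["destination_domain"]
        if dest in SANCTIONED_DOMAINS:
            continue  # normal collaboration traffic flows untouched
        reason = "genai_upload" if dest in GENAI_DOMAINS else "unsanctioned_destination"
        yield {
            "user": event["user"],
            "file": event["file"],
            "destination": dest,
            "reason": reason,
        }

stream = [
    {"user": "dev1", "file": "roadmap.docx", "destination_domain": "chat.openai.com"},
    {"user": "pm1", "file": "notes.txt", "destination_domain": "drive.example.com"},
]
for alert in detect(stream):
    print(alert)  # only the ChatGPT upload is surfaced for review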

In your opinion, how has the challenge of insider risks evolved with the integration of AI and automation in organizational processes? What are the most pressing challenges that have emerged, and what implications does it have on companies?

AI is outpacing many corporate security programs and policies. Samsung is a prime example of this. After realizing employees were uploading critical source code to ChatGPT, it banned employees from using generative AI. While this might work for some companies, it doesn’t really address the essential question: What’s our strategy to keep pace with innovation while securing critical data?

Generative AI and LLM technologies make it easy for employees to input critical company data — customer lists, product plans, source code — putting organizations at risk of losing their competitive edge, damaging their reputation, and even impacting their profits, as competitors can use those same AI tools to gather intelligence. Companies can’t afford to ignore this.

Despite high-profile cases, a lot of companies still operate without proper protections in place. It’s not enough to just trust employees to protect your data. They’re handling proprietary information on a daily basis. And they aren’t just sharing it with ChatGPT. They’re moving it over email, text, and personal drives, to name a few. Without proper tools, it’s impossible for organizations to track all of the data that is changing hands.

This is why it’s so important to have a comprehensive strategy in place to protect against data loss from insiders. Companies can start by conducting risk assessments to understand who their top threat actors are, how their critical processes might be affected, and what their current risk-prevention capabilities look like. A risk assessment clarifies your strengths and weaknesses so you can adopt the right approach.

How can organizations strike a balance between leveraging the benefits of AI-driven automation for innovation and ensuring robust security measures to protect corporate data and IP?

In terms of AI, leaders should be concerned about two things: “Where will our data go?” and “Will it be protected?” With the rise of new AI technologies, organizations want to protect their critical business data. Some companies are taking the approach of banning generative AI tools outright. While that might work for some, other companies are seeing real benefits.

My advice? Implement security tools that will enable employees to innovate with AI while protecting critical data.

Before implementing new tech into an organization’s ecosystem, leaders should ask questions like: “Will the data we input into these models be hosted in the cloud or in our segregated environment?” and “Will our data be secure when employees use these tools?”

The key is striking the right balance. Your strategy and technology should enable, rather than punish, collaboration between your teams. The goal is to keep your data safe without slowing down operations.

In your opinion, how should organizations adapt their security policies and protocols to address the evolving landscape of AI-driven automation and the potential risks associated with it?

Every successful security team, regardless of the industry, has one thing in common: they appreciate and account for the human element. This is a critical part of minimizing data loss. While systems, tools, and protocols play an important role in an organization’s security framework, it’s ultimately the people who interact with, manage, and control access to sensitive data who are the biggest vector of risk.

Companies need to consider implementing human-centric principles and training to support real-time learning. We’ve found success in automating the sending of customized micro-trainings, which are triggered by employee behavior. These micro-trainings are more timely and relevant than quarterly or yearly company-wide training sessions. For example, when an employee accidentally uploads a file to a personal Dropbox account, an automated “nudge” training is sent to remind them of company policy and best practices for data handling.
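A behavior-triggered nudge like the one described can be wired up quite simply. The sketch below maps detected behaviors to short lessons; the trigger names, lesson text, and send_message stub are hypothetical placeholders, not Code42’s actual mechanism.

```python
# Map detected risky behaviors to short, context-specific lessons.
# Trigger names and lesson wording are illustrative assumptions.
MICRO_TRAININGS = {
    "personal_cloud_upload": (
        "Heads up: files moved to personal cloud storage leave company control. "
        "Please use the corporate drive; see the data handling policy."
    ),
    "genai_paste": "Reminder: don't paste confidential data into public AI tools.",
}

def send_message(user: str, text: str) -> None:
    # Placeholder for a chat or email integration (Slack, Teams, email, etc.).
    print(f"To {user}: {text}")

def nudge(user: str, trigger: str) -> None:
    """Send the micro-training mapped to the triggering behavior, if any."""
    lesson = MICRO_TRAININGS.get(trigger)
    if lesson:
        send_message(user, lesson)

# Fired by the detection pipeline when the Dropbox upload is observed:
nudge("alice", "personal_cloud_upload")  # timely and tied to the actual event
```

Because the lesson arrives moments after the behavior, it lands while the context is still fresh, which is what makes it more effective than a quarterly training session.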

We also encourage empathetic investigations to correct and educate employees. Most of the time, security investigations assume the end user was acting maliciously. However, we’ve found that by reaching out to employees with empathy, we’re in a much better place to actually get to the bottom of why employees are breaking policy.

About the Author

Shauli Zacks is a tech enthusiast who has reviewed and compared hundreds of programs in multiple niches, including cybersecurity, office and productivity tools, and parental control apps. He enjoys researching and understanding what features are important to the people using these tools.