Sitting down with Franco De Bonis, Marketing Director of VISUA, Aviva Zacks of Safety Detectives asked him about People-First AI.
Safety Detectives: What do you love about working for VISUA?
Franco De Bonis: I’d worked in the technology and SaaS sectors for 20-odd years, and like many others, I’d heard plenty of horror stories about AI: how it was going to take all of our jobs, and how people were using it for things they shouldn’t. So I had my trepidations and reservations about it. What I love about VISUA, in particular, is that the co-founders, Luca Boschin and Alessandro Prest, defined very early on what they would and wouldn’t get involved in; if something crossed that line, it wouldn’t matter if people turned up with a big wallet full of money. It was simply not something they wanted to do. An example is facial recognition, which can be heavily exploited and can infringe on individuals’ privacy, so they simply did not want to get involved in it.
Our approach is called People-First AI, which means that we don’t develop technology for its own sake or to exploit it. It has to enrich, enhance, and solve meaningful challenges.
For example, there are people at social media companies like Facebook and Instagram who have to trawl through hundreds of hours of video and thousands of images personally, and this can be the most awful hateful and harmful content you could imagine, which can lead to burnout and even PTSD. Technologies like our Visual-AI can take care of those problems without involving any human beings. It’s not taking away good jobs; it’s dealing with jobs that humans should never have to do. There are also some jobs that humans simply can’t do, like looking at every image and video posted to Twitter or Instagram and logging every brand, every object, and every word embedded within it. VISUA’s ability to do just that means jobs are created in industries like social listening, brand monitoring, and brand protection that would otherwise simply not exist.
I really love that they’ve looked at these challenges and said, “Where could we apply our technology to provide the most benefit to not only industry but the human race as well?” That really was appealing to me as an individual. I’ve always wanted to be involved in good technologies, and this was a perfect opportunity.
SD: Can you tell me a little bit more about Visual-AI?
FDB: We all have an idea of what artificial intelligence is. It’s an artificial brain and it can make decisions based on things that it learns. Now, the problem is that people have all these ‘Skynet’ and ‘HAL 9000’-type views of what AI is, and the reality is that AI is more like a toddler—like when they play that game where they pick up shapes and put them through the holes. Now, imagine a toddler that could process a million shapes a minute—that’s what AI is today.
With AI, you have a bunch of data. You teach it to look at that data and to look at the connections between the data and it can do that incredibly well. The problem is what happens when that data is not in black and white. What happens when that data is in the world around us? When it hasn’t been extrapolated yet into ones and zeros that it can analyze?
Visual-AI is the artificial eyes in front of that artificial brain. It looks for specific signals, and those signals can be logos, objects, or text.
So, you’ve got all of these different technologies that can be combined and then fed into the artificial brain to then make those connections. When you have one image, you extract those signals, but when you merge it with 10 million, 100 million, or even a billion images and you start comparing all those signals, now you have very interesting trend data that companies love.
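As a rough sketch of what that aggregation step might look like (the detection output format and the signal names here are invented for illustration, not VISUA’s actual API), merging per-image signals into corpus-wide counts could work like this:

```python
from collections import Counter

# Hypothetical per-image detection output: each entry holds the signals
# (logos, objects, text) extracted from one image by a visual-detection pass.
detections = [
    {"logos": ["acme"], "objects": ["sneaker"], "text": ["sale"]},
    {"logos": ["acme", "globex"], "objects": ["sneaker"], "text": []},
    {"logos": ["acme"], "objects": ["bottle"], "text": ["new"]},
]

def aggregate(detections):
    """Merge per-image signals into corpus-wide frequency counts."""
    totals = {"logos": Counter(), "objects": Counter(), "text": Counter()}
    for d in detections:
        for kind, values in d.items():
            totals[kind].update(values)
    return totals

trends = aggregate(detections)
print(trends["logos"].most_common(1))  # → [('acme', 3)]
```

Scaled from three images to a billion, the same frequency counts become the trend data the interview describes.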
SD: What technologies does your company use to detect phishing and other hacking attempts?
FDB: VISUA started in brand monitoring and brand protection. We weren’t even thinking about cybersecurity; it wasn’t on our radar. Then we were approached by one of the leading companies in the market with a particular challenge. They had seen an increase in the use of graphical attack vectors in phishing attacks. They’d noticed that in these attacks, graphics and images were being used for two reasons: one, to confuse their target victims and build trust, and two, to evade detection in the first place.
An example is where they’ll use logos to make the user believe an email is from their bank or from Netflix. The same applies to social engineering attacks: if they can get into even one of your accounts, that’s a win for them.
The bad actors are also using AI systems, and they’re coming up with very ingenious techniques. One of them is, instead of embedding one image of a logo, to break that logo up into multiple parts and then rebuild it at render time. If you look at that programmatically, line by line, each image is just a fragment; it means nothing.
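To make the evasion concrete, here is an illustrative sketch (not a working exploit; the tile file names and styling are hypothetical) of how a logo sliced into tiles could be reassembled at render time. A line-by-line scan of the markup sees only individual `<img>` tags pointing at meaningless fragments; only the rendered page shows the complete logo:

```python
# Hypothetical tile files, each containing one horizontal slice of a logo.
TILES = ["logo_part0.png", "logo_part1.png", "logo_part2.png", "logo_part3.png"]

def fragmented_logo_html(tiles):
    """Emit markup that stacks logo slices seamlessly at render time.

    Each <img> alone is an unrecognizable fragment; the zero font-size and
    line-height on the wrapper remove gaps so the slices butt together
    into the full logo when the page is rendered.
    """
    imgs = "".join(f'<img src="{t}" style="display:block">' for t in tiles)
    return f'<div style="font-size:0;line-height:0">{imgs}</div>'

html = fragmented_logo_html(TILES)
```

This is why a rendered-screenshot approach catches what a programmatic scan misses: the screenshot sees exactly what the victim sees.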
We came up with a new approach that combined our Visual-AI tech stack with a whole new methodology that doesn’t rely on programmatic analysis at all. Instead, the phishing detection platform renders the email or web page in a sandbox and then captures a flat image of what is in the browser or email reader, essentially a JPEG of the email or web page. Our various technologies are then combined to look at that visually rather than programmatically, and all the different potential risk elements are identified. For instance, is there a specific logo from a specific brand? We use object detection to highlight login or payment forms and fields, and we use text detection to highlight trigger words like “login” or “payment.” We pick up all of these signals, which the platform then uses, along with its other threat analysis technologies, to come up with a final risk score. They weigh everything up and make the decision as to whether it is a threat or a genuine email. We don’t make that decision; we simply highlight all of the possible risks for them to assess.
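A minimal sketch of that scoring step might look like the following. The signal classes, weights, and thresholds here are invented for illustration; a real platform would tune the weights and combine them with many other threat-analysis inputs:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class VisualSignals:
    """Signals a visual-analysis pass might extract from a rendered capture."""
    logos: List[str] = field(default_factory=list)          # brand logos detected
    form_fields: List[str] = field(default_factory=list)    # e.g. "password"
    trigger_words: List[str] = field(default_factory=list)  # e.g. "login"

# Illustrative weights only.
WEIGHTS = {"logo": 0.4, "form": 0.35, "trigger": 0.25}

def risk_score(signals: VisualSignals, sender_is_brand: bool) -> float:
    """Combine visual signals into a 0..1 risk score.

    A known brand logo on a page whose sender does not belong to that
    brand is treated as the strongest phishing indicator.
    """
    score = 0.0
    if signals.logos and not sender_is_brand:
        score += WEIGHTS["logo"]
    if any(f in ("password", "card_number") for f in signals.form_fields):
        score += WEIGHTS["form"]
    if any(w in ("login", "payment", "verify") for w in signals.trigger_words):
        score += WEIGHTS["trigger"]
    return round(score, 2)

s = VisualSignals(logos=["netflix"], form_fields=["password"], trigger_words=["login"])
print(risk_score(s, sender_is_brand=False))  # → 1.0
```

As in the interview, the sketch only surfaces and weighs risk signals; the final threat-or-genuine decision belongs to the platform consuming the score.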
We’re using our technology in a way that we never really envisioned, but it fits in incredibly well. That’s the interesting thing about it, and it’s working amazingly well. Our partners have been able to find risks that previously would have slipped through.
SD: What do you think are the worst cyberthreats out there today?
FDB: I think the worst cyberthreat that I saw—and it was absolutely terrifying—was shown to me at a virtual conference recently. There was a social engineering consultant who showed us how easily bad actors can get our information. Social engineering attacks typically happen at the early stage in order to gather a lot of information that they then use later on, whether it be business email compromise, spear-phishing, or whaling. He didn’t show where you could get these tools, but he showed the tools that he used. One of them is simply a phone-masking app.
In marketing, when we’re gathering prospect information, we do a lot of research. We want to prospect in a very personal way. These days, email marketing is not about sending out thousands of emails and hoping; it’s much more targeted than that. We will go to LinkedIn and Google to look at articles you’ve written, what you’ve done, things you’ve said, and conferences you’ve attended, and then build an email campaign around that.
There are various automation tools that allow me to import those things. What was really scary is that this is exactly what bad actors do, but for the wrong reasons. By the end of the demo, he’d shown how he could send an email on behalf of a Director of IT to an employee telling them to expect a call from XYZ company (their official support company; they know who the support company is because they’ve done the research). The employee then got the phone call, which even showed the caller ID number of the real support company. They were asked to go to a specific web address and click a link because the caller just needed to get access to their system. The moment they opened that link, they were infected!
I think that’s the scariest thing—how easily they can manipulate people who are not trained, who don’t understand the process and procedure, and who don’t confirm before they do something.