Canada's head of cybersecurity is warning of the dangers that fake, AI-generated videos pose to upcoming elections.
Sami Khoury, the head of the Canadian Centre for Cyber Security (CCCS), says that the AI technology used to make fake videos is advancing significantly faster than the verification tools designed to detect it.
This essentially means that Canada (or any other country) may not have the means to successfully catch all fake AI videos and audio that bad actors might use to spread disinformation related to the upcoming elections.
“AI can now be used to almost impersonate my voice,” Khoury told the National Post. “That’s the next evolution. Now, you can take a snippet of my voice, 30 seconds, one minute, and make it say something completely opposite to my message and it will be very authentic.”
“That can be done fairly easily using online tools,” he continued. “And then you evolve a little bit further, and you get into the deepfake videos. The technology is moving in that direction. We don’t know yet how to authenticate … or deauthenticate. How do I say this is not my voice, or how do I authenticate that a message is truly from me?”
In light of the Cyber Threats to Canada's Democratic Process report, which said Canada's geopolitical opponents might use AI to create "deepfake" videos and images, Khoury also pointed to increasingly convincing phishing attempts — bad actors now have AI to help them perfect their attacks.
“Long gone are the days where a phishing email … has typos, that has funny punctuation, that is selling you something that is too good to be true,” he said.
Phishing is also much easier than hacking directly into an organization's systems.
“Companies are investing in making products a little bit more secure,” Khoury explained. “So, the only way often to bypass that hard shell, that perimeter security, is … to catapult yourself in the middle of a network. Phishing tends to be a way to do it.”
On ransomware attacks, Khoury said the Communications Security Establishment started a program to warn government agencies and other organizations when they detect a potential ransomware attack.
“We have come up with a technique to detect some of those steps in the dance with enough confidence now that we can issue an automated alert to say … that we’ve picked up some of those signals, there is some activity happening on your infrastructure that are steps to a potential ransomware incident,” he said.
So far, the organization has issued around 500 such notifications.
“In many cases, the feedback we hear is that it made a difference and they’ve managed to isolate the system and stop ransomware from being deployed,” Khoury said.