Meta Warns About Malware Posing As ChatGPT

Tyler Cross
Senior Writer

Meta, the parent company of Facebook and Instagram, recently released a security report which detailed a new wave of malware using ChatGPT’s name and images to trick users.

“Since March alone, our security analysts have found around 10 malware families posing as ChatGPT and similar tools to compromise accounts across the internet,” said the Meta Q1 security report.

“These malware families — including Ducktail, NodeStealer and newer malware posing as ChatGPT and other similar tools — targeted people through malicious browser extensions, ads, and various social media platforms with an aim to run unauthorized ads from compromised business accounts across the internet.”

Meta notes that because many of these apps offered some limited ChatGPT functionality, they gave users the illusion that the product they were using was safe. The malware-laden apps appeared to be normal browser extensions or ChatGPT integrations, so once they began working as expected, the average user had no reason to suspect a problem.

Hackers regularly latch onto the latest trends to trick victims into clicking on their ads or downloading their apps, and since ChatGPT saw a massive surge in popularity, it has attracted malicious actors from all over the world. While some threat actors are using ChatGPT to help create malware, others are using its image to sneak malware onto unsuspecting victims’ devices.

Meta isn’t the only company flooded with ChatGPT impersonators. Even major browsers like Google Chrome, Microsoft Edge, and Brave have struggled with waves of malicious apps masquerading as AI chatbots. According to the security report, cease-and-desist letters have been sent to the individuals responsible for the malware.

The best way to avoid malicious ChatGPT apps is to make sure you’re using only the official OpenAI product, not a third-party version. However, even ChatGPT has had trouble with data breaches, so make sure you’re using a unique username and password for your account.

About the Author
Tyler Cross
Senior Writer

Tyler is a writer at SafetyDetectives with a passion for researching all things tech and cybersecurity. Prior to joining the SafetyDetectives team, he worked with cybersecurity products hands-on for more than five years, including password managers, antiviruses, and VPNs, and learned everything about their use cases and functions. When he isn't working as a "SafetyDetective", he enjoys studying history, researching investment opportunities, writing novels, and playing Dungeons and Dragons with friends.
