OpenAI Confirms AI Software Is Being Used By Hackers

Tyler Cross
Senior Writer

Tech giants Microsoft and OpenAI, the company behind ChatGPT, revealed that hackers have been using AI technology to rapidly develop new tools, improve scripts, and create better social engineering schemes.

“Over the last year, the speed, scale, and sophistication of attacks have increased alongside the rapid development and adoption of AI,” Microsoft said in a blog post released on Wednesday. “Defenders are only beginning to recognize and apply the power of generative AI to shift the cybersecurity balance in their favor.”

Researchers found evidence of various state-sponsored groups linked to North Korea, Russia, China, and Iran improving their software with AI. Methods include using software similar to ChatGPT to help with tedious scripting tasks, allowing them to lay the groundwork for more powerful software. Some hackers even used AI to create fully automated operations.

Threat actors have also released large language models (LLMs) of their own, such as WormGPT and FraudGPT, which aid in the creation of malicious software.

Recently, OpenAI has faced mounting pressure over alleged violations of data privacy laws, backtracking on its promise to keep its software out of military use, and the use of its software by a variety of hacking groups.

“The objective of Microsoft’s partnership with OpenAI, including the release of this research, is to ensure the safe and responsible use of AI technologies like ChatGPT,” writes Microsoft. “As part of this commitment, we have taken measures to disrupt assets and accounts associated with threat actors and improve the protection of OpenAI LLM technology and users from attack or abuse.”

At this point, neither Microsoft nor OpenAI has detected any “significant attacks” using AI software, but that’s likely to change as time goes on.

“We feel this is important research to publish to expose early-stage, incremental moves that we observe well-known threat actors attempting.”

About the Author

Tyler Cross
Senior Writer

Tyler is a writer at SafetyDetectives with a passion for researching all things tech and cybersecurity. Prior to joining the SafetyDetectives team, he worked hands-on with cybersecurity products for more than five years, including password managers, antiviruses, and VPNs, and learned everything about their use cases and functions. When he isn't working as a "SafetyDetective", he enjoys studying history, researching investment opportunities, writing novels, and playing Dungeons and Dragons with friends.