Italian Authorities Accuse OpenAI Of Violating GDPR

Tyler Cross
Senior Writer

Italian data regulators have leveled serious accusations at OpenAI, alleging that the company broke several provisions of the General Data Protection Regulation (GDPR).

The GDPR is a comprehensive set of data privacy laws meant to protect consumers from the misuse of their personal information. Breaking these laws can carry serious consequences, including substantial fines.

“The Italian DPA (Guarantor for the protection of personal data) notified breaches of data protection law to OpenAI, the company behind ChatGPT’s AI platform,” the DPA said in a press release.

This isn’t OpenAI’s first brush with the Italian authorities, either. Last March, ChatGPT was temporarily banned in Italy for alleged misuse of user data; the ban was lifted a month later. Despite that, the Italian watchdog continued a lengthy investigation into the company.

In this case, the Italian Garante states that OpenAI unlawfully processed user data through its AI language model, ChatGPT.

“OpenAI will have 30 days to communicate its defense briefs regarding the alleged violations,” the DPA said. “In defining the procedure, the Garante will take into account the work in progress within the framework of the special task force, established by the Board that brings together the EU Data Protection Authorities (EDPB).”

It’s not just the EU that has been investigating OpenAI; the US Federal Trade Commission is conducting its own independent investigation. In the past, OpenAI also walked back its promise not to let its software be used for military purposes.

OpenAI is not deaf to the criticisms of its data privacy and retention policies, and it has pushed back against some of these claims.

“We believe our practices align with GDPR and other privacy laws, and we take additional steps to protect people’s data and privacy,” said an OpenAI spokesperson. “We want our AI to learn about the world, not about private individuals. We actively work to reduce personal data in training our systems like ChatGPT, which also rejects requests for private or sensitive information about people.”

About the Author

Tyler Cross
Senior Writer

Tyler is a writer at SafetyDetectives with a passion for researching all things tech and cybersecurity. Prior to joining the SafetyDetectives team, he worked with cybersecurity products hands-on for more than five years, including password managers, antiviruses, and VPNs and learned everything about their use cases and function. When he isn't working as a "SafetyDetective", he enjoys studying history, researching investment opportunities, writing novels, and playing Dungeons and Dragons with friends.
