OpenAI has faced another privacy complaint in Europe over its viral chatbot’s tendency to hallucinate false information — and this time it’s going to be hard for regulators to ignore.
Privacy group Noyb is supporting a man in Norway who was horrified to discover ChatGPT returning fabricated information claiming he had been convicted of murdering two of his children and attempting to kill a third.
Previous complaints about ChatGPT generating inaccurate personal data have involved issues such as an incorrect birth date or wrong biographical details. One concern is that OpenAI doesn't offer a way for individuals to correct false information the AI generates about them; typically, it offers only to block responses to such prompts. But under the European Union's General Data Protection Regulation (GDPR), Europeans have a suite of data access rights, including the right to have their personal data corrected.
Another component of this data protection law requires data controllers to ensure that the personal data they produce about individuals is accurate, and that is the concern Noyb is raising in its latest complaint against ChatGPT.
“The GDPR is clear. Personal data must be accurate,” said Joakim Söderberg, data protection lawyer at Noyb, in a statement. “If it is not, users have the right to request that it be amended to reflect the truth. Showing ChatGPT users a tiny disclaimer that the chatbot may make mistakes is clearly not enough. You can’t just spread false information and then add a small disclaimer at the end that whatever you said may simply not be true.”
Confirmed GDPR violations can lead to fines of up to 4% of global annual turnover.
Enforcement could also force changes to AI products. Notably, an early intervention by Italy's data protection authority, which temporarily blocked access to ChatGPT in the country in spring 2023, pushed OpenAI to make changes to the information it discloses to users, among other things. The authority subsequently fined OpenAI €15 million for processing people's data without a proper legal basis.
Since then, however, European privacy regulators have taken a more cautious approach to GenAI as they work out how best to apply the GDPR to these popular AI tools.
Two years ago, for example, the Irish Data Protection Commission (DPC), which has a lead GDPR enforcement role on a previous Noyb complaint about ChatGPT, urged against rushing to ban GenAI tools, suggesting instead that regulators should take time to work out how the law applies.
It is also worth noting that a complaint against ChatGPT, under investigation by Poland's data protection authority since September 2023, has yet to produce a decision.
Noyb's new complaint against ChatGPT appears aimed at shaking privacy regulators awake to the dangers of hallucinating AIs.