GDPR complaint after ChatGPT accuses man of murdering children

A complaint has been filed with Norway’s data protection authority on behalf of a man who was falsely described by ChatGPT as having murdered two of his children.
European privacy campaign group noyb is acting on behalf of Arve Hjalmar Holmen, who says that when he asked ChatGPT what information it could give about him, he was told he was a convicted criminal who had murdered two of his children and attempted to murder his third son.
ChatGPT’s fabricated story wove in real elements of his personal life, including the actual number and gender of his children and the name of his home town.
noyb alleges that ChatGPT violated the GDPR’s principle of data accuracy by producing a mix of clearly identifiable personal data and fabricated information.
Joakim Söderberg, data protection lawyer at noyb, said: “The GDPR is clear. Personal data has to be accurate — and if it’s not, users have the right to have it changed to reflect the truth.
“Showing ChatGPT users a tiny disclaimer that the chatbot can make mistakes clearly isn’t enough. You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true.”
Mr Holmen added: “Some think that ‘there is no smoke without fire’. The fact that someone could read this output and believe it is true is what scares me the most.”
noyb said OpenAI has previously told the group that it cannot rectify or erase incorrect information, but can only “block” data on certain prompts, which noyb argues is not sufficient under Article 5(1)(d) of the GDPR.
AI companies can “not just ‘hide’ false information from users while they internally still process false information,” noyb lawyer Kleanthi Sardeli said.
“AI companies should stop acting as if the GDPR does not apply to them, when it clearly does. If hallucinations are not stopped, people can easily suffer reputational damage.”
noyb has asked Datatilsynet, the Norwegian data protection authority, to order OpenAI to delete the defamatory output and to fine-tune its model to eliminate inaccurate results about individuals. It has also asked the authority to impose an administrative fine on OpenAI to deter similar violations in the future.