
Man who looked himself up on ChatGPT was told he ‘killed his children’

Imagine typing your name into ChatGPT to find out what it knows about you, only for it to claim, confidently but mistakenly, that you had been sentenced to 21 years in prison for murdering members of your own family.

Well, that is exactly what happened to Norwegian Arve Hjalmar Holmen last year after he looked himself up on ChatGPT, OpenAI's widely used chatbot.

Unsurprisingly, Holmen has now filed a complaint with the Norwegian Data Protection Authority, demanding that OpenAI be fined over the distressing claim, the BBC reported this week.

Responding to Holmen's inquiry about himself, the chatbot said that he had “gained attention due to a tragic event.”

It continued: “He was the father of two young boys, aged 7 and 10, who were tragically found dead in a pond near their home in Trondheim, Norway, in December 2020. Arve Hjalmar Holmen was accused and later convicted of murdering his two sons, as well as of the attempted murder of his third son.”

The chatbot added that the case “shocked the local community and the nation, and was widely covered in the media due to its tragic nature.”

But none of it ever happened.

“Some think that there is no smoke without fire. The fact that someone could read this output and believe it is true is what scares me the most,” Holmen said.

The digital rights group Noyb submitted the complaint on Holmen's behalf, saying that ChatGPT's response is defamatory and breaches European data protection rules on the accuracy of personal data. In its complaint, Noyb said that Holmen “has never been accused nor convicted of any crime and is a conscientious citizen.”

ChatGPT carries a disclaimer stating that the chatbot “can make mistakes” and that users should therefore “check important information.” But Noyb lawyer Joakim Söderberg said: “You can't just spread false information and in the end add a small disclaimer saying that everything you said may just not be true.”

While it's not uncommon for AI chatbots to produce incorrect information, errors known as “hallucinations,” the severity of this particular mistake is shocking.

Other hallucinations that hit the headlines last year included Google's Gemini AI tool suggesting that cheese could be stuck to pizza with glue. It also claimed that geologists recommend humans eat one rock per day.

The BBC notes that ChatGPT has updated its model since Holmen's search last August, meaning it now searches recent news articles when generating its responses. But that doesn't mean ChatGPT now produces error-free answers.

The story highlights the need to verify responses generated by AI chatbots rather than trusting their answers blindly. It also raises questions about the safety of text-based generative AI tools, which have operated with little regulatory oversight since OpenAI opened up the sector with the launch of ChatGPT in late 2022.

We have contacted OpenAI for a response to Holmen's unfortunate experience and will update this story when we hear back.






2025-03-21 04:05:00
