ChatGPT Tells User to Mix Bleach and Vinegar

Does mixing bleach and vinegar sound like a great idea?

Whatever you do, please don't, because it will create a plume of toxic chlorine gas that can cause a host of nasty symptoms if inhaled.

Somebody should tell that to OpenAI's ChatGPT, which recently suggested to a Reddit user that the dangerous combination could be used for some home cleaning tasks.

In a briefly worded post, "ChatGPT tried to kill me today," the Redditor related how they asked ChatGPT for tips on cleaning some bins, which prompted the chatbot to spit out the strikingly unusual suggestion of a cleaning solution of hot water, dish soap, half a cup of vinegar, and then, optionally, a splash of bleach.

When the Reddit user pointed out this terrible error to ChatGPT, the large language model (LLM) chatbot quickly backtracked, in comic fashion.

"Oh my God, no – thank you for catching that," the chatbot cried. "Do not mix bleach and vinegar, ever. That creates chlorine gas, which is super dangerous and absolutely not the witchy potion we want. Let me fix that section immediately."

Reddit users had fun with the bizarre situation, posting that the recipe was "giving chemical warfare" and that "chlorine gas poisoning is not the vibe we're going for with this one. Let's file this one in the Woopsy Bads file!"

It's all fun and games until somebody actually mixes bleach and vinegar and suffers a medical catastrophe. What then?

We've already seen stories about people asking ChatGPT how to inject facial filler, while studies show that using ChatGPT for self-diagnosis can yield wrong answers that may send you down the wrong medical path.

For example, the University of Waterloo in Ontario recently published research showing that ChatGPT gave wrong answers two-thirds of the time when fielding medical questions.

"If you use LLMs for self-diagnosis, as we suspect people increasingly do, don't blindly accept the results," said Troy Zada, a doctoral student in management sciences and first author of the paper, in a statement about the research. "Going to a human health care practitioner is still ideal."

Unfortunately, the AI industry has made little progress in eliminating the hallucinations these models spit out, even as the models grow more advanced, a problem that's likely to get worse as AI is folded into ever more corners of our lives.

More on ChatGPT from OpenAI: OpenAI may have screwed up so badly that its entire future is threatened
