AI

ChatGPT Has a Stroke When You Ask It This Specific Question

Nearly two months ago, the GPT-5 update to ChatGPT was supposed to give it "PhD-level" intelligence and bring it a step closer to the industry's holy grail of artificial general intelligence (AGI). Yet OpenAI's chatbot still chokes on strangely simple and completely harmless questions.

On the ChatGPT subreddit, fans of the AI bot noticed that asking it whether there is an "NFL team whose name doesn't end in the letter S" sends it into a lengthy meltdown, letting you watch its vaunted "reasoning" capabilities descend into chaos in real time.

"Yes, there are two NFL teams whose names don't end in the letter 's,'" ChatGPT replies, before proceeding to name two teams that do exactly that. "Miami Dolphins?"

It's a strange way to tackle the question. ChatGPT correctly identifies that these names end with "s," so why bring them up at all?

Then it appears to cut to the chase. At least, that is, until it goes off the rails again.

"The only two teams that do not end with 's' are: Miami Dolphins ❌ No (wait, ends with s)," it says, catching its own mistake. "Hold on, let's do this carefully. Actually, the correct answer is: Washington Commanders ends with s. Chicago Bears ❌ ends with s."

In the original example uploaded to Reddit, ChatGPT carries on like this for several more paragraphs. It never arrives at the correct answer: there is no NFL team whose name doesn't end in "s."
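The fact the chatbot keeps fumbling is trivial to check programmatically. A few lines of Python confirm that none of the 32 current NFL team names is an exception:

```python
# All 32 current NFL team names.
NFL_TEAMS = [
    "Cardinals", "Falcons", "Ravens", "Bills", "Panthers", "Bears",
    "Bengals", "Browns", "Cowboys", "Broncos", "Lions", "Packers",
    "Texans", "Colts", "Jaguars", "Chiefs", "Raiders", "Chargers",
    "Rams", "Dolphins", "Vikings", "Patriots", "Saints", "Giants",
    "Jets", "Eagles", "Steelers", "49ers", "Seahawks", "Buccaneers",
    "Titans", "Commanders",
]

# Teams whose names do NOT end in "s" -- the list ChatGPT kept trying
# and failing to produce.
exceptions = [team for team in NFL_TEAMS if not team.lower().endswith("s")]
print(exceptions)  # -> []
```

The list comes back empty, which is the one-word answer the chatbot spent paragraphs failing to reach.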

Like a high schooler padding out a word count, it wanders through irrelevant details without ever reaching a conclusion. It also peppers in phrases meant to make it sound like it's doing some deep thinking. "Hold on, let's do this carefully," it says. Or: "Let me do this systematically."

"The ACTUAL correct answer," ChatGPT says at one point, not realizing the shtick is getting old.

In the end, it settles on "the correct answer (this time)." It says it will list the "two teams" that don't end with "s," before proceeding to list three more teams that do.

Other users have posted examples in which ChatGPT eventually arrives at the correct answer, but only after stringing the user along in much the same way. In our own testing, it produced similarly bizarre results.

This is far from the first time the chatbot has been stumped by a simple question, or melted down in such spectacularly roundabout fashion.

Earlier this month, for example, fans noticed that asking it whether there's a seahorse emoji sent it into a similar logic meltdown. Though the aquatic creature has never been part of the official emoji lexicon, ChatGPT insisted it was real, illustrating the absurd lengths it will go to in order to please the user. What's bending a few facts if a sycophantic AI can keep its users feeling flattered and coming back for more?

Sycophancy may not be the only culprit, though. GPT-5 is actually a team of models: a lightweight one for basic requests and a heavier "thinking" model for more demanding questions. What's most likely happening here is that the lightweight model stumbles over a question it can't really handle instead of handing it off to its smarter cousin. This routing dynamic, which frequently misfires, is part of why fans were disappointed, and in many cases angry, with GPT-5's launch (frustration only exacerbated by OpenAI cutting off access to the older models its customers had grown attached to, a decision it soon reversed).
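The routing dynamic described above can be sketched in a few lines. To be clear, OpenAI's actual router is not public: the model names and the complexity heuristic below are hypothetical stand-ins, invented purely to illustrate the failure mode.

```python
# Hypothetical sketch of GPT-5-style model routing. The heuristic and
# model names are made up for illustration; OpenAI's real router
# implementation has not been disclosed.
def looks_hard(prompt: str) -> bool:
    # Toy heuristic: route long or explicitly analytical prompts
    # to the heavyweight reasoning model.
    keywords = ("prove", "step by step", "derive", "analyze")
    return len(prompt) > 400 or any(k in prompt.lower() for k in keywords)

def route(prompt: str) -> str:
    # Return the (hypothetical) model that would handle the prompt.
    return "gpt5-reasoning" if looks_hard(prompt) else "gpt5-light"

# The failure mode from the article: a short trivia question reads
# as "easy," so it never reaches the stronger model.
print(route("Is there an NFL team whose name doesn't end in S?"))
# -> gpt5-light
```

Any classifier like this will sometimes label a question "easy" that the lightweight model can't actually handle, which is exactly the gap users appear to be hitting.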

In any case, it's a pretty flimsy excuse. If the AI needs to wheel out its biggest guns to answer a question this simple, it may not be on the fast track to surpassing human intelligence.

More on AI: ChatGPT Is Blowing Up Marriages as Spouses Use AI to Attack Their Partners


2025-09-22 21:00:00
