Grok Is Glitching and Spewing Misinformation About The Bondi Beach Shooting
Elon Musk’s chatbot Grok is glitching again.
This time, among other problems, the chatbot is spreading misinformation about the Bondi Beach shooting, in which at least 11 people were killed at a Chanukah gathering.
In the end, one of the attackers was disarmed by a bystander, 43-year-old Ahmed Al-Ahmad. Video of the intervention spread widely across social media, with many praising the man’s heroism, except for those who took the opportunity to exploit the tragedy and spread Islamophobia by denying the authenticity of reports identifying bystanders.
Grok is not helping the situation. The chatbot appears to be experiencing a glitch, at least as of Sunday morning, responding to user queries with irrelevant or sometimes outright wrong answers.
In response to a user who asked Grok about the story behind the video showing Al-Ahmad tackling the shooter, the AI claimed, “This appears to be an old, viral video of a man climbing a palm tree in a parking lot, possibly to prune it, causing a branch to fall and damage a parked car. Searches across sources show no verified location, date, or casualties. It may be fabricated; but its authenticity is unconfirmed.”
In another case, Grok claimed that a photo showing the injured Al-Ahmad was of an Israeli hostage taken by Hamas on October 7.
In response to another user’s inquiry, Grok again questioned the validity of Al-Ahmad’s confrontation, immediately after an unrelated paragraph about whether the Israeli military is deliberately targeting civilians in Gaza.
In another case, Grok described a video clearly labeled in the tweet as showing a shootout between the attackers and police in Sydney as instead being footage of Tropical Cyclone Alfred, which devastated Australia earlier this year. In that instance, the user pushed back and asked Grok to reevaluate, prompting the chatbot to acknowledge its mistake.
Beyond just misreading information, Grok seems confused. One user received a summary of the Bondi shooting and its aftermath in response to a question about the technology company Oracle. Grok also appears to be conflating the Bondi shooting with the Brown University shooting that occurred just hours before the attack in Australia.
The glitch also extends beyond the Bondi shooting. Throughout Sunday morning, Grok misidentified famous football players, provided information about acetaminophen use during pregnancy when asked about the abortion pill mifepristone, and talked about the 2025 draft and the prospects of Kamala Harris running for president again when asked to verify an entirely separate claim about a British law enforcement initiative.
It is not clear what is causing the malfunction. Gizmodo reached out to Grok developer xAI for comment but received only its usual automated response, “Legacy Media Lies.”
It’s also not the first time Grok has lost its grip on reality. The chatbot has produced quite a few questionable responses this year, from an “unauthorized edit” that saw it respond to every inquiry with conspiracy theories about “white genocide” in South Africa, to saying it would rather kill the world’s entire Jewish population than erase Musk’s mind.