xAI blames Grok’s obsession with white genocide on an ‘unauthorized modification’

xAI blamed an “unauthorized modification” for a bug in its AI-powered Grok chatbot that caused Grok to repeatedly refer to “white genocide in South Africa” when invoked in certain contexts on X.
On Wednesday, Grok began replying to dozens of posts on X with information about white genocide in South Africa, even in response to unrelated topics. The strange replies came from the X account for Grok, which responds to users with AI-generated posts whenever a person tags “@grok.”
According to a post Thursday from xAI’s official account, a change was made Wednesday morning to the Grok bot’s system prompt, the high-level instructions that guide the bot’s behavior, which directed Grok to provide a “specific response” on a “political topic.” xAI says the tweak “violated [its] internal policies and core values,” and that the company has “conducted a thorough investigation.”
It’s the second time xAI has publicly acknowledged that an unauthorized change to Grok’s code caused the AI to respond in controversial ways.
In February, Grok briefly censored unflattering mentions of Donald Trump and Elon Musk, xAI’s billionaire founder and the owner of X. Igor Babuschkin, an xAI engineering lead, said that a rogue employee had instructed Grok to ignore sources that mentioned Musk or Trump spreading misinformation, and that xAI reverted the change soon after.
xAI said Thursday that it will make several changes to prevent similar incidents in the future.
Starting today, xAI will publish Grok’s system prompts on GitHub along with a changelog. The company says it will also “put in place additional checks and measures” to ensure that xAI employees can’t modify the system prompt without review, and will establish a “24/7 monitoring team to respond to incidents with Grok’s answers that are not caught by automated systems.”
Despite Musk’s frequent warnings about the dangers of unchecked AI, xAI has a poor AI safety track record. A recent report found that Grok would undress photos of women when asked. The chatbot can also be considerably more crass than AI like Google’s Gemini and ChatGPT, cursing without much restraint.
A study by SaferAI, a nonprofit aiming to improve the accountability of AI labs, found that xAI ranks poorly on safety among its peers, owing to its “very weak” risk management practices. Earlier this month, xAI missed a self-imposed deadline to publish a finalized AI safety framework.
2025-05-16 01:42:00