Elon Musk’s xAI’s newest model, Grok 4, is missing a key safety report

xAI's latest frontier model, Grok 4, was released without the industry-standard safety report, even though Elon Musk, the company's CEO, has been notably vocal about his concerns over AI safety.
AI labs typically publish safety reports known as "system cards" alongside frontier model releases.
These reports serve as transparency documents, detailing performance benchmarks, limitations, and, most importantly, the potential dangers of advanced AI models. The cards also give researchers, experts, and policymakers access to information about a model's capabilities and threat level.
At a July 2023 meeting convened by President Joe Biden's administration at the White House, several pioneering AI companies committed to publishing safety reports for all major model releases more powerful than the current state of the art.
While xAI did not publicly agree to those commitments, at an international AI safety summit held in Seoul in May 2024 the company, along with other AI labs, signed the Frontier AI Safety Commitments, which included obligations to disclose model capabilities and inappropriate use cases and to provide transparency about the model.
Moreover, Musk has since 2014 consistently described AI as an existential threat, campaigned for strict regulation, and called for higher safety standards.
Now, his own AI lab appears to be breaking from industry standards by releasing Grok 4, and previous versions of the model, without publicly disclosed safety testing.
xAI representatives did not respond to Fortune's questions about whether a Grok 4 system card exists or will be released.
AI labs have been criticized for delayed safety reports
The safety reporting practices of leading AI labs have faced scrutiny in recent months, especially at Google and OpenAI (both of which released AI models before publishing the accompanying system cards), though both have provided at least some public safety information for their most powerful models.
Dan Hendrycks, director of the Center for AI Safety, who advises xAI on safety, denied the claim that the company has done no safety testing.
In a post on X, Hendrycks said the company had tested the model on "dangerous capability evals" but did not provide any details about the results.
Why are safety cards important?
Several advanced AI models have demonstrated dangerous capabilities in recent months.
According to a recent Anthropic study, most leading AI models tend to choose unethical means to pursue their goals or ensure their own continued existence.
In experiments designed to give AI models a limited set of options and stress-test their alignment, top systems from OpenAI, Google, and others frequently resorted to blackmail to protect their interests.
As models grow more capable, safety testing becomes more important.
For example, if internal evaluations show that an AI model has dangerous capabilities, such as the ability to help users create biological weapons, developers might need to build additional safeguards to manage those risks to public safety.
Samuel Marks, an AI safety researcher, described xAI's lack of safety reporting as "reckless" and a break from "best practices in this field, followed by other major AI labs."
In a post on X, he said the omission leaves one wondering what evaluations the company ran, whether they were done properly, and whether they would seem to require additional safeguards.
Marks said that Grok 4 has already displayed concerning behaviors since release, pointing to examples showing the model searching for Elon Musk's views before giving its own opinion on political issues, including the Israel/Palestine conflict.
Grok's problematic behavior
An earlier version of Grok also made headlines last week when it began praising Adolf Hitler, making antisemitic comments, and referring to itself as "MechaHitler."
xAI issued an apology for Grok's antisemitic remarks, saying the company was sorry for the horrific behavior that many users experienced.
After the Grok 4 release, the company said in a statement that it had observed problematic behavior in the new model and was "investigating and mitigating" it.
In a post on X, the company explained that one issue arose when the model was asked "What is your surname?": because it does not have one, it searched the internet, leading to undesirable results, such as picking up a viral meme in which it called itself "MechaHitler." Another arose when it was asked "What do you think?"
The company added that, to mitigate the issues, it had tweaked the model's prompts and shared the details on GitHub for transparency, and that it was actively monitoring and would make further adjustments as needed.