Ex-OpenAI staff claim profit greed is betraying AI safety
“The OpenAI Files” report, a collection of voices from concerned former employees, claims that the world’s most prominent artificial intelligence lab is betraying safety for profit. What began as a noble mission to ensure AI would serve all of humanity now teeters on the edge of becoming just another corporate giant, chasing enormous profits while leaving safety and ethics in the dust.
At the heart of it all is a plan to tear up the original rulebook. When OpenAI launched, it made a decisive promise: it capped the amount of money investors could earn. It was a legal guarantee that, if the company succeeded in building world-changing artificial intelligence, the vast benefits would flow to humanity, not just a handful of billionaires. Now that promise is on the verge of being erased, apparently to satisfy investors who want unlimited returns.
For the people who built OpenAI, this pivot away from AI safety feels like a profound betrayal. “The non-profit mission was a promise to do the right thing when the stakes got high,” says former employee Carroll Wainwright. “Now that the stakes are high, the non-profit structure is being abandoned, which means the promise was ultimately empty.”
A deepening crisis of trust
Many of these voices point to acute concern about one person: CEO Sam Altman. The fears are not new. Reports indicate that even at his previous companies, senior colleagues tried to remove him over what they called “deceptive and chaotic” behaviour.
That same mistrust followed him to OpenAI. Ilya Sutskever, the company’s co-founder who worked alongside Altman for years, reached a chilling conclusion: “I don’t think Sam is the guy who should have the finger on the button for AGI.” He felt Altman was dishonest and created chaos, a terrifying combination in the person likely to be responsible for our collective future.
Mira Murati, the former CTO, felt the same unease. “I don’t feel comfortable about Sam leading us to AGI,” she said. She described a toxic pattern in which Altman would tell people what they wanted to hear and then undermine them if they got in his way. Former OpenAI board member Tasha McCauley says such behaviour “should be unacceptable” when the stakes of AI safety are this high.
This crisis of trust has had serious consequences. Those speaking out say the culture at OpenAI has shifted, with the critical work of AI safety taking a back seat to launching “shiny products”. Jan Leike, who led the team responsible for long-term safety, said they were “sailing against the wind”, struggling to get the resources they needed to do their vital research.
Another former employee, William Saunders, even gave terrifying testimony to the US Senate, revealing that for long periods security was so weak that hundreds of engineers could have stolen the company’s most advanced AI, including GPT-4.
A desperate plea to re-prioritise AI safety at OpenAI
But those who left are not just walking away. They have laid out a road map to pull OpenAI back from the brink, a last-ditch effort to save the original mission.
They are calling for the company’s non-profit heart to be given real power again, with an iron-clad veto over safety decisions. They are demanding clear, honest leadership, including a new and thorough investigation into Sam Altman’s conduct.
They want real, independent oversight, so OpenAI can no longer mark its own homework on AI safety. And they are pleading for a culture where people can raise concerns without fearing for their jobs or their savings, a place with real protection for whistleblowers.
Finally, they insist that OpenAI stick to its original financial promise: the profit caps must stay. The goal must be public benefit, not unlimited private wealth.
This is not just about internal Silicon Valley drama. OpenAI is building a technology that could reshape our world in ways we can barely imagine. The question its former employees are forcing us all to ask is simple but profound: who do we trust to build our future?
Helen Toner, a former board member, warned from her own experience that internal guardrails are fragile “when money is on the line”. Right now, the people who know OpenAI best are telling us that those safety guardrails have broken.
See also: AI adoption is maturing, but deployment obstacles remain

2025-06-19 11:12:00