“The OpenAI Files” reveals deep leadership concerns about Sam Altman and safety failures within the AI lab

A new report called “The OpenAI Files” aims to shed light on the inner workings of the leading AI company during its race to develop AI models that may one day rival human intelligence. The files, which draw on a range of data and sources, cast doubt on some of the company’s leadership team as well as OpenAI’s overall commitment to AI safety.
The lengthy report, described as “the most comprehensive collection to date of documented concerns with governance practices, leadership integrity, and organizational culture at OpenAI,” was compiled by two nonprofit tech watchdogs, the Midas Project and the Tech Oversight Project.
It draws on sources such as legal complaints, social media posts, media reports, and open letters in an attempt to assemble a comprehensive picture of OpenAI and the people leading the lab. Much of the information in the report has already been reported by the media over the years, but compiling it in this way is intended to raise awareness and suggest a path forward for OpenAI that refocuses on responsible governance and ethical leadership.
Much of the report focuses on the leaders behind the scenes at OpenAI, especially CEO Sam Altman, who has become a polarizing figure in the industry. Altman was famously ousted from his role as CEO of OpenAI in November 2023 by the company’s nonprofit board of directors. He was reinstated after a chaotic week that included a mass employee revolt and a brief stint at Microsoft.
The initial firing was attributed to concerns about his leadership and his communication with the board, particularly regarding AI safety. But it has since been reported that several executives at the time, including Mira Murati and Ilya Sutskever, had raised questions about Altman’s suitability for the role.
According to The Atlantic’s Karen Hao, the former chief technology officer told employees in 2023 that she did not “feel comfortable about Sam leading us to AGI,” while Sutskever said: “I don’t think Sam is the guy who should have his finger on the button for AGI.”
Dario and Daniela Amodei, OpenAI’s former vice president of research and vice president of safety and policy, respectively, criticized the company and Altman after leaving OpenAI in 2020. According to Karen Hao’s book Empire of AI, the pair described Altman’s tactics as “gaslighting” and “psychological abuse” of those around him. Dario Amodei went on to co-found and become CEO of rival AI lab Anthropic.
Others, including AI researcher Jan Leike, former head of OpenAI’s superalignment team, have criticized the company publicly. When Leike left for Anthropic in early 2024, he accused the company of letting safety culture and processes “take a backseat to shiny products” in a post on X.
OpenAI at a crossroads
The report comes as the AI lab finds itself at something of a crossroads. The company is trying to move away from its original structure in pursuit of its for-profit ambitions.
OpenAI is currently controlled by its nonprofit board, which is accountable solely to the company’s founding mission: ensuring that AI benefits all of humanity. This has created numerous conflicts of interest between the for-profit arm and the nonprofit board as the company tries to commercialize its products.
The original plan to resolve this, converting OpenAI into an independent for-profit company, was scrapped and replaced with a new approach that would transform OpenAI’s for-profit arm into a public benefit corporation controlled by the nonprofit.
“The OpenAI Files” report aims to raise awareness of what goes on behind the scenes at one of the most powerful tech companies, but also to suggest a path forward for OpenAI that centers responsible governance and ethical leadership as the company seeks to develop AGI.
“OpenAI believes humanity is perhaps just a few years away from developing technologies that could automate most human labor,” the report said.
“The governance structures and leadership integrity guiding a project of this magnitude must reflect the size and severity of the mission. The companies leading the race to AGI must be held to, and must hold themselves to, exceptionally high standards.”
OpenAI representatives did not respond to a request for comment from Fortune.
2025-06-20 15:55:00