
Karen Hao on the Empire of AI, AGI evangelists, and the cost of belief

There is an ideology at the center of the AI empire, a belief system that drives the whole enterprise forward and justifies its expansion, even when the costs of that expansion directly undermine the ideology's stated mission.

For Europe's colonial powers, it was Christianity and the promise of saving souls while extracting resources. For today's empire of AI, it is the promise of artificial general intelligence that will "benefit all of humanity." OpenAI is the chief missionary, and it has spread that fervor across the industry in a way that has reshaped how AI gets built.

"I was interviewing people whose voices would shake from the fervor of their belief in AGI," Karen Hao, journalist and author of "Empire of AI," said on a recent episode of Equity.

In her book, Hao likens the AI industry in general, and OpenAI in particular, to an empire.

"The only way to really understand the scope of OpenAI's behavior … is to recognize that they have already grown more powerful than pretty much any nation-state in the world, and that they have consolidated an extraordinary amount of not just economic power but also political power," Hao said. "They are terraforming the Earth. They are rewiring our geopolitics, all of our lives. You can only describe that as an empire."

OpenAI has described AGI as "a highly autonomous system that outperforms humans at most economically valuable work," something that will somehow "elevate humanity by increasing abundance, turbocharging the economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility."

These nebulous promises have fueled the industry's explosive growth: its massive resource requirements, its oceans of scraped data, its strained energy grids, and its willingness to release untested systems into the world. All of it in service of a future that many experts say may never arrive.


Hao argues this path was not inevitable, and that scaling is not the only way to get more progress out of AI.

"You could develop new techniques in algorithms," she said. "You could improve the existing algorithms to reduce the amount of data and compute they need to use."

But that approach would have meant sacrificing speed.

"When you define the quest to build beneficial AGI as a race where the winner takes all, which is what OpenAI did, then the most important thing becomes speed above everything else," Hao said. "Speed over efficiency, speed over safety, speed over exploratory research."

Image credits: Kim Jae-Hwan / SOPA Images

The easiest way to guarantee speed, she said, is to take existing techniques and "just do the intellectually cheap thing, which is to pump more data and more supercomputers into those existing techniques."

OpenAI set the pace, and rather than fall behind, other tech companies decided to fall in line.

"And because the AI industry has successfully captured most of the world's top AI researchers, and those researchers no longer sit in academia, you now have a field being shaped by the agendas of these companies rather than by real scientific exploration," she said.

The spending has been, and will continue to be, astronomical. Just last week, OpenAI said it expects to burn through $115 billion in cash by 2029. Meta said in July that it would spend as much as $72 billion building out AI infrastructure this year. Google expects to hit $85 billion in capital expenditures for 2025, most of it spent on expanding its cloud infrastructure.

Meanwhile, the goalposts keep moving, and those lofty "benefits to humanity" have yet to materialize, even as the harms pile up: job loss, wealth concentration, and AI chatbots that fuel delusions and psychosis. In her book, Hao also documents workers in developing countries such as Kenya and Venezuela who were exposed to disturbing content, including sexual abuse material, and paid poverty wages of roughly $1 to $2 an hour for roles like content moderation and data labeling.

Hao said it is a false tradeoff to pit future AI progress against present-day harms, especially when other forms of AI deliver real benefits.

She pointed to Google DeepMind's Nobel Prize-winning AlphaFold, which was trained on amino acid sequences and the corresponding folded protein structures, and can now accurately predict a protein's 3D structure from its amino acids, a capability that is enormously useful for drug discovery and for understanding disease.

"Those are the types of AI systems that we need," Hao said. "AlphaFold does not create mental health crises in people. AlphaFold does not lead to enormous environmental harms … because it is trained on far less infrastructure. It does not create content moderation harms, because [the datasets don't have] all of the toxic sludge you pick up when you scrape the internet."

Alongside the quasi-religious commitment to AGI, there has been a narrative about the importance of beating China in the AI race, so that Silicon Valley could have a liberalizing effect on the world.

"Literally the opposite has happened," Hao said. "The gap between the U.S. and China has continued to close, and Silicon Valley has had an illiberalizing effect on the world … and the only actor that came out of it unscathed, you could argue, is Silicon Valley itself."

Of course, many will argue that OpenAI and other AI companies have benefited humanity by releasing ChatGPT and other large language models, which promise huge gains in productivity by automating tasks such as coding, writing, research, customer support, and other work tasks.

But the way OpenAI is structured, part nonprofit and part for-profit, complicates how its impact on humanity is defined and measured. That is complicated further by news this week that OpenAI reached an agreement with Microsoft that moves it one step closer to restructuring.

Two former OpenAI safety researchers have previously told TechCrunch they fear the AI lab is conflating its for-profit and nonprofit missions: because people enjoy using ChatGPT and other LLM-based products, the thinking goes, that checks the benefiting-humanity box.

Hao echoed these concerns, describing the risk of becoming so consumed by the mission that reality gets ignored.

"Even as the evidence accumulates that what they are building is actively harming large swaths of people, the mission continues to justify all of it," Hao said. "There is something really dangerous and dark about [being] so wrapped up in a belief system you constructed that you lose your grounding in reality."
