The two people shaping the future of OpenAI’s research

“There are many consequences of artificial intelligence,” he said. “But the one I think about most is automated research. When we look at the history of humanity, much of it is about technological progress, about humans building new technologies. The point at which computers can develop new technologies themselves seems very important.

“We already see these models helping scientists. But when they are able to work on longer horizons – when they are able to carry out research programs for themselves – the world will feel very different.”

For Chen, this ability of models to work on their own for longer stretches is the key. “I mean, I think everyone has their own definition of AGI,” he said. “But this concept of autonomous time – just how much time the model can spend making fruitful progress on a difficult problem without hitting a dead end – that’s one of the big things we’re tracking.”

It is a bold vision – and one that exceeds the capabilities of today’s models. But I am struck by how matter-of-fact Chen and Pachocki sound about it. Compare this with how Sutskever responded when I spoke to him 18 months ago. “It’s going to be monumental, earth-shattering,” he told me. “There will be a before and an after.” Faced with the enormity of what he was building, Sutskever had shifted the focus of his career from designing better and better models to figuring out how to control a technology he believed would soon be smarter than himself.

Two years ago, Sutskever set up what he called a superalignment team, which he co-led with another OpenAI safety researcher, Jan Leike. The claim was that this team would channel a fifth of OpenAI’s resources into figuring out how to control a hypothetical superintelligence. Today, most of the people on that superalignment team, including Sutskever and Leike, have left the company, and the team no longer exists.

When Leike resigned, he said the reason was that the team had not received the support he felt it deserved. He posted on X: “Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products.” Other departing researchers shared similar statements.

I asked Chen and Pachocki what they made of such fears. “A lot of these things are highly personal decisions,” Chen said. “You know, a researcher can sort of, you know –”

He started again. “They may have a belief that the field will develop in a certain way and that their research will play out and pay off.”

2025-07-31 09:06:00
