Should Humanity Make Way for an AI “Worthy Successor”?

These days, it's not unusual to encounter "doomers" in artificial intelligence circles — people who believe that superintelligent AI will wipe out humanity, either because it doesn't share our goals or because it finds us inconvenient. Many serious AI researchers are working on "alignment," or ensuring that the goals of AI systems match our own, so that AI supports human flourishing rather than ending it. But few people are thinking about how to secure the best outcome for the universe if, in fact, superintelligent AI really is poised to leave humanity in the dust.
Enter Daniel Faggella. The founder of the AI research company Emerj argues that it's crucial we build AI systems that are "worthy successors." A few weeks ago, he hosted a symposium on the topic at a mansion in San Francisco, where AI insiders aired their hopes and concerns about humanity's future. IEEE Spectrum spoke with Faggella to hear more about his controversial and provocative vision.
How do you explain the concept of the worthy successor to someone who hasn't encountered it before?
Daniel Faggella: In essence, a worthy successor is a posthuman intelligence so capable and so morally valuable that you would consider it better for it, rather than humanity, to take up the mantle and determine the future path of intelligence.
The underlying premise here is that artificial general intelligence is unlikely to be aligned [with human goals]. So if the flame of humanity is valuable, what about the flame matters? I think all torches eventually burn out, and in the end, clinging to any one torch is contempt for the flame itself. My hypothesis is that the flame is consciousness and self-development, or self-creation. If AGI [artificial general intelligence] has those two things, it will carry the flame into the future. Because we can't carry this flame forever: I argue that we may be the last generation with this flame before it turns into something else. So we must make sure that what we create has these moral traits. Because when we're gone, is the universe still filled with value, or is everything over?
"In essence, a worthy successor is a posthuman intelligence so capable and so morally valuable that you would consider it better if it, rather than humanity, took up the mantle of the future." —Daniel Faggella, Emerj
What's your timeline for when this succession becomes important? Is it within our lifetimes?
Faggella: Oh, definitely. I think there's a really good shot that within a decade we'll already be feeling the disruptive and transformative forces. I think the flame really comes under threat within one to two decades. I think we may be dealing with the final passing of the flame here.
Imagine everything goes wonderfully according to your evaluation model, and the worthy successor is established. What happens to humans?
Faggella: We should do our best to get the best shake we can get. Some people might say the best shake is: they let us keep the Earth. I think that may be an unrealistic ask. More likely, the best shake looks something like: every human gets uploaded into some kind of sugar-cube paradise for a billion years of bliss, but it only runs six hours a day. We should try to get the best possible retirement, but I don't know how much control we'll have over what happens to us.
I'm guessing you don't have children.
Faggella: No, I don't. If you think about timelines the way I do, you might not either. When people have children, it's an investment in a future in hominid form. It may make them less willing to face these transformative and destructive forces. It's like: I bought the lake house so that my grandkids can water-ski. I'm not going to think about the future the way they do.
Who will form the future of AGI?
What were your goals for the symposium?
Faggella: The goal was really to open up the state space of possible futures for two groups: the people who govern and align AI, and the people who create it and write the code. The aim was to get both groups thinking: hey, if AI doesn't turn out to be alignable, and if our basic human experience changes dramatically, how do we determine which futures are good? We had people from all of the major American labs.
Do you think people from the AI industry should be the ones making decisions about ushering in a worthy successor?
Faggella: I don't want this in the hands of any one company or person. The current arms-race dynamic means [the AI companies] can't even think about worthiness. They can only think about what's economically and militarily advantageous. Does that make them evil? No, it means they're subject to incentives. So we need international coordination. We need governance and firm incentives that don't allow anyone to grab the steering wheel and steer everyone in a terrible direction.
You've said that people inside the big AI companies know that AGI is likely to end humanity. Yet they're trying to build it anyway. Why do you think that is?
Faggella: If you're talking about the lab leaders, they all know. If you're Sam Altman, Elon Musk, or Demis Hassabis, you have two options. Option one: know that it will probably kill you and everyone else at some point, but build it anyway and claim the ultimate victory. There's no greater victory than bringing forth the intelligence that rules the entire planet. Now, here's your other option: go into fintech, invest in real estate, or go on vacation, and then, on some random day, get devoured by someone else's silicon god.
Is anyone approaching this work on AGI in a wise and particularly thoughtful way?
Faggella: There's a report from a think tank in Canada called CIGI that I think has really taken governance seriously. They talk about governance that kicks in at different levels of capability and danger. They say that if AI never develops these capabilities, we won't govern it that way. But if it does, we need to have mechanisms in place. "A Narrow Path," written by people from the AI safety world, is a reasonable proposal about what the on-ramp to international coordination could look like.
How do you square your conviction that we're on an accelerated path to AGI with cold-water moments like that recent paper from Apple arguing that LLMs don't actually do anything like thinking?
Faggella: I don't know if what we do is thinking. How much of what we do is stochastic parroting? What's the mechanism in our brains? Humans long believed that flight would have to involve flapping, because everything that flies flaps, right? But as it turns out, flight doesn't require flapping at certain scales. I think agency and reasoning, and perhaps even sentience, will likely have very different manifestations, but over time they will fully arrive.
2025-06-24 15:00:00