AI Experts Urgently Call on Governments to Think About Maybe Doing Something

It seems everyone can agree that artificial intelligence is a fast-moving, emerging technology with the potential to do tremendous damage if deployed without safeguards, but no one (except the European Union, sort of) can agree on how to regulate it. So, rather than trying to hammer out a clear and narrow framework for how the technology should operate, experts in the field have taken a new approach: how about we identify the most extreme outcomes we all agree are bad and start there?
On Monday, a group of politicians, scientists, and academics went before the United Nations General Assembly to announce the Global Call for AI Red Lines, a plea for the world's governments to come together and agree on broad guardrails to prevent "unacceptable risks" that could result from the deployment of AI. The group's goal is to have these red lines in place by the end of 2026.
The proposal has collected more than 200 signatures so far from industry experts, political leaders, and Nobel laureates. Former President of Ireland Mary Robinson and former President of Colombia Juan Manuel Santos are on board, as are numerous Nobel Prize winners. Geoffrey Hinton and Yoshua Bengio, two of the three men often referred to as the "Godfathers of AI" for their foundational work in the field, have added their names to the list.
Now, what are those red lines? Well, that's still up to governments to decide. The call does not include specific policy prescriptions or recommendations, though it does offer some examples of what a red line might look like. The group says prohibiting AI from launching nuclear weapons or being used in mass surveillance would be potential red lines for AI uses, while banning AI that cannot be shut down by human override would be a possible red line for AI behavior. But the signatories are very clear: these aren't set in stone, they're just examples; governments can set their own rules.
The one concrete thing the group does insist on is that any global agreement should be built on three pillars: "a clear list of prohibitions; robust, auditable verification mechanisms; and the appointment of an independent body established by the Parties to oversee implementation."
The details, though, are left to governments, which is sort of the point. The call recommends that countries host a series of summits and working groups to figure it all out, but there will surely be plenty of competing motivations in those conversations.
The United States, for example, has already committed to keeping AI from controlling nuclear weapons (an agreement reached under the Biden administration, so Lord knows if that's still in play). Meanwhile, recent reports have indicated that parts of the Trump administration's intelligence community are already annoyed that some AI companies won't let them use their tools for domestic surveillance. Will America get behind this proposal? Perhaps we'll find out by the end of 2026 … if we make it that long.
2025-09-22 20:50:00