
The problem with AI agents

The flash crash is perhaps the most famous example of the dangers posed by agents: AI systems that can take action in the real world, without human oversight. That power is the source of their value; the agents that drove the flash crash, for example, could trade far faster than any human. But it is also why they can cause so much harm. "The grand challenge with agents is that the very thing that makes them useful, their ability to accomplish a range of tasks, involves giving away control," says Iason Gabriel, a researcher at Google DeepMind who focuses on AI ethics.

"If we continue on this current path … we are basically playing Russian roulette with humanity."

Yoshua Bengio, Professor of Computer Science, University of Montreal

Agents are already everywhere, and have been for decades. A thermostat is an agent: it automatically switches the heater on or off to keep your home at a set temperature. So are antivirus software and Roombas. Like high-frequency traders, which are programmed to buy or sell in response to market conditions, all of these agents are built to carry out specific tasks by following prescribed rules. Even more sophisticated agents, such as Siri and self-driving cars, follow prewritten rules when performing many of their actions.
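The rule-following agents described here can be reduced to a simple control loop: read a sensor, apply fixed rules, act. A minimal sketch of the thermostat case, where the class name, temperatures, and tolerance are invented for illustration:

```python
# A minimal rule-based agent: a thermostat that switches a heater
# on or off to hold a target temperature. All names and numbers
# here are illustrative, not taken from any real device.

class Thermostat:
    def __init__(self, target: float, tolerance: float = 0.5):
        self.target = target
        self.tolerance = tolerance
        self.heater_on = False

    def step(self, current_temp: float) -> bool:
        # The entire "policy" is two fixed rules: too cold -> heat,
        # warm enough -> stop. The agent can never do anything else.
        if current_temp < self.target - self.tolerance:
            self.heater_on = True
        elif current_temp > self.target + self.tolerance:
            self.heater_on = False
        return self.heater_on

agent = Thermostat(target=20.0)
print(agent.step(18.0))  # True: below target, heater switches on
print(agent.step(21.0))  # False: above target, heater switches off
```

The point of the sketch is the boundedness: everything such an agent can do is enumerated in advance by its author, which is exactly what the newer LLM-based agents give up.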

But in recent months, a new breed of agent has arrived on the scene: those built using large language models. Operator, an OpenAI agent, can autonomously navigate a browser to order groceries or make dinner reservations. Systems like Claude Code and Cursor's Chat can modify entire codebases with a single command. Manus, a viral agent from the Chinese startup Butterfly Effect, can build and deploy websites with little human oversight. Any action that can be captured by text, from playing a video game using written commands to running a social media account, is potentially within the purview of this type of system.
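Unlike a thermostat, an LLM-based agent wraps a language model in a loop that reads the model's proposed action, executes it with some tool, and feeds the result back. The sketch below stubs out the model with a canned function and uses one hypothetical tool; no real LLM API is involved, and every name and reply is invented:

```python
# A bare-bones agent loop around a language model. The "model" here
# is a stub; in a real system it would be an API call to an LLM.
# The tool name and all replies are hypothetical.

def fake_model(history: list[str]) -> str:
    # Stand-in for an LLM: proposes one tool call, then finishes.
    if any(line.startswith("RESULT:") for line in history):
        return "FINISH: reservation confirmed"
    return "ACT: search_restaurants(city='Montreal')"

def search_restaurants(city: str) -> str:
    return f"3 restaurants found in {city}"

TOOLS = {"search_restaurants": search_restaurants}

def run_agent(goal: str, max_steps: int = 5) -> str:
    history = [f"GOAL: {goal}"]
    for _ in range(max_steps):
        reply = fake_model(history)
        if reply.startswith("FINISH:"):
            return reply.removeprefix("FINISH: ").strip()
        # Parse "ACT: tool(arg='value')" with a toy parser.
        call = reply.removeprefix("ACT: ").strip()
        name, _, argstr = call.partition("(")
        arg = argstr.rstrip(")").split("=")[1].strip("'")
        history.append(f"RESULT: {TOOLS[name](arg)}")
    return "gave up"

print(run_agent("book dinner"))  # prints "reservation confirmed"
```

The loop itself contains no task-specific rules; whatever the model proposes, the harness executes. That open-endedness is what the researchers quoted below identify as both the value and the risk.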

LLM agents don't yet have much of a track record, but to hear the CEOs tell it, they will transform the economy, and soon. OpenAI CEO Sam Altman says agents might "join the workforce" this year, and Salesforce CEO Marc Benioff is aggressively promoting Agentforce, a platform that lets businesses tailor agents to their own purposes. The US Department of Defense recently signed a contract with Scale AI to design and test agents for military use.

Scientists, too, are taking agents seriously. "Agents are the next frontier," says Dawn Song, a professor of electrical engineering and computer science at the University of California, Berkeley. But, she says, "in order for us to really benefit from AI, to actually [use it to] solve complex problems, we need to figure out how to make them work safely and securely."

Illustration: Patrick Leger

That's a tall order. Like chatbot LLMs, agents can be chaotic and unpredictable. In the near future, an agent with access to your bank account could help you manage your budget, but it might also spend all your savings or leak your information to a hacker. An agent that manages your social media accounts might relieve some of the drudgery of maintaining an online presence, but it might also spread falsehoods or abuse other users.

Yoshua Bengio, a professor of computer science at the University of Montreal and one of the so-called "godfathers of AI," is among those worried about such risks. What concerns him most of all, though, is the possibility that LLMs could develop their own priorities and intentions, and then act on them using their real-world capabilities. An LLM trapped in a chat window can't do much without human assistance. But a powerful AI agent could potentially duplicate itself, override safeguards, or prevent itself from being shut down. From there, it might do whatever it wanted.

As of now, there's no foolproof way to guarantee that agents will act as their developers intend, or to prevent malicious actors from misusing them. And though researchers like Bengio are working hard to develop new safety mechanisms, they may not be able to keep up with the rapid expansion of agents' capabilities. "If we continue on this current path of building agentic systems, we are basically playing Russian roulette with humanity," Bengio says.


2025-06-12 10:00:00
