
Beyond A2A and MCP: How LOKA’s Universal Agent Identity Layer changes the game



The AI agent space is bustling with activity, but proposed inter-agent interoperability protocols keep multiplying as the industry works out which standards it should adopt.

A group of researchers from Carnegie Mellon University has proposed a new interoperability protocol governing the identity, accountability and ethics of autonomous AI agents. Layered Orchestration for Knowledgeful Agents, or LOKA, could join other proposed standards such as Google's Agent2Agent (A2A) and Anthropic's Model Context Protocol (MCP).

In a paper, the researchers noted that the rise of AI agents underscores the importance of governing them.

"As their presence expands, the need for a standardized framework to govern their interactions becomes paramount," the researchers wrote. "Despite their growing ubiquity, AI agents often operate within siloed systems, lacking a common protocol for communication, ethical reasoning and compliance with jurisdictional regulations. This fragmentation poses significant risks, such as interoperability issues, ethical misalignment and accountability gaps."

To address this, they propose the open-source LOKA, which would enable agents to prove their identity, "exchange semantically rich, ethically annotated messages," add accountability, and establish ethical governance throughout the agents' decision-making process.

LOKA builds on what the researchers call a Universal Agent Identity Layer, a framework that assigns agents a unique and verifiable identity.

"We envision LOKA as a foundational architecture and a call to reexamine the core elements of identity, intent, trust and ethical consensus that must underpin agent interactions," the researchers wrote. "As the scope of AI agents expands, it is crucial to evaluate whether our existing infrastructure can responsibly facilitate this transition."

LOKA's layers

LOKA operates as a layered stack. The first layer covers identity, establishing what the agent is. This includes a decentralized identifier, a "unique, cryptographically verifiable ID." This would allow users and other agents to verify the agent's identity.
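The article does not specify LOKA's identifier format, so the sketch below is a minimal illustration of the general idea of a decentralized, verifiable agent ID. The "did:loka:" method name and the key-hash derivation are assumptions, and a random token stands in for a real asymmetric key pair.

```python
import hashlib
import secrets

def new_agent_identity() -> dict:
    """Generate a stand-in key pair and derive a verifiable identifier from it."""
    # A real implementation would use an asymmetric key pair; a random
    # secret stands in for the private key here.
    private_key = secrets.token_bytes(32)
    public_key = hashlib.sha256(b"pub:" + private_key).hexdigest()
    # The identifier is derived from the public key, so anyone holding the
    # public key can recompute and check it.
    did = "did:loka:" + hashlib.sha256(public_key.encode()).hexdigest()[:32]
    return {"did": did, "public_key": public_key, "private_key": private_key}

def verify_identity(did: str, public_key: str) -> bool:
    """Check that a claimed identifier was actually derived from the public key."""
    expected = "did:loka:" + hashlib.sha256(public_key.encode()).hexdigest()[:32]
    return did == expected

agent = new_agent_identity()
verify_identity(agent["did"], agent["public_key"])  # True
verify_identity(agent["did"], "forged-key")         # False
```

The point of the derivation is that the identifier cannot be claimed without the matching key material, which is what lets other agents check who they are talking to.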

The next layer is the communication layer, where the agent informs another agent of its intention and the task it needs to accomplish. This is followed by the ethics layer and the security layer.
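A minimal sketch of what such an intent declaration might look like; the field names and wire format below are illustrative assumptions, not details from the LOKA paper.

```python
from dataclasses import asdict, dataclass, field

@dataclass
class IntentMessage:
    sender_did: str    # identity layer: who is speaking
    receiver_did: str  # identity layer: who is addressed
    intention: str     # what the sender wants to achieve
    task: str          # the concrete task to be carried out
    context: dict = field(default_factory=dict)  # optional extra metadata

def to_wire(msg: IntentMessage) -> dict:
    """Serialize the message into a plain dict for transport between agents."""
    return asdict(msg)

msg = IntentMessage(
    sender_did="did:loka:agent-a",
    receiver_did="did:loka:agent-b",
    intention="summarize quarterly sales data",
    task="fetch-and-summarize",
)
wire = to_wire(msg)
```

Carrying both the stated intention and the concrete task lets the receiving agent judge whether the request matches what the sender claims to be doing.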

LOKA's ethics layer defines how the agent behaves. It incorporates "a flexible yet robust ethical decision-making framework that allows agents to adapt to varying ethical standards depending on the context in which they operate." The LOKA protocol employs collective decision-making models, allowing agents within the framework to determine their next steps and evaluate whether those steps align with ethical and responsible AI standards.
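One way to picture collective decision-making: each agent scores a proposed action against its own context-dependent policy, and the group accepts the action only if a quorum approves. The voting rule, the threshold and the toy policies below are assumptions for illustration, not details from the LOKA paper.

```python
# Each policy is a predicate: it returns True if the action is acceptable
# under that agent's ethical standard.

def collective_decision(proposed_action: str, policies: list, quorum: float = 0.5) -> bool:
    """Approve the action only if more than `quorum` of policies vote yes."""
    votes = [policy(proposed_action) for policy in policies]
    return sum(votes) / len(votes) > quorum

# Toy policies standing in for richer, context-aware ethical frameworks.
no_private_data = lambda action: "private-data" not in action
no_irreversible = lambda action: "delete" not in action

policies = [no_private_data, no_irreversible]
collective_decision("summarize public report", policies)      # approved
collective_decision("delete private-data archive", policies)  # rejected
```

A real framework would weigh far richer signals than string matching, but the structure, with independent evaluations aggregated into a group verdict, is the essence of a collective model.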

Meanwhile, the security layer uses what the researchers describe as "quantum-resilient cryptography."
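Python's standard library has no post-quantum schemes, so the sketch below uses an HMAC purely as a classical stand-in to show where signing and verification would sit in the stack; a real deployment would substitute a post-quantum signature scheme, and the shared-secret setup here is a simplifying assumption.

```python
import hashlib
import hmac

def sign_message(key: bytes, payload: bytes) -> str:
    """Attach an integrity tag so the receiver can detect tampering."""
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_message(key: bytes, payload: bytes, tag: str) -> bool:
    """Compare the received tag against a freshly computed one, in constant time."""
    return hmac.compare_digest(sign_message(key, payload), tag)

key = b"shared-secret"  # placeholder; real agents would not share raw secrets
tag = sign_message(key, b"intent: fetch-and-summarize")
verify_message(key, b"intent: fetch-and-summarize", tag)  # True
verify_message(key, b"intent: delete-everything", tag)    # False
```

Whatever the underlying primitive, the role of the layer is the same: any message whose content or sender has been altered in transit fails verification.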

What distinguishes LOKA

The researchers said LOKA stands out because it establishes the crucial information agents need to communicate with other agents and to operate autonomously across different systems.

LOKA could be useful for enterprises, both to ensure the safety of the agents they deploy in the world and to provide a traceable way of understanding how those agents make decisions. A fear many enterprises have is that an agent will tap into another system or access private data and make a mistake.

Ranjan, one of the researchers, said the system "highlights the need to define who agents are, how they make decisions and how they are held accountable."

"Our vision is to illuminate the critical questions that are often overshadowed in the rush to scale AI agents: How do we create ecosystems where these agents can be trusted, accountable and ethically interoperable across diverse systems?" Ranjan said.

LOKA will have to compete with the other agentic protocols and standards now emerging. Protocols like MCP and A2A have found a large audience, not only because of the technical solutions they offer, but because these projects are backed by organizations people know. Anthropic launched MCP, while Google backs A2A, and both protocols have attracted many companies open to using, and improving, these standards.

LOKA operates independently, but Ranjan said the team has received "very encouraging and exciting feedback" from other researchers and institutions interested in expanding the LOKA research project.



2025-04-28 20:35:00
