AI agents can talk to each other — they can’t think together yet



AI agents can talk to each other now, but they can’t understand what the other is trying to do. That’s the problem Cisco’s Outshift is trying to solve with a new architectural approach it calls the Internet of Cognition.

The gap is practical: protocols such as MCP and A2A let agents exchange messages and identify one another, but they do not share intent or context. Without that, multi-agent systems burn coordination cycles and cannot compound what they learn.

"The point is, we can send messages, but the agents do not understand each other, so there is no basis, negotiation or coordination or common purpose," Vijoy Pandey, general manager and senior vice president of Outshift, told VentureBeat.

The practical effect:

Consider a patient scheduling a specialist appointment. With MCP alone, the symptom-checking agent passes a diagnosis code to the scheduling agent, which finds available appointments. The insurance agent verifies coverage. The pharmacy agent checks drug availability.

Each agent completes its task, but none of them reasons about the patient’s needs. The pharmacy agent may recommend a medication that conflicts with the patient’s history: information the symptom-checking agent had but never passed along, because "potential drug interactions" fell outside its scope. The scheduling agent books the closest available appointment without knowing that the insurance agent has found better coverage at a different facility.

They are connected, but they are not aligned on the goal: finding the right care for the patient’s specific condition.
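To make the gap concrete, here is a minimal sketch of that handoff in Python. The agent functions, payload fields and return values are hypothetical assumptions for illustration, not any real MCP or A2A API. Each agent does its job on the narrow payload it receives; the medication history and the coverage trade-off simply never travel with the task.

```python
# Hypothetical sketch of the appointment scenario with message passing alone.
# Agent names, payload fields, and return values are illustrative assumptions,
# not any real MCP or A2A API.

def symptom_checker(patient_record: dict) -> dict:
    # The agent reasons over the full record internally...
    diagnosis_code = "M54.5"  # illustrative code for low back pain
    # ...but only the narrow payload it was asked for leaves the agent.
    return {"diagnosis_code": diagnosis_code}

def scheduling_agent(payload: dict) -> dict:
    # Sees a code, not the patient's goal, history, or coverage trade-offs.
    return {"appointment": "closest available slot for " + payload["diagnosis_code"]}

def pharmacy_agent(payload: dict) -> dict:
    # Checks stock for the coded condition; drug-interaction history never arrived.
    return {"prescription": "fill as coded"}

record = {"symptoms": "...", "medication_history": ["warfarin"], "insurer": "..."}
handoff = symptom_checker(record)  # only the diagnosis code travels
print(scheduling_agent(handoff))
print(pharmacy_agent(handoff))
```

Every call succeeds, and the patient still ends up with a booking and a prescription chosen without the context that mattered.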

Current protocols handle the mechanics of agent communication: MCP, A2A and Outshift's AGNTCY, which it donated to the Linux Foundation, let agents discover one another and exchange messages. But they operate at what Pandey calls the "connection and identification layer." They handle syntax, not semantics.

The missing piece is shared context and purpose. An agent completing a task knows what it is doing and why, but that reasoning is not carried along when the task is handed to another agent. Each agent interprets goals independently, which means coordination requires constant clarification and learned insights stay implicit.

For agents to move from communication to collaboration, they need to share three things, according to Outshift: pattern recognition in datasets, causal relationships between actions, and clear goal states.
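As a rough illustration, those three elements could be carried as a single structured object handed from agent to agent. The CognitionState name and its fields below are assumptions for the sketch, not a schema Outshift has published.

```python
# A minimal sketch of what "sharing three things" could look like as a data
# structure. The CognitionState name and fields are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class CognitionState:
    goal: str                                             # a clear goal state
    patterns: list[str] = field(default_factory=list)     # patterns recognized in data
    causal_links: list[tuple[str, str]] = field(default_factory=list)  # (action, effect)

state = CognitionState(
    goal="find appropriate specialist care for this patient's condition",
    patterns=["symptoms consistent with diagnosis code M54.5"],
    causal_links=[("prescribe standard medication", "interacts with warfarin")],
)
# An agent handed this state can align on the goal before acting,
# instead of re-deriving it or asking for clarification.
```

The point is not the particular fields but that the goal and the reasoning behind it travel with the work, rather than staying locked inside whichever agent produced them.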

"Without shared purpose and shared context, AI agents remain semantically isolated. They can do it individually, but the goals are interpreted differently; Coordination burns cycles, and no compounds. An agent learns something valuable, but the rest of the multi-agent-human organization still starts from scratch," Outshift said in a paper. Outshift says the industry needs it "open, interoperable, enterprise-grade agentic systems that work together semantically" and proposed a new architecture it called the "Internet of Cognition," where a multi-agent environment works within a shared system.

The proposed architecture introduces three layers:

Cognition State Protocols: A semantic layer that sits on top of message passing protocols. Agents share not just data but intent — what they’re trying to accomplish and why. This allows agents to align goals before acting, rather than having to explain after the fact.

Cognition Fabric: Infrastructure for building and maintaining shared context. Think of it as distributed working memory: context graphs that persist agent interactions, with policy controls on what gets shared and who can access it. System designers can define what "common understanding" looks like for their use case.

Machines of Cognition: Two types of capabilities. Accelerators allow agents to combine insights and compound learning – one agent’s discovery can be used by others to solve related problems. Guardrails enforce compliance boundaries so that shared reasoning does not violate regulatory or policy constraints.
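Here is a minimal sketch of how the three layers could fit together, again with hypothetical names (the message shape, the CONTEXT_GRAPH store, the ACCESS_POLICY table) rather than anything from Outshift's proposal: intent rides on top of ordinary message passing, a shared context store persists it under policy, and a guardrail check runs before anything is shared.

```python
# A hypothetical end-to-end sketch of the three proposed layers. All names and
# shapes here are assumptions for illustration, not Outshift's specification.

# Cognition State Protocol layer: intent rides alongside the data payload.
def make_message(sender: str, payload: dict, intent: str) -> dict:
    return {"sender": sender, "payload": payload, "intent": intent}

# Cognition Fabric layer: a persistent, policy-scoped shared context graph.
CONTEXT_GRAPH: dict[str, list[dict]] = {}  # shared working memory, keyed by topic
ACCESS_POLICY = {"patient-123": {"scheduler", "pharmacy", "insurance"}}

def share(topic: str, message: dict, audience: set[str]) -> None:
    # Guardrail (the machines layer): enforce who may see shared reasoning.
    allowed = ACCESS_POLICY.get(topic, set())
    if not audience <= allowed:
        raise PermissionError("audience exceeds policy for " + topic)
    CONTEXT_GRAPH.setdefault(topic, []).append(message)

def read(topic: str, agent: str) -> list[dict]:
    # Agents outside the policy see nothing; those inside see the shared state.
    if agent not in ACCESS_POLICY.get(topic, set()):
        return []
    return CONTEXT_GRAPH.get(topic, [])

# Accelerator: one agent's insight is persisted so others can build on it.
share(
    "patient-123",
    make_message(
        sender="symptom_checker",
        payload={"diagnosis_code": "M54.5"},
        intent="find care that avoids interactions with the patient's warfarin",
    ),
    audience={"scheduler", "pharmacy"},
)
print(read("patient-123", "pharmacy"))  # the pharmacy sees the why, not just the code
```

In this toy version, the pharmacy agent reads back not just the diagnosis code but the reason it was produced, which is the compounding effect the accelerators are meant to deliver.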

Outshift positions the framework as a call to action rather than a finished product. The company is working on an implementation, but stresses that semantic agent collaboration will require industry-wide coordination, much as the early internet protocols needed industry buy-in on shared standards.

Outshift is writing code, publishing specs and releasing research around the Internet of Cognition, and it hopes to demo the protocols soon.

Noah Goodman, co-founder of frontier AI company Humans& and a professor of computer science at Stanford, said during VentureBeat's AI Impact event in San Francisco that innovation happens when "some people figure out which people to pay attention to." The same dynamic applies to agent systems: as individual agents learn, value increases when other agents recognize and build on that knowledge.

The practical question for teams deploying multi-agent systems today: are your agents just connected, or are they actually working toward the same goal?


