Practical AI
Interoperability Rides Again!
Every emerging technology eventually forces a focus on interoperability. Interoperability keeps us from building yet another Tower of Babel that becomes unwieldy and then impossible to navigate.
Users of smart-home devices saw this when those devices first became available. It didn't take long for the Z-Wave and Zigbee standards to emerge. When more than one standard competes, one ultimately wins out over the others. A technology world that craves and requires standardization demands it.
The challenge for device makers is to remain distinctive while adhering to accepted standards. Those that fail to comply find themselves shut out of the market.
Interoperability and standardization let every manufacturer build devices knowing they will be compatible with everyone else's. One clear example is the E26 standard light-bulb socket: every lamp adheres to it, so any bulb fits any lamp.
We Live with Interoperability Every Day
Every networked computer, peripheral, and communication device connected to the global internet depends upon a venerable set of interoperability standards known as Transmission Control Protocol/Internet Protocol, or TCP/IP.
In the earliest days of internetworking (the ARPANET went live in 1969, and Vint Cerf, Bob Kahn, and their colleagues began designing what became TCP/IP in the early 1970s), many different organizations developed their own "protocol stack" to connect to the nascent network. It is impossible to estimate how many hours, and how many people, were consumed trying to make those stacks work together. Eventually, universally accepted standards emerged, and today the entire world depends upon the TCP/IP suite of protocols for interoperability with everyone else.
The New Interoperability Challenge – Agentic AI
Google's definition of an AI agent may be the simplest one available:
"AI agents are software systems that use AI to pursue goals and complete tasks on behalf of users. They show reasoning, planning, and memory and have a level of autonomy to make decisions, learn, and adapt. Their capabilities are made possible in large part by the multimodal capacity of generative AI and AI foundation models. AI agents can process multimodal information like text, voice, video, audio, code, and more simultaneously; can converse, reason, learn, and make decisions. They can learn over time and facilitate transactions and business processes."
There's one more sentence in this definition, which I've excluded for emphasis. It reads: "Agents can work with other agents to coordinate and perform more complex workflows." That wasn't really true until April 9, 2025.
The idea of software that can pursue goals and complete tasks on behalf of users is very attractive. By assigning mundane tasks to AI agents, we no longer need to do those tasks ourselves. Agents are also easy to create using any of the AI agent studios that software developers have introduced, so users are free to build anything from very simple agents to far more complicated ones.
There are, however, limits to how much an individual AI agent can do, imposed in part by a given user's ability to build enough sophistication into any one agent. It's also impractical to pack multiple activities into a single agent. The far better approach is to create agents that can work with other agents to complete more complex, multi-phased tasks. To accomplish that, they need a way to find other agents with the capabilities to do what they need done. They need the ability to communicate and collaborate with each other. And since most of what they do involves data assets existing in various contexts, they also need to share a common context for the data and other resources they're using while they're using them. The sketch below illustrates the pattern in the abstract.
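To make those requirements concrete, here is a minimal, purely conceptual sketch of capability-based discovery and delegation, written in Python. It is not MCP or A2A; every name in it (the agents, the capabilities, the shared context) is invented for illustration.

```python
# Conceptual sketch only: an orchestrator finds a peer agent by the
# capability it advertises, then delegates a subtask along with shared
# context. All names here are hypothetical, not part of any protocol.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    capabilities: set[str]

    def handle(self, task: str, context: dict) -> str:
        # A real agent would reason over the task; this one just confirms it.
        return f"{self.name} completed '{task}' with context {sorted(context)}"

@dataclass
class Registry:
    agents: list[Agent] = field(default_factory=list)

    def find(self, capability: str) -> Agent | None:
        # Discovery: return the first peer advertising the needed capability.
        return next((a for a in self.agents if capability in a.capabilities), None)

registry = Registry([
    Agent("expenses-bot", {"file_expense"}),
    Agent("travel-bot", {"book_flight"}),
])

shared_context = {"employee_id": "E-123", "trip": "Q3 sales conference"}
partner = registry.find("book_flight")  # find a capable partner agent
if partner:
    print(partner.handle("book return flight", shared_context))
```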
New Emerging Standards
Responding to this need for AI agents to find capable partner agents and collaborate with them efficiently and effectively, two major AI developers have recently introduced complementary protocols.
First, on November 25, 2024, Anthropic released and open-sourced the Model Context Protocol (MCP), which it describes as "an open protocol that standardizes how applications provide context to LLMs. Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools."
As AI assistants become more widely used, the focus across the industry has shifted toward enhancing model performance, the company explains, leading to significant strides in reasoning and output quality. However, even the most advanced models remain limited by their lack of integration with real-world data. Isolated within disconnected systems and outdated infrastructure, they require custom solutions for each new data source, making it challenging to build scalable, interconnected applications.
MCP offers a solution to this problem by introducing a universal, open standard that links AI systems to data sources. Instead of relying on a patchwork of custom integrations, it uses a unified protocol, making it easier and more dependable to provide AI models with the data they require.
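To give a sense of what that looks like in practice, here is a minimal sketch of an MCP server, assuming Anthropic's official Python SDK (the `mcp` package). The server name and the inventory tool are hypothetical, invented for illustration:

```python
# Minimal MCP server sketch, assuming the official Python SDK
# (pip install mcp). The server name and tool are hypothetical.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("inventory-demo")  # an MCP host discovers this server's tools

@mcp.tool()
def check_stock(sku: str) -> str:
    """Report on-hand quantity for a SKU (stand-in for a real data source)."""
    inventory = {"A-100": 42, "B-200": 0}  # hypothetical inventory data
    qty = inventory.get(sku)
    return f"{sku}: {qty} units" if qty is not None else f"{sku}: unknown SKU"

if __name__ == "__main__":
    # Serve over stdio, the transport commonly used for local MCP servers.
    mcp.run()
```

Once a server like this is registered with an MCP-capable host, the model can invoke its tools like any other capability, with no custom integration code per data source.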
Complementing MCP, on April 9, 2025, Google launched its new open protocol, called Agent2Agent (A2A). Google brought along more than 50 of its friends: technology partners including Atlassian, Box, Cohere, Intuit, LangChain, MongoDB, PayPal, Salesforce, SAP, ServiceNow, UKG, and Workday, along with leading service providers including Accenture, BCG, Capgemini, Cognizant, Deloitte, HCLTech, Infosys, KPMG, McKinsey, PwC, TCS, and Wipro.
A standard can't become a standard unless everyone agrees to adhere to it. With A2A, Google made sure that agreement was in place from day one.
The A2A protocol is designed to enable AI agents to communicate, securely share data, and coordinate tasks across diverse enterprise systems and applications, Google says. This framework aims to deliver meaningful benefits to customers by allowing their AI agents to operate cohesively across their full suite of enterprise tools. And the joint initiative reflects a common commitment to a future where AI agents—regardless of their technical foundations—can work together seamlessly, streamlining complex enterprise workflows and unlocking new levels of efficiency and innovation.
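Discovery under A2A is built around the "Agent Card," a small JSON document an agent publishes (conventionally at /.well-known/agent.json) describing who it is and what it can do. Here is a hedged sketch, expressed as a Python dictionary; the field names follow Google's published A2A examples, while the agent and its endpoint are hypothetical:

```python
import json

# Hedged sketch of an A2A Agent Card. Field names follow Google's
# published A2A examples; the agent and endpoint are hypothetical.
agent_card = {
    "name": "Expense Report Agent",
    "description": "Files and tracks employee expense reports.",
    "url": "https://agents.example.com/expenses",  # hypothetical endpoint
    "version": "1.0.0",
    "capabilities": {"streaming": True, "pushNotifications": False},
    "skills": [
        {
            "id": "file_expense",
            "name": "File an expense report",
            "description": "Creates an expense report from submitted receipts.",
        }
    ],
}

# Another agent fetches and parses this card to decide whether to delegate.
print(json.dumps(agent_card, indent=2))
```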
"A2A is an open protocol that complements Anthropic's Model Context Protocol (MCP)," Google says, "which provides helpful tools and context to agents."
"I'm a big believer that MCP will become TCPIP, because AI cannot work in isolation," said Vijay Rayapati, CEO of AI developer Atomicwork. "It has to talk to existing data sources. It has to talk to existing applications. Think of it as the HTTP of AI. Both of them are required. They need to cooperate and coexist."
Rayapati sees the introduction of A2A and MCP being critical to "eliminating sprawl, complexity, more chaos, and more headache."
The Next Step Toward Agentic AI
It seems like only yesterday that Agentic AI was being introduced. In fact, it really was. This time, the development and adoption curves are dramatically accelerated. TCP/IP took years to develop fully. The standards for Agentic AI interoperability between model makers and software developers are already emerging and show every sign of maturing quickly. With so many AI industry players raising their hands in support of A2A and MCP, we can look forward to agents finding other agents and getting to work with them in very short order.
About the Author
Technologist, creator of compelling content, and senior "resultant" Howard M. Cohen has been in the information technology industry for more than four decades. He has held senior executive positions in many of the top channel partner organizations and he currently writes for and about IT and the IT channel.