India’s AI Mission is progressing at a rapid pace—from funding indigenous large language models to rolling out subsidised compute infrastructure. Yet, as the country builds out its foundational AI stack, one critical piece remains underdeveloped: security and trust in Agent-to-Agent (A2A) communication.
Agent-to-agent communication refers to the structured exchange of information and coordination between AI agents—autonomous software entities that can perceive, reason, and act on their own. These agents, often powered by large language models, are designed to complete complex tasks collaboratively, typically with minimal human oversight.
The transformative potential of A2A systems is clear. Agents can divide and conquer problems, automate workflows, and accelerate decision-making. However, this also introduces a new layer of vulnerability. As India scales up AI deployment across sectors—from agriculture to finance—secure, interoperable agentic systems will be essential.
The Rise of Agentic Ecosystems
Global developments point to a rapidly maturing A2A ecosystem. In April 2025, Google introduced the A2A protocol, now managed by the Linux Foundation. This protocol defines how agents advertise their capabilities via “Agent Cards”, discover one another, and manage task lifecycles. Other protocols, such as IBM’s Agent Communication Protocol and Anthropic’s Model Context Protocol, have added layers of standardisation to agent interoperability, including access to external APIs and data sources.
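To make the discovery mechanism concrete, here is a minimal sketch of the kind of machine-readable self-description an agent might publish. The field names and values below are simplified assumptions for illustration, not the actual A2A Agent Card schema.

```python
import json

# A minimal, illustrative "Agent Card": a machine-readable description an
# agent publishes so other agents can discover it and learn its capabilities.
# Field names here are simplified assumptions, not the official A2A schema.
agent_card = {
    "name": "invoice-reconciliation-agent",          # hypothetical agent
    "description": "Matches supplier invoices against purchase orders.",
    "url": "https://agents.example.in/reconciler",   # placeholder endpoint
    "capabilities": ["task.create", "task.status", "task.cancel"],
    "auth": {"schemes": ["bearer"]},                 # how callers authenticate
}

# Serialise the card for publication at a discovery endpoint.
card_json = json.dumps(agent_card, indent=2)
print(card_json)
```

Once published, a peer agent can fetch and parse such a card to decide whether, and how, to delegate a task.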
Even more experimental approaches, such as Gibberlink—a protocol using data-over-sound for ultra-fast agent interactions—demonstrate the pace of innovation. Open-source multi-agent frameworks such as AutoGen, CrewAI, LangGraph, and Semantic Kernel are enabling developers to build collaborative AI systems that operate with varying levels of autonomy and role specialisation.
Recent announcements at the SAP Connect event in Las Vegas serve as a case in point. The company introduced more than 40 new AI agents across business functions and previewed support for the A2A protocol within its Joule platform, enabling agents to collaborate across enterprise systems. These agents are designed to automate tasks such as financial reconciliation, supplier bid analysis, and international trade classification—working in coordination to reduce manual effort and accelerate outcomes.
This rapid adoption underscores a clear reality: multi-agent architectures are no longer experimental—they are shaping global enterprise software today, and India will soon encounter similar architectures across public sector and mission-critical deployments.
However, this also presents a new challenge. If agents are allowed to issue commands, access sensitive data, or make procurement decisions, how can we verify their identity, authorisation, and intent? What safeguards ensure that adversarial agents do not manipulate interactions or trigger unintended consequences? Who audits inter-agent behaviour?
India AI Governance Guidelines and Gaps
The recently released India AI Governance Guidelines acknowledge the emergence of autonomous agents and explicitly warn that highly capable AI agents, operating with self-directed action and multi-agent coordination, may require rethinking existing governance approaches.
They go further to recognise that autonomous A2A communication could enable the creation of covert protocols, increase the risk of loss of control, and introduce new forms of systemic vulnerability. This recognition is particularly important, as India’s AI ecosystem continues to grow in complexity and scale.
However, the current framework stops at identifying the threat. It does not set out any technical or policy mechanisms to govern agent identity, authenticate agent actions, or establish secure channels for inter-agent communication. It does not address how agents should be authorised to perform tasks, how their interactions should be logged for auditing, or how their behaviour should be monitored to detect anomalies.
The guidelines emphasise fairness, accountability, transparency, and human oversight, but remain general in nature and are not extended into the specific context of multi-agent ecosystems where interactions are autonomous, rapid, and opaque to human observers.
Risks of an Unregulated A2A Communication Layer
The absence of A2A-specific standards has significant implications for India’s future in AI. As agents become more capable and are deployed across various domains, including governance, healthcare, banking, telecom, and logistics, the integrity of their interactions will directly impact public services and citizen safety.
Moreover, identity verification becomes essential when agents issue commands, approve transactions, or coordinate supply chains. Without a trusted mechanism to confirm which agent initiated an action and whether it was authorised to do so, the risk of impersonation, spoofing, or system-level manipulation increases substantially.
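The underlying principle is that every agent-issued action should carry a verifiable proof of origin. The sketch below illustrates the idea with an HMAC over the action payload; a production system would use per-agent asymmetric keys issued by a trusted registry, and the agent names and shared secret here are hypothetical.

```python
import hashlib
import hmac
import json

def sign_action(secret: bytes, action: dict) -> str:
    """Sign an agent action with a shared secret (HMAC-SHA256).

    Illustrative only: real deployments would use per-agent key pairs
    from a trusted registry rather than a shared secret.
    """
    payload = json.dumps(action, sort_keys=True).encode()
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def verify_action(secret: bytes, action: dict, signature: str) -> bool:
    """Reject any action whose signature does not match its content."""
    expected = sign_action(secret, action)
    return hmac.compare_digest(expected, signature)

secret = b"registry-issued-secret"  # placeholder credential
action = {"agent": "procurement-agent-07", "command": "approve", "amount": 125000}

sig = sign_action(secret, action)
assert verify_action(secret, action, sig)        # legitimate action passes

tampered = {**action, "amount": 9_999_999}       # spoofed amount
assert not verify_action(secret, tampered, sig)  # tampering is detected
```

The point is not the specific primitive but the policy requirement: no agent command should be acted upon unless its origin and integrity can be checked mechanically.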
The possibility that agents may develop emergent or covert communication patterns adds another layer of concern. Such interactions could bypass human oversight, generate unintended behaviours, or create vulnerabilities that adversaries could exploit.
Without dedicated audit trails and behaviour-monitoring frameworks, it would be difficult for regulators or investigators to reconstruct events in the event of system failures or coordinated agent malfunctions. This challenge becomes even more serious when agentic systems intersect with critical infrastructure such as power grids, transportation networks, fintech systems, and emergency response platforms.
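One well-understood building block for such reconstruction is a tamper-evident audit trail, where each log entry binds to the hash of the previous one. The following is a minimal sketch of that idea; the agent names and events are invented for illustration.

```python
import hashlib
import json

class AuditTrail:
    """Append-only log where each entry commits to the previous entry's
    hash, so any retroactive edit breaks the chain and is detectable."""

    def __init__(self):
        self.entries = []

    def append(self, event: dict) -> None:
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        payload = json.dumps(event, sort_keys=True)
        digest = hashlib.sha256((prev + payload).encode()).hexdigest()
        self.entries.append({"event": event, "prev": prev, "hash": digest})

    def verify(self) -> bool:
        """Recompute the chain; False if any entry was altered."""
        prev = "genesis"
        for e in self.entries:
            payload = json.dumps(e["event"], sort_keys=True)
            expected = hashlib.sha256((prev + payload).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

trail = AuditTrail()
trail.append({"agent": "grid-balancer-01", "action": "shed_load", "zone": "N4"})
trail.append({"agent": "grid-balancer-02", "action": "restore", "zone": "N4"})
assert trail.verify()

trail.entries[0]["event"]["zone"] = "N9"   # retroactive tampering
assert not trail.verify()                  # chain verification fails
```

A mandated trail of this kind would let investigators replay inter-agent exchanges after an incident instead of relying on each operator's private logs.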
Building an A2A Governance Layer for India
To close this gap, India needs a dedicated governance layer within its AI framework. This should include a standard mechanism to verify agent identity and establish authentication before agents interact with one another. It should also define how agents are authorised to perform tasks, what actions they can take under specific conditions, and how these permissions can be revoked when needed.
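The grant-check-revoke lifecycle described above can be sketched very simply. The registry below is an in-memory stand-in for what would, at national scale, be a signed and audited service; the agent identifiers and task names are hypothetical.

```python
from datetime import datetime, timezone

class AgentPermissions:
    """Grant, check, and revoke task-level permissions per agent.

    Illustrative sketch: a real deployment would back this with a
    signed, audited registry rather than an in-memory dict.
    """

    def __init__(self):
        self._grants = {}  # (agent_id, task) -> time the grant was issued

    def grant(self, agent_id: str, task: str) -> None:
        self._grants[(agent_id, task)] = datetime.now(timezone.utc)

    def is_authorised(self, agent_id: str, task: str) -> bool:
        return (agent_id, task) in self._grants

    def revoke(self, agent_id: str, task: str) -> None:
        self._grants.pop((agent_id, task), None)

perms = AgentPermissions()
perms.grant("trade-classifier-03", "classify_shipment")
assert perms.is_authorised("trade-classifier-03", "classify_shipment")

perms.revoke("trade-classifier-03", "classify_shipment")
assert not perms.is_authorised("trade-classifier-03", "classify_shipment")
```

The governance question is less about the data structure than about who operates the registry, how grants are approved, and how quickly revocation propagates across live agent networks.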
Similarly, secure and interoperable communication protocols must be established to ensure that inter-agent coordination cannot be spoofed or manipulated. In addition, behavioural logging and monitoring are necessary to keep agent interactions transparent and auditable. Consider what would happen if a rogue agentic AI took over an airport's communications; the recent GPS spoofing attack on Delhi airport is only a mild reminder of the ramifications.
Regulators should also create multi-agent sandboxes, allowing controlled testing of emergent behaviours, stress scenarios, and adversarial interactions. India has already set up an AI Safety Institute (AISI); as a priority, it should develop a certification regime for agents deployed in high-impact or sensitive sectors and maintain a national registry of such systems. It is also critical that a “secure A2A communication protocol” be listed as a core mandate for the proposed AI Governance Group (AIGG) and the Technology and Policy Expert Committee (TPEC).
This approach would mirror the Guidelines’ emphasis on safety, accountability, and risk mitigation but translate these principles into concrete mechanisms suited to the agentic era.
Next Frontier of Trust in India’s AI Ecosystem
A2A communication is becoming a foundational layer of modern AI deployments. As global enterprises adopt multi-agent architectures and India prepares similar deployments across public and mission-critical systems, the need for robust, structured governance becomes urgent.
India’s AI Guidelines lay the necessary groundwork. Still, without explicit provisions for A2A security, identity, authorisation, and behaviour monitoring, the country risks building an advanced AI ecosystem without the guardrails needed to keep it safe and trustworthy.
If India aims to lead in safe, sovereign, and globally respected AI, it must focus not only on what AI agents can do, but on ensuring they can be trusted to do it right. Ensuring that AI agents can communicate and collaborate securely and transparently, as per human-defined principles, is essential to shaping an AI ecosystem that the world can trust.