One Registry to Rule them All - Sonny Merla, Mauro Luchetti, & Mattia Redaelli, Quantyca
TL;DR
Amplifon built a central registry system because AI at global scale was turning into chaos — with teams across 26 countries, 20,000+ employees, and 10,000 stores, they saw every group reinventing security, infrastructure, and integrations for agents.
The January 2025 Amplify program split the problem into governance, platform, and factory — a central “control tower” sets legal, security, and tech rules, while committees prioritize country and corporate use cases to scale AI responsibly.
An AI gateway became the single front door for model access, security, and budgets — developers hit one endpoint for approved models, authenticate with Entra ID, and operate under trackable monthly or weekly budgets with centralized auditing and monitoring.
The real backbone is three linked registries: MCP, A2A, and use cases — Amplifon catalogs tools, agents, and business use cases together so it can see ownership, environments, auth models, cost attribution, and lineage across the whole stack.
Quantyca made agent development “self-documenting” with GitHub templates and CI/CD — when teams deploy an MCP server or agent, the pipeline publishes both the Docker image and metadata files like server.json or the agent card straight into the registry.
The payoff is impact analysis when something breaks, not just nicer documentation — if a model outage or tool issue happens, Amplifon can trace which use cases, agents, and systems are affected instead of hunting through disconnected teams and repos.
The Breakdown
The opening problem: too many teams, too many one-off agent stacks
Sonny Merla opens with the nightmare scenario: dozens of teams across three continents all building AI agents their own way, each with custom connections, security, and infrastructure. For a company like Amplifon — the hearing care giant with operations in 26 countries, 20,000+ people, and 10,000 stores — that kind of local freedom quickly becomes enterprise-level chaos.
Amplify: the 2025 program to put rules around AI adoption
To get ahead of that, Amplifon launched the Amplify program in January 2025 as a global, cross-functional AI operating model. Sonny frames it around a “control tower” that sets central strategy, legal, security, and technology guidelines, plus committees that turn that strategy into prioritized use cases across countries and corporate teams.
Governance, platform, factory — and the three scaling problems underneath
The team breaks Amplify into three layers: governance, platform, and factory. The pain points underneath are very practical: LLMs change fast, compliance teams need to know where AI is used, and developers shouldn’t have to rebuild deployment, auth, and security for every project when they should be focusing on business logic.
The AI gateway: one door for models, security, and spend control
Mauro Luchetti introduces the first technical building block: an AI gateway that gives developers one unified endpoint for all approved models in Amplifon’s catalog. It also handles Entra ID authentication, budget controls by use case, and centralized auditing and monitoring, so model usage is no longer scattered across random endpoints and spreadsheets.
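To make the gateway pattern concrete, here is a minimal sketch of what a developer-side client might look like. Everything here is illustrative: the gateway URL, the `X-Use-Case` header, and the model id are invented, and the real gateway's API surface wasn't shown in the talk — the point is just that one base URL, one token, and one use-case identifier replace a sprawl of per-provider endpoints.

```python
class GatewayClient:
    """Hypothetical client for a unified AI-gateway endpoint (sketch, not Amplifon's actual API)."""

    def __init__(self, base_url: str, token: str, use_case: str):
        self.base_url = base_url.rstrip("/")
        self.token = token        # e.g. a bearer token obtained via Entra ID
        self.use_case = use_case  # used by the gateway for budget and cost attribution

    def build_request(self, model: str, messages: list) -> dict:
        # One endpoint for every approved model: the gateway routes the call,
        # enforces the use case's budget, and logs it for central auditing.
        return {
            "url": f"{self.base_url}/v1/chat/completions",
            "headers": {
                "Authorization": f"Bearer {self.token}",
                "X-Use-Case": self.use_case,  # hypothetical header name
            },
            "json": {"model": model, "messages": messages},
        }


client = GatewayClient("https://ai-gateway.example.com", "<entra-id-token>", "ticket-optimization")
req = client.build_request("gpt-4o", [{"role": "user", "content": "hello"}])
print(req["url"])
```

Because every call carries a use-case identifier, spend rolls up per use case rather than per developer, which is what makes the weekly and monthly budget controls trackable.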
Three registries that tie tools, agents, and business use cases together
From there, Mauro gets to the heart of the architecture: an MCP registry for tools and integrations, an A2A registry for agents, and a use case registry that links everything together. The private MCP registry extends the community MCP registry with enterprise metadata like owner, environment, auth model, cost attribution, and use-case linkage — not just for cataloging, but for auditability and impact analysis.
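A registry entry extended this way might look roughly like the sketch below. The base fields (`name`, `description`, `version`) follow the shape of the community MCP registry's server.json; the `enterprise` block and all of its values are invented here to illustrate the kind of metadata the talk describes, not Amplifon's actual schema.

```json
{
  "name": "com.example/jira-tools",
  "description": "MCP server exposing ticket-management tools",
  "version": "1.2.0",
  "enterprise": {
    "owner": "team-service-desk",
    "environment": "production",
    "auth_model": "entra-id-oauth2",
    "cost_center": "CC-1234",
    "linked_use_cases": ["ticket-optimization"]
  }
}
```

The linkage fields are what turn the registry from a catalog into an audit and impact-analysis tool: given a server, you can walk to its owner, its environment, and every use case that depends on it.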
A2A agents become discoverable the moment they ship
The A2A registry is built around the agent card standard, capturing identity, endpoint, capabilities, modalities, and auth requirements. The clever bit is that deployed agents automatically publish their agent cards through CI/CD, so new agents become instantly discoverable to other developers and agents; Mauro describes the whole system as making agent development “self-documenting.”
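For readers unfamiliar with the format, an A2A agent card is a JSON document along these lines. The field names below follow the public A2A agent card schema as commonly published, but the specific values and the exact shape should be treated as an illustrative sketch rather than what Amplifon's registry stores.

```json
{
  "name": "triage-agent",
  "description": "Routes and prioritizes incoming support tickets",
  "url": "https://agents.example.com/triage",
  "version": "0.3.0",
  "capabilities": { "streaming": true },
  "defaultInputModes": ["text/plain"],
  "defaultOutputModes": ["text/plain"],
  "skills": [
    { "id": "classify-ticket", "name": "Classify ticket", "description": "Assigns category and priority" }
  ]
}
```

Because the CI/CD pipeline publishes this card on deployment, the registry entry can never drift from what is actually running: the card is generated from the same artifact that ships.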
The live platform demo: catalogs, inspectors, widgets, and lineage graphs
Mattia Redaelli walks through the platform UI, showing catalogs for use cases, MCP servers, A2A agents, and the AI gateway’s approved model list. The memorable piece is the lineage view: open a use case like “ticket optimization with AI,” and you can literally see its connected agents, MCP servers, and models — which means if something has an outage, teams can quickly see what else gets hit.
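The impact-analysis idea behind that lineage view can be sketched as a reverse walk over a dependency graph. The node names and graph structure below are invented for illustration, but the mechanism is the one described: given a failing component, find everything that transitively depends on it.

```python
# Toy lineage graph: each node lists what it depends on (names invented).
DEPENDS_ON = {
    "use-case:ticket-optimization": ["agent:triage-agent"],
    "agent:triage-agent": ["mcp:jira-tools", "model:gpt-4o"],
    "mcp:jira-tools": [],
    "model:gpt-4o": [],
}


def impacted_by(failed: str, graph: dict = DEPENDS_ON) -> set:
    """Return every node that directly or transitively depends on `failed`."""
    impacted = set()
    changed = True
    while changed:  # fixed-point iteration: keep adding dependents until stable
        changed = False
        for node, deps in graph.items():
            if node not in impacted and (failed in deps or impacted & set(deps)):
                impacted.add(node)
                changed = True
    return impacted


# A model outage surfaces both the agent that calls it and the business use case above it:
print(sorted(impacted_by("model:gpt-4o")))
```

This is exactly the question "what else gets hit?": a model outage immediately resolves to the affected agents and, through them, to the business use cases, without anyone hunting through team channels.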
GitHub blueprints and CI/CD pipelines turn standards into default behavior
To keep developers from starting from scratch, Quantyca built two GitHub template repos, one for MCP and one for A2A, with boilerplate, FastAPI servers, Dockerfiles, auth, cost tracking, and Langfuse observability baked in. Once a team tags a branch, GitHub Actions publish both the Docker image and the metadata — server.json or agent card — into the registry backend, giving Amplifon standardized production deployment plus up-to-date governance data by default.
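That pipeline might look roughly like the GitHub Actions sketch below. The repo names, image path, registry URL, and secret name are all invented; the point is the shape of the flow, in which one tag push produces both the runnable artifact and the governance metadata.

```yaml
# Hypothetical release workflow from the template repo (names are illustrative).
name: release
on:
  push:
    tags: ["v*"]
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build and push the Docker image
        run: |
          docker build -t ghcr.io/acme/mcp-server:${GITHUB_REF_NAME} .
          docker push ghcr.io/acme/mcp-server:${GITHUB_REF_NAME}
      - name: Publish metadata into the registry backend
        run: |
          curl -X POST "https://registry.example.com/mcp/servers" \
            -H "Authorization: Bearer ${{ secrets.REGISTRY_TOKEN }}" \
            -d @server.json
```

Because the metadata publish is just another pipeline step, governance data stays current as a side effect of shipping, which is what makes the system "self-documenting" rather than documentation-by-decree.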
What they actually achieved — and what’s still being built
Sonny brings it back to the business outcome: Amplifon now has a real catalog for governance, full traceability across use cases, tools, agents, and models, and production-ready blueprints that let teams focus on business value instead of plumbing. They’re clear the platform is still evolving, but the core win is already in place: one enterprise system that makes AI development standardized, visible, and manageable across multiple teams.