Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.
152361 stories
·
33 followers

Implementing the A2A Protocol in .NET: A Practical Guide


As AI systems mature into multi‑agent ecosystems, the need for agents to communicate reliably and securely has become fundamental. Traditionally, agents built on different frameworks, such as Semantic Kernel, LangChain, custom orchestrators, or enterprise APIs, do not share a common communication model. This creates brittle integrations, duplicate logic, and siloed intelligence. The Agent2Agent (A2A) protocol addresses this gap by defining a universal, vendor‑neutral standard for structured agent interoperability.

A2A establishes a common language for agents, built on familiar web primitives: JSON‑RPC 2.0 for messaging and HTTPS for transport. Each agent exposes a machine‑readable Agent Card describing its capabilities, supported input/output modes, and authentication requirements. Interactions are modeled as Tasks, which support synchronous, streaming, and long‑running workflows. Messages exchanged within a task contain Parts (text, structured data, files, or streams) that allow agents to collaborate without exposing internal implementation details.
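For illustration, an Agent Card is simply a JSON document served by the agent. A minimal card for a currency-conversion agent might look like the following; the field names follow the A2A specification, while the values are placeholder assumptions:

```json
{
  "name": "Currency Agent",
  "description": "Converts amounts between currencies.",
  "url": "https://localhost:7009/agent",
  "version": "1.0.0",
  "capabilities": { "streaming": false },
  "defaultInputModes": ["text"],
  "defaultOutputModes": ["text"],
  "skills": [
    {
      "id": "convert",
      "name": "Convert currency",
      "description": "Converts an amount between two currencies."
    }
  ]
}
```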

By standardizing discovery, communication, authentication, and task orchestration, A2A enables organizations to build composable AI architectures. Specialized agents can coordinate deep reasoning, planning, data retrieval, or business automation regardless of their underlying frameworks or hosting environments. This modularity, combined with industry adoption and Linux Foundation governance, positions A2A as a foundational protocol for interoperable AI systems.

A2A in .NET — Implementation Guide

Prerequisites
•    .NET 8 SDK
•    Visual Studio 2022 (17.8+)
•    A2A and A2A.AspNetCore packages
•    Curl/Postman (optional, for direct endpoint testing)

The open‑source A2A project provides a full‑featured .NET SDK, enabling developers to build and host A2A agents using ASP.NET Core or to integrate with other agents as a client. Two NuGet packages, A2A and A2A.AspNetCore, power the experience.

The SDK offers:

  • A2AClient - to call remote agents
  • TaskManager - to manage incoming tasks & message routing
  • AgentCard / Message / Task models - strongly typed protocol objects
  • MapA2A() - ASP.NET Core router integration that auto‑generates protocol endpoints

This allows you to expose an A2A‑compliant agent with minimal boilerplate.

Project Setup

  • Create two separate projects:
    1. CurrencyAgentService → ASP.NET Core web project that hosts the agent
    2. A2AClient → Console app that discovers the agent card and sends a message
  • Install the A2A and A2A.AspNetCore packages listed in the prerequisites into both projects.

Building a Simple A2A Agent (Currency Agent Example)

Below is a minimal Currency Agent implemented in ASP.NET Core. It responds by converting amounts between currencies.

Step 1: In the CurrencyAgentService project, create the CurrencyAgentImplementation class that implements the A2A agent. The class contains the logic for the following:

a)   Describing itself (agent “card” metadata).
b)   Processing incoming text messages such as “100 USD to EUR”.
c)   Returning a single text response with the conversion.

 

The AttachTo(ITaskManager taskManager) method hooks two delegates on the provided taskManager:
a) OnAgentCardQuery → GetAgentCardAsync: returns agent metadata.
b) OnMessageReceived → ProcessMessageAsync: handles incoming messages and produces a response.
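The original post shows this class as screenshots. A minimal sketch, assuming the delegate-based API of the open-source A2A .NET SDK (exact delegate signatures and type names, such as AgentMessage versus Message, vary between package versions), might look like this; the exchange rates are hard-coded for illustration only:

```csharp
using A2A;

public class CurrencyAgentImplementation
{
    // Hard-coded demo rates (USD per unit); a real agent would call an FX service.
    private static readonly Dictionary<string, decimal> RatesToUsd = new()
    {
        ["USD"] = 1.00m,
        ["EUR"] = 1.09m,
        ["GBP"] = 1.27m,
    };

    public void AttachTo(ITaskManager taskManager)
    {
        // a) Agent card queries -> metadata; b) incoming messages -> conversion logic.
        taskManager.OnAgentCardQuery = GetAgentCardAsync;
        taskManager.OnMessageReceived = ProcessMessageAsync;
    }

    private Task<AgentCard> GetAgentCardAsync(string agentUrl, CancellationToken ct) =>
        Task.FromResult(new AgentCard
        {
            Name = "Currency Agent",
            Description = "Converts amounts between currencies, e.g. '100 USD to EUR'.",
            Url = agentUrl,
            Version = "1.0.0",
            DefaultInputModes = ["text"],
            DefaultOutputModes = ["text"],
        });

    private Task<AgentMessage> ProcessMessageAsync(MessageSendParams sendParams, CancellationToken ct)
    {
        // Expect "<amount> <FROM> to <TO>", e.g. "100 USD to EUR".
        var text = sendParams.Message.Parts.OfType<TextPart>().First().Text;
        var tokens = text.Split(' ', StringSplitOptions.RemoveEmptyEntries);
        var amount = decimal.Parse(tokens[0]);
        var from = tokens[1].ToUpperInvariant();
        var to = tokens[3].ToUpperInvariant();
        var converted = amount * RatesToUsd[from] / RatesToUsd[to];

        return Task.FromResult(new AgentMessage
        {
            Role = MessageRole.Agent,
            MessageId = Guid.NewGuid().ToString(),
            Parts = [new TextPart { Text = $"{amount} {from} = {converted:F2} {to}" }],
        });
    }
}
```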

 

Step 2:

In Program.cs of the CurrencyAgentService project, create a TaskManager, attach the agent to it, and expose the A2A endpoint.

Typical flow:

  • GET /agent → A2A host asks OnAgentCardQuery → returns the card
  • POST /agent with a text message → A2A host calls OnMessageReceived → returns the conversion text.
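With the agent class in place, hosting it takes only a few lines. This sketch assumes the MapA2A extension method from the A2A.AspNetCore package:

```csharp
using A2A;
using A2A.AspNetCore;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Create the protocol task manager and attach the agent's delegates to it.
var taskManager = new TaskManager();
new CurrencyAgentImplementation().AttachTo(taskManager);

// MapA2A auto-generates the A2A protocol endpoints (agent card + messaging) under /agent.
app.MapA2A(taskManager, "/agent");

app.Run();
```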

The resulting endpoints are fully A2A‑compliant.

Calling an A2A Agent from .NET

To interact with any A2A‑compliant agent from .NET, the client follows a predictable sequence: identify where the agent lives, discover its capabilities through the Agent Card, initialize a correctly configured A2AClient, construct a well‑formed message, send it asynchronously, and finally interpret the structured response. This ensures your client is fully aligned with the agent’s advertised contract and remains resilient as capabilities evolve.

Below are the steps implemented to call the A2A agent from the A2A client:

  1. Identify the agent endpoint.
    1. Why: You need a stable base URL to resolve the agent’s metadata and send messages.
    2. What: Construct a Uri pointing to the agent service, e.g., https://localhost:7009/agent.
  2. Discover agent capabilities via the Agent Card.
    1. Why: The Agent Card provides a contract: name, description, the final URL to call, and features (like streaming). This decouples your client from hard-coded assumptions and enables dynamic capability checks.
    2. What: Use A2ACardResolver with the endpoint Uri, then call GetAgentCardAsync() to obtain an AgentCard.
  3. Initialize the A2AClient with the resolved URL.
    1. Why: The client encapsulates transport details and ensures messages are sent to the correct agent endpoint, which may differ from the discovery URL.
    2. What: Create the A2AClient using new Uri(currencyCard.Url) from the Agent Card for correctness.
  4. Construct a well-formed agent request message.
    1. Why: Agents typically require structured messages for roles, traceability, and multi-part inputs. A unique message ID supports deduplication and logging.
    2. What: Build an AgentMessage:
      • Role = MessageRole.User clarifies intent.
      • MessageId = Guid.NewGuid().ToString() ensures uniqueness.
      • Parts contains the content; for simple queries, a single TextPart with the prompt (e.g., “100 USD to EUR”).
  5. Package and send the message.
    1. Why: MessageSendParams can carry the message plus any optional settings (e.g., streaming flags or context). Using a dedicated params object keeps the API extensible.
    2. What: Wrap the AgentMessage in MessageSendParams and call SendMessageAsync(...) on the A2AClient.
    3. Outcome: Await the asynchronous response to avoid blocking and to stay scalable.
  6. Interpret the agent response.
    1. Why: Agents can return multiple Parts (text, data, attachments). Extracting the appropriate part avoids assumptions and keeps your client robust.
    2. What: Cast the result to AgentMessage, then read the first TextPart’s Text for the conversion result in this scenario.
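The six steps above can be sketched as a small console client. Type and member names follow the ones used in this guide (they may differ slightly across SDK versions), and the localhost port is the example value from step 1:

```csharp
using A2A;

// 1. Identify the agent endpoint.
var endpoint = new Uri("https://localhost:7009/agent");

// 2. Discover capabilities via the Agent Card.
var resolver = new A2ACardResolver(endpoint);
AgentCard currencyCard = await resolver.GetAgentCardAsync();
Console.WriteLine($"Discovered agent: {currencyCard.Name}");

// 3. Initialize the client against the URL the card advertises.
var client = new A2AClient(new Uri(currencyCard.Url));

// 4. Construct a well-formed request message.
var message = new AgentMessage
{
    Role = MessageRole.User,
    MessageId = Guid.NewGuid().ToString(),
    Parts = [new TextPart { Text = "100 USD to EUR" }],
};

// 5. Package and send the message.
var response = await client.SendMessageAsync(new MessageSendParams { Message = message });

// 6. Interpret the response: read the first TextPart.
if (response is AgentMessage reply)
{
    Console.WriteLine(reply.Parts.OfType<TextPart>().First().Text);
}
```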

 

Best Practices

1. Keep Agents Focused and Single‑Purpose

Design each agent around a clear, narrow capability (e.g., currency conversion, scheduling, document summarization). Single‑responsibility agents are easier to reason about, scale, and test, especially when they become part of larger multi‑agent workflows.

2. Maintain Accurate and Helpful Agent Cards

The Agent Card is the first interaction point for any client. Ensure it accurately reflects:

  • Supported input/output formats

  • Streaming capabilities

  • Authentication requirements (if any)

  • Version information

A clean and honest card helps clients integrate reliably without guesswork.

3. Prefer Structured Inputs and Outputs

Although A2A supports plain text, using structured payloads through DataPart objects significantly improves consistency. JSON inputs and outputs reduce ambiguity, eliminate prompt‑engineering edge cases, and make agent behavior more deterministic, especially when interacting with other automated agents.

4. Use Meaningful Task States

Treat A2A Tasks as proper state machines. Transition through states intentionally (Submitted → Working → Completed, or Working → InputRequired → Completed). This gives clients clarity on progress, makes long‑running operations manageable, and enables more sophisticated control flows.

5. Provide Helpful Error Messages

Make use of A2A and JSON‑RPC error codes such as -32602 (invalid input) or -32603 (internal error), and include additional context in the error payload. Avoid opaque messages; error details should guide the client toward recovery or correction.
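As a sketch, a helpful JSON-RPC error response for an unknown currency could look like this; the error code and envelope shape come from JSON-RPC 2.0, while the payload contents are illustrative assumptions:

```json
{
  "jsonrpc": "2.0",
  "id": "42",
  "error": {
    "code": -32602,
    "message": "Invalid params: unknown currency code 'XYZ'",
    "data": { "supportedCurrencies": ["USD", "EUR", "GBP"] }
  }
}
```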

6. Keep Agents Stateless Where Possible

Stateless agents are easier to scale and less prone to hidden failures. When state is necessary, ensure it is stored externally or passed through messages or task contexts. For local POCs, in‑memory state is acceptable, but design with future statelessness in mind.

7. Validate Input Strictly

Do not assume incoming messages are well‑formed. Validate fields, formats, and required parameters before processing. For example, a currency conversion agent should confirm both currencies exist and the value is numeric before attempting a conversion.
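For example, the currency agent could guard its parsing with a small validator before attempting any conversion. This is a hypothetical helper, not part of the A2A SDK:

```csharp
// Validates "<amount> <FROM> to <TO>" requests, e.g. "100 USD to EUR".
static bool TryParseConversionRequest(
    string text, out decimal amount, out string from, out string to)
{
    amount = 0;
    from = to = "";
    var tokens = text.Split(' ', StringSplitOptions.RemoveEmptyEntries);

    // Require exactly four tokens with "to" in the third position.
    if (tokens.Length != 4 ||
        !tokens[2].Equals("to", StringComparison.OrdinalIgnoreCase))
        return false;

    // The amount must be a positive number.
    if (!decimal.TryParse(tokens[0], out amount) || amount <= 0)
        return false;

    from = tokens[1].ToUpperInvariant();
    to = tokens[3].ToUpperInvariant();

    // Both currencies must be known before we attempt a conversion.
    var known = new HashSet<string> { "USD", "EUR", "GBP" };
    return known.Contains(from) && known.Contains(to);
}
```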

8. Design for Streaming Even if Disabled

Streaming is optional, but it’s a powerful pattern for agents that perform progressive reasoning or long computations. Structuring your logic so it can later emit partial TextPart updates makes it easy to upgrade from synchronous to streaming workflows.

9. Include Traceability Metadata

Embed and log identifiers such as TaskId, MessageId, and timestamps. These become crucial for debugging multi‑agent scenarios, improving observability, and correlating distributed workflows—especially once multiple agents collaborate.

10. Offer Clear Guidance When Input Is Missing

Instead of returning a generic failure, consider shifting the task to InputRequired and explaining what the client should provide. This improves usability and makes your agent self‑documenting for new consumers.

Read the whole story
alvinashcraft
24 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

From GitLab to Kilo Code (Interview)


We’re joined by Sid Sijbrandij, founder of GitLab who led the all-in-one coding platform all the way to IPO. In late 2022, Sid discovered that he had bone cancer. That started a journey he’s been on ever since… a journey that he shares with us in great detail. Along the way, Sid continued founding companies including Kilo Code, an all-in-one agentic engineering platform, which he also tells us all about.

Join the discussion

Changelog++ members save 8 minutes on this episode because they made the ads disappear. Join today!

Sponsors:

  • Depot – 10x faster builds? Yes please. Build faster. Waste less time. Accelerate Docker image builds and GitHub Actions workflows. Easily integrate with your existing CI provider and dev workflows to save hours of build time.
  • Tiger Data – Postgres for developers, devices, and agents. The data platform trusted by hundreds of thousands, from IoT to Web3 to AI and more.
  • Notion – Notion is a place where any team can write, plan, organize, and rediscover the joy of play. It’s a workspace designed not just for making progress, but getting inspired. Notion is for everyone — whether you’re a Fortune 500 company or freelance designer, starting a new startup or a student juggling classes and clubs.

Featuring:

Show Notes:

Something missing or broken? PRs welcome!





Download audio: https://op3.dev/e/https://cdn.changelog.com/uploads/podcast/672/the-changelog-672.mp3

What’s New in React 19.2: Compiler, Activity, and the Future of Async React - JSJ 670

In this episode of JavaScript Jabber, I sat down with Shruti Kapoor, independent content creator and longtime React educator, to dig into what’s actually new — and worth getting excited about — in React 19.2. While it may sound like a “minor” release on paper, this update delivers some genuinely powerful improvements that can change how we build and reason about React apps.

We talked through React Compiler finally becoming stable, how the new Activity component can dramatically simplify state management and UX, what View Transitions mean for animations, and why new tooling like Performance Tracks in Chrome DevTools is such a big deal for debugging. If you care about performance, async React, or writing less code with better results, this one’s for you.

Links & Resources

Become a supporter of this podcast: https://www.spreaker.com/podcast/javascript-jabber--6102064/support.



Download audio: https://dts.podtrac.com/redirect.mp3/api.spreaker.com/download/episode/69349715/jsj_670.mp3

Rare books, burned letters, and Johnson’s dictionary, with John Overholt


1149. This week, we look at the life and legacy of Samuel Johnson, the man behind the 1755 Dictionary of the English Language. We talk with John Overholt, curator at Harvard’s Houghton Library, about Johnson's eclectic career. We also look at what it’s like to manage a collection of 4,000 rare books and why even the most "unremarkable" items deserve a home in a library.

Find John Overholt on Mastodon.

Houghton Library's website

Links to Get One Month Free of the Grammar Girl Patreon (different links for different levels)

🔗 Share your familect recording in Speakpipe or by leaving a voicemail at 833-214-GIRL (833-214-4475)

🔗 Watch my LinkedIn Learning writing courses.

🔗 Subscribe to the newsletter.

🔗 Take our advertising survey

🔗 Get the edited transcript.

🔗 Get Grammar Girl books

🔗 Join Grammarpalooza. Get ad-free and bonus episodes at Apple Podcasts or Subtext. Learn more about the difference.

| HOST: Mignon Fogarty

| Grammar Girl is part of the Quick and Dirty Tips podcast network.

  • Audio Engineer: Dan Feierabend
  • Director of Podcast: Holly Hutchings
  • Advertising Operations Specialist: Morgan Christianson
  • Marketing and Video: Nat Hoopes, Rebekah Sebastian

| Theme music by Catherine Rannus.

| Grammar Girl Social Media: YouTube, TikTok, Facebook, Threads, Instagram, LinkedIn, Mastodon, Bluesky.


Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.





Download audio: https://dts.podtrac.com/redirect.mp3/media.blubrry.com/grammargirl/stitcher.simplecastaudio.com/e7b2fc84-d82d-4b4d-980c-6414facd80c3/episodes/4c059274-e1cd-4fa5-97e2-c1b09d5e1970/audio/128/default.mp3?aid=rss_feed&awCollectionId=e7b2fc84-d82d-4b4d-980c-6414facd80c3&awEpisodeId=4c059274-e1cd-4fa5-97e2-c1b09d5e1970&feed=XcH2p3Ah

Our top articles for developers in 2025


As we kick off the new year, we're taking a moment to look back at the content that resonated most with our community of developers, architects, and IT practitioners. In 2024, the rapid rise of generative AI dominated the conversation. In 2025, we saw that momentum shift toward practical, high-performance implementation.

The year’s top articles reflect a community focused on moving beyond the basics. We saw a surge of interest in building agentic systems, benchmarking LLM performance with tools like vLLM and Ollama, and optimizing development environments through the Windows Subsystem for Linux (WSL). Beyond AI, foundational technologies remained a priority, with deep dives into the latest GCC 15 features and advanced Linux networking.

Whether you are here to boost your AI coding practices or orchestrate complex multicluster environments, these are ten stories that shaped the Red Hat Developer experience in 2025.

#10: An overview of virtual routing and forwarding (VRF) in Linux

By Antoine Tenart

Author Antoine Tenart, a specialist in Linux kernel networking, provides a comprehensive look at VRF, a lightweight solution for isolating Layer 3 traffic. This guide explains how to create independent routing and forwarding domains to support multi-tenancy and overlapping networks without the overhead of full network namespaces.

Read it: An overview of virtual routing and forwarding (VRF) in Linux 

#9: How we optimized vLLM for DeepSeek-R1 

By Michael Goin, Robert Shaw, Nick Hill, Tyler Smith, and Lucas Wilkinson

The performance engineering team from Neural Magic (now part of Red Hat) details their work scaling the massive DeepSeek-R1 model. This technical deep dive explores kernel-level optimizations like Multi-Head Latent Attention (MLA) and Multi-Token Prediction (MTP) that allow this 671B parameter model to run efficiently in production environments.

Read it: How we optimized vLLM for DeepSeek-R1 

#8: How to build a simple agentic AI server with MCP

By Saroj Paudel

AI engineer Saroj Paudel delivers a hands-on tutorial for connecting AI agents to real-world data using the Model Context Protocol (MCP). Using a weather-fetching tool as a practical example, this article demonstrates how developers can build secure, standardized adapters that allow LLMs to interact with external APIs and databases.

Read it: How to build a simple agentic AI server with MCP 

#7: A quick look at MCP with large language models and Node.js 

By Michael Dawson

Michael Dawson, Red Hat’s Node.js lead and IBM’s community lead for the project, explores the interoperability of MCP across different frameworks. He shows how tools written once in TypeScript can be seamlessly consumed by both the Bee agent framework and Ollama, proving the potential of MCP to eliminate the need for custom wrappers.

Read it: A quick look at MCP with large language models and Node.js 

#6: How to automate multi-cluster deployments using Argo CD

By Radu Domnu and Ilias Raftoulis

GitOps architects Radu Domnu and Ilias Raftoulis present a strategic approach to managing application life cycles across multiple OpenShift clusters. They break down the pros and cons of standalone versus hub-and-spoke architectures, providing a roadmap for platform teams to automate complex multi-tenant environments using Argo CD.

Read it: How to automate multi-cluster deployments using Argo CD 

#5: How spec-driven development improves AI coding quality 

By Rich Naszcyniec

Rich Naszcyniec introduces spec coding, a structured alternative to vibe coding. By defining functional and language-specific specifications first, engineers can guide AI coding assistants to produce code with over 95% accuracy, ensuring that AI-generated output is maintainable and adheres to corporate standards.

Read it: How spec-driven development improves AI coding quality 

#4: New C++ features in GCC 15 

By Marek Polacek

GCC C++ front-end maintainer Marek Polacek previews the release of GCC 15.1. This post details new C++26 features, including pack indexing, variadic friends, and the ability to provide specific reasons for deleted functions, helping developers prepare for the next generation of the C++ language.

Read it: New C++ features in GCC 15 

#3: Ollama vs. vLLM: A deep dive into performance benchmarking 

By Harshith Umesh

AI performance engineer Harshith Umesh settles the debate between popular inference engines with raw data. By benchmarking throughput and latency on NVIDIA A100 hardware, this post demonstrates why Ollama is ideal for local prototyping while vLLM remains the clear choice for high-concurrency enterprise production.

Read it: Ollama vs. vLLM: A deep dive into performance benchmarking 

#2: 6 usability improvements in GCC 15

By David Malcolm

David Malcolm, a primary contributor to GCC’s diagnostic systems, explains his work making compiler errors easier to read. Highlights include ASCII-art execution paths for static analysis, a new SARIF machine-readable output, and a prettier look for notoriously complex C++ template errors.

Read it: 6 usability improvements in GCC 15 

#1: Getting started with RHEL on WSL   

By Eliane Pereira, Sanne Raymaekers, and Terry Bowling

Red Hat Enterprise Linux experts Eliane Pereira, Sanne Raymaekers, and Terry Bowling explain how to bring the world’s leading enterprise Linux platform directly to the Windows desktop. This top-ranked guide covers creating custom RHEL images via the Lightspeed image builder and setting up a seamless workflow between Windows and Linux environments.

Read it: Getting started with RHEL on WSL

Expand your technical toolkit

We also released new long-form e-book and cheat sheet downloads in 2025. While our articles provide timely insights, these resources offer the in-depth guidance and tactical references you need to master a new stack or navigate complex migrations. 

Read on for the top additions from last year.

New e-books

Our e-books, written by Red Hat subject matter experts, can help you bridge the gap between getting started and being production ready:

New cheat sheets

When you're in the middle of a deployment, you need the right command at your fingertips. Our cheat sheets are designed to be your quick-reference companion for the most critical tasks:

Looking ahead to 2026

From the low-level compiler optimizations in GCC 15 to the high-level orchestration of agentic AI, the common thread across all these articles is bridging the gap between community innovation and enterprise stability.

At Red Hat Developer, our goal remains the same: to provide you with the deep, engineering-led insights you need to create better software. We are grateful to the engineers and maintainers who took the time to share their expertise this past year, and to you, the builders and makers who continue to push these technologies to their limits. We can't wait to show you what we're building in 2026.

The post Our top articles for developers in 2025 appeared first on Red Hat Developer.


Article: Agentic Terminal - How Your Terminal Comes Alive with CLI Agents


In this article, author Sachin Joglekar discusses the transformation of CLI terminals into agentic tools, where developers state goals while AI agents plan, call tools, iterate, ask for approval where needed, and execute the requests. He also explains the planning styles of three different CLI tools: Gemini, Claude, and Auto-GPT.

By Sachin Joglekar