Over the past year, we’ve made significant progress with Microsoft Discovery by working closely with research and development (R&D) organizations. Today, we’re sharing how those efforts are translating into real momentum for customers and partners, while also expanding preview access to Microsoft Discovery. This next phase reflects what we’ve learned as we continue to broaden access to enterprise-grade, agentic AI capabilities for R&D. The Microsoft Discovery platform continues to evolve with new capabilities, expanded partner interoperability, and a growing set of results with real-world scientific outcomes and engineering transformation. We believe what comes next can meaningfully change how R&D teams operate and empower them to achieve more.
Agentic AI opens a new chapter for R&D where autonomous agent teams, guided by human expertise, perform the core research and engineering tasks in a redefined agentic loop. Specialized agents can reason on top of vast amounts of organizational and public-domain knowledge, create hypotheses on an expanded search space, test and validate those hypotheses at scale, analyze the results, and feed conclusions into iterative loops. Empowering science and engineering experts with agentic AI has the potential to reshape the future of science and engineering, enabling organizations to lead boldly in the new Frontier R&D era.
This fundamental shift requires a deep transformation that encompasses both technological and organizational challenges. Scientific discovery has always been defined by ambition and the relentless pursuit of what comes next—a more sustainable material, a cleaner source of energy, a more effective treatment. But for many R&D teams the hardest work can begin after an idea shows promise. Turning concepts into outcomes requires repeated development cycles that involve reformulating candidates as new datasets emerge, re-engineering existing materials to meet evolving regulatory and performance requirements, or adjusting designs when performance, yield, or manufacturability fall short. As R&D grows more complex, tooling must evolve to help close the distance between what researchers and engineers want to pursue and what they can practically deliver.
Earlier generations of AI offered incremental relief through faster search and better retrieval, but lacked the deeper reasoning that genuinely complex, multi-disciplinary science demands. Tradeoffs across cost, performance, yield, compliance, and timelines must be revisited repeatedly as development progresses. But the convergence of large-scale reasoning models, agentic AI architectures, and high-performance cloud infrastructure has created a genuine opportunity to rethink how R&D work gets done—not only to improve existing processes at the margins, but to help teams iterate faster and move from hypothesis to candidate development to outcome with greater confidence.

When Microsoft Discovery was introduced in private preview last year, it was an early expression of that possibility: an agentic AI platform purpose-built for R&D, bringing together the reasoning depth and collaborative intelligence that complex, real-world R&D requires. The response from engineers and researchers across life sciences, chemistry and materials science, physics, semiconductors, and other fields made clear that the need was real and the approach was right.
Microsoft Discovery is an extensible platform that brings together agentic orchestration, advanced reasoning, a graph-based knowledge foundation, and high-performance computing. It helps drive the three principles outlined in Figure 1 for effective agentic discovery—enabling agent empowerment, discovery loop automation, and quality at scale. Because it is built on Microsoft Azure’s enterprise cloud infrastructure, Microsoft Discovery is designed to operate within the security, compliance, transparency, and governance frameworks used to manage sensitive real-world R&D environments.

Agents are equipped with a broad range of digital, physical, and analytical tools used across R&D. This includes in silico experimentation environments such as high-performance computing (HPC) clusters, specialized large quantitative models (LQMs) and agents, and potential future integration with quantum capabilities as they become applicable to commercial R&D. The platform also interoperates with physical labs, facilitating lab procedure generation and even direct operation of robotics, lab instrumentation, and Internet of Things (IoT)-enabled devices under human oversight.
At the heart of Microsoft Discovery is the Discovery Engine that mimics the scientific method where specialized agents reason over large amounts of knowledge, generate hypotheses, and validate them in a complex tree across a vast search space. The Discovery Engine connects proprietary research data with external scientific literature—not solely to retrieve isolated facts but to reason across conflicting theories, experimental results, and domain-specific assumptions in a way that reflects how science actually works. This contextual depth is what separates Microsoft Discovery from general-purpose AI tools and enables the platform to function as a genuine thinking partner across the full arc of a research program.
Built-in governance controls help ensure that agent-driven research remains aligned with strategic priorities, security and compliance standards, and safety requirements. These systems provide centralized management, audit trails, and checkpoints that help maintain reliability as agentic throughput grows. The platform is extensible by design, enabling integration with existing business tools and assets, partner solutions, and open-source models. Integration with Microsoft 365, Microsoft Foundry, and Microsoft Fabric enables organizations to interoperate across business agents, enterprise data, and institutional knowledge.
Previously we shared how a team of Microsoft researchers leveraged advanced AI models and HPC tools from Microsoft Discovery to identify a novel, non-PFAS, immersion datacenter coolant prototype in about 200 hours. We’re excited to share a few examples of how customers have been using the platform during preview.
A global leader in advanced materials and specialty chemicals, Syensqo is advancing a bold, multi-year transformation of its technology landscape to accelerate data-driven science, advanced simulation, and AI-enabled discovery. Building on early success with Microsoft Discovery, Syensqo is now scaling these capabilities enterprise-wide to unlock greater scientific and business impact. This next phase focuses on modernizing R&D knowledge foundations, expanding access to scalable, cost-efficient, cloud-based compute, and establishing a unified operating model that brings together data, high-performance computing, and emerging agentic AI to power the future of innovation.
As Microsoft Discovery workflows gained momentum, Syensqo expanded its ambition to scale these capabilities across both R&D and commercial organizations, unlocking new opportunities for end-to-end innovation. This evolution is enabling teams to unify scientific and business datasets, scale simulation environments in line with increasingly complex development needs, and integrate engineering workflows within a connected digital ecosystem. Together, these advancements are establishing a strong, future-ready foundation to accelerate innovation-led growth—from early-stage discovery through engineering and large-scale formulation.
To realize this vision, Syensqo is advancing its science and commercial data and simulation platforms on Azure. By centralizing critical datasets within a governed, enterprise-grade data backbone and extending Microsoft Discovery workflows onto highly scalable cloud compute, the company is establishing a modern, standardized operating model for innovation. This shift enables more seamless collaboration, supports advanced analytics and simulation at scale, and lays the groundwork for next-generation, AI-powered workflows across priority research and innovation (R&I) domains.
“We are entering a new phase of our partnership with Microsoft, focused on scaling AI agents across research, sales and marketing to drive near-term growth. By connecting customer demand to scientific development and back to market execution, agentic AI is enabling faster cycles, sharper prioritization, and tangible impact on revenue growth and business performance.”
—Mike Radossich, Chief Executive Officer (CEO), Syensqo
Modern oncology increasingly depends on understanding tumors not only by appearance, but by the biological signals that shape cell behavior, immune response, and treatment outcomes. GigaTIME addresses this need by using AI to infer spatially resolved tumor microenvironment signals from routine hematoxylin and eosin (H&E) pathology slides. This approach makes insights such as immune infiltration, checkpoint context, and tumor proliferation more accessible at scale without the cost and throughput constraints of experimental assays. GigaTIME and its outputs within Microsoft Discovery are intended for research use only. They are not a medical device and are not intended for clinical diagnosis, treatment, prevention, or patient-management decisions.
The impact of GigaTIME increases when its outputs are embedded into real research workflows. Within Microsoft Discovery, virtual multiplex immunofluorescence (mIF) predictions move beyond standalone visualizations and become inputs to ongoing scientific reasoning. Spatial phenotypes can be generated consistently across cohorts, localized to single cell context, and connected to supporting evidence such as literature, biomarkers, and downstream endpoints. This allows researchers to interpret results systematically, question assumptions, and refine biological hypotheses over time.
Microsoft Discovery supports this work in a way that is reproducible, scalable, and governed end to end. GigaTIME can be used alongside additional models, data sources, and tools within a shared environment that supports iteration, comparison, and validation. Rather than accelerating a single analytical step, Discovery supports a full discovery loop—where spatial biology informs hypotheses, hypotheses guide validation, and results feed the next cycle of learning with clarity and confidence.
Learn more about the GigaTIME and Microsoft Discovery integration to see how virtual mIF outputs are applied within Microsoft Discovery for oncology R&D.
PhysicsX, a leader in physics AI for industrial engineering and manufacturing, is partnering with Microsoft to bring agentic engineering into production through Microsoft Discovery. At the core of this collaboration is the PhysicsX platform—combining Large Physics Models and AI-native workflows to deliver near-real-time simulation by inference across the full engineering lifecycle.
Integrated into Discovery’s agentic environment, the PhysicsX platform enables engineers to move beyond sequential, solver-driven workflows and explore significantly larger design spaces, evaluating thousands of manufacturable candidates in days, without compromising physical fidelity.
The collaboration is already delivering impact at Microsoft Surface. Faced with tightly coupled constraints across thermal performance, acoustics, and form factor, the Surface engineering team used the PhysicsX platform through Discovery to reimagine their cooling fan design process. What previously required weeks of simulation and manual setup is now compressed into days. Discovery agents orchestrate the generation, evaluation, and optimization of thousands of geometries, surfacing high-performing, production-ready designs for validation.
The result is a step change in engineering productivity: faster iteration, broader design-space coverage, and more confident decision-making. The approach is now being extended across additional components in the Surface portfolio.
Engineering is still constrained by workflows built for the pre-AI era. This partnership changes that. PhysicsX’s frontier physics AI models, combined with Microsoft Discovery’s agentic orchestration and Azure infrastructure, give engineers the ability to explore design spaces that were previously out of reach—at the speed and scale that modern industrial development demands.
—Jacomo Corbo, CEO, PhysicsX
Synopsys is a leader in electronic design automation (EDA), computer aided engineering (CAE) tools, and intellectual property (IP), and plays a central role in the design and development of the most complex chips and systems for the leading semiconductor and systems companies of the world.
Synopsys and Microsoft have been partnering since 2019, helping pioneer software-as-a-service (SaaS) models on Microsoft Azure. Synopsys also launched the first Silicon Copilot in collaboration with Microsoft and is continuing that journey by leveraging Microsoft Discovery to roll out solutions for chip design.
The semiconductor industry is facing an unprecedented set of challenges: demand for high-performance chips is growing exponentially, sustainable and power-efficient chip design is becoming more complex, and skilled engineers are in critically short supply. Agentic systems can help mitigate these challenges while accelerating design cycles.
The Synopsys agentic AI stack, with multi-agent workflows built on AgentEngineer™ technology and supported by Microsoft Discovery, has defined a new paradigm for the industry.
Chip design sits at the intersection of extreme complexity and outsized impact—exactly where AI can make the biggest difference. By bringing together Synopsys’ AI‑driven design leadership with Microsoft Discovery, we are enabling agentic AI to redefine semiconductor engineering workflows, unlock step‑function productivity gains, and accelerate the next era of technology innovation.
—Ravi Subramanian, Chief Product Management Officer, Product Management & Markets Group, Synopsys
Microsoft Discovery works with an expanding ecosystem of partners offering integrated tools and specialized expertise.

Expanding the preview marks an important step in making agentic AI available to a broader set of R&D organizations. Microsoft Discovery reflects our belief that the next generation of scientific progress can come from systems that combine human expertise with AI that can reason, plan, and act at scale.
We look forward to partnering with organizations that want to rethink how discovery happens and to help shape the future of enterprise R&D.
For organizations looking to get started with Microsoft Discovery be sure to review the technical documentation to understand requirements, onboarding prerequisites, and infrastructure considerations.
Microsoft Discovery is offered in preview. Features, availability, integrations, and performance characteristics described in this post may change prior to, or without, general availability and are not commitments. Statements about future capabilities (including any potential quantum integration) are forward-looking and subject to change. Customer and internal outcomes described reflect specific workflows and data; individual results will vary.
The post Microsoft Discovery: Advancing agentic R&D at scale appeared first on Microsoft Azure Blog.
We consistently hear the same realities from leaders: data infrastructure is a critical accelerator for AI adoption, and many organizations haven’t been able to fully realize the value of their data. 60% of AI projects unsupported by AI-ready data will be abandoned.1 Modernization is a key enabler of AI readiness, with 75% of organizations that migrated to Azure reporting significantly reduced barriers to AI and machine learning.2
This highlights a clear opportunity. Organizations that modernize with fully managed, AI-optimized databases can unlock faster performance, real-time insights, and the ability to build intelligent applications and agents at scale.
Today, I am excited to introduce Azure Accelerate for Databases—an offering designed to help organizations modernize their databases and build AI‑ready capabilities on Azure, faster and with greater confidence. Save up to 35% (vs. pay-as-you-go) with the savings plan for databases, receive delivery funding and Azure credits, and benefit from zero-cost delivery support. Azure Accelerate for Databases brings together expert guidance, investments, savings, and skilling into a single offering, helping teams move from legacy constraints to systems ready to support real-time, intelligent applications. 
Azure Accelerate for Databases is built for organizations modernizing at scale while preparing both their platforms and teams for what comes next. Modernization initiatives are often complex, requiring time, investment, and coordination across teams, while legacy environments can leave data fragmented and difficult to operationalize for AI.
Azure Accelerate for Databases is designed to simplify this journey. It brings together Microsoft Cloud Accelerate Factory delivery support, Azure specialized partner expertise, flexible savings and investments, AI-enhanced tooling and assessments, and role-based skilling into a cohesive experience.
The goal is straightforward: to help organizations move faster, reduce friction, and turn database modernization into a durable, AI-enabling strategy.
With Azure Accelerate for Databases, customers can:
Modernization outcomes depend on execution as much as strategy. With the right expertise in place, organizations can reduce risk and move forward with greater confidence.
This removes financial barriers so customers can modernize faster, with more predictable economics and more flexibility to keep momentum as needs evolve.
The savings plan for databases5 offers a flexible, spend-based pricing model that adapts to evolving database needs. Customers commit to a fixed hourly spend, and savings are automatically applied to the most valuable usage each hour on select services. This helps reduce the complexity of managing multiple reservations and supports scaling without managing individual SKUs, regions, or configurations. When usage exceeds the commitment, pay-as-you-go pricing applies—helping costs remain predictable as usage grows.
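As a rough illustration of how a spend-based commitment behaves hour to hour, here is a small sketch. The function name, the rates, and the discount arithmetic are hypothetical and deliberately simplified; this is not actual Azure billing logic:

```python
# Illustrative sketch of spend-based savings-plan billing (hypothetical rates,
# not actual Azure pricing). Each hour, usage up to the committed spend is
# billed at the discounted rate; anything beyond it falls back to pay-as-you-go.

def hourly_charge(usage_payg_cost: float, commitment: float, discount: float) -> float:
    """Return an illustrative effective charge for one hour.

    usage_payg_cost: what the hour's usage would cost at pay-as-you-go rates.
    commitment: fixed hourly spend committed to (billed at discounted rates).
    discount: fractional discount on covered usage (e.g. 0.35 = 35%).
    """
    # Covered usage: the pay-as-you-go value the commitment absorbs.
    covered_payg_value = min(usage_payg_cost, commitment / (1 - discount))
    overage = usage_payg_cost - covered_payg_value
    # The commitment is charged in full whether or not it is fully used.
    return commitment + overage

# A quiet hour leaves part of the commitment unused; a busy hour adds
# pay-as-you-go overage on top of the commitment.
print(hourly_charge(usage_payg_cost=5.0, commitment=10.0, discount=0.35))
print(hourly_charge(usage_payg_cost=20.0, commitment=10.0, discount=0.35))
```

The point of the sketch is the shape of the model, not the numbers: the customer manages a single hourly spend figure rather than individual reservations, SKUs, or regions.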
Modernization succeeds when teams can operate and innovate confidently. This helps organizations build durable capability—not just complete a project.
One example is Thomson Reuters, which modernized its tax preparation platform by migrating more than 18,000 databases, totaling over 500 terabytes of data, to Azure SQL Managed Instance. The goal was not only to address performance and scalability challenges during peak tax season, but to establish a more resilient and reliable data foundation for the future.

Running on Azure has helped improve application performance and scalability for 7,000 tax firms and 70,000 users. With a modern, fully-managed platform in place, Thomson Reuters is now better positioned to scale services and support continued innovation. The migration was accelerated through Microsoft’s Cloud Accelerate Factory, the zero-cost delivery benefit of Azure Accelerate, which provided hands-on engineering support, automation, and structured execution to help reduce risk and streamline the transition at scale.

Azure Accelerate for Databases is designed to support this kind of modernization progress, so organizations can build a stronger data foundation for AI.
Modernizing your database estate is a critical step in preparing for AI. Azure Accelerate for Databases is designed to make that step more achievable by bringing together the resources, expertise, and investments needed to move forward with confidence.
To learn more, visit the Azure Accelerate for Databases page and explore savings, as well as access expert-led resources.
Join us at the Migrate & Modernize Summit (April 23 and on demand) to learn more about modernizing your database estate.
For more details, connect with your Microsoft account team.
1Lack of AI-Ready Data Puts AI Projects at Risk
2The Total Economic Impact™ Of Migrating To Microsoft Azure For AI-Readiness. Commissioned study.
3Zero‑cost delivery support for eligible customers through Microsoft‑funded programs. Availability and eligibility criteria apply.
4Eligible customers may receive delivery funding (for partner-led services) and Azure credits through approved Azure Accelerate programs. Funding is subject to application, project scope, and regional availability.
5Customers may see savings estimated to be between 0% and 35%. The 35% savings estimate is based on one Azure SQL Database serverless running for 12 months at a pay-as-you-go rate versus a reduced rate for a 1-year savings plan. Based on Azure pricing as of March 2026. Prices are subject to change. Actual savings may vary based on location, database service, and/or usage.
6Skilling benefits are subject to eligibility, approval, and availability.
The post Introducing Azure Accelerate for Databases: Modernize your data for AI with experts and investments appeared first on Microsoft Azure Blog.
In an enterprise AI scenario, the goal was to map structured feature data to relevant sections within large technical documents.
At a glance, this appears to be a straightforward semantic matching problem. Initial results using semantic search were promising. However, as the system was used more extensively, certain issues became apparent:
Despite multiple optimizations, the system continued to produce outcomes that lacked reliability.
This pointed to a deeper realization:
The challenge was not just retrieval quality, but the absence of structure in how retrieval was being guided.
Initial Approach: Retrieval-Augmented Generation (RAG)
The system followed a standard RAG architecture:
RAG is highly effective in scenarios involving unstructured data, offering flexibility and strong contextual understanding.
However, an important limitation emerged:
RAG operates on semantic similarity but does not inherently understand relationships or domain constraints.
Concepts with similar terminology were sometimes mapped across unrelated domains due to overlapping language. Without domain awareness, the system struggled to enforce meaningful boundaries.
The data already contained valuable structure:
This structure was not incorporated into the retrieval process, leading to missed opportunities for improving accuracy.
Some mappings followed clear and consistent rules. However, treating all queries as probabilistic retrieval problems introduced unnecessary variability and reduced confidence in the results.
To address these challenges, a structured layer based on Knowledge Graph concepts was introduced.
At a high level, relationships were modeled as:
This enabled:
Rather than replacing RAG, the system evolved into a hybrid architecture:
Step 1: Knowledge Graph for filtering
Step 2: RAG for semantic refinement
The transition from a retrieval-first approach to a constraint-guided retrieval model significantly improved consistency and relevance.
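The two-step flow described above can be sketched in a few lines. Everything here is illustrative: the toy embeddings, the domain tags, and the `allowed` relationship map stand in for a real knowledge graph and a real vector index:

```python
# Minimal sketch of constraint-guided retrieval (all names illustrative).
# Step 1: a knowledge-graph filter narrows candidates to domains related
#         to the feature's domain.
# Step 2: semantic similarity ranks only the surviving candidates.

from dataclasses import dataclass

@dataclass
class Section:
    doc_id: str
    domain: str          # domain tag supplied by the knowledge graph
    embedding: list      # precomputed embedding vector

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def hybrid_retrieve(query_embedding, feature_domain, sections, allowed):
    # Knowledge-graph constraint: drop sections outside the related domains.
    candidates = [s for s in sections if s.domain in allowed.get(feature_domain, set())]
    # RAG refinement: rank the constrained candidates by semantic similarity.
    return sorted(candidates, key=lambda s: cosine(query_embedding, s.embedding),
                  reverse=True)

# Toy graph: "thermal" features may map to thermal or mechanical sections,
# never to "networking" sections, however similar the wording happens to be.
allowed = {"thermal": {"thermal", "mechanical"}}
sections = [
    Section("doc-net", "networking", [0.9, 0.1]),   # high similarity, wrong domain
    Section("doc-th", "thermal", [0.8, 0.2]),
]
ranked = hybrid_retrieve([1.0, 0.0], "thermal", sections, allowed)
print([s.doc_id for s in ranked])  # → ['doc-th']
```

Note that the networking section is excluded before similarity is even computed; that is the "constraint-guided" part, and it is what removes the cross-domain false positives described earlier.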
RAG is sufficient when:
A hybrid approach is beneficial when:
As enterprise AI systems scale, it becomes increasingly important to balance:
These approaches are not competing—they are complementary. When combined effectively, they enable systems that are both flexible and reliable.
A key realization from this experience was:
Instead of focusing only on improving retrieval, it is equally important to understand how domain structure can guide and constrain that retrieval.
This article reflects personal learnings and general architectural patterns.
Let’s start from a real engineering pain point.
In their engineering blog Managed Agents, Anthropic describes a sobering observation: while building the agent scaffolding (the “harness”) for Claude Sonnet 4.5, they noticed the model suffered from “context anxiety,” so they added context-reset logic into the harness. But when the same harness ran on the more capable Claude Opus 4.5, those resets became dead weight — the stronger model no longer needed them, yet the harness was actively holding it back.
This is the fundamental dilemma of the harness: it encodes assumptions about the current model’s capabilities, and those assumptions rot quickly as models evolve.
That’s not a minor concern. In an era when AI capabilities shift qualitatively every few months, any infrastructure tightly coupled to a specific model’s abilities becomes a bottleneck on engineering progress.
Anthropic’s answer borrows from a problem operating systems solved decades ago: how do you provide stable abstractions for programs that haven’t been imagined yet?
The answer is virtualization. Just as the OS virtualizes physical hardware into stable abstractions — processes, files, sockets — Managed Agents virtualizes the Agent runtime into three independent interface layers. For readability this post follows Anthropic’s own metaphors — Brain / Hands / Session — which map to the more conventional engineering terms reasoning orchestrator / execution sandbox / durable event log:
Note: Brain, Hands, and Session are not standard industry terminology — they are metaphors Anthropic uses in its engineering blog. In more conventional engineering vocabulary they correspond, respectively, to the agent reasoning loop / orchestrator, the tool executor / execution sandbox, and the durable event log / state store. The rest of this post uses the two styles interchangeably.
This layer is the harness itself — it calls the model and routes tool calls; think of it as the agent’s reasoning orchestrator. Key design point: it must be stateless. All its state comes from the event log; as long as it can call wake(sessionId) to resume, any harness crash is recoverable.
This means the harness can evolve independently as model capabilities evolve, without disturbing in-flight tasks.
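A minimal sketch of that recoverability property, using an in-memory dictionary as a stand-in for the durable event log. The `wake(sessionId)` entry point follows the description in the post; the event shapes and replay logic are invented for illustration:

```python
# Sketch of a stateless harness resuming from a durable event log.
# The storage (a dict) and the replay rules are stand-ins; only the idea
# matters: ALL harness state is reconstructed from the log on wake().

EVENT_LOG = {}  # sessionId -> ordered list of events (stand-in for durable store)

def append_event(session_id, event):
    EVENT_LOG.setdefault(session_id, []).append(event)

def wake(session_id):
    """Rebuild state purely from the event log and decide the next step.

    Because the harness keeps no state of its own, a crash between events
    loses nothing: wake() replays the log and continues where it left off.
    """
    events = EVENT_LOG.get(session_id, [])
    results = [e for e in events if e["type"] == "tool_result"]
    calls = [e for e in events if e["type"] == "tool_call"]
    if len(calls) > len(results):
        # A tool call was issued but its result never landed: retry it.
        return {"action": "retry_tool", "call": calls[len(results)]}
    return {"action": "call_model", "history": events}

append_event("s1", {"type": "tool_call", "name": "search"})
# ...imagine the harness process dies here; a fresh process simply resumes:
print(wake("s1"))  # the unfinished tool call is retried, nothing is lost
```

A real implementation would persist to durable storage and replay richer event types, but the contract is the same: the log, not the process, is the source of truth.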
This layer holds the execution environments the orchestrator calls into — Python REPLs, shells, HTTP clients, even remote containers — i.e. the tool executor / execution sandbox. The contract is brutally simple:
`execute(name, input) -> string`

Just that one interface. The orchestrator doesn’t care whether a sandbox is a local process or a remote container; if a sandbox crashes, it is treated as an ordinary tool error — the model decides whether to retry on a fresh sandbox. This is the “cattle, not pets” philosophy applied to AI engineering.
This layer is an append-only, durable event log. It is not the model’s context window. This distinction matters enormously. When a task outgrows the context window, the harness can use getEvents(start, end) to slice history on demand, and filter, summarize, or transform it before feeding back to the model — all without changing the underlying interface.
The event log also plays a key role in credential isolation: when execute calls are logged, the Vault redacts first, so raw tokens never enter the log — and never enter the model’s context window.
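A toy sketch of that redact-before-log idea. The vault contents, function names, and token format below are invented for illustration; a real vault would sit in front of the sandbox as a proxy:

```python
# Sketch of credential isolation: the vault maps logical names to real
# secrets on the way INTO the sandbox, and scrubs those secrets from
# anything headed for the event log (and thus the model's context).

VAULT = {"github": "ghp_real_secret_token"}  # logical name -> real credential

def inject(payload: dict) -> dict:
    """Swap a logical credential name for the real token, for execution only."""
    out = dict(payload)
    if "credential" in out:
        out["credential"] = VAULT[out["credential"]]
    return out

def redact(text: str) -> str:
    """Scrub every known secret before the text can reach the event log."""
    for secret in VAULT.values():
        text = text.replace(secret, "[REDACTED]")
    return text

raw_result = f"fetched with token {VAULT['github']}"
logged = redact(raw_result)
print(logged)  # → fetched with token [REDACTED]
```

The ordering is the whole point: redaction happens before the write, so the raw token is never present in the log, and therefore can never be replayed into the model.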
This decoupling yields measurable wins:
The reason: the old architecture had to provision a container before inference could begin. After decoupling, inference can start as soon as the event log is readable, with sandboxes provisioned lazily on demand.
Microsoft gives the enterprise-grade infrastructure answer in Microsoft Foundry.
Foundry Agent Service offers three Agent types:
| Type | Requires Code? | Hosting | Best For |
|---|---|---|---|
| Prompt Agent | No | Fully managed | Rapid prototyping |
| Workflow Agent | No (optional YAML) | Fully managed | Multi-step automation |
| Hosted Agent | Yes | Containerized hosting | Fully custom logic |
This post focuses on Hosted Agents. They let developers package their own Agent code (LangGraph, Microsoft Agent Framework, or fully custom) as a container image and deploy it on Microsoft’s fully managed, pay-per-use infrastructure.
The core abstraction in Hosted Agents is the Hosting Adapter. It does three things:
Microsoft Agent Framework (9.7k GitHub stars, now generally available) is a multi-language, multi-provider Agent orchestration framework that supports:
This matters a lot: Microsoft’s own Agent framework natively supports Anthropic’s Claude models, providing an official path for cross-ecosystem integration.
Now let’s see how this real project fuses the two architectural philosophies.
What’s happening here?
The orchestrator’s toolset follows Anthropic Managed Agents’ minimalism strictly — every capability funnels through the single gateway execute(name, input_json); the reasoning layer knows nothing about concrete sandbox implementations.
Look at the finally block: every execute call destroys its sandbox afterwards, success or failure. That guarantees sandboxes are genuinely stateless units — leftover processes, temp files, in-memory state all vanish with the sandbox.
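That pattern can be sketched in a few lines. The `Sandbox` class and its return values here are stand-ins, not the project's actual implementation; what matters is the shape of the `try`/`except`/`finally`:

```python
# Sketch of the single-gateway pattern with disposable sandboxes
# (class and method names are illustrative). Every call provisions a
# fresh sandbox and destroys it in `finally`, so no state survives
# between tool calls.

class Sandbox:
    def __init__(self):
        self.alive = True
    def run(self, name: str, input_json: str) -> str:
        return f"{name} ok"          # stand-in for a real REPL/shell/HTTP client
    def destroy(self):
        self.alive = False           # processes, temp files, memory: all gone

def execute(name: str, input_json: str) -> str:
    sandbox = Sandbox()              # cattle: provisioned per call
    try:
        return sandbox.run(name, input_json)
    except Exception as err:
        # A crashed sandbox is just an ordinary tool error; the model
        # sees the string and decides whether to retry on a fresh one.
        return f"tool_error: {err}"
    finally:
        sandbox.destroy()            # success or failure, the sandbox dies

print(execute("python_repl", '{"code": "1+1"}'))  # → python_repl ok
```

Because destruction is unconditional, there is no cleanup path to get wrong: a sandbox that was ever created is always torn down.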
Built-in sandboxes include:
The event log is a .jsonl append-only file — one JSON event per line. In production you can drop in Azure Cosmos DB, Event Hub, or any durable store; the interface doesn’t change.
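A minimal sketch of such a JSONL-backed log, including the on-demand slicing described earlier. The class name and file layout are illustrative; swapping the file for Cosmos DB or Event Hub would change only the internals, not the interface:

```python
# Sketch of an append-only .jsonl event log with on-demand slicing.
# One JSON object per line; lines are only ever appended, never rewritten.

import json
import os
import tempfile

class JsonlEventLog:
    def __init__(self, path: str):
        self.path = path

    def append(self, event: dict) -> None:
        # Append-only: open in "a" mode so history is immutable.
        with open(self.path, "a") as f:
            f.write(json.dumps(event) + "\n")

    def get_events(self, start: int, end: int) -> list:
        # Slice history on demand instead of holding it all in the
        # model's context window; the harness can filter or summarize
        # the slice before feeding it back.
        with open(self.path) as f:
            events = [json.loads(line) for line in f]
        return events[start:end]

path = os.path.join(tempfile.mkdtemp(), "session.jsonl")
log = JsonlEventLog(path)
for i in range(5):
    log.append({"seq": i, "type": "tool_result"})
print(log.get_events(1, 3))  # only the window the harness asked for
```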
The model references credentials by logical name: execute("http_fetch", {"url": "...", "credential": "github"}) — it only knows the logical name "github". The real token is injected by the Vault inside the sandbox, and tool return values are redacted before being written to the event log.
| Dimension | Anthropic Managed Agents | Microsoft Foundry Hosted Agents |
|---|---|---|
| Core abstraction | Reasoning orchestrator / execution sandbox / durable event log (Anthropic: Brain / Hands / Session) | Hosting Adapter + Agent Framework |
| Sandbox strategy | Cattle (destroyed after use) | Container (managed lifecycle) |
| Credential security | Vault proxy injection, invisible to the model | Managed Identity + RBAC |
| Context management | External event log, sliced on demand | Responses API session management |
| Observability | Event log + custom | OpenTelemetry → Azure Monitor |
| Scaling | Many orchestrators Ă— many sandboxes, concurrent | minReplicas / maxReplicas |
| Cross-model support | Claude model family | Many providers (Claude included) |
The core philosophy of both architectures aligns tightly: decouple reasoning (the orchestrator), tool execution (the sandbox layer), and memory (the event log) so each layer can evolve independently. The difference is emphasis:
That’s the value of this project: it proves the two philosophies can live together in one codebase.
Back in 2016 the industry was still arguing whether microservices were over-engineering. Today nobody doubts the value of service decoupling at scale.
I believe 2025–2026 is the “microservices moment” for Agent engineering — people are starting to realize that an Agent that couples reasoning, tool execution, and state memory inside a single monolithic container simply cannot keep pace with model evolution.
Anthropic’s Managed Agents supplies the architectural philosophy; Microsoft’s Foundry Hosted Agents supplies the enterprise infrastructure; and this open-source project shows that they are not an either/or choice — they are complementary, and they make each other better.
Join us for a new 3‑part livestream series where we deploy AI agents on Microsoft Foundry using Microsoft Agent Framework and LangChain/LangGraph, then level them up with tools, observability, and evals.
You'll learn how to:
Throughout the series, we’ll use Python for all examples and share full code so you can run everything yourself in your own Foundry projects.
👉 Register for the full series.
Spanish speaker? ¡Tendremos una serie para hispanohablantes! Regístrese aquí.
In addition to the live streams, you can also join the Microsoft Foundry Discord to ask follow-up questions after each stream.
If you are new to generative AI with Python, start with our 9-part Python + AI series, which covers topics such as LLMs, embeddings, RAG, tool calling, MCP, and agents. If you are new to Microsoft Agent Framework, watch our 6-part Python + Agent series which dives deep into agents and workflows.
To learn more about each live stream or register for individual sessions, scroll down:
27 April, 2026 | 5:00 PM - 6:00 PM UTC
Register for the stream on Reactor
In our first session, we'll deploy agents built with Microsoft Agent Framework (the successor of Autogen and Semantic Kernel). Starting with a simple agent, we'll add Foundry tools like Code Interpreter, ground the agent in enterprise data with Foundry IQ, and finally deploy multi-agent workflows. Along the way, we'll use the Foundry UI to interact with the hosted agent, testing it out in the playground and observing the traces from the reasoning and tool calls.
29 April, 2026 | 5:00 PM - 6:00 PM UTC
Register for the stream on Reactor
In our second session, we'll deploy agents built with the popular open-source libraries LangChain and LangGraph. Starting with a simple agent, we'll add Foundry tools like Bing Web Search, ground the agent in Foundry IQ, then deploy more complex agents using the LangGraph orchestration framework. Along the way, we'll use the Foundry UI to interact with the hosted agent, testing it out in the playground and observing the traces from the reasoning and tool calls.
30 April, 2026 | 5:00 PM - 6:00 PM UTC
Register for the stream on Reactor
In our third session, we'll ensure that our AI agents are producing high-quality outputs and operating safely and responsibly. First we'll explore what it means for agent outputs to be high quality, using built-in evaluators to check overall task adherence and then building custom evaluators for domain-specific checks. With Foundry hosted agents, we can run bulk evaluations on demand, set up scheduled evaluations, and even enable continuous evaluation on a subset of live agent traces. Next we'll discuss safety systems that can be layered on top of agents and audit agents for potential safety risks. To improve compliance with an organization's goals, we can configure custom policies and guardrails that can be shared across agents. Finally, we can ensure that adversarial inputs can't produce unsafe outputs by running automated red-teaming scans on agents, and even schedule those to run regularly as well. With all of these evaluation and compliance features available in Foundry, you can have more confidence hosting your agents in production.