Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

AI Agent and Copilot Podcast: Microsoft Agent Framework Enables Complex, Multi-Agent Actions


Welcome to this AI Agent & Copilot Podcast, where we analyze the opportunities, impact, and outcomes that are possible with AI.

In this episode, I speak with Will Hawkins and Rich Hawkins of RitewAI about the forthcoming Microsoft Agent Framework and their impressions from testing the open-source platform for building, orchestrating, and deploying AI agents.

Highlights

Microsoft Agent Framework Overview and Introduction (1:36)

Will explains that the new framework effectively combines Semantic Kernel and AutoGen. Semantic Kernel was a cornerstone of Azure AI Agent Service and Copilot Studio. AutoGen is a research project on multi-agent orchestration systems. The agent framework simplifies the stack and adds newer features, supporting complex multi-agent actions.

Graph Based Workflow Functions (2:51)

Rich explains that graph-based workflows simplify orchestration, offering pre-made options such as concurrent, sequential, and group chat. Custom workflows can also be defined declaratively as a YAML string or document. One example process orchestrates multiple agents to prioritize sales lead data and draft emails.
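To make the two pre-made patterns concrete, here is a minimal asyncio sketch of sequential versus concurrent orchestration. This is illustrative only and does not use the Microsoft Agent Framework API; the "agents" (`prioritize_leads`, `draft_email`) are made-up stand-ins modeled loosely on the sales-lead example from the episode.

```python
import asyncio

# Illustrative only: these "agents" are plain async functions, not the
# Microsoft Agent Framework API. Each receives the running transcript
# and returns it with its own contribution appended.
async def prioritize_leads(transcript: list[str]) -> list[str]:
    return transcript + ["leads prioritized"]

async def draft_email(transcript: list[str]) -> list[str]:
    return transcript + ["email drafted"]

async def run_sequential(agents, transcript):
    # Sequential orchestration: each agent sees its predecessor's output.
    for agent in agents:
        transcript = await agent(transcript)
    return transcript

async def run_concurrent(agents, transcript):
    # Concurrent orchestration: every agent starts from the same input,
    # and the results are gathered for a later merge step.
    return await asyncio.gather(*(agent(list(transcript)) for agent in agents))

seq = asyncio.run(run_sequential([prioritize_leads, draft_email], ["new lead data"]))
con = asyncio.run(run_concurrent([prioritize_leads, draft_email], ["new lead data"]))
```

The sequential run produces one transcript that grows step by step, while the concurrent run yields one independent result per agent.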

Multi-Vendor Support (5:14)

Will discusses the importance of being able to choose model providers due to potential outages or changes. (The Anthropic Claude outage that was taking place at the time the discussion was recorded is a good example.) The agent framework also allows users to bring their own models, which is crucial for specific use cases like lead identification or analysis.

The framework also supports Agent2Agent (A2A) and the Model Context Protocol (MCP); Rich explains that these protocols enable the integration of pre-made agents and tools into workflows. Will adds that they allow different agents and tools to be connected more seamlessly, like Lego bricks. The protocols enhance workflow and orchestration capabilities by letting agents share tools and communicate more effectively.

Community Support (9:29)

Rich evaluates the documentation and open-source community support positively, highlighting an active Discord channel and weekly office hours. Will emphasizes the open source nature of the agent framework and the collaborative approach from Microsoft. The community is actively involved in developing and contributing to the framework. The open source mindset encourages collaboration and innovation among developers.

Framework Comparisons (10:56)

Will notes that their focus has been on Microsoft tools, but they have compared the framework with open-source projects like LangChain. The key differentiator is central governance by Microsoft, blending community-driven innovation with centralized management. Rich adds that Microsoft's managed runtime and SDK provide a stable and reliable foundation for development.

The post AI Agent and Copilot Podcast: Microsoft Agent Framework Enables Complex, Multi-Agent Actions appeared first on Cloud Wars.


The 2027 Chevy Bolt is the McRib of the automotive world

TechCrunch drove the 2027 Chevy Bolt and found that incremental improvements build on a solid basic recipe to deliver an affordable EV.

Microsoft Sharpens AI Toolkit for VS Code in Foundry Update

Microsoft's February 2026 Foundry update includes broader platform changes, but the most immediate developer-facing news for VS Code users is an AI Toolkit refresh centered on tool discovery, agent debugging, and test-style evaluation.

Livestream: UI Freezes in JetBrains IDE Plugins and How to Avoid Them


UI freezes are among the most frustrating issues plugin developers encounter. Investigating them can be particularly challenging due to the complexity of concurrent programming. What makes things even trickier is that UI freezes in JetBrains IDEs are not always caused by heavy work on the event dispatch thread.

On Thursday, March 19, at 03:00 PM UTC, join us for a live session with Yuriy Artamonov, Product Manager for the IntelliJ Platform at JetBrains, hosted by Patrick Scheibe, IntelliJ Platform Developer Advocate. Together, they’ll explore common causes of UI freezes in JetBrains IDE plugins and share practical strategies to prevent them.

During the livestream, we’ll dig into several surprisingly common problems that affect many plugins, including:

  • Long-running non-cancellable read actions
  • Network calls made while holding a read lock
  • Misuse of external processes


You’ll also learn:

  • Why APIs like ReadAction.compute and runReadAction can block write actions and freeze the UI even when your code doesn’t run on the event dispatch thread
  • Which patterns to avoid when working with read actions and background tasks
  • How to write cancellable, freeze-safe plugin code that keeps JetBrains IDEs fast and responsive
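The cancellable-work pattern the session covers lives in Kotlin/Java plugin APIs, but the underlying principle is general: long-running work should poll a cancellation flag between small units of work and bail out promptly, rather than blocking until completion. A generic Python sketch of that principle (names here are illustrative, not IntelliJ Platform APIs):

```python
import threading

# Illustrative only: this models cooperative cancellation in general,
# not the IntelliJ Platform's read-action APIs. Long work checks a
# cancellation flag between items instead of running to completion.
def long_scan(items, cancelled: threading.Event):
    processed = []
    for item in items:
        if cancelled.is_set():       # check between small units of work
            return processed, False  # partial result, not finished
        processed.append(item * 2)   # stand-in for real work
    return processed, True

cancel = threading.Event()
done, finished = long_scan(range(5), cancel)       # runs to completion

cancel.set()
partial, finished2 = long_scan(range(5), cancel)   # bails out immediately
```

In a JetBrains IDE, the analogous check lets a pending write action interrupt a read action quickly, which is what keeps the UI responsive.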

Whether you’re new to plugin development or have already published plugins on JetBrains Marketplace, this session will help you better understand common causes of UI freezes and how to avoid them in your own code.

At the end of the session, stay for a live Q&A where you’ll have the chance to ask questions and discuss real-world scenarios with our experts.

Speaking to you:

Yuriy Artamonov

Yuriy Artamonov is a Software Developer in IntelliJ Platform at JetBrains. Yuriy has been developing libraries, frameworks, and tools for developers for the past 15 years. As a member of the JetBrains IntelliJ Platform team, Yuriy loves introducing useful new tools to the daily routines of millions of developers around the world. His professional aspiration is to raise the bar of the software industry by helping developers improve their craft.


Under the hood: Security architecture of GitHub Agentic Workflows


Whether you’re an open-source maintainer or part of an enterprise team, waking up to documentation fixes, new unit tests, and refactoring suggestions can be a true “aha” moment. But automation also raises an important concern: how do you put guardrails on agents that have access to your repository and the internet? Will you wonder whether your agent relied on documentation from a sketchy website, or pushed a commit containing an API token? What if it decides one day to add noisy comments to every open issue? Automations must be predictable to offer durable value.

But what is the safest way to add agents to existing automations like CI/CD? Agents are non-deterministic: They must consume untrusted inputs, reason over repository state, and make decisions at runtime. Letting agents operate in CI/CD without real-time supervision allows you to scale your software engineering, but it also requires novel guardrails to keep you from creating security problems.

GitHub Agentic Workflows run on top of GitHub Actions. By default, everything in an action runs in the same trust domain. Rogue agents can interfere with MCP servers, access authentication secrets, and make network requests to arbitrary hosts. A buggy or prompt-injected agent with unrestricted access to these resources can act in unexpected and insecure ways.

That’s why security is baked into the architecture of GitHub Agentic Workflows. We treat agent execution as an extension of the CI/CD model—not as a separate runtime. We separate open‑ended authoring from governed execution, then compile a workflow into a GitHub Action with explicit constraints such as permissions, outputs, auditability, and network access.

This post explains how we built Agentic Workflows with security in mind from day one, starting with the threat model and the security architecture that it needs.

Threat model

There are two properties of agentic workflows that change the threat model for automation.

First, agents’ ability to reason over repository state and act autonomously makes them valuable, but it also means they cannot be trusted by default—especially in the presence of untrusted inputs.

Second, GitHub Actions provides a highly permissive execution environment. A shared trust domain is a feature for deterministic automation, enabling broad access, composability, and good performance. But when combined with untrusted agents, having a single trust domain can create a large blast radius if something goes wrong.

Under this model, we assume an agent will try to read and write state that it shouldn’t, communicate over unintended channels, and abuse legitimate channels to perform unwanted actions. By default, GitHub Agentic Workflows run in a strict security mode with this threat model in mind, and their design is guided by four security principles: defense in depth, don’t trust agents with secrets, stage and vet all writes, and log everything.

Defend in depth

GitHub Agentic Workflows provide a layered security architecture consisting of substrate, configuration, and planning layers. Each layer limits the impact of failures above it by enforcing distinct security properties that are consistent with its assumptions.

[Diagram: three-layer architecture. Planning layer: Safe Outputs MCP (GitHub write operations), call filtering (call availability, volume), output sanitization (secret removal, moderation). Configuration layer: compiler (GH AW extension), firewall policies (allowlist), MCP config (Docker image, auth token). Substrate layer: Actions runner VM (OS, hypervisor), Docker containers (Docker daemon, network), trusted containers (firewall, MCP gateway, API proxy).]

The substrate layer rests on a GitHub Actions runner virtual machine (VM) and several trusted containers that limit the resources an agent can access. Collectively, the substrate level provides isolation among components, mediation of privileged operations and system calls, and kernel-enforced communication boundaries. These protections hold even if an untrusted user-level component is compromised and executes arbitrary code within its container isolation boundary.

Above the substrate layer is a configuration layer that includes declarative artifacts and the toolchains that interpret them to instantiate a secure system structure and connectivity. The configuration layer dictates which components are loaded, how components are connected, what communication channels are permitted, and what privileges are assigned. Externally minted tokens, such as agent API keys and GitHub access tokens, are critical inputs that bound components’ external effects—configuration controls which tokens are loaded into which containers.

The final layer of defense is the planning layer. The configuration layer dictates which components exist and how they communicate, but it does not dictate which components are active over time. The planning layer’s primary responsibility is to create a staged workflow with explicit data exchanges between stages. The safe outputs subsystem, described in greater detail below, is the primary instance of secure planning.

Don’t trust agents with secrets

From the beginning, we wanted workflow agents to have zero access to secrets. Agentic workflows execute as GitHub Actions, in which components share a single trust domain on top of the runner VM. In that model, sensitive material like agent authentication tokens and MCP server API keys reside in environment variables and configuration files visible to all processes in the VM.

This is dangerous because agents are susceptible to prompt injection: Attackers can craft malicious inputs like web pages or repository issues that trick agents into leaking sensitive information. For example, a prompt-injected agent with access to shell-command tools can read configuration files, SSH keys, Linux /proc state, and workflow logs to discover credentials and other secrets. It can then upload these secrets to the web or encode them within public-facing GitHub objects like repository issues, pull requests, and comments.

Our first mitigation was to isolate the agent in a dedicated container with tightly controlled egress: firewalled internet access, MCP access through a trusted MCP gateway, and LLM API calls through an API proxy. To limit internet access, agentic workflows create a private network between the agent and firewall. The MCP gateway runs in a separate trusted container, launches MCP servers, and has exclusive access to MCP authentication material.

Although agents like Claude, Codex, and Copilot must communicate with an LLM over an authenticated channel, we avoid exposing those tokens directly to the agent’s container. Instead, we place LLM auth tokens in an isolated API proxy and configure agents to route model traffic through that proxy.
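The proxy's job can be reduced to one move: hold the token itself and inject the auth header on the way out, so the agent never sees the credential. A minimal sketch of that idea, with hypothetical names (`PROXY_TOKEN`, `forward`); the real proxy is a separate container mediating network traffic, not an in-process function.

```python
# Illustrative sketch of the API-proxy idea: the LLM auth token lives
# only inside the proxy, which adds the Authorization header before
# forwarding. All names and values here are made-up examples.
PROXY_TOKEN = "sk-example"  # held by the proxy, never by the agent

def forward(agent_request: dict) -> dict:
    headers = dict(agent_request.get("headers", {}))
    headers["Authorization"] = f"Bearer {PROXY_TOKEN}"  # injected proxy-side
    return {"url": agent_request["url"],
            "headers": headers,
            "body": agent_request.get("body")}

# The agent emits an unauthenticated request; the proxy authenticates it.
incoming = {"url": "https://api.example.com/v1/chat", "body": "{}"}
outgoing = forward(incoming)
```

Even a fully prompt-injected agent can only ask the proxy to make model calls on its behalf; it cannot exfiltrate a token it never held.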

[Diagram: container topology. A Codex token is held by an api-proxy container that talks to OpenAI. The agent container (in a chroot on the host) communicates over HTTP with the gh-aw-firewall container, which forwards to the gh-aw-mcpg gateway container (with access to the host Docker socket), which talks over stdio to a GitHub MCP container holding a GitHub PAT.]

Zero-secret agents require a fundamental trade-off between security and utility. Coding workloads require broad access to compilers, interpreters, scripts, and repository state, but expanding the in-container setup would duplicate existing actions provisioning logic and increase the set of network destinations that must be allowed through the firewall.

Instead, we carefully expose host files and executables using container volume mounts and run the agent in a chroot jail. We start by mounting the entire VM host file system read-only at /host. We then overlay selected paths with empty tmpfs layers and launch the agent in a chroot jail rooted at /host. This approach keeps the host-side setup intact while constraining the agent’s writable and discoverable surface to what it needs for its job.
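The mount layout described above can be sketched as the kind of `docker run` invocation that would produce it. The flag values and the helper below are illustrative assumptions, not the exact GH AW implementation.

```python
# Sketch of the container setup described in the post: mount the whole
# host read-only at /host, overlay selected paths with tmpfs so they are
# writable, then chroot the agent into /host. Illustrative only.
def agent_container_cmd(image: str, tmpfs_overlays: list[str]) -> list[str]:
    cmd = ["docker", "run", "--rm",
           "-v", "/:/host:ro"]             # whole host FS, read-only, at /host
    for path in tmpfs_overlays:            # writable scratch over chosen paths
        cmd += ["--tmpfs", f"/host{path}"]
    cmd += [image,
            "chroot", "/host", "agent"]    # agent sees /host as its root
    return cmd

cmd = agent_container_cmd("agent-image", ["/tmp", "/home/runner/work"])
```

The appeal of this shape is that host-side provisioning (toolchains, checked-out repository) stays intact and visible, while the agent's writable and discoverable surface is just the tmpfs overlays.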

Stage and vet all writes

Prompt-injected agents can still do harm even if they do not have access to secrets. For example, a rogue agent could spam a repository with pointless issues and pull requests to overwhelm repository maintainers, or add objectionable URLs and other content in repository objects.

To prevent this kind of behavior, the agentic workflows compiler decomposes workflows into explicit stages and defines, for each stage:

  • The active components and permissions (read vs. write)
  • The data artifacts emitted by that stage
  • The admissible downstream consumers of those artifacts

While the agent runs, it can read GitHub state through the GitHub MCP server and can only stage its updates through the safe outputs MCP server. Once the agent exits, write operations that have been buffered by the safe outputs MCP server are processed by a suite of safe outputs analyses.

[Diagram: safe outputs flow. GitHub state feeds an untrusted agent, a read-only GitHub MCP server, and a write-buffered safe outputs MCP server; buffered writes then pass through three deterministic analyses (remove secrets, moderate content, filter operations) before flowing back to GitHub.]

First, safe outputs allow workflow authors to specify which write operations an agent can perform. Authors can choose which subset of GitHub updates are allowed, such as creating issues, comments, or pull requests. Second, safe outputs limits the number of updates that are allowed, such as restricting an agent to creating at most three pull requests in a given run. Third, safe outputs analyzes update content to remove unwanted patterns, such as output sanitization to remove URLs. Only artifacts that pass through the entire safe outputs pipeline can be passed on, ensuring that each stage’s side effects are explicit and vetted.
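The three checks compose into a simple vetting pipeline. A condensed sketch, with made-up policy values (`ALLOWED_OPS`, `MAX_PER_OP`) rather than the actual safe-outputs configuration:

```python
import re

# Condensed sketch of the three safe-outputs checks described above:
# 1) filter disallowed operations, 2) cap operation counts,
# 3) sanitize content. Policy values are made-up examples.
ALLOWED_OPS = {"create_issue", "create_comment"}
MAX_PER_OP = {"create_issue": 5, "create_comment": 10}

def vet(staged_ops: list[dict]) -> list[dict]:
    accepted, counts = [], {}
    for op in staged_ops:
        kind = op["kind"]
        if kind not in ALLOWED_OPS:                      # 1. filter operations
            continue
        counts[kind] = counts.get(kind, 0) + 1
        if counts[kind] > MAX_PER_OP.get(kind, 1):       # 2. limit volume
            continue
        body = re.sub(r"https?://\S+", "[url removed]",  # 3. sanitize content
                      op["body"])
        accepted.append({"kind": kind, "body": body})
    return accepted

out = vet([
    {"kind": "create_issue", "body": "See https://evil.example/x for details"},
    {"kind": "delete_branch", "body": "oops"},  # not in the allowlist
])
```

Because the agent has already exited when this pipeline runs, it cannot adapt to or argue with the checks; only operations that survive all three stages reach GitHub.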

Log everything

Even with zero secrets and vetted writes, an agent can still transform repository data and invoke tools in unintended ways or try to break out of the constraints that we impose upon it. Agents are determined to accomplish their tasks by any means and have a surprisingly deep toolbox of tricks for doing so. If an agent behaves unexpectedly, post-incident analysis requires visibility into the complete execution path.

Agentic workflows make observability a first-class property of the architecture by logging extensively at each trust boundary. Network and destination-level activity is recorded at the firewall layer; model request/response metadata and authenticated requests are captured by the API proxy; and tool invocations are logged by the MCP gateway and MCP servers. We also add internal instrumentation to the agent container to audit potentially sensitive actions like environment variable accesses. Together, these logs support end-to-end forensic reconstruction, policy validation, and rapid detection of anomalous agent behavior.
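Structurally, per-boundary audit logging is just each boundary appending a structured record to a shared trail so a run can be reconstructed end to end. A sketch under that assumption; the record fields and event names are illustrative, not the actual GH AW log schema.

```python
import json
import time

# Illustrative per-boundary audit trail: each trust boundary appends a
# structured JSON record. Field names and events are made-up examples.
AUDIT_LOG: list[str] = []

def audit(boundary: str, event: str, **details):
    AUDIT_LOG.append(json.dumps({
        "ts": time.time(),
        "boundary": boundary,
        "event": event,
        "details": details,
    }))

# Events as they might be recorded during one agent step:
audit("firewall", "egress", host="api.github.com", allowed=True)
audit("api-proxy", "model_call", model="example-model", tokens=812)
audit("mcp-gateway", "tool_call", server="github", tool="list_issues")

events = [json.loads(line) for line in AUDIT_LOG]
```

Because every boundary writes to the same trail, a single pass over it reconstructs the full path of a run: what the agent fetched, what it asked the model, and which tools it invoked.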

Pervasive logging also lays the foundation for future information-flow controls. Every location where communication can be observed is also a location where it can be mediated. Agentic workflows already support the GitHub MCP server’s lockdown mode, and in the coming months, we’ll introduce additional safety controls that enforce policies across MCP servers based on visibility (public vs. private) and the role of a repository object’s author.

What’s next?

We’d love for you to be involved! Share your thoughts in the Community discussion or join us (and tons of other awesome makers) in the #agentic-workflows channel of the GitHub Next Discord. We look forward to seeing what you build with GitHub Agentic Workflows. Happy automating, and keep an eye out for more updates!

The post Under the hood: Security architecture of GitHub Agentic Workflows appeared first on The GitHub Blog.


Organizing productive platform teams

So many platforms feel heavy because they mirror the organization, not the architecture the organization claims to want.