Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Seattle’s tech paradox: Amazon’s layoffs collide with the AI boom — or is it a bubble?

1 Share
Image created by Google Gemini based on the audio of this week’s GeekWire Podcast.

This week on the GeekWire Podcast: Why is Amazon laying off 14,000 people in the middle of an AI boom — and is it really a boom at all? We dig into the contradiction at the heart of Seattle’s tech scene, discussing Amazon CEO Andy Jassy’s “world’s largest startup” rationale and what it says about the company’s culture and strategy. And we debate whether AI progress represents true transformation or the familiar signs of a tech bubble in the making.

Then we examine the vision of Cascadia high-speed rail — the ambitious plan to connect Portland, Seattle, and Vancouver, B.C., by bullet train. Is it the regional infrastructure needed to power the Pacific Northwest’s next chapter, or an expensive dream looking for a purpose?

With GeekWire co-founders John Cook and Todd Bishop

Related headlines from the week

Amazon layoffs

Amazon earnings

Microsoft Azure, earnings and OpenAI

Seattle-Portland-Vancouver

Subscribe to GeekWire in Apple Podcasts, Spotify, or wherever you listen.

Read the whole story
alvinashcraft
3 hours ago
reply
Pennsylvania, USA
Share this story
Delete

Making Desktop Frameworks More Accessible with Electron | Episode 4 | The GitHub Podcast

From: GitHub
Duration: 21:01
Views: 314

In this episode of the GitHub Podcast, Abby and Kedasha are joined by Keeley Hammond, a longtime maintainer of the Electron project. They explore the inner workings of one of the most widely used open source desktop frameworks, dive into how Electron empowers developers to build cross-platform desktop apps with web technologies, and discuss what it takes to build a welcoming and sustainable open source community at scale. The conversation touches on contributor culture, project governance, automation tools, and the role of AI in open source, both its promise and its challenges.

Links mentioned in the episode:

https://github.com/electron/electron
https://github.com/electron/forge
https://summerofcode.withgoogle.com/
https://openjsf.org/
https://github.com/vert-d
https://www.weforum.org/reports/the-future-of-jobs-report-2025/
https://github.com/electron/governance
https://github.com/electron/electron/issues
https://github.com/unjs/issue-triage

The GitHub Podcast is hosted by Abigail Cabunoc Mayes, Kedasha Kerr and Cassidy Williams. The show is edited, mixed and produced by Victoria Marin. Thank you to our production partner, editaudio.

— CHAPTERS —

00:00 - Introduction: a deep dive into Electron
00:54 - Meet the guest: Keeley Hammond
01:56 - What is Electron and who uses it?
03:27 - The secret to a welcoming contributor culture
04:45 - How to be intentional about community building
06:28 - The biggest misconception about Electron
08:05 - The dynamics of paid vs. volunteer maintainers
12:32 - A lesson for all maintainers: the power of automation
14:20 - A spicy take: the challenge of AI-generated spam
17:12 - Why critical thinking is a top skill for the future
19:19 - Final thoughts and how to get involved

Stay up-to-date on all things GitHub by subscribing and following us at:
YouTube: http://bit.ly/subgithub
Blog: https://github.blog
X: https://twitter.com/github
LinkedIn: https://linkedin.com/company/github
Instagram: https://www.instagram.com/github
TikTok: https://www.tiktok.com/@github
Facebook: https://www.facebook.com/GitHub/

About GitHub:
It’s where over 100 million developers create, share, and ship the best code possible. It’s a place for anyone, from anywhere, to build anything—it’s where the world builds software. https://github.com


Random.Code() - Minimizing Expectation Naming Structure in Rocks, Part 8

From: Jason Bock
Duration: 1:05:45
Views: 8

In this stream, I'll work on a workaround to eliminate explicit implementation type name clashes in the expectation API rework I've been doing in Rocks.

https://github.com/JasonBock/Rocks/issues/394


Organizations as Ecosystems — Understanding Complexity, Innovation, and the Three-Body Problem at Work With Simon Holzapfel


BONUS: Organizations as Ecosystems — Understanding Complexity, Innovation, and the Three-Body Problem at Work

In this fascinating conversation about complex adaptive systems, Simon Holzapfel helps us understand why traditional planning and control methods fail in knowledge work — and what we can do instead.

Understanding Ecosystems vs. Systems

"Complex adaptive systems are complex in nature and adaptive in that they evolve over time. That's different from a static system." — Simon Holzapfel

 

Simon introduces the crucial distinction between mechanical systems and ecosystems. While mechanical systems are predictable and static, ecosystems — like teams and organizations — are complex, adaptive, and constantly evolving. The key difference lies in the interactions among team members, which create emergent properties that cannot be predicted by analyzing individuals separately. Managers often fall into the trap of focusing on individuals rather than the interactions between them, missing where the real magic happens. This is why understanding your organization as an ecosystem, not a machine, fundamentally changes how you lead.

In this segment, we refer to the Stella systems modeling application.

The Journey from Planning to Emergence

"I used to come into class with a lesson plan — doop, doop, doop, minute by minute agenda. And then what I realized is that I would just completely squash those questions that would often emerge from the class." — Simon Holzapfel

 

Simon shares his transformation from rigid classroom planning to embracing emergence. As a history and economics teacher for 10 years, he learned that over-planning kills the spontaneous insights that make learning powerful. The same principle applies to leadership: planning is essential, but over-planning wastes time and prevents novelty from emerging. The key is separating strategic planning (the "where" and "why") from tactical execution (the "how"), letting teams make local decisions while leaders focus on alignment with the bigger picture.

"Innovation Arrives Stochastically"

"Simply by noticing the locations where you've had your best ideas, we notice the stochasticness of arrival. Might be the shower, might be on a bike ride, might be sitting in traffic, might be at your desk — but often not." — Simon Holzapfel

 

Simon unpacks the concept of stochastic emergence — the idea that innovation cannot be scheduled or predicted in advance. Stochastic means something is predictable over large datasets but not in any given moment. You know you'll have ideas if you give yourself time and space, but you can't predict when or where they'll arrive. This has profound implications for managers who try to control when and how innovation happens. Knowledge work is about creating things that haven't existed before, so emergence is what we rely on. Try to squash it with too much control, and it simply won't happen.

In this segment, we refer to the Systems Innovation YouTube channel.

The Three-Body Problem: A Metaphor for Teams

"When you have three nonlinear functions working at the same time within a system, you have almost no ability to predict its future state beyond just some of the shortest time series data." — Simon Holzapfel

 

Simon uses the three-body problem from physics as a powerful metaphor for organizational complexity. In physics, when you have three bodies (like planets) influencing each other, prediction becomes nearly impossible. The same is true in business — think of R&D, manufacturing, and sales as three interacting forces. The lesson: don't think you can master this complexity. Work with it. Understand it's a system. Most variability comes from the system itself, not from any individual person. This allows us to depersonalize problems — people aren't good or bad, systems can be improved. When teams understand this, they can relax and stop treating every unpredictable moment as an emergency.

Coaching Leaders to Embrace Uncertainty

"I'll start by trying to read their comfort level. I'll ask about their favorite teachers, their most hated teachers, and I'll really try to bring them back to moments in time that were pivotal in their own development." — Simon Holzapfel

 

How do you help analytical, control-oriented leaders embrace complexity and emergence? Simon's approach is to build rapport first, then gently introduce concepts based on each leader's background. For technical people who prefer math, he'll discuss narrow tail distributions and fat tails. For humanities-oriented leaders, he uses narrative and storytelling. The goal is to get leaders to open up to possibilities without feeling diminished. He might suggest small experiments: "Hold your tongue once in a meeting" or "Ask questions instead of making statements." These incremental changes help managers realize they don't have to be superhuman problem-solvers who control everything.

Giving the Board a Number: The Paradox of Prediction

"Managers say we want scientific management, but they don't actually want that. They want predictive management." — Simon Holzapfel

 

Simon addresses one of the biggest tensions in agile adoption: leaders who say "I just need to give the board a number" while also wanting innovation and adaptability. The paradox is clear — you cannot simultaneously be open to innovation and emergent possibilities while executing a predetermined plan with perfect accuracy. This is an artifact of management literature that promoted the "philosopher king" manager who knows everything. But markets are too movable, consumer tastes vary too much, and knowledge work is too complex for any single person to control. The burnout we see in leaders often comes from trying to achieve an impossible standard.
In this segment, we refer to the episodes with David Marquet.

Resources for Understanding Complexity

"Eric Beinhocker's book called 'The Origin of Wealth' is wonderful. It's a very approachable and well-researched piece that shows where we've been and where we're going in this area." — Simon Holzapfel

 

Simon recommends two key resources for anyone wanting to understand complexity and ecosystems. First, Eric Beinhocker's "The Origin of Wealth" explains how we developed flawed economic assumptions based on 19th-century Newtonian physics, and why we need to evolve our understanding. Second, the Systems Innovation YouTube channel offers brilliant short videos perfect for curious, open-minded managers. Simon suggests a practical approach: have someone on your team watch a video and share what they learned. This creates shared language around complexity and makes the concepts less personal and less threatening.

The Path Forward: Systems Over Individuals

"As a manager, our goal is to constantly evaluate the performance of the system, not the people. We can always put better systems in place. We can always improve existing systems. But you can't tell people what to do — it's not possible." — Simon Holzapfel

 

The conversation concludes with a powerful insight from Deming's work: about 95% of a system's productivity is linked to the system itself, not individual performance. This reframes the manager's role entirely. Instead of trying to control people, focus on improving systems. Instead of treating burnout as individual failure, see it as information that something in the system isn't working. Organizations are ever-changing ecosystems with dynamic properties that can only be observed, never fully predicted. This requires a completely different way of thinking about management — one that embraces uncertainty, values emergence, and trusts teams to figure things out within clear strategic boundaries.

Recommended Resources

As recommended resources for further reading, Simon suggests Eric Beinhocker's "The Origin of Wealth" and the Systems Innovation YouTube channel.

 

About Simon Holzapfel

 

Simon Holzapfel is an educator, coach, and learning innovator who helps teams work with greater clarity, speed, and purpose. He specializes in separating strategy from tactics, enabling short-cycle decision-making and higher-value workflows. Simon has spent his career coaching individuals and teams to achieve performance with deeper meaning and joy. Simon is also the author of the Equonomist newsletter on Substack, where he explores the intersection of economics, equality, and equanimity in the workplace.

 

You can link with Simon Holzapfel on LinkedIn.





Download audio: https://traffic.libsyn.com/secure/scrummastertoolbox/20251101_Simon_Holzapfel_BONUS.mp3?dest-id=246429

OpenAI’s Apps SDK: A Developer’s Guide to Getting Started


“The ChatGPT app ecosystem is built on trust,” OpenAI’s guidelines for app developers proclaim.

As the AI ecosystem expands, OpenAI’s ChatGPT needs help drilling down on local information — and this is primarily what the OpenAI Apps SDK, introduced at the beginning of the month, is for. The problem for OpenAI is that it is actually the web that is built on trust; but for all intents and purposes, OpenAI wants to bypass that.

The development process is relatively austere and unapproachable, so this post just focuses on the basic concepts you need to understand before you can get down to development.

What Are ChatGPT Apps and What Is Their Purpose?

What exactly is a ChatGPT App for? The primary point of an app is to enhance a ChatGPT conversation, nothing more. Think of them as visual experiences that appear in various guises (for instance, carousels) that integrate seamlessly into a conversation without breaking the user context.

OpenAI is relying on your app’s metadata to accurately describe the questions that it can answer — either as a whole, or as part of the app. We will see that come up later. But beyond this, I imagine this will play out like the Apple App Store, with some apps being favored by OpenAI and some being blackballed. Without favor, your app will not be able to act as a merchant.

Let’s take a quick look at the effect OpenAI is hoping for.

Understanding the Inline App Display

The inline display mode appears directly in the flow of the conversation. Imagine that you are on ChatGPT and are asking about pizzas in San Francisco:

Let’s take a look, from top to bottom, at the various fixed aspects:

  • At the top is the query; this will be what your app’s metadata suggests it can answer.
  • Then, just above the main visuals, is an icon with the app name “Pizzazz.”
  • Then we see the inline display, which is your “app content.”
  • Finally, it tails off with a model-generated response. (Although it might also be largely driven by your app; the documentation is ambiguous here.)

I have no doubt the introduction of OpenAI’s Atlas browser will increase the interest in providing apps — much like the spread of the web increased interest in websites.

In fact you can even ask Atlas to run an App example. This one didn’t work, but at least it tried:

But don’t read anything into the above until you’ve understood what the main technology you need is.

The Core Technology: Model Context Protocol (MCP)

The central star of any OpenAI App is the Model Context Protocol (or MCP), which fortunately has been written about quite a bit in recent months. We’ve even written about rich components through MCP, so even that isn’t new. But don’t read any further until you have grokked the basics of MCP, because the Apps SDK leans heavily on it — treating your app as a set of tool calls.

The idea is that you describe enough about your tools that ChatGPT knows when it can use them, and will return some type of fixed data format that it can plug into a design.

But why MCP? A lot of the reasons come down to its protocol-agnostic nature, and use of natural language in descriptions — which large language models are good with. OAuth 2.1 flows are used for access control. Plus, you can easily run local MCP servers.

So, your minimal MCP server for Apps SDK has to implement these three capabilities at least:

  1. It must list (or advertise) all of its tools, and the shape of the incoming and return data.
  2. It must respond to the model calling the tool via call_tool.
  3. The tool must then return structured content, and optionally point to anything else needed by the ChatGPT client to render your content.
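
The three requirements above can be sketched as a tiny, framework-free tool registry in plain JavaScript. This is a hypothetical illustration of the contract's shape, not the real MCP SDK API; the tool name, description, and data shapes are invented:

```javascript
// Hypothetical minimal MCP-style server core: a registry that can advertise
// tools, dispatch call_tool requests, and return structured content.
const tools = new Map();

function registerTool(name, description, inputShape, handler) {
  tools.set(name, { name, description, inputShape, handler });
}

// 1. Advertise all tools and the shape of their incoming data.
function listTools() {
  return [...tools.values()].map(({ name, description, inputShape }) => ({
    name,
    description,
    inputShape,
  }));
}

// 2. Respond to the model calling a tool.
function callTool(name, args) {
  const tool = tools.get(name);
  if (!tool) throw new Error(`Unknown tool: ${name}`);
  // 3. The handler returns structured content for the client to render.
  return tool.handler(args);
}

// Example tool with an action-oriented description, as the docs recommend.
registerTool(
  "pizza-search",
  "Use this when the user wants to find pizza places in a city.",
  { city: "string" },
  ({ city }) => ({
    content: [{ type: "text", text: `Found pizzerias in ${city}` }],
    structuredContent: { city, results: ["Pizzazz", "Slice House"] },
  })
);
```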

One of the slightly sinister but well-intentioned quotes in the OpenAI Apps SDK document is this: “Good discovery hygiene means your app appears when it should and stays quiet when it should not.” What they mean is that each tool response needs to be against an “action-oriented” request initiated by the user in the conversation.

The Role and Rules of Your MCP Server

So your MCP server is the foundation of your app. It exposes tools that the model can call, and returns the packaged structured data plus component HTML that the ChatGPT client renders inline. It also might need to deal with any authentication needed to access certain resources.

Within your server, there are a number of rules. There should be one “action-focused” job per tool. This may not be efficient to write, but it makes the customer journey more refined. For example, read and write behavior would certainly be different calls.

Apart from the action-oriented tool name, the model is also looking for action-oriented sentences within the description of the tool that start “Use this when ….”  The example in the docs is, “Use this when the user wants to view their kanban board.”

In addition to returning structured data, each tool on your MCP server should also reference an HTML UI template in its descriptor. This HTML template will be rendered in an iframe by ChatGPT. We will look at this briefly below.

Example App

The fastest way to get started is to use the supported official Python SDK or the TypeScript SDK. Then add your preferred web framework.

We’ll end this post with the HTML UI template registration that ChatGPT will render with an iframe, as we saw in the examples at the top. These are all in the documentation. Otherwise, use the given examples to fashion your first attempts.

The Node example is the one featured. First, the MCP server is created:

// Create an MCP server
const server = new McpServer({
  name: "kanban-server",
  version: "1.0.0"
});


Then we register the UI template:

// UI resource (no inline data assignment; host will inject data)
server.registerResource(
  "kanban-widget",
  "ui://widget/kanban-board.html",
  {},
  async () => ({
    contents: [
      {
        uri: "ui://widget/kanban-board.html",
        mimeType: "text/html+skybridge",
        text: `
<div id="kanban-root"></div>
${KANBAN_CSS ? `<style>${KANBAN_CSS}</style>` : ""}
<script type="module">${KANBAN_JS}</script>
        `.trim(),
        _meta: {
          /* 
            Renders the widget within a rounded border and shadow. 
            Otherwise, the HTML is rendered full-bleed in the conversation
          */
          "openai/widgetPrefersBorder": true,

          /* 
            Assigns a subdomain for the HTML. 
            When set, the HTML is rendered within `chatgpt-com.web-sandbox.oaiusercontent.com`
            It's also used to configure the base url for external links.
          */
          "openai/widgetDomain": 'https://chatgpt.com',

          /*
            Required to make external network requests from the HTML code. 
            Also used to validate `openai.openExternal()` requests. 
          */
          'openai/widgetCSP': {
              // Maps to `connect-src` rule in the iframe CSP
              connect_domains: ['https://chatgpt.com'],
              // Maps to style-src, style-src-elem, img-src, font-src, media-src etc. in the iframe CSP
              resource_domains: ['https://*.oaistatic.com'],
          }
        }
      },
    ],
  })
);


From now on, the resource is recognised via the URI ui://widget/kanban-board.html. Also, note the required MIME type of text/html+skybridge. (It is assumed that the KANBAN_CSS and KANBAN_JS content exists for the kanban board itself.)

Finally, the tool is registered:

server.registerTool( 
  "kanban-board", 
  { 
    title: "Show Kanban Board",
    _meta: { 
      // associate this tool with the HTML template 
      "openai/outputTemplate": "ui://widget/kanban-board.html", 
      // labels to display in ChatGPT when the tool is called 
      "openai/toolInvocation/invoking": "Displaying the board", 
      "openai/toolInvocation/invoked": "Displayed the board" 
     }, 
     inputSchema: { tasks: z.string() } 
   }, 
   async () => { 
     return { 
       content: [{ type: "text", text: "Displayed the kanban board!" }],
       structuredContent: {} 
     }; 
   }  
);


Note that the openai/outputTemplate value matches the template URI.

Conclusion

I’m fairly certain a friendlier way of doing all this will emerge, either through pick-and-match UI templates or better libraries. But for now, the pioneers of ChatGPT Apps must work with what OpenAI has provided, the prize being an early presence in the burgeoning AI-based economy.

The post OpenAI’s Apps SDK: A Developer’s Guide to Getting Started appeared first on The New Stack.


Microsoft Agent Framework: First Look

1 Share

In my earlier blog post, we introduced the Microsoft Agent Framework.

We saw how the new open-source SDK from Microsoft blends the best features from Semantic Kernel, AutoGen, and the Process Framework.

In this blog post, we take a first look at the Microsoft Agent Framework in terms of the components that form this new open-source SDK.

Specifically, we cover the following:

  • What is an AI agent
  • Available agent types
  • When to use an AI agent
  • When not to use an AI agent
  • Conversations and threads
  • Agent Function tools
  • Agents as function tools
  • Memory and memory types
  • Middleware
  • Background processing
  • Observability
  • Workflows

 

This blog post is about the main concepts, components and patterns that are core to shipping AI agents using the Agent Framework.

Based on prior project experience, I find some of these components and tooling to be similar to others I’ve used in the past.  For example, ChatHistory in Bot Framework and Semantic Kernel.

These are also the topics I will be digging into deeper over the next few weeks and months.

~

What Is an AI Agent

The industry has grappled with definitions for an AI agent. Now that the dust has settled, most agree that an AI agent is a software application that consists of:

  • Mechanisms to accept human prompts
  • A language model to generate responses from human prompts
  • Features to identify and select relevant tooling to take actions that satisfy a human prompt

 

Access to vector databases and, more recently, MCP servers are also common capabilities in the agentic AI landscape.

~

Available Agent Types

The Microsoft Agent Framework provides support for multiple types of agents.  All agents are derived from a common base class, AIAgent.  The available agent types can be seen below.

Throughout my Semantic Kernel projects, I typically use the OpenAI ChatCompletion types.

Learn more about available agent types here.

~

When to Use an AI Agent

AI agents are useful in situations where you need autonomy, iterative decision making and conversational experiences.

Common use cases include:

  • customer support
  • education and self-development
  • code generation or analysis
  • brainstorming and ideation

 

Due to their non-deterministic nature, I tend to advise that AI agents are best deployed in low to medium risk use cases.

~

When Not to Use an AI Agent

AI agents can go rogue and hallucinate. Due to this, I often don’t use or advise using AI agents in high-risk use cases.

Hallucinations and the non-deterministic nature of AI agents mean you need some form of guard rails to verify an AI agent's output or action.

Typically, you can enforce guard rails in the following ways:

  • introduce a human in the loop
  • write traditional code and methods
  • create a specific function tool for the agent to call (a form of traditional code)

 

All this said, an AI agent can struggle with complex tasks that involve multiple steps or decision points.

Complex tasks can require the invocation of multiple tools, API calls, and custom code.

It can be hard for a single agent to orchestrate, maintain state and manage.

The older Process Framework was an initial attempt to remedy this. A new technology is available within the Agent Framework to help with this: Workflows.

We’ll cover Workflows in detail in a future blog post.

I dive into hallucinations in my Pluralsight course “Vector Databases and Embeddings for Developers”.

You can find this course here.

~

The Anatomy of an AI Agent

Multiple components come together to form an AI agent. Let's unpack some of these. You'll notice that many of them are like what was available in Semantic Kernel. Some are new.

Conversations and Threads

Agents are stateless and don't maintain any state between calls. To have a multi-turn conversation with an agent, you need to create an object to hold the conversational state and pass this object to the agent when running it.

This is where conversations and threads come into play.

To create the conversation state object, you call the GetNewThread method on an agent instance.

If you don’t call this method, a disposable in-memory thread is created. It is only available for the duration of the run.
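
The pattern is simple to model: the thread is just a state object created once and passed on every run. Here is a hypothetical JavaScript sketch of the idea (the actual Agent Framework API is .NET/Python; createAgent and the echo reply are invented stand-ins):

```javascript
// Hypothetical sketch of the stateless-agent pattern: the agent keeps no
// state of its own; all conversation history lives in the thread object.
function createAgent(name) {
  return {
    name,
    getNewThread: () => ({ messages: [] }),
    run(prompt, thread) {
      // With no thread supplied, fall back to a disposable in-memory one
      // that only lives for the duration of this run.
      const t = thread ?? { messages: [] };
      t.messages.push({ role: "user", content: prompt });
      const reply = `echo: ${prompt}`; // stand-in for a model call
      t.messages.push({ role: "assistant", content: reply });
      return reply;
    },
  };
}

const agent = createAgent("demo");
const thread = agent.getNewThread();
agent.run("hello", thread);
agent.run("and again", thread); // multi-turn: the same thread carries the state
```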

Agent Function Tools

Most agents require access to external capabilities and functionalities.  This is where function tools come in handy.  These are like Native Functions and Plugins that are available in Semantic Kernel.

Human in The Loop

A new feature that I don’t recall in Semantic Kernel, but that is available in the Agent Framework, is the ability to implement a function tool that requires human approval before execution.

You can implement this manually, of course, but having an out-of-the-box option is helpful.

To implement human-in-the-loop, you simply wrap your function tools with the ApprovalRequiredAIFunction type. A nice touch.
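
The idea behind the wrapper can be sketched in a few lines of JavaScript. This is a hypothetical illustration of the pattern, not the actual ApprovalRequiredAIFunction API; the approvalRequired helper and the delete tool are invented:

```javascript
// Hypothetical sketch of a human-in-the-loop wrapper: the wrapped tool only
// executes when an approval callback (a person, in a real app) says yes.
function approvalRequired(toolFn, approve) {
  return (...args) => {
    if (!approve(args)) {
      return { status: "rejected", result: null };
    }
    return { status: "approved", result: toolFn(...args) };
  };
}

const deleteRecord = (id) => `deleted record ${id}`;

// In a real app, `approve` would prompt a human; here we hard-code answers.
const guardedDelete = approvalRequired(deleteRecord, () => true);
const blockedDelete = approvalRequired(deleteRecord, () => false);
```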

Agents as Function Tools

When building function tools, they will likely handle a discrete task: a handful of inputs and a single output or action.

The Agent Framework has a feature that lets you use an instance of the AIAgent class as a function tool.

This is done by calling the AsAIFunction() method and providing it as a tool to another agent.

This makes it possible for you to daisy-chain agents and build more complex workflows.

One agent passing the baton to the other.
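
A hypothetical JavaScript sketch of the idea (the real AsAIFunction() belongs to the .NET Agent Framework; makeAgent and asFunctionTool are invented stand-ins):

```javascript
// Hypothetical sketch of "agent as function tool": one agent exposed as a
// plain function so another agent can call it like any other tool.
function makeAgent(name, handle) {
  return { name, run: (prompt, tools = {}) => handle(prompt, tools) };
}

// Mirrors the idea of AsAIFunction(): the agent becomes an ordinary callable.
function asFunctionTool(agent) {
  return (input) => agent.run(input);
}

const translator = makeAgent("translator", (p) => p.toUpperCase());
const writer = makeAgent("writer", (p, tools) => tools.translate(`draft of ${p}`));

// Daisy-chain: the writer passes the baton to the translator via its tool.
const result = writer.run("the report", { translate: asFunctionTool(translator) });
```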

Memory and Memory Types

You need a way to maintain context between conversations.  This is where memory comes into play.

Memory can also be used to store user or application settings that you want to load when a human or machine interacts with your AI agent.

There are 2 flavours of memory in the Agent Framework:

  • short term memory
  • long term memory

 

Several memory mechanisms are available.  You can use in-memory, databases or a specialised memory service.  This offering reminds me of the Bot Framework days.

Short-Term Memory – Chat History

Multiple memory offerings are available when working with short term memory.

  • In-memory chat history
  • Service / model inferred chat history storage
  • 3rd party chat history storage

 

In-memory Chat History Storage

This is a good option in several use cases, mainly during development or if you are using a service that doesn't support in-service storage of chat history.

When using this option, the Agent Framework stores chat history and agent memory in the AgentThread object by default. Both agent and human data are stored.

One nice feature is out of the box chat reduction.

When interacting with a model, the chat history can grow. You don’t want it continually growing throughout the conversation.

I published a blog post about this when working with Semantic Kernel.  In that post, I detail how you can optimise chat history.  With the Microsoft Agent Framework, you get this out of the box.

It’s done by implementing the Microsoft.Extensions.AI.IChatReducer interface.
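
The intent of a chat reducer can be sketched in a few lines of JavaScript (a hypothetical illustration of the idea behind IChatReducer, not the actual interface; the policy shown, keeping system messages plus the most recent N messages, is just one common choice):

```javascript
// Hypothetical sketch of a chat reducer: keep any system messages plus only
// the most recent N conversation messages, so history doesn't grow unbounded.
function reduceChatHistory(messages, maxRecent) {
  const system = messages.filter((m) => m.role === "system");
  const rest = messages.filter((m) => m.role !== "system");
  return [...system, ...rest.slice(-maxRecent)];
}

const history = [
  { role: "system", content: "You are helpful." },
  { role: "user", content: "question 1" },
  { role: "assistant", content: "answer 1" },
  { role: "user", content: "question 2" },
  { role: "assistant", content: "answer 2" },
];

// Only the system prompt and the latest two messages survive the reduction.
const reduced = reduceChatHistory(history, 2);
```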

 

Service / Model Inferred Chat History Storage

Certain models require, or have options, to store chat history in-service. One example of this is the OpenAI Responses API.

When using a service/model inferred chat history, the Agent Framework will store the id of the remote chat history in the AgentThread object.

Learn more about the OpenAI Responses API here.

 

3rd Party Chat History Storage

If you want a more robust chat history storage mechanism and none are available in the service/model inference option you are using, you can use a 3rd party chat storage mechanism.

Learn how to create your own custom message store here.

Middleware

Sometimes you need to intercept agent interactions at different points.  This is where middleware comes into play.

Often I’ll use middleware to inject logging, perform security checks, trap errors, or sanitise content being received or sent by an agent.

Logic in middleware often sits outside your core agent or application code.

The agent framework offers three different types of middleware:

  • agent run middleware
  • function calling middleware
  • chat client middleware

 

All the types of middleware are implemented via a function callback.
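
The callback pattern can be sketched in plain JavaScript (a hypothetical illustration; compose and the sample middlewares are invented, not Agent Framework APIs):

```javascript
// Hypothetical sketch of callback-based middleware: each middleware wraps the
// next stage, so it can act both before and after the core agent runs.
function compose(middlewares, core) {
  return middlewares.reduceRight((next, mw) => (input) => mw(input, next), core);
}

const log = [];

// Logging middleware: records what goes in and what comes out.
const logging = (input, next) => {
  log.push(`before:${input}`);
  const out = next(input);
  log.push(`after:${out}`);
  return out;
};

// Sanitising middleware: cleans the prompt before it reaches the agent.
const sanitize = (input, next) => next(input.trim());

// The "core agent" is just an echo function in this sketch.
const run = compose([logging, sanitize], (input) => `echo:${input}`);
const output = run("  hello  ");
```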

Learn more about middleware here.

Background Responses

In my opinion, this is one of the most exciting features of the Agent Framework. Most of the agents we have seen in the last few years involved a traditional chatbot experience: human prompt and agent response. A one-to-one interaction.

These are helpful but in the real world, an approach is often needed to handle operations that can take hours to complete.

This might be tasking an agent with processing thousands of database records, crunching financial figures or generating new metrics.

Consider implementing background responses as part of your RAG pipeline during content ingestion, chunking, pre-processing or post-processing operations.

Background responses are enabled by setting AllowBackgroundResponses = true in the AgentRunOptions parameter.

At the time of writing, only agents that use the OpenAI Responses API support background processing.

These are available directly via the OpenAI and Azure OpenAI Responses agents.

Find a working example here.

Observability

Observability helps you track and monitor how your agent is performing.  Use this to help you diagnose any issues that may arise during the execution of your agent.

As with ASP.NET, logs can be captured via the logging framework of your choice. The Agent Framework also integrates with OpenTelemetry, and you can use the OpenTelemetry SDK to export metrics to an Azure Monitor resource.

Learn more about observability here.

~

Workflows

In my mind, Workflows are the replacement for the Process Framework.  Agent Framework Workflows help you enforce guardrails within your agentic AI system.

Whereas an AI agent is dynamic and non-deterministic in terms of how it reaches a defined goal, a workflow is explicitly defined.

In my opinion, this has been a missing piece in the agentic AI ecosystem. It helps provide more comfort around using agentic AI in medium to high-risk use cases.

Of course, you still need a human-in-the-loop process for verification purposes.  I will cover Workflows in future blog posts.

Learn more about workflows here.

~

Summary

In this blog post, we’ve taken a first look at the Microsoft Agent Framework. We’ve looked at some of the key components that form this new open-source SDK.

In future blog posts, we’ll explore each of these in more detail.

~

Enjoy what you’ve read, have questions about this content, or would like to see another topic?

Drop me a note below.

You can schedule a call using my Calendly link to discuss consulting and development services.

~

Further Reading and Resources

Some more resources to help you:

Code and Engaging

Videos

Learn

~

 

 

 
