Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Pantone’s ‘Cloud Dancer’ color party is a recession indicator


Pantone's color of the year for 2026 is white - sorry, Cloud Dancer. Pantone announced the shade on Thursday, and describes it as a "discrete white hue offering a promise of clarity." The accompanying image shows a person with cropped hair in billowy white clothing, arms outstretched over a background of clouds.

"PANTONE 11-4201 Cloud Dancer encourages true relaxation and focus, allowing the mind to wander and creativity to breathe, making room for innovation," the company writes. But all I can see is a recession indicator.

This is the third year in a row where the Pantone color has slid more and more into unobtrusive, dispassionat …

Read the full story at The Verge.


Microsoft shareholders invoke Orwell and Copilot as Nadella cites ‘generational moment’

From left: Microsoft CFO Amy Hood, CEO Satya Nadella, Vice Chair Brad Smith, and Investor Relations head Jonathan Nielsen at Friday’s virtual shareholder meeting. (Screenshot via webcast)

Microsoft’s annual shareholder meeting Friday played out as if on a split screen: executives describing a future where AI cures diseases and secures networks, and shareholder proposals warning of algorithmic bias, political censorship, and complicity in geopolitical conflict.

One shareholder, William Flaig, founder and CEO of Ridgeline Research, quoted two authorities on the topic — George Orwell’s 1984 and Microsoft’s Copilot AI chatbot — in requesting a report on the risks of AI censorship of religious and political speech.

Flaig invoked Orwell’s dystopian vision of surveillance and thought control, citing the Ministry of Truth that “rewrites history and floods society with propaganda.” He then turned to Copilot, which responded to his query about an AI-driven future by noting that “the risk lies not in AI itself, but in how it’s deployed.”

In a Q&A session during the virtual meeting, Microsoft CEO Satya Nadella said the company is “putting the person and the human at the center” of its AI development, with technology that users “can delegate to, they can steer, they can control.”

Nadella said Microsoft has moved beyond abstract principles to “everyday engineering practice,” with safeguards for fairness, transparency, security, and privacy.

Brad Smith, Microsoft’s vice chair and president, said broader societal decisions, like what age kids should use AI in schools, won’t be made by tech companies. He cited ongoing debates about smartphones in schools nearly 20 years after the iPhone.

“I think quite rightly, people have learned from that experience,” Smith said, drawing a parallel to the rise of AI. “Let’s have these conversations now.”

Microsoft’s board recommended that shareholders vote against all six outside proposals, which covered issues including AI censorship, data privacy, human rights, and climate. Final vote tallies have yet to be released as of publication time, but Microsoft said shareholders turned down all six, based on early voting. 

While the shareholder proposals focused on AI risks, much of the executive commentary focused on the long-term business opportunity. 

Nadella described building a “planet-scale cloud and AI factory” and said Microsoft is taking a “full stack approach,” from infrastructure to AI agents to applications, to capitalize on what he called “a generational moment in technology.”

Microsoft CFO Amy Hood highlighted record results for fiscal year 2025 — more than $281 billion in revenue and $128 billion in operating income — and pointed to roughly $400 billion in committed contracts as validation of the company’s AI investments.

Hood also addressed pre-submitted shareholder questions about the company’s AI spending, pushing back on concerns about a potential bubble. 

“This is demand-driven spending,” she said, noting that margins are stronger at this stage of the AI transition than at a comparable point in Microsoft’s cloud buildout. “Every time we think we’re getting close to meeting demand, demand increases again.”


Microsoft Agent Framework: Implementing Human-in-the-Loop AI Agents


In recent blog posts we have implemented a personal trainer agent called Iron Mind AI.

We implemented features in this agent to let it make bookings and find high protein recipes using function tools.

Consider a scenario where our agent may have to interact with a real-world system such as Stripe for payments.  In these types of use cases, human oversight is required.

In this blog post we’ll see how to implement human in the loop approval when creating agents using the Microsoft Agent Framework.

The following topics are covered:

  • why human in the loop matters
  • how to implement human in the loop
  • creating a human in the loop experience to handle payments
  • extending the Iron Mind AI agent to handle human in the loop

 

Full code and a video demo are also included. Let’s dig in.

~

Why Human-in-the-Loop Matters

Agents can solve a variety of tasks and are great for creating automations.  In certain use cases, however, you may still prefer to have human oversight.  For example, use cases may include:

  • processing payments
  • deleting production data
  • handling sensitive data

 

An AI agent misinterpreting its goal can have disastrous effects in situations like these. This is why it’s important to consider where human in the loop checkpoints should or can be inserted when designing your agentic AI system.

Human in the loop checkpoints can occur in real-time whilst someone interacts with your agent.

Alternatively, an agent may push a record onto a database table for a human to examine later. Human approval can then be granted or denied, the agent's thread can be rehydrated from the file system or database, and the agent can resume the task with the approval status that was captured.
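
To make that deferred flow concrete, here is a minimal sketch of the kind of record you might persist while waiting for a decision. The PendingApproval type and the file-based store are illustrative assumptions of my own, not Microsoft Agent Framework APIs; the serialised thread state would come from whatever persistence mechanism you use for the agent's thread.

// Illustrative only: a deferred-approval record and a simple file-backed store.
// Neither type is part of the Microsoft Agent Framework.
using System.Text.Json;

public record PendingApproval(
    string Id,                  // unique id for this approval request
    string FunctionName,        // e.g. "ProcessPayment"
    string ArgumentsJson,       // serialised function arguments for the reviewer
    string SerializedThread,    // serialised agent thread state, so the run can be rehydrated later
    string Status,              // "Pending", "Approved" or "Denied"
    DateTimeOffset RequestedAt);

public class FilePendingApprovalStore
{
    private readonly string _folder;
    public FilePendingApprovalStore(string folder) => _folder = folder;

    // Agent side: park the request and stop until a human decides
    public Task SaveAsync(PendingApproval approval) =>
        File.WriteAllTextAsync(
            Path.Combine(_folder, $"{approval.Id}.json"),
            JsonSerializer.Serialize(approval));

    // Reviewer side: load the record, update its Status, and let the agent resume with it
    public async Task<PendingApproval?> LoadAsync(string id)
    {
        var file = Path.Combine(_folder, $"{id}.json");
        if (!File.Exists(file)) return null;
        return JsonSerializer.Deserialize<PendingApproval>(await File.ReadAllTextAsync(file));
    }
}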

Your AI agent handles the tedious work of understanding requests and preparing actions, but humans retain control over key decisions.

For regulated industries (finance, healthcare, legal), this pattern can be the difference between “we can’t use AI” and “we’re deploying AI safely in production”.

~

Guardrails for the Security Conscious

Guardrails are important for regulated industries such as finance, law or healthcare.

AI agents don’t have to be black boxes that you don’t trust.  Human-in-the-loop patterns let you build supervised automation.

Implementing a human-in-the-loop pattern provides protection and helps address security or compliance concerns.  The approval loop creates a hard stop.

For example, an ecommerce agent cannot proceed without human authorisation for defined operations.

If your systems go down at 2am, the agent won't keep retrying payments or sending emails; it will wait for a human to review and approve.

~

How to Implement Human in the Loop

To implement human in the loop, you wrap any function tool that requires approval in the ApprovalRequiredAIFunction class.

We can see an example of this here:

AIFunction paymentFunctionWithApproval =
    new ApprovalRequiredAIFunction(
        AIFunctionFactory.Create(PaymentsAgent.ProcessPaymentAsync));

 

When our agent identifies that a human (or another agent) has expressed the intent to make a payment, it will stop the conversation and ask the human if they want to proceed with invoking the ProcessPaymentAsync function tool.

~

An Example

Let’s look at a more concrete example.  In this example we implement 2 methods within our Iron Mind AI personal trainer agent.

  • CollectPaymentDetailsAsync – handles collection of payment details
  • ProcessPaymentAsync – submits the payment details

 

Collecting payment details involves human input, whereas processing the payment performs the submission and requires human verification.

 

Step 1: Define Your Functions

First, we create these 2 function tools within a new PaymentsAgent class:

[Description("You can collect payment details. You dont make a purchase.")]
public static async Task<string> CollectPaymentDetailsAsync(
    [Description("Credit card number (16 digits)")] string cardNumber)
{
    Console.WriteLine("Collecting payment details...");
    return $"Collected payment details using card ending in {cardNumber.Substring(Math.Max(0, cardNumber.Length - 4))}";
}


[Description("Make the actual payment")]
public static async Task<string> ProcessPaymentAsync()
{
    PaymentsService service = new PaymentsService();
    // ... payment processing logic
    return result;
}

 

Under the hood, we have a PaymentsService class.  We can see its definition here:

public class PaymentsService
{
    // Requires: using System.Text.Json;
    public async Task<string> ProcessPaymentAsync(
        string productName,
        decimal amount,
        string currency,
        string cardNumber,
        int expiryMonth,
        int expiryYear,
        string cvv,
        string cardholderName)
    {
        Console.WriteLine($"Processing payment for {productName} - {currency} {amount / 100m:F2}...");

        // Simulate payment processing
        await Task.Delay(1000);
        var transactionId = Guid.NewGuid().ToString();

        var result = new
        {
            success = true,
            transactionId = transactionId,
            product = productName,
            amount = amount / 100m,   // amount is supplied in minor units (e.g. pence/cents)
            currency = currency,
            cardLast4 = cardNumber.Substring(Math.Max(0, cardNumber.Length - 4)),
            timestamp = DateTime.UtcNow
        };

        Console.WriteLine($"Payment successful! Transaction ID: {transactionId}");

        return JsonSerializer.Serialize(result);
    }
}

 

In a real-world implementation, this might be an integration with a payment provider such as Stripe.
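
As a rough illustration, here is a hedged sketch of what that swap could look like using the Stripe.net SDK. The class and property names below come from the public Stripe library, but treat this as a sketch rather than a drop-in replacement: verify the details against the current SDK documentation, and note that Stripe expects a PaymentMethod id rather than the raw card details our simulated service takes.

// Hedged sketch: replacing the simulated PaymentsService with a Stripe.net call.
// Requires the Stripe.net NuGet package; verify names against the current SDK docs.
using Stripe;

public class StripePaymentsService
{
    public StripePaymentsService(string apiKey) => StripeConfiguration.ApiKey = apiKey;

    public async Task<string> ProcessPaymentAsync(
        string productName,
        long amountInMinorUnits,   // e.g. 4999 for USD 49.99
        string currency,           // e.g. "usd"
        string paymentMethodId)    // a Stripe PaymentMethod id collected client-side
    {
        var service = new PaymentIntentService();

        // Create and confirm the payment in one call
        var intent = await service.CreateAsync(new PaymentIntentCreateOptions
        {
            Amount = amountInMinorUnits,
            Currency = currency,
            PaymentMethod = paymentMethodId,
            Confirm = true,
            Description = productName
        });

        return intent.Id; // use the PaymentIntent id as the transaction id in the audit trail
    }
}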

Step 2: Selective Approval Wrapping

Not all function tools need approval. Collecting payment details is harmless, but actually processing the payment requires human confirmation.

// No approval needed for collecting details
AIFunction collectPaymentDetailsFunction =
    AIFunctionFactory.Create(PaymentsAgent.CollectPaymentDetailsAsync);

// Wrap the dangerous operation with approval
AIFunction paymentFunctionWithApproval =
    new ApprovalRequiredAIFunction(
        AIFunctionFactory.Create(PaymentsAgent.ProcessPaymentAsync));

 

This selective approach keeps the conversation flowing naturally while protecting critical operations.

Step 3: The Approval Loop Pattern

With the function tools defined, it's time to implement our conversation loop.  Here we have a nested loop pattern:

  • An outer loop to handle user conversation
  • An inner loop to handle approval requests

 

We can see this here:

// Outer loop: handles user conversation
while (true)
{
    string? input = Console.ReadLine(); // read the next user message (prompt and exit handling shown in the full listing below)
    var response = await agent.RunAsync(input, agentThread);
    var userInputRequests = response.UserInputRequests.ToList();
   
    // Inner loop: handles approval requests
    while (userInputRequests.Count > 0)
    {
        var userInputResponses = userInputRequests
            .OfType<FunctionApprovalRequestContent>()
            .Select(functionApprovalRequest =>
            {
                // Show what the agent wants to do
                Console.WriteLine($"Function: {functionApprovalRequest.FunctionCall.Name}");
              
                // Get human approval
                string? approval = Console.ReadLine();
                bool isApproved = approval?.Equals("yes", StringComparison.OrdinalIgnoreCase) == true;
          

                // Return the decision
                return new ChatMessage(ChatRole.User,
                    [functionApprovalRequest.CreateResponse(isApproved)]);
            }).ToList();
       

        // Send approvals back to the agent
        response = await agent.RunAsync(userInputResponses, agentThread);
        userInputRequests = response.UserInputRequests.ToList();
    }
}

 

The inner loop is essential because a single user request might trigger multiple function calls that need approval.

Bringing It Together

The entire code listing is included for reference:

static async Task Main(string[] args)
{
    // apiKey and model are assumed to be defined elsewhere (e.g. loaded from configuration)

    // Wrap the payment function with ApprovalRequiredAIFunction; collecting details needs no approval
    AIFunction collectPaymentDetailsFunctionWithoutApproval = AIFunctionFactory.Create(PaymentsAgent.CollectPaymentDetailsAsync);
    AIFunction paymentFunctionWithApproval = new ApprovalRequiredAIFunction(AIFunctionFactory.Create(PaymentsAgent.ProcessPaymentAsync));

    AIAgent paymentAgent = new OpenAIClient(apiKey)
        .GetChatClient(model)
        .CreateAIAgent(
            instructions: "You are a helpful payment processing assistant.  You can only use local function tools",
            name: "IronMind AI",
            null,
            tools: [collectPaymentDetailsFunctionWithoutApproval, paymentFunctionWithApproval]
        );

    await RunChatLoopWithThreadAsync(paymentAgent);
}


   private static async Task RunChatLoopWithThreadAsync(AIAgent agent)
   {
       AgentThread agentThread = agent.GetNewThread();

       Console.WriteLine("Payment Processing Agent (type 'exit' to quit)");
       Console.WriteLine("Try: 'Process a payment for me'\n");

       // outer chat loop for user input and agent responses
       while (true)
       {
           Console.Write("You: ");
           string? input = Console.ReadLine();

           if (string.IsNullOrWhiteSpace(input) || input.Equals("exit", StringComparison.OrdinalIgnoreCase))
           {
               break;
           }


           // Run the agent with the user input
           var response = await agent.RunAsync(input, agentThread);
           var userInputRequests = response.UserInputRequests.ToList();

           // inner loop to handle approval requests (human-in-the-loop) - there may be multiple so we keep going until there are no more
           while (userInputRequests.Count > 0)
           {
               var userInputResponses = userInputRequests
                .OfType<FunctionApprovalRequestContent>()
                .Select(functionApprovalRequest =>
                {
                    Console.ForegroundColor = ConsoleColor.Yellow;
                    Console.WriteLine($"\nApproval Required");
                    Console.WriteLine($"Function: {functionApprovalRequest.FunctionCall.Name}");

                    // Display arguments properly
                    if (functionApprovalRequest.FunctionCall.Arguments is IDictionary<string, object> argsDict)
                    {
                        if (argsDict.Count > 0)
                        {
                            Console.WriteLine("Arguments:");
                            foreach (var arg in argsDict)
                            {
                                Console.WriteLine($"  {arg.Key}: {arg.Value}");
                            }
                        }
                        else
                        {
                            Console.WriteLine("Arguments: (none)");
                        }
                    }
                    else
                    {
                        Console.WriteLine($"Arguments: {functionApprovalRequest.FunctionCall.Arguments}");
                    }


                    Console.Write("\nApprove? (yes/no): ");
                    Console.ResetColor();

                    string? approval = Console.ReadLine();
                    bool isApproved = approval?.Trim().Equals("yes", StringComparison.OrdinalIgnoreCase) == true;

                    if (isApproved)
                    {
                        Console.ForegroundColor = ConsoleColor.Green;
                        Console.WriteLine("✓ Approved\n");
                        Console.ResetColor();
                    }
                    else
                    {
                        Console.ForegroundColor = ConsoleColor.Red;
                        Console.WriteLine("✗ Denied\n");
                        Console.ResetColor();
                    }

                    return new Microsoft.Extensions.AI.ChatMessage(ChatRole.User, [functionApprovalRequest.CreateResponse(isApproved)]);
                }).ToList();


               // Pass the approval responses back to the agent
               response = await agent.RunAsync(userInputResponses, agentThread);

               // Check for any further approval requests in the new response
               userInputRequests = response.UserInputRequests.ToList();
           }

           // Display the final response
           Console.ForegroundColor = ConsoleColor.Green;
           Console.WriteLine($"IronMind AI: {response}\n");
           Console.ResetColor();
       }
   }

 

Seeing It In Action

Here we can see a real conversation flow in action.  In this example, we submit the credit card number, and the agent automatically asks if the person would like to process the payment.

You: submit my payment details

IronMind AI: Please provide your credit card number (16 digits)

You: 4545454545454545
IronMind AI: I have collected your payment details for Premium Subscription
of USD 49.99 using the card ending in 4545. Would you like me to process
the payment now?

You: yes


  Approval Required
  Function: ProcessPayment
  Arguments: (none)


Approve? (yes/no): yes

✓ Approved
IronMind AI: Payment processed successfully! Transaction ID: 7f8a9b2c...

 

Notice how the agent collected details without interruption but paused for human approval before executing the payment.

~

Demo

Here, we can see the above in action in a short demo.

 

~

Further Thoughts on AI Agents, Governance, and Human in the Loop

Other things to consider during your human-in-the-loop implementation are detailed below.

Explicit Function Tool Boundaries

Ensure any function tools your agent has defined are explicit, with clear descriptions.  Leave no room for ambiguity about what the agent can or cannot do.  It's all in your codebase.

Granular Control Over Risk Levels

Try to categorise function tool operations by risk and apply different approval strategies.

For example:

  • High Risk (payments, deletions, external API calls): Require approval
  • Medium Risk (read-only database queries): Log but auto-approve
  • Low Risk (calculations, formatting): No approval needed

 

Applying this approach helps you create guardrails proportional to the actual risk.
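
As a rough sketch, this can be expressed with a small helper that decides how each tool gets wrapped. The RiskLevel enum and WrapByRisk method are my own illustrative naming; only ApprovalRequiredAIFunction comes from the Microsoft Agent Framework, as shown earlier in this post.

// Illustrative helper: wrap tools according to their risk category.
public enum RiskLevel { Low, Medium, High }

public static class ToolGuardrails
{
    public static AIFunction WrapByRisk(AIFunction tool, RiskLevel risk) => risk switch
    {
        // High risk: hard stop for human approval before every invocation
        RiskLevel.High => new ApprovalRequiredAIFunction(tool),

        // Medium risk: auto-approve, but this is where you could add a logging wrapper
        RiskLevel.Medium => tool,

        // Low risk: use the tool as-is
        _ => tool
    };
}

// Usage:
// AIFunction payment = ToolGuardrails.WrapByRisk(
//     AIFunctionFactory.Create(PaymentsAgent.ProcessPaymentAsync), RiskLevel.High);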

Audit Trails

Agent interactions are part of a thread.  Threads can contain:

  • What the agent wanted to do
  • What parameters it would use
  • Who approved or denied it
  • When the decision was made

 

Serialise and store AgentThread objects.  These will provide you with compliance-ready audit logs.
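
As a minimal sketch, an audit entry written from the approval loop might look like the following. The ApprovalAuditEntry type and the JSON-lines file are illustrative assumptions of my own rather than framework APIs; pair this with however you serialise the AgentThread itself.

// Illustrative only: one audit record per approval decision, appended as JSON lines.
using System.Text.Json;

public record ApprovalAuditEntry(
    string FunctionName,        // what the agent wanted to do
    string ArgumentsJson,       // what parameters it would use
    string DecidedBy,           // who approved or denied it
    bool Approved,              // the decision
    DateTimeOffset DecidedAt);  // when the decision was made

public static class ApprovalAuditLog
{
    public static Task AppendAsync(string path, ApprovalAuditEntry entry) =>
        File.AppendAllTextAsync(path, JsonSerializer.Serialize(entry) + Environment.NewLine);
}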

Start Restrictive, Loosen Gradually

For organisations new to AI agents, a phased approach can look like the following:

  • Phase 0: Find the low hanging fruit and low risk use cases. Ship a small number of discrete agents and monitor
  • Phase 1: Wrap medium- and high-risk operations with ApprovalRequiredAIFunction. Yes, it's painful, but you'll quickly learn which function tools and operations are safe
  • Phase 2: Remove approvals from read-only operations and calculations. For now, keep them on anything that modifies state
  • Phase 3: Once you’ve built trust, consider auto-approving certain operations for specific users or contexts (for example, payments under £10).

 

Taking a gradual approach lets you build confidence in your AI system without taking unnecessary risks early on.
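
As a hedged sketch of what Phase 3 could look like, the helper below auto-approves a payment request under a small threshold and sends everything else to a human. The 10.00 threshold and the "amount" argument name are assumptions about your own function tool's parameters; FunctionApprovalRequestContent is the same type used in the approval loop earlier.

// Illustrative only: auto-approve low-value payments, escalate the rest to a human.
static bool TryAutoApprove(FunctionApprovalRequestContent request)
{
    if (request.FunctionCall.Arguments is IDictionary<string, object> args &&
        args.TryGetValue("amount", out var raw) &&
        decimal.TryParse(raw?.ToString(), out var amount))
    {
        // Auto-approve payments under the threshold; adjust if your tool uses minor units
        return amount < 10.00m;
    }

    // No recognisable amount: always ask a human
    return false;
}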

~

Summary

In this blog post we’ve seen how to implement human in the loop verification when developing agents using the Microsoft Agent Framework.

We’ve looked at why human in the loop matters, how to implement it, its relevance and how it can help you enforce guardrails.

We’ve also seen an example of human in the loop in action.

In the next blog post in this series, we'll see how you can give your AI agent memory capabilities by implementing the AIContextProvider.

This can be used in conjunction with an AgentThread to run custom logic before and after an AI inference service is used and to provide additional context.

Stay tuned.

~

Enjoy what you’ve read, have questions about this content, or would like to see another topic? Drop me a note below.

You can schedule a call using my Calendly link to discuss consulting and development services.

~


Microsoft Changing AI Targets?

From: AIDailyBrief
Duration: 6:14
Views: 278

Microsoft quota revisions ignite debate over AI demand versus company execution and trigger investor jitters. Nvidia CEO Jensen Huang frames AI as long-term infrastructure amid geopolitical competition, and OpenAI moves to deepen model-training observability with a Neptune acquisition. Black Friday analytics show AI shopping assistants markedly boosting referrals and conversions, with Adobe, Sensor Tower, Apptopia, and Salesforce reporting major gains.

Brought to you by:
KPMG – Go to www.kpmg.us/ai to learn more about how KPMG can help you drive value with our AI solutions.
Vanta - Simplify compliance - https://vanta.com/nlw

The AI Daily Brief helps you understand the most important news and discussions in AI.
Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Get it ad free at
Join our Discord: https://bit.ly/aibreakdown


IoT Coffee Talk: Episode 290 - "Probabilistic Determinism" (Get your neuro-symbolic AI on!)

From: Iot Coffee Talk
Duration: 1:05:36
Views: 3

Welcome to IoT Coffee Talk, where hype comes to die a terrible death. We have a fireside chat about all things #IoT over a cup of coffee or two with some of the industry's leading business minds, thought leaders and technologists in a totally unscripted, organic format.

This week Rob, Stephanie, Dimitri, Pete, and Leonard jump on Web3 to host a discussion about:

🎶 🎙️ GOOD KARAOKE! 🎸 🥁 "I Wanna Woman" by Ratt
🐣 Stephanie discovers the killer physical AI use case!!!
🐣 Why a personal ranch is the perfect environment for GenAIoT!
🐣 What do we think will be the key theme of CES 2026?
🐣 How is an AI robot different from a,...... robot?
🐣 The great CSP disregard of their core business - cloud.
🐣 Leonard and Rob give an overview of AWS re:Invent 2025.
🐣 The agentic AI industrial revolution! Going from Six Sigma to ONE Sigma!
🐣 Why Werner Vogels is awesome for basing his keynote on Metallica!
🐣 The importance of the man in the middle and the developer in agentic AI.
🐣 Is neuro-symbolic AI the excuse for LLMs that don't work for business?
🐣 Why wearing a stop sign can save you from a Waymo with a NY cab driver attitude.
🐣 AI for AI. Making a case for GenAI.
🐣 What happened to AGI and ASI? Does anyone care anymore?!!

It's a great episode. Grab an extraordinarily expensive latte at your local coffee shop and check out the whole thing. You will get all you need to survive another week in the world of IoT and greater tech!

Tune in! Like! Share! Comment and share your thoughts on IoT Coffee Talk, the greatest weekly assembly of Onalytica and CBT tech and IoT influencers on the planet!!

If you are interested in sponsoring an episode, please contact Stephanie Atkinson at Elevate Communities. Just make a minimally required donation to www.elevatecommunities.org and you can jump on and hang with the gang and amplify your brand on one of the top IoT/Tech podcasts in the known metaverse!!!

Take IoT Coffee Talk on the road with you on your favorite podcast platform. Go to IoT Coffee Talk on Buzzsprout, like, subscribe, and share: https://lnkd.in/gyuhNZ62


Daily Reading List – December 5, 2025 (#679)


If you work in tech, it’s likely you make a respectable salary. Especially in the global scheme of things. My goal, regardless of how much I make, is to always be a bargain to my employer. I think Google got its money’s worth this week, at least if they’re paying me per meeting and per written word.

[blog] Architecting efficient context-aware multi-agent framework for production. Very good post about “active context engineering” and our different approach for how we treat context in our agent framework.

[blog] Angular Signals: The Essentials You Need to Know. I definitely understand this major feature more after reading this. We use so many reactive web apps, but might not always know how to build one.

[blog] How to Use Google’s Gemini CLI for AI Code Assistance. Good walkthrough here, and it follows a specific example to bring the concepts to life.

[article] Spring AI tutorial: Get started with Spring AI. Learn more about how to build AI apps using this popular Java framework. And it just got updated with a handful of new features.

[article] AI in CI/CD pipelines can be tricked into behaving badly. Yikes, this seems like an attack vector to pay attention to. AI code review tools are great, but can be manipulated in bad ways.

[blog] Accelerate model downloads on GKE with NVIDIA Run:ai Model Streamer. Sheesh, this offers some fairly dramatic performance improvements for starting up your inference server.

[blog] Best Chrome Extensions for Developers in 2026. I didn’t know most of these, which isn’t a surprise since I’m a pretend developer nowadays.

[blog] Accelerate medical research with PubMed data now available in BigQuery. This is now a public dataset so that doctors and other researchers can find what they’re looking for across millions of biomedical articles.

Want to get this update sent to you every day? Subscribe to my RSS feed or subscribe via email below:


