Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

What We Learned from OpenAI's Town Hall

1 Share
From: AIDailyBrief
Duration: 7:55
Views: 179

Sam Altman hosted a town hall covering GPT-5.2's strong reasoning but clumsy prose, a hiring slowdown, and goals for deep personalization with portable memory. OpenAI unveiled premium ad pricing near $60 CPM, pledged not to sell personal data to advertisers, and introduced a 4% fee on Shopify sales routed through ChatGPT. Microsoft revealed the Maya200 inference chip, Nvidia expanded CoreWeave investments to build AI factories, and discussions emphasized an accelerating gap between AI capability and workforce impacts.

Brought to you by:
KPMG – Go to www.kpmg.us/ai to learn more about how KPMG can help you drive value with our AI solutions.
Vanta - Simplify compliance - https://vanta.com/nlw

The AI Daily Brief helps you understand the most important news and discussions in AI.
Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Get it ad free at
Join our Discord: https://bit.ly/aibreakdown

Read the whole story
alvinashcraft
52 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Using GitHub Copilot CLI as a Beginner to Translate English into Shell Commands


Why I built this

I am still learning my way around the command line. Many times I know what I want to do, but not how to express it as a shell command.

This challenge gave me a chance to explore GitHub Copilot CLI as a bridge between natural language and terminal commands.

What I built

I created a small helper workflow where I:

  • Describe a task in plain English
  • Ask GitHub Copilot CLI to suggest a command
  • Use Copilot again to explain how the command works

Example

Task

count number of folders

Copilot Suggestion


find . -maxdepth 1 -type d | wc -l
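One subtlety Copilot's suggestion doesn't mention: find . matches the current directory itself, so the count comes out one higher than the number of subfolders. A quick way to see this in a throwaway directory (the paths below are illustrative):

```shell
# Make a scratch directory containing exactly two subfolders (illustrative paths)
demo=$(mktemp -d)
mkdir "$demo/a" "$demo/b"

# Copilot's suggestion also counts the top-level directory itself, so this prints 3
find "$demo" -maxdepth 1 -type d | wc -l

# Adding -mindepth 1 excludes the top level and prints the true count, 2
find "$demo" -mindepth 1 -maxdepth 1 -type d | wc -l
```

This is exactly the kind of detail that asking Copilot to explain the command tends to surface.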

What I learned

  • Copilot CLI is best used interactively, with a human in the loop
  • It is very helpful for beginners who understand goals but not syntax
  • Understanding CLI limitations is as important as using AI tools

This experience helped me become more confident with terminal commands instead of blindly copy-pasting them.

This post is my submission for the GitHub Copilot CLI Challenge.

Breaking down the facts about secure development with Power Platform

1 Share

Today, organizations are being measured by how quickly they can innovate. Whether it’s launching new digital experiences, streamlining operations, or responding to customer needs in real time, the ability to move fast has always been a competitive differentiator, and it has only grown in importance in the agentic era. But speed alone isn’t enough. Innovation must be scalable, secure, and sustainable.

Microsoft Power Platform is designed to meet that challenge. It empowers teams to build solutions faster, automate more processes, and scale across the business within a framework that puts security and governance first. With tools that are AI-ready and built for enterprise-grade environments, from Copilot-assisted development to intelligent threat detection and posture management, the platform helps organizations move with both agility and control.

Let’s break down the facts about building secure, modern applications.

Fact: Low code does not mean low security

Despite the ever-growing usage and strong ROI, there are still people who think that low-code tools are not built for enterprise-grade applications. Power Platform proves otherwise by delivering a comprehensive, layered security model designed to meet the demands of large organizations. As part of a managed security approach, the platform integrates governance and security controls directly into the development lifecycle, ensuring that policies are consistently applied across environments.

From identity and access management to data protection and network security, Power Platform provides native capabilities that reduce risk without slowing innovation. Features like role-based access control, conditional access for individual apps, and data loss prevention policies are all included. Azure Virtual Network (VNet) helps keep apps and data private by creating a secure connection that blocks public internet access and limits traffic to only trusted sources.

Visibility and access control are central to this approach. Power Platform includes tenant-level analytics and inventory tracking that allow IT teams to monitor what’s being built, which connectors are in use, and whether apps are operating within approved environments. Advanced connector policies complement these tools by helping enforce data boundaries and prevent unauthorized connections, rather than providing direct visibility or access control. With tools like IP filtering, cookie binding, and role-based permissions, IT can ensure that only the right users have access to sensitive data. This helps prevent shadow IT before it starts, giving teams a secure space to innovate while ensuring IT retains oversight.

The platform’s approach to security also extends to AI and agents. Security is enforced across all components of the platform, including apps and AI agents. As organizations adopt tools like M365 Copilot and Copilot Studio, Power Platform provides a secure foundation for building and deploying AI agents. These agents follow existing data loss prevention policies, access controls, and network protections, ensuring AI adoption does not create new exposure.

Power Platform also provides the flexibility to extend Copilot Studio agent protection beyond default safeguards with additional runtime protection. Organizations can choose to integrate additional monitoring systems such as Microsoft Defender, custom tools, or other security platforms for a defense-in-depth approach to agent runtime security.

Centrica, the UK’s largest retailer of zero-carbon electricity, is a good example of secure low-code innovation. With over 800 Power Platform solutions and 15,000 users, Centrica maintains enterprise-grade governance by embedding security, oversight, and controls into every stage of development.

Accenture also demonstrates how Power Platform helps reduce risk at scale. By giving more than 50,000 employees the ability to build within defined guardrails, the company reduced demand for short-term IT projects by 30%. Their approach to low-code governance helped them gain visibility into platform activity while supporting global collaboration. As one Accenture executive put it, “For us, we define shadow IT as things we cannot see or control when we need to. By standing up the platform and inviting our people to create and build—at its very core we have gained visibility into what people are doing and how they are connecting, which starts governance at the platform level.”

Fact: You do not have to outsource to be compliant

There is a perception that distributed development models increase compliance risk. Power Platform addresses this with centralized administration and clear visibility into who is building, what they are building, and how data is being used.

From the Power Platform admin center, IT teams can configure environments, enforce policies, and monitor usage across the entire organization. Tools like Dataverse audit logging, Microsoft Purview integration, and Lockbox support provide deep visibility into sensitive operations and data access.

Purview enhances compliance by enabling data classification, sensitivity labeling, and activity tracking across Power Platform environments. It also helps organizations enforce retention policies and ensure data governance requirements are met, supporting alignment with global regulations like GDPR and HIPAA.

AI capabilities introduce new governance needs, which Power Platform meets with built-in support for risk assessment and proactive recommendations. Copilot capabilities also assist admins in identifying misconfigurations and streamlining compliance reporting.

Power Platform also integrates with Microsoft Sentinel and solution checkers to detect anomalies, surface vulnerabilities, and alert administrators to unusual behavior. Security posture management tools help teams assess and adjust configurations over time, helping organizations scale AI responsibly while maintaining strong governance.

PG&E is a case in point. With more than 4,300 developers and 300 Power Platform solutions, the company has embedded governance and risk management into its development lifecycle. This approach has helped PG&E achieve more than $75 million in annual savings, while ensuring that compliance and oversight remain strong.

Fact: You are not alone in administering the platform. You have guidance and support.

Another misconception is that managing low-code platforms at scale requires external tools or consultants. Power Platform includes everything needed to govern, secure, and scale app development from within your organization.

IT admins can use Power Platform admin center and advisor to receive AI-driven, real-time recommendations tailored to their environment. These insights help assess environment health, refine governance policies, and proactively manage security posture. Advisor also provides a security score, giving teams a clear view of how well they are securing their environments and a concrete way to demonstrate progress and accountability to leadership.

The platform is designed to adapt to each organization’s structure and needs. Recommendations can be dismissed when covered by other controls, and environment groups allow governance to be tailored to specific business units or departments. This flexibility ensures that security doesn’t get in the way of progress but works alongside it.

Advanced features like test automation, environment isolation, and integrated observability help maintain consistent performance. VNet integration allows organizations to connect securely to on-premises systems without exposing resources to the public internet.

One leading automotive manufacturer highlights these capabilities. The company used VNet support in Power Platform to securely connect AI agents to internal systems without relying on an on-premises data gateway. The result was faster deployment, better compliance with internal security policies, and more than 3,000 hours saved through improved data access.

Start building secure, scalable solutions

Foster innovation while still maintaining security and governance principles. Microsoft Power Platform gives IT leaders and developers the ability to move quickly while maintaining the control their organizations require. With built-in governance, privacy protections, and AI-powered insights, teams can confidently scale low-code development without introducing risk. You no longer have to choose between innovation and security. With Power Platform, you can deliver both.

Explore real-world success stories and best practices. Visit the Power Platform site and follow this blog for the next article in the series breaking down the facts of modern development.

The post Breaking down the facts about secure development with Power Platform appeared first on Microsoft Power Platform Blog.


Author of Systemd Quits Microsoft To Prove Linux Can Be Trusted

Lennart Poettering has left Microsoft to co-found Amutable, a new Berlin-based company aiming to bring cryptographically verifiable integrity and deterministic trust guarantees to Linux systems. He said in a post on Mastodon that his "role in upstream maintenance for the Linux kernel will continue as it always has." Poettering will also continue to remain deeply involved in the systemd ecosystem. The Register reports: Linux celeb Lennart Poettering has left Microsoft and co-founded a new company, Amutable, with Chris Kuhl and Christian Brauner. Poettering is best known for systemd. After a lengthy stint at Red Hat, he joined Microsoft in 2022. Kuhl was a Microsoft employee until last year, and Brauner, who also joined Microsoft in 2022, left this month. [...] It is unclear why Poettering decided to leave Microsoft. We asked the company to comment but have not received a response. Other than the announcement of systemd 259 in December, Poettering's blog has been silent on the matter, aside from the announcement of Amutable this week. In its first post, the Amutable team wrote: "Over the coming months, we'll be pouring foundations for verification and building robust capabilities on top." It will be interesting to see what form this takes. In addition to Poettering, the lead developer of systemd, Amutable's team includes contributors and maintainers for projects such as Linux, Kubernetes, and containerd. Its members are also very familiar with the likes of Debian, Fedora, SUSE, and Ubuntu.

Read more of this story at Slashdot.


What's new in Astro - January 2026

January 2026 - Astro joins Cloudflare, Astro v6 beta is released, and more!

Model Context Protocol (MCP): Building and Debugging Your First MCP Server in .NET


In my earlier Microsoft Agent series of posts, we’ve explored how AI agents can be extended using function tools and plugins.

In this post, we look at the Model Context Protocol (MCP), the emerging standard that lets AI assistants discover and call tools exposed by your AI applications.

Specifically, we cover the following:

  • What is MCP
  • Why MCP matters for AI applications
  • Setting up an MCP server in ASP.NET Core
  • Transport options: SSE vs STDIO
  • Testing and debugging your MCP server
  • Configuring popular MCP clients

 

A practical walkthrough is also included that shows how to set up a local MCP server in Visual Studio, then reference and call it from VS Code.

Let’s dig in.

~

What Is MCP

The Model Context Protocol (MCP) is an open standard that enables AI assistants to discover and invoke tools exposed by external servers.

You can think of it as a universal adapter that lets any MCP-compatible client communicate with any MCP-compatible server.

Rather than building custom integrations for each AI tool, you build one MCP server and multiple clients can consume it.

In some ways, it reminds me of older web services. ASMX? WCF? Remember those?

The protocol uses JSON-RPC 2.0 over different transport mechanisms, making it both standardised and flexible.
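To make that concrete, here is the shape of a JSON-RPC 2.0 request an MCP client sends to enumerate a server's tools. The tools/list method name appears again later in this post; the id and params values here are illustrative:

```shell
# A minimal JSON-RPC 2.0 envelope for MCP tool discovery (id and params illustrative)
cat <<'EOF'
{"jsonrpc": "2.0", "id": 1, "method": "tools/list", "params": {}}
EOF
```

The same envelope shape, with a different method and params, carries the tools/call requests you'll see in the debugging sections below.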

~

Why MCP Matters

Traditional AI integrations require custom code for each platform. If you want your documentation search to work in Claude Desktop, Cursor, or VS Code Continue, you’d typically need three separate integrations.

MCP changes this. Build once, connect everywhere.

Key benefits include:

  • Tool Discovery: Clients automatically discover what tools your server offers
  • Standardised Communication: JSON-RPC 2.0 provides a well-defined message format
  • Multiple Transports: Choose HTTP/SSE for development, STDIO for production
  • Growing Ecosystem: Major AI tools are adopting MCP support

 

Ideal if you or your org holds unique IP and wants to increase the footprint of your solution across AI agents and AI-enabled tools, services, and IDEs.

~

Setting Up an MCP Server in ASP.NET Core

Let’s build an MCP server that exposes a documentation search tool. We’ll use Microsoft’s MCP SDK.

First, add the required NuGet packages:

dotnet add package ModelContextProtocol --prerelease
dotnet add package ModelContextProtocol.AspNetCore --prerelease

 

Next, configure the MCP server in Program.cs:

var builder = WebApplication.CreateBuilder(args);

// Configure MCP based on environment
if (builder.Environment.IsDevelopment())
{
    builder.Services.AddMcpServer()
        .WithHttpTransport()  // SSE for dev testing
        .WithToolsFromAssembly();
}
else
{
    builder.Services.AddMcpServer()
        .WithStdioServerTransport()  // STDIO for production
        .WithToolsFromAssembly();
}


var app = builder.Build();


// Map the MCP endpoint
app.MapMcp();
app.Run();

 

The key decision here is the transport mechanism.  This depends on your deployment model.

During development, HTTP/SSE is easier to debug.

HTTP/SSE is ideal when your MCP server runs as a web API that multiple clients can connect to.

STDIO is used when a client like Claude Desktop spawns and manages the MCP server process directly via stdin/stdout – common for simple local-only tools.

For production deployments to Claude Desktop, STDIO is preferred.

~

Creating Your First MCP Tool

MCP tools are like function tools in the Microsoft Agent Framework.

I say this because, just like in the Agent Framework, you annotate MCP tool methods with descriptive attributes to help AI clients understand when to invoke them.

For example, here’s a documentation search tool:

using ModelContextProtocol.Server;
using System.ComponentModel;

namespace MyApp.MCP.Tools;

[McpServerToolType]
public class SearchDocs
{
    private readonly IDocumentSearchService _searchService;
    private readonly ILlmService _llmService;

    public SearchDocs(
        IDocumentSearchService searchService,
        ILlmService llmService)
    {
        _searchService = searchService;
        _llmService = llmService;
    }

    [McpServerTool]
    [Description("Searches documentation and returns a synthesised answer")]
    public async Task<string> Search(
        [Description("The natural language search query")]
        string query)
    {
        // 1. Vector search for relevant chunks
        var chunks = await _searchService.SearchAsync(query);
 
        if (!chunks.Any())
        {
            return "No relevant documentation found for your query.";
        }


        // 2. Fetch full documents
        var documents = await _searchService.GetFullDocumentsAsync(chunks);

        // 3. Synthesise answer using LLM
        var context = string.Join("\n\n", documents.Select(d => d.Content));
        var answer = await _llmService.SynthesiseAsync(query, context);

        // 4. Return with source attribution
        var sources = string.Join("\n", documents.Select(d => $"- {d.Url}"));

        return $"{answer}\n\n**Sources:**\n{sources}";
    }
}

 

The [Description] attributes are crucial. They tell the AI client what the tool does and how to use it.
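Those descriptions are exactly what a client sees when it calls tools/list. A response would look roughly like the following; the JSON-RPC 2.0 envelope and the name/description/inputSchema fields follow the MCP tool-listing convention, but the exact payload emitted by the SDK is a sketch rather than captured output:

```shell
# Sketch of a tools/list response exposing the Search tool and its descriptions
cat <<'EOF'
{"jsonrpc": "2.0", "id": 1, "result": {"tools": [
  {"name": "Search",
   "description": "Searches documentation and returns a synthesised answer",
   "inputSchema": {"type": "object", "properties": {
     "query": {"type": "string",
               "description": "The natural language search query"}}}}]}}
EOF
```

Notice that both the method-level and parameter-level [Description] strings surface here; vague descriptions mean the LLM has little to go on when deciding whether to call the tool.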

~

Testing with MCP Explorer

You can test your MCP tool using MCP Explorer on Windows.  Download it from https://mcp-explorer.com and install.

Key features include:

  • No Node.js Required: A standalone Windows application
  • Auto-Detection: Automatically discovers servers from your Claude Desktop config
  • Visual Tool Execution: Test tools with a clean UI
  • JSON-RPC Monitoring: See the raw protocol messages for debugging

 

To connect manually, enter the SSE endpoint URL http://localhost:5125/sse. Once connected, the search MCP tool is discovered from the Visual Studio instance:

You click this and supply the relevant parameter with the value Getting Started.

Clicking Execute Tool invokes the MCP tool and returns data:

MCP Explorer is useful when you want to inspect the JSON-RPC traffic between client and server.  Ideal for understanding exactly what’s happening under the hood.
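If you would rather stay on the command line, curl gives a similar smoke test. This assumes the server from earlier is listening on port 5125; because an SSE stream never ends on its own, --max-time bounds the call, and note that a timeout after receiving events will also trigger the fallback message:

```shell
# Probe the SSE endpoint; prints the stream preamble if the server is up
curl -sS --max-time 2 http://localhost:5125/sse 2>/dev/null \
  || echo "no SSE stream (is the server running on port 5125?)"
```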

~

Configuring VS Code and the Continue Extension

Once your server is running, you can connect various AI tools.  I used VS Code and the Continue extension.

To configure the Continue extension, perform the following steps:

  1. Install the Continue plugin in Code.
  2. Next create a file at ~/.continue/mcpServers/my-docs.yaml

 

Note: You may need to create the .continue and mcpServers folders if they don’t already exist.

Next, add the following content and save:

name: MCP-Demo-Tools
version: 0.0.1
schema: v1
mcpServers:
- name: mcp-docs
  type: sse
  url: http://localhost:5125/sse

 

With the configuration defined, it can now be tested.

~

Running Your MCP Server in Visual Studio and Testing with VS Code

A common development workflow is to run your MCP server in Visual Studio while consuming it from VS Code with the Continue extension.

This gives you the best of both worlds: full debugging capabilities in VS and AI-assisted coding in VS Code.

Step 1: Configure Visual Studio

Open your solution in Visual Studio 2022. Ensure your launchSettings.json has the HTTP profile configured:

{
  "profiles": {
    "http": {
      "commandName": "Project",
      "dotnetRunMessages": true,
      "launchBrowser": false,
      "applicationUrl": "http://localhost:5125",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}

 

Run the application and start debugging. You’ll see console output confirming the server is listening:

 Step 2: Configure VS Code Continue

In VS Code, install the Continue extension if you haven’t already. Ensure you have the MCP server configuration from the earlier setup, and that it contains the following content:

name: MCP-Demo-Tools
version: 0.0.1
schema: v1
mcpServers:
- name: mcp-docs
  type: sse
  url: http://localhost:5125/sse

 

Restart VS Code or reload the Continue extension for the changes to take effect.

Step 3: Configure Continue’s LLM

Continue needs its own LLM to decide when to use tools. Open Continue’s settings and configure your preferred model.

You can use OpenAI, Anthropic, or a local model via Ollama.

Example config.yaml for OpenAI:

models:
  - title: GPT-4
    provider: openai
    model: gpt-4
    apiKey: your-api-key-here

 Step 4: Verify the Connection

Back in Visual Studio, watch the Output window or console. When Continue connects, you should see requests like: Request: GET /sse and Request: POST /message

The initialize and tools/list JSON-RPC calls confirm Continue has discovered your tools.

Step 5: Use Agent Mode

In VS Code, open the Continue panel and ensure you’re in Agent Mode (not Chat mode).

The mode selector is near the input box.

Now type a prompt that should trigger your tool:

Search the documentation for authentication examples

If everything is configured correctly, you’ll see tools/call requests in Visual Studio’s console, and your breakpoints in the tool method will be hit:

The data is returned from your MCP tool in Visual Studio, back to VS Code:

We can further confirm the mock data we have in our Visual Studio MCP Server instance:

// Sample documents for demo purposes
// In production, this would query a vector database like Elasticsearch
private static readonly List<Document> _sampleDocs = new()
{
    new Document(
        "1",
        "https://docs.example.com/api/authentication",
        "Authentication API",
        "To authenticate with the API, include your API key in the Authorization header. Use the format: Authorization: Bearer YOUR_API_KEY. API keys can be generated from the dashboard."
    ),
    new Document(
        "2",
        "https://docs.example.com/tutorials/getting-started",
        "Getting Started Tutorial",
        "Welcome to our platform! This tutorial will guide you through the initial setup. First, install the SDK using npm install @example/sdk. Then, initialise the client with your credentials."
    ),
    new Document(
        "3",
        "https://docs.example.com/examples/basic-usage",
        "Basic Usage Examples",
        "Here are some common usage patterns. To create a new resource, use client.create(). To fetch existing resources, use client.get(id). To update, use client.update(id, data)."
    ),
    new Document(
        "4",
        "https://docs.example.com/reference/errors",
        "Error Reference",
        "Common error codes: 400 Bad Request - Invalid input parameters. 401 Unauthorized - Missing or invalid API key. 404 Not Found - Resource does not exist. 429 Too Many Requests - Rate limit exceeded."
    )
};

 

From the above, we can see that the content in the document with ID 2 maps to the response in VS Code.

Troubleshooting

If Continue doesn’t discover your tools:

  • Check that Visual Studio is running and the server is listening on the correct port
  • Verify the YAML file is in the correct location: C:\Users\<username>\.continue\mcpServers\
  • Ensure the URL matches exactly: http://localhost:5125/sse
  • Restart VS Code after making configuration changes
  • Check for port conflicts – only one process can bind to port 5125
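For that last point, you can check the port directly. On Linux, ss does it in one line (on Windows, netstat -ano | findstr 5125 is the equivalent):

```shell
# List any TCP listener on port 5125; if nothing matches, report the port free
ss -ltn 2>/dev/null | grep ':5125' || echo "port 5125 appears free"
```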

~

Debugging Connection Issues

When things go wrong, here are some other things you can try.

Add Request Logging

Add middleware to see what requests are hitting your server:

app.Use(async (context, next) =>
{
    Console.WriteLine($">>> Request: {context.Request.Method} {context.Request.Path}");
    await next();
});

Check for HTTPS Redirect Issues

A common gotcha is HTTPS redirection breaking SSE connections.  This caught me out at first. Disable it for development:

if (!app.Environment.IsDevelopment())
{
    app.UseHttpsRedirection();
}

Verify Port Configuration

Check your launchSettings.json to confirm the correct ports:

{
  "profiles": {
    "http": {
      "applicationUrl": "http://localhost:5125"
    },
    "https": {
      "applicationUrl": "https://localhost:7073;http://localhost:5125"
    }
  }
}

Watch for Multiple Instances

If you’re getting unexpected behaviour, check Task Manager for stray dotnet.exe processes. Multiple instances can cause port conflicts.
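On macOS/Linux the same check is scriptable (Task Manager, or tasklist | findstr dotnet, covers Windows):

```shell
# Find lingering dotnet processes that may still be holding the port
pgrep -x dotnet 2>/dev/null || echo "no stray dotnet processes found"
```

Killing the stray process (or just closing the old Visual Studio instance) frees the port for the next debug session.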

~

Tips for Triggering Tool Usage

Sometimes your tool doesn’t get called because the user’s intent isn’t detected.  This caught me out initially.

The LLM decides whether to use your tools based on your prompt, so it’s important to use precise language.  To improve tool invocation, here are some tips.

Be Explicit in Prompts

Use prompts like “search the docs for authentication examples” or “use the documentation tool to find API reference info” rather than “how do I authenticate?” (which might be answered from general knowledge).

 Use Clear Tool Descriptions

Your [Description] attributes should be specific and action-oriented. The LLM reads these to decide when to invoke your tool.

Check Agent Mode

In tools like Continue, ensure you’re in Agent mode, not Chat mode. Tools only work in Agent mode.

~

Summary

In this post, we’ve covered the fundamentals of building and debugging an MCP server.

In future posts, we’ll explore:

  • Query logging for training data collection
  • Caching with vector similarity matching
  • Authentication and authorisation for MCP endpoints
  • Deploying MCP servers to Azure

 

MCP represents a significant step forward in AI tool interoperability. By adopting this standard, you’re building integrations that will work across an expanding ecosystem of AI assistants.

~

Further Reading and Resources

Some additional resources you might find helpful.

~
