Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Salesforce Shelves Heroku

Salesforce is essentially shutting down Heroku as an evolving product, moving the cloud platform that helped define modern app deployment to a "sustaining engineering model" focused entirely on stability, security and support. Existing customers on credit card billing see no changes to pricing or service, but enterprise contracts are no longer available to new buyers. Salesforce said it is redirecting engineering investment toward enterprise AI.

Read more of this story at Slashdot.


RFK Jr. Has Packed an Autism Panel With Cranks and Conspiracy Theorists

Among those Robert F. Kennedy Jr. recently named to a federal autism committee are people who tout dangerous treatments and say vaccine manufacturers are “poisoning children.”

What’s New in vcpkg (Nov 2025 – Jan 2026)


This blog post summarizes changes to the vcpkg package manager as part of the 2025.12.12 and 2026.01.16 registry releases and the 2025-11-13, 2025-11-18, 2025-11-19, 2025-12-05, and 2025-12-16 tool releases. These updates include support for targeting the Xbox GDK October 2025 update, removing a misleading and outdated output message, and other minor improvements and bug fixes.

Some stats for this period:

  • There are now 2,750 total ports available in the vcpkg curated registry. A port is a versioned recipe for building a package from source, such as a C or C++ library (a minimal consumer manifest is sketched after this list).
  • 82 new ports were added to the curated registry.
  • 504 ports were updated in the December release and 584 in the January release. As always, we validate each change to a port by building all other ports that depend on, or are depended upon by, the library being updated, across our 15 main triplets.
  • 182 community contributors made commits.
  • The main vcpkg repo has over 7,300 forks and 26,600 stars on GitHub.
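
For readers new to vcpkg, the usual way to consume a port is to declare it in a project's vcpkg.json manifest. A minimal sketch (the dependency names here are just examples):

{
  "name": "my-app",
  "version": "0.1.0",
  "dependencies": [ "fmt", "zlib" ]
}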

vcpkg changelog (2025.12.12, 2026.01.16 releases)

  • Removed an outdated output message after running vcpkg upgrade that could mislead users (PR: Microsoft/vcpkg-tool#1802).
  • Updated vcpkg to understand the new layout structure and environment variables for targeting Xbox as of the October 2025 Microsoft GDK update (PR: Microsoft/vcpkg-tool#1834, thanks @walbourn!).
    • GameDKLatest is associated with the ‘old’ layouts, which exist only when optionally installed with the October 2025 GDK or when an earlier GDK is present; the October 2024 GDK and later remain in service.
    • GameDKXboxLatest is associated with the ‘new’ layouts which are always present for October 2025 or later.
  • Other minor improvements and bug fixes.

Total ports available for tested triplets

Triplet                   Ports available
x86-windows               2549
x64-windows               2678
x64-windows-release       2678
x64-windows-static        2557
x64-windows-static-md     2614
x64-uwp                   1506
arm64-windows             2304
arm64-windows-static-md   2290
arm64-uwp                 1475
arm64-osx                 2484
x64-linux                 2688
arm-neon-android          2106
x64-android               2167
arm64-android             2134

While vcpkg supports a much larger variety of target platforms and architectures (as community triplets), the list above is validated exhaustively to ensure updated ports don’t break other ports in the catalog.

Thank you to our contributors

vcpkg couldn’t be where it is today without contributions from our open-source community. Thank you for your continued support! The following people contributed to the vcpkg, vcpkg-tool, or vcpkg-docs repos in this release (listed by commit author or GitHub username):

a-alomran Christopher Lee jreichel-nvidia Richard Powell
Aaron van Geffen Chuck Walbourn Kadir Rimas Misevičius
Aditya Rao Colden Cullen Kai Blaschke RobbertProost
Adrien Bourdeaux Connor Broyles Kai Pastor Rok Mandeljc
Ajadaz CQ_Undefine Kaito Udagawa RPeschke
Alan Jowett Craig Edwards kedixa Saikari
Alan Tse Crindzebra Sjimo Kevin Ring Scranoid
albertony cuihairu Kiran Chanda Sean Farrell
Aleks Tuchkov Dalton Messmer Kyle Benesch Seth Flynn
Aleksandr Orefkov Daniel Collins kzhdev shixiong2333
Aleksi Sapon David Fiedler Laurent Rineau Silvio Traversaro
Alex Emirov deadlightreal LE GARREC Vincent Simone Gasparini
Alexander Neumann Dennis lemourin Sina Behmanesh
Alexis La Goutte Dr. Patrick Urbanke lithrad Stephen Webb
Alexis Placet Dzmitry Baryshau llm96 Steven
Allan Hanan eao197 Lukas Berbuer SunBlack
Anders Wind Egor Tyuvaev Lukas Schwerdtfeger Sylvain Doremus
Andre Nguyen Ethan J. Musser Marcel Koch Szabolcs Horvát
Andrew Kaster Eviral Martin Moene Takatoshi Kondo
Andrew Tribick Fidel Yin Matheus Gomes talregev
Ankur Verma freshthinking matlabbe Theodore Tsirpanis
Argentoz Fyodor Krasnov Matthias Kuhn Thomas Arcila
Attila Kovacs galabovaa Michael Hansen Thomas1664
autoantwort GioGio Michele Caini TLescoatTFX
ayeteadoe Giuseppe Roberti Mikhail Titov Tobias Markus
Ayush Acharjya Glyn Matthews miyan Toby
Barak Shoshany Gordon Smith miyanyan toge
Benno Waldhauer hehanjing Morcules Tom Conder
Bernard Teo Hiroaki Yutani myd7349 Tom M.
Bertin Balouki SIMYELI Hoshi Mzying2001 Tom Tan
bjovanovic84 huangqinjin Nick D’Ademo Tommy-Xavier Robillard
blavallee i-curve Nikita UlrichBerndBecker
bwedding Igor Kostenko Osyotr Vallabh Mahajan
Byoungchan Lee ihsan demir PARK DongHa Vincent Le Garrec
Cappecasper03 Ioannis Makris pastdue Vitalii Koshura
Carson Radtke Ivan Maidanski Pasukhin Dmitry Vladimir Shaleev
cDc Jaap Aarts Patrick Colis Waldemar Kornewald
Charles Cabergs JacobBarthelmeh Paul Lemire Wentsing Nee
Charles Dang James Grant Pavel Kisliak wentywenty
Charles Karney Janek Bevendorff Pedro López-Cabanillas xavier2k6
chausner Jeremy Dumais Raul Metsma ycdev1
chenjunfu2 Jesper Stemann Andersen RealChuan Yunze Xu
Chris Birkhold Jinwoo Sung RealTimeChris Yury Bura
Chris Leishman JoergAtGithub Rémy Tassoux zuhair-naqvi
Chris Sarbora John Wason Riccardo Ressi
Chris W Jonatan Nevo Richard Barnes

Learn more

You can find the main release notes on GitHub. Recent updates to the vcpkg tool can be viewed on the vcpkg-tool Releases page. To contribute to vcpkg documentation, visit the vcpkg-docs repo. If you’re new to vcpkg or curious about how a package manager can make your life easier as a C/C++ developer, check out the vcpkg website – vcpkg.io.

If you would like to contribute to vcpkg and its library catalog, or want to give us feedback on anything, check out our GitHub repo. Please report bugs or request updates to ports in our issue tracker or join more general discussion in our discussion forum.

The post What’s New in vcpkg (Nov 2025 – Jan 2026) appeared first on C++ Team Blog.


Building an AI Skills Executor in .NET: Bringing Anthropic’s Agent Pattern to the Microsoft Ecosystem


When Anthropic released their Agent Skills framework, they published a blueprint for how enterprise organizations should structure AI agent capabilities. The pattern is straightforward: package procedural knowledge into composable skills that AI agents can discover and apply contextually. Microsoft, OpenAI, Cursor, and others have already adopted the standard, making skills portable across the AI ecosystem.

But here’s the challenge for .NET shops: most implementations assume Python or TypeScript. If your organization runs on the Microsoft stack, you need an implementation that speaks C#.

This article walks through building a proof-of-concept AI Skills Executor in .NET 10 that combines Azure AI Foundry for LLM capabilities with the official MCP C# SDK for tool execution. I want to be upfront about something: what I’m showing here is a starting point, not a production-ready framework. The goal is to demonstrate the pattern and the key integration points so you can evaluate whether this approach makes sense for your organization, and then build something more robust on top of it.

The complete working code is available on GitHub if you want to follow along.

The Scenario: Why You’d Build This

Before we get into any code, I want to ground this in a real problem. Otherwise, every pattern looks like a solution searching for a question.

Imagine you’re the engineering lead at a mid-size financial services firm. Your team manages about forty .NET microservices across Azure. You’ve got a mature CI/CD pipeline, established coding standards, and a healthy backlog of technical debt that nobody has time to address. Sound familiar?

Your developers are already using AI assistants like GitHub Copilot and Claude to write code faster. That’s great. But you keep running into the same frustrations. A junior developer asks the AI to set up a new microservice, and it generates a project structure that doesn’t match your organization’s conventions. A senior developer crafts a detailed prompt for your specific deployment pipeline, shares it in a Slack thread, and within a month there are fifteen variations floating around with no way to standardize or improve any of them. Your architecture review board has patterns they want enforced, but those patterns live in a Confluence wiki that no AI assistant knows about.

This is the problem skills solve. Instead of every developer independently teaching their AI assistant how your organization works, you encode that knowledge once into a skill. A “New Service Scaffolding” skill that knows your project structure, your required NuGet packages, your logging conventions, and your deployment configuration. A “Code Review” skill that checks against your actual standards, not generic best practices. A “Tech Debt Assessment” skill that can scan a repo and produce a prioritized report using your team’s severity criteria.

The Skills Executor is the engine that makes these skills operational. It loads the right skill, connects it to an LLM via Azure AI Foundry, gives the LLM access to tools through MCP servers, and runs the agentic loop until the job is done. Keep this financial services scenario in mind as we walk through the architecture. Every component maps back to making this kind of organizational knowledge usable.

Where Azure AI Foundry Fits

If you’ve been tracking Microsoft’s AI platform evolution, you know that Azure AI Foundry (recently rebranded as Microsoft Foundry) has become the unified control plane for enterprise AI. The reason it matters for a skills executor is that it gives you a single endpoint for model access, agent management, evaluation, and observability, all under one roof with enterprise-grade security.

For this project, Foundry provides two things we need. First, it’s the gateway to Azure OpenAI models with function calling support, which is what drives the agentic loop at the core of the executor. You deploy a model like GPT-4.1 to your Foundry project, and the executor calls it through the Azure OpenAI SDK using your Foundry endpoint. Second, as you mature beyond this proof of concept, Foundry gives you built-in evaluation, red teaming, and monitoring capabilities that you’d otherwise have to build from scratch. That path from prototype to production is a lot shorter when your orchestration layer already speaks Foundry’s language.

The Azure AI Foundry .NET SDK (currently at version 1.2.0-beta.1) provides the Azure.AI.Projects client library for connecting to a Foundry project endpoint. In our executor, we use the Azure.AI.OpenAI package to interact with models deployed through Foundry, which means the integration is mostly about pointing your OpenAI client at your Foundry-provisioned endpoint instead of a standalone Azure OpenAI resource.
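
To make that concrete, here is a minimal sketch of pointing the OpenAI client at a Foundry-provisioned endpoint; the endpoint URL and deployment name are placeholders, not values from the repo:

using Azure.AI.OpenAI;
using Azure.Identity;
using OpenAI.Chat;

// The endpoint comes from your Foundry project (placeholder URL).
var client = new AzureOpenAIClient(
    new Uri("https://my-foundry-resource.openai.azure.com/"),
    new DefaultAzureCredential());

// "gpt-4.1" is whatever deployment name you chose in Foundry.
ChatClient chat = client.GetChatClient("gpt-4.1");
ChatCompletion completion = await chat.CompleteChatAsync("Hello from the Skills Executor");
Console.WriteLine(completion.Content[0].Text);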

Understanding the Architecture

The Skills Executor has four cooperating components. The Skill Loader discovers and parses SKILL.md files from a configured directory, pulling metadata from YAML frontmatter and instructions from the markdown body. The Azure OpenAI Service handles all LLM interactions through your Foundry-provisioned endpoint, including chat completions with function calling. The MCP Client Service connects to one or more MCP servers, discovers their available tools, and routes execution requests. And the Skill Executor itself orchestrates the agentic loop: taking user input, managing the conversation with the LLM, executing tool calls when requested, and returning final responses.

Figure: The Skills Executor architecture showing how skills, Azure OpenAI (via Foundry), and MCP servers work together.

The important design decision here is that the orchestrator contains zero business logic about when to use specific tools. It provides the LLM with available tools, executes whatever the LLM requests, feeds results back, and repeats until the LLM produces a final response. All the intelligence comes from the skill’s instructions guiding the LLM’s decisions. This is what makes the pattern composable. Swap the skill, and the same executor does completely different work.

Setting Up the Project

The solution uses three projects targeting .NET 10 (the current LTS release):

dotnet new sln -n SkillsQuickstart
dotnet new classlib -n SkillsCore -f net10.0
dotnet new console -n SkillsQuickstart -f net10.0
dotnet new console -n SkillsMcpServer -f net10.0

dotnet sln add SkillsCore
dotnet sln add SkillsQuickstart
dotnet sln add SkillsMcpServer

The key NuGet packages for the orchestrator are Azure.AI.OpenAI for LLM interactions through your Foundry endpoint and ModelContextProtocol (installed with the --prerelease flag) for MCP client/server capabilities. The MCP C# SDK is maintained by Microsoft in partnership with Anthropic and is currently working toward its 1.0 stable release, so the prerelease flag is still needed. For the MCP server project, you just need the ModelContextProtocol and Microsoft.Extensions.Hosting packages.
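
Assuming the project layout above, the package installs look like this (YamlDotNet is used later by the skill loader):

dotnet add SkillsQuickstart package Azure.AI.OpenAI
dotnet add SkillsQuickstart package ModelContextProtocol --prerelease
dotnet add SkillsCore package YamlDotNet
dotnet add SkillsMcpServer package ModelContextProtocol --prerelease
dotnet add SkillsMcpServer package Microsoft.Extensions.Hosting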

Skills as Markdown Files

A skill is a folder containing a SKILL.md file with YAML frontmatter for metadata and markdown body for instructions. Think back to our financial services scenario. Here’s what a tech debt assessment skill might look like:

---
name: Tech Debt Assessor
description: Scans codebases and produces prioritized tech debt reports.
version: 1.0.0
author: Platform Engineering
category: quality
tags:
  - tech-debt
  - analysis
  - reporting
---

# Tech Debt Assessor

You are a technical debt analyst for a .NET microservices environment.
Your job is to scan a codebase and produce a prioritized assessment.

## Severity Framework

- **Critical**: Security vulnerabilities, deprecated APIs with known exploits
- **High**: Missing test coverage on business-critical paths, outdated packages
  with available patches
- **Medium**: Code style violations, TODO/FIXME accumulation, copy-paste patterns
- **Low**: Documentation gaps, naming convention inconsistencies

## Workflow

1. Use analyze_directory to understand the project structure
2. Use count_lines to gauge project scale by language
3. Use find_patterns to locate TODO, FIXME, HACK, and BUG markers
4. Synthesize findings into a report organized by severity

**ALWAYS use tools to gather real data. Do not guess about the codebase.**

Notice how the skill encodes your organization’s specific severity framework. A generic AI assistant would apply some default notion of tech debt priority. This skill applies yours. And because it’s just a markdown file in a git repo, your architecture review board can review changes to it the same way they review code.

The Skill Loader parses these files by splitting the YAML frontmatter from the markdown body. The implementation uses YamlDotNet for deserialization and a simple string-splitting approach for frontmatter extraction. I won’t paste the full loader code here since it’s fairly standard file I/O and YAML parsing. You can see the complete implementation in the GitHub repository, but the core idea is that each SKILL.md file becomes a SkillDefinition object with metadata properties (name, description, tags) and an Instructions property containing the full markdown body.
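
As a rough sketch of the shapes involved (property names follow the article's description, not necessarily the repo's exact code):

using YamlDotNet.Serialization;
using YamlDotNet.Serialization.NamingConventions;

public sealed class SkillDefinition
{
    public string Name { get; set; } = "";
    public string Description { get; set; } = "";
    public List<string> Tags { get; set; } = new();

    [YamlIgnore]
    public string Instructions { get; set; } = "";

    // Split "---\n<yaml frontmatter>\n---\n<markdown body>" into metadata + instructions.
    public static SkillDefinition Parse(string skillMd)
    {
        var parts = skillMd.Split("---", 3, StringSplitOptions.TrimEntries);
        var deserializer = new DeserializerBuilder()
            .WithNamingConvention(CamelCaseNamingConvention.Instance)
            .IgnoreUnmatchedProperties() // skip frontmatter keys we don't model here
            .Build();
        var skill = deserializer.Deserialize<SkillDefinition>(parts[1]);
        skill.Instructions = parts[2];
        return skill;
    }
}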

Connecting to MCP Servers

The MCP Client Service manages connections to MCP servers, discovers their tools, and routes execution requests. The core flow is: connect to each configured server using StdioClientTransport, call ListToolsAsync() to discover available tools, and maintain a lookup dictionary mapping tool names to the client that owns them.

When the executor needs to call a tool, it looks up the tool name in the dictionary and routes the call to the right MCP server via CallToolAsync(). This means you can have multiple MCP servers, each with different tools. A custom server with your internal tools, a GitHub MCP server for repository operations, a filesystem server for file access. The executor doesn’t care where a tool lives.
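
In sketch form, connection and discovery with the prerelease MCP C# SDK look roughly like this (the command and arguments mirror the configuration below; details may shift before the SDK's 1.0 release):

using ModelContextProtocol.Client;

var transport = new StdioClientTransport(new StdioClientTransportOptions
{
    Name = "skills-mcp-server",
    Command = "dotnet",
    Arguments = ["run", "--project", "../SkillsMcpServer"],
});

var client = await McpClientFactory.CreateAsync(transport);

// Build the tool-name -> owning-client lookup the executor routes through.
var toolOwners = new Dictionary<string, IMcpClient>();
foreach (var tool in await client.ListToolsAsync())
    toolOwners[tool.Name] = client;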

Server configuration lives in appsettings.json:

{
  "McpServers": {
    "Servers": [
      {
        "Name": "skills-mcp-server",
        "Command": "dotnet",
        "Arguments": ["run", "--project", "../SkillsMcpServer"],
        "Enabled": true
      },
      {
        "Name": "github-mcp-server",
        "Command": "npx",
        "Arguments": ["-y", "@modelcontextprotocol/server-github"],
        "Environment": {
          "GITHUB_PERSONAL_ACCESS_TOKEN": ""
        },
        "Enabled": true
      }
    ]
  }
}

The empty GITHUB_PERSONAL_ACCESS_TOKEN is intentional. The service resolves empty environment values from .NET User Secrets at runtime, keeping sensitive tokens out of source control.
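
The resolution itself can be a thin pass over configuration at startup; a sketch of the idea (the helper is hypothetical, and the real service lives in the repo):

using Microsoft.Extensions.Configuration;

var config = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json")
    .AddUserSecrets<Program>() // set via: dotnet user-secrets set GITHUB_PERSONAL_ACCESS_TOKEN <token>
    .Build();

// Hypothetical helper: fill an empty environment value from secrets/config.
static string Resolve(IConfiguration config, string key, string value) =>
    string.IsNullOrEmpty(value) ? config[key] ?? "" : value;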

The Agentic Loop

This is the core of the executor, and it’s where the pattern earns its keep. The agentic loop is the conversation cycle between the user, the LLM, and the available tools. Here’s the essential logic, stripped of error handling and logging:

public async Task<SkillResult> ExecuteAsync(SkillDefinition skill, string userInput)
{
    var messages = new List<ChatMessage>
    {
        new SystemChatMessage(skill.Instructions ?? "You are a helpful assistant."),
        new UserChatMessage(userInput)
    };

    var tools = BuildToolDefinitions(_mcpClient.GetAllTools());
    var iterations = 0;

    while (iterations++ < MaxIterations)
    {
        var response = await _openAI.GetCompletionAsync(messages, tools);
        messages.Add(new AssistantChatMessage(response));

        var toolCalls = response.ToolCalls;
        if (toolCalls == null || toolCalls.Count == 0)
        {
            // No tool calls means the LLM has produced its final answer
            return new SkillResult
            {
                Response = response.Content.FirstOrDefault()?.Text ?? "",
                ToolCallCount = iterations - 1
            };
        }

        // Execute each requested tool and feed results back
        foreach (var toolCall in toolCalls)
        {
            var args = JsonSerializer.Deserialize<Dictionary<string, object?>>(
                toolCall.FunctionArguments.ToString()); // FunctionArguments is BinaryData
            var result = await _mcpClient.ExecuteToolAsync(toolCall.FunctionName, args);
            messages.Add(new ToolChatMessage(toolCall.Id, result));
        }
    }

    throw new InvalidOperationException("Max iterations exceeded");
}

The loop keeps going until the LLM responds without requesting any tool calls (meaning it’s done) or until a safety limit on iterations is reached. Each tool result gets added to the conversation history, so the LLM has full context of what it’s discovered.
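
Wiring it together then looks something like this (hypothetical glue code; constructing the loader, MCP client, and executor is elided):

var skill = skillLoader.Load("skills/tech-debt-assessor");
var result = await executor.ExecuteAsync(
    skill, @"Assess the tech debt in our OrderService at C:\repos\order-service.");
Console.WriteLine(result.Response);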

Let’s trace through our financial services scenario. A developer selects the Tech Debt Assessor skill and asks “Assess the tech debt in our OrderService at C:\repos\order-service.” The executor loads the skill’s instructions as the system prompt, sends the request to Azure OpenAI through Foundry with the available MCP tools, and the LLM (guided by the skill’s workflow) starts calling tools. First analyze_directory to understand the project structure, then count_lines for scale metrics, then find_patterns to locate debt markers. After each tool call, the results come back into the conversation, and the LLM decides what to do next. Eventually, it synthesizes everything into a severity-prioritized report using your organization’s framework.

The BuildToolDefinitions method bridges MCP and Azure OpenAI by converting MCP tool schemas into ChatTool function definitions. It’s a one-liner per tool using ChatTool.CreateFunctionTool(), mapping the tool’s name, description, and JSON schema.
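
That conversion, in sketch form, assuming the MCP tool exposes its schema as a JsonElement:

using ModelContextProtocol.Client;
using OpenAI.Chat;

static ChatTool ToChatTool(McpClientTool tool) =>
    ChatTool.CreateFunctionTool(
        functionName: tool.Name,
        functionDescription: tool.Description,
        functionParameters: BinaryData.FromString(tool.JsonSchema.GetRawText()));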

Building Custom MCP Tools

The MCP C# SDK makes exposing custom tools simple. You create a class with methods decorated with the [McpServerTool] attribute, and the SDK handles discovery and protocol communication:

using System.ComponentModel;
using ModelContextProtocol.Server;

[McpServerToolType]
public static class ProjectAnalysisTools
{
    [McpServerTool, Description("Analyzes a directory structure and returns a tree view")]
    public static string AnalyzeDirectory(
        [Description("Path to the directory to analyze")] string path,
        [Description("Maximum depth to traverse")] int maxDepth = 3)
    {
        // Walk the directory tree, return a formatted string representation
        // Full implementation in the GitHub repo
    }

    [McpServerTool, Description("Counts lines of code by file extension")]
    public static string CountLines(
        [Description("Path to the directory to analyze")] string path,
        [Description("File extensions to include (e.g., .cs,.js)")] string? extensions = null)
    {
        // Enumerate files, count lines per extension, return summary
    }

    [McpServerTool, Description("Finds TODO, FIXME, and HACK comments in code")]
    public static string FindPatterns(
        [Description("Path to the directory to search")] string path)
    {
        // Scan files for debt markers, return locations and context
    }
}

The server’s Program.cs is minimal. Just a few lines to register the MCP server with stdio transport and auto-discover tools from the assembly:

using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

var builder = Host.CreateApplicationBuilder(args);

builder.Services
    .AddMcpServer()
    .WithStdioServerTransport()
    .WithToolsFromAssembly();

await builder.Build().RunAsync();

When the Skills Executor starts your MCP server, the SDK automatically discovers all [McpServerTool] methods and exposes them through the protocol. Any MCP-compatible client can use these tools, not just your executor. That’s the portability of the standard at work.

Three Skills, Three Patterns

The architecture supports different tool-usage patterns depending on what the skill needs. Back to our financial services firm:

The Code Explainer skill uses no tools at all. A developer pastes in a complex LINQ query from the legacy monolith, and the skill relies entirely on the LLM’s reasoning to explain what it does. No tool calls needed. The skill instructions just tell the LLM to start with a high-level summary, walk through the code step by step, and flag any design decisions worth discussing.
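
Its SKILL.md can be this small (a hypothetical sketch, not the repo's exact file):

---
name: Code Explainer
description: Explains code snippets in plain language.
version: 1.0.0
category: comprehension
---

# Code Explainer

You are a senior .NET engineer explaining code to a colleague. Start with a
high-level summary, then walk through the code step by step, and flag any
design decisions worth discussing. Do not call tools; reason only from the
snippet provided.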

The Tech Debt Assessor from our earlier example uses custom MCP tools. It can’t just reason about a codebase in the abstract. It needs to actually inspect the file structure, count lines, and find patterns. The skill instructions lay out a specific workflow and explicitly tell the LLM to always use tools rather than guessing.

The GitHub Assistant skill uses the external GitHub MCP server. When a developer asks “What open issues are tagged as P0 in the order-service repo?”, the skill maps that to the GitHub MCP server’s list_issues tool. The skill instructions explain which tools are available and how to translate user requests into tool calls.

The key thing to notice: the executor code is identical across all three cases. The only thing that changes is the SKILL.md file. That’s the whole point. Swap the skill, swap the behavior.

What This Architecture Gives You

I keep coming back to the question of “why.” Why build a custom executor when you could just use Claude or Copilot directly? Three reasons stand out for enterprise teams.

Standardization without rigidity. Skills let you standardize how AI performs common tasks without hardcoding business logic into application code. When your code review standards change, you update the SKILL.md file, not the orchestrator. Domain experts can write skills without understanding the execution infrastructure. Platform teams can enhance the executor without touching individual skills.

Tool reusability across contexts. MCP servers expose tools that any skill can use. The project analysis tools work whether invoked by a tech debt assessor, a documentation generator, or a migration planner. You build the tools once and compose them differently through skills.

Ecosystem portability. Because skills follow Anthropic’s open standard, they work in VS Code, GitHub Copilot, Claude, and any other tool that supports the format. Skills you create for this executor also work in those environments. Your investment compounds across your development toolchain rather than getting locked into one vendor.

What This Doesn’t Give You (Yet)

I want to be honest about the gaps, because shipping something like this to production would require more work.

There’s no authentication or authorization layer. In a real deployment, you’d want to control which users can access which skills and which tools. There’s no retry logic or circuit-breaking on MCP server connections. The error handling is minimal. There’s no telemetry or observability beyond basic console output, though Azure AI Foundry’s built-in monitoring would help close that gap as you mature the solution. There’s no skill chaining (one skill invoking another), no versioning strategy for skill updates, and no caching of skill metadata.

Think of this as the architectural proof that the pattern works in .NET. The production hardening is a separate effort, and it’ll look different depending on your organization’s requirements.

Where to Go From Here

If the pattern resonates, here’s how I’d suggest approaching it. Start by identifying two or three repetitive tasks your team does that involve organizational knowledge an AI assistant wouldn’t have on its own. Write those as SKILL.md files. Get the executor running locally and test whether the skills actually produce useful output with your real codebases and workflows.

From there, the natural extension points are: skill chaining to allow complex multi-step workflows, a centralized skill registry so teams across your organization can share and discover skills, and observability hooks that feed into Azure AI Foundry’s evaluation and monitoring capabilities. If you’re running Foundry Agents, there’s also a path to wrapping the executor as a Foundry agent that can be managed through the Foundry portal.

The real value isn’t in the executor code itself. It’s in the skills your organization creates. Every procedure, standard, and piece of institutional knowledge that you encode into a skill is one less thing that lives only in someone’s head or a Slack thread.

Get the Code

The complete implementation is available on GitHub at github.com/MCKRUZ/DotNetSkills. Clone the repository to explore the full source, run the examples, or use it as a foundation for building your own skills executor. All the plumbing code I skipped in this article (the full Skill Loader, the MCP Client Service, the Azure OpenAI Service wrapper) is there and commented.

The post Building an AI Skills Executor in .NET: Bringing Anthropic’s Agent Pattern to the Microsoft Ecosystem appeared first on Microsoft Foundry Blog.


Blazor Community Standup: ASP.NET Core & Blazor Roadmap for .NET 11

From: dotnet

Join us for a walkthrough of the ASP.NET Core & Blazor roadmap for .NET 11. We’ll discuss the expected improvements for this release and share progress on current work.

🔗 Links: https://www.theurlist.com/aspnet-standup-20260210

🎙️ Featuring: Daniel Roth, Javier Calvarro Nelson

#dotnet #aspnetcore #blazor


Opus 4.6 and ChatGPT 5.3-Codex Are Here and the Labs Are at War

From: AIDailyBrief
Duration: 15:23
Views: 472

Anthropic dropped Claude Opus 4.6 and OpenAI responded with GPT 5.3 Codex just 20 minutes later — the most intense head-to-head model release we've ever seen. Here's what each model brings, how they compare, and what the first reactions are telling us.

The AI Daily Brief helps you understand the most important news and discussions in AI.
Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Get it ad free at http://patreon.com/aidailybrief
Learn more about the show https://aidailybrief.ai/
