Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Rivian is building its own AI assistant

The EV maker will likely share more details at its upcoming AI & Autonomy Day, scheduled for December 11.

MCP joins the Linux Foundation: What this means for developers building the next era of AI tools and agents

Over the past year, AI development has exploded. More than 1.1 million public GitHub repositories now import an LLM SDK (+178% YoY), and developers created nearly 700,000 new AI repositories, according to this year’s Octoverse report. Agentic tools like vllm, ollama, continue, aider, ragflow, and cline are quickly becoming part of the modern developer stack.

As this ecosystem expands, we’ve seen a growing need to connect models to external tools and systems—securely, consistently, and across platforms. That’s the gap the Model Context Protocol (MCP) has rapidly filled. 

Born as an open source idea inside Anthropic, MCP grew quickly because it was open from the very beginning and designed for the community to extend, adopt, and shape together. That openness is a core reason it became one of the fastest-growing standards in the industry. That also allowed companies like GitHub and Microsoft to join in and help build out the standard.  

Now, Anthropic is donating MCP to the Agentic AI Foundation, which will be managed by the Linux Foundation, and the protocol is entering a new phase of shared stewardship. This gives developers a foundation for long-term tooling, production agents, and enterprise systems. It’s exciting for those of us who have been involved in the MCP community, and given our long-standing backing of the Linux Foundation, we are hugely supportive of this move.

The past year has seen incredible growth and change for MCP.  I thought it would be great to review how MCP got here, and what its transition to the Linux Foundation means for the next wave of AI development.

Before MCP: Fragmented APIs and brittle integrations

LLMs started as isolated systems: you sent them prompts and got responses back. We used patterns like retrieval-augmented generation (RAG) to bring in data and give the LLM more context, but that only went so far. OpenAI’s introduction of function calling was a major shift because, for the first time, the model could ask for external functions to be called on its behalf. This is what we initially built on top of as part of GitHub Copilot.
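As a rough sketch of what that provider-specific wiring looked like, here is function calling with the OpenAI Python SDK; the model name and the search_issues tool are hypothetical placeholders, not anything Copilot actually shipped:

# Illustrative sketch of pre-MCP, provider-specific function calling.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any chat model that supports tool calls
    messages=[{"role": "user", "content": "What open issues mention 'login bug'?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "search_issues",           # hypothetical tool
            "description": "Search the issue tracker for matching issues.",
            "parameters": {                    # JSON Schema for the arguments
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    }],
)

# The model replies with tool calls the caller must execute and feed back.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)

Every provider exposed some variant of this pattern, each with its own request shape, which is exactly the fragmentation described next.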

By early 2023, developers were connecting LLMs to external systems through a patchwork of incompatible APIs: bespoke extensions, IDE plugins, and platform-specific agent frameworks, among other things. Every provider had its own integration story, and none of them worked in exactly the same way. 

Nick Cooper, an OpenAI engineer and MCP steering committee member, summarized it plainly: “All the platforms had their own attempts like function calling, plugin APIs, extensions, but they just didn’t get much traction.”

This wasn’t a tooling problem. It was an architecture problem.

Connecting a model to the real-time web, a database, a ticketing system, a search index, or a CI pipeline required bespoke code that often broke with the next model update. Developers had to write deep integration glue one platform at a time.

As David Soria Parra, a senior engineer at Anthropic and one of the original architects of MCP, put it, the industry was running headfirst into an n×m integration problem with too many clients, too many systems, and no shared protocol to connect them.

In practical terms, the n×m integration problem describes a world where every model client (n) must integrate separately with every tool, service, or system developers rely on (m). This would mean five different AI clients talking to ten internal systems, resulting in fifty bespoke integrations—each with different semantics, authentication flows, and failure modes. MCP collapses this by defining a single, vendor-neutral protocol that both clients and tools can speak. With something like GitHub Copilot, where we connect to models from all of the frontier labs, we also need to connect to the hundreds of systems that the developers using Copilot rely on across their platforms. This was not just an integration challenge, but an innovation challenge.
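A quick back-of-the-envelope sketch of that collapse, using the numbers from the example above:

# Illustration of the n×m integration problem using the example above.
clients, systems = 5, 10

bespoke_integrations = clients * systems   # every client wired to every system
with_shared_protocol = clients + systems   # each side implements MCP once

print(bespoke_integrations)  # 50 point-to-point integrations to build and maintain
print(with_shared_protocol)  # 15 protocol implementations in total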

And the absence of a standard wasn’t just inefficient; it slowed real-world adoption. In regulated industries like finance, healthcare, and security, developers needed secure, auditable, cross-platform ways to let models communicate with systems. What they got instead were proprietary plugin ecosystems with unclear trust boundaries.

MCP: A protocol built for how developers work

Across the industry, including at Anthropic, GitHub, and Microsoft, engineers kept running into the same wall: reliably connecting models to context and tools. Inside Anthropic, teams noticed that their internal prototypes kept converging on similar patterns for requesting data, invoking tools, and handling long-running tasks.

Soria Parra described MCP’s origin simply: it was a way to standardize patterns Anthropic engineers were reinventing. MCP distilled those patterns into a protocol designed around communication: how models and systems talk to each other, request context, and execute tools.
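To make that concrete, here is a rough sketch of the kind of exchange the protocol standardizes for tool execution. MCP messages are JSON-RPC 2.0; they’re shown here as Python dicts for readability, and the tool and payload are hypothetical:

# A hypothetical tool call as an MCP client would send it (JSON-RPC 2.0).
tool_call_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_issues",              # a tool the server advertises
        "arguments": {"query": "login bug"},  # arguments matching its schema
    },
}

# The server's reply: typed content items the client can hand back to the model.
tool_call_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{"type": "text", "text": "3 open issues mention 'login bug'."}],
    },
}

Because every client and server agrees on this shape, the same exchange works regardless of which model or runtime sits on either end.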

Anthropic’s Jerome Swanwick recalled an early internal hackathon where “every entry was built on MCP … went viral internally.”

That early developer traction became the seed. Once Anthropic released MCP publicly alongside high-quality reference servers, we saw the value immediately, and it was clear the broader community did too. MCP offered a shared way for models to communicate with external systems, regardless of client, runtime, or vendor.

Why MCP clicked: Built for real developer workflows

When MCP launched, adoption was immediate and unlike that of any standard I have seen before.

Developers building AI-powered tools and agents had already experienced the pain MCP solved. As Microsoft’s Den Delimarsky, a principal engineer and core MCP steering committee member focused on security and OAuth, said: “It just clicked. I got the problem they were trying to solve; I got why this needs to exist.”

Within weeks, contributors from Anthropic, Microsoft, GitHub, OpenAI, and independent developers began expanding and hardening the protocol. Over the next nine months, the community added:

  • OAuth flows for secure, remote servers
  • Sampling semantics (these help ensure consistent model behavior when tools are invoked or context is requested, giving developers more predictable execution across different MCP clients)
  • Refined tool schemas (a sketch of a tool descriptor follows this list)
  • Consistent server discovery patterns
  • Expanded reference implementations
  • Improved long-running task support
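As a rough sketch of what those refined tool schemas look like in practice, here is the shape of a tool descriptor a server might return when a client asks what tools it offers (shown as a Python dict; the run_build tool is hypothetical):

# A hypothetical tool descriptor: name, description, and a JSON Schema for inputs.
tool_descriptor = {
    "name": "run_build",
    "description": "Trigger a CI build for a branch and report its status.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "branch": {"type": "string"},
            "clean": {"type": "boolean", "default": False},
        },
        "required": ["branch"],
    },
}

Because the inputs are declared up front, clients can validate arguments, generate UI, and surface errors before a model ever invokes the tool.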

Long-running task APIs are a critical feature. They allow builds, indexing operations, deployments, and other multi-minute jobs to be tracked predictably, without polling hacks or custom callback channels. This was essential for the long-running AI agent workflows that we now see today.

Delimarsky’s OAuth work also became an inflection point. Prior to it, most MCP servers ran locally, which limited usage in enterprise environments and caused installation friction. OAuth enabled remote MCP servers, unlocking secure, compliant integrations at scale. This shift is what made MCP viable for multi-machine orchestration, shared enterprise services, and non-local infrastructure.

Just as importantly, OAuth gives MCP a familiar and proven security model with no proprietary token formats or ad-hoc trust flows. That makes it significantly easier to adopt inside existing enterprise authentication stacks.

Similarly, the MCP Registry—developed in the open by the MCP community with contributions and tooling support from Anthropic, GitHub, and others—gave developers a discoverability layer and gave enterprises governance control. Toby Padilla, who leads MCP Server and Registry efforts at GitHub, described this as a way to ensure “developers can find high-quality servers, and enterprises can control what their users adopt.”

But no single company drove MCP’s trajectory. What stands out across all my conversations with the community is the sense of shared stewardship.

Cooper articulated it clearly: “I don’t meet with Anthropic, I meet with David. And I don’t meet with Google, I meet with Che.” The work was never about corporate boundaries. It was about the protocol.

This collaborative culture, reminiscent of the early days of the web, is the absolute best of open source. It’s also why, in my opinion, MCP spread so quickly.

Developer momentum: MCP enters the Octoverse

The 2025 Octoverse report, our annual deep dive into open source and public activity on GitHub, highlights an unprecedented surge in AI development:

  • 1.13M public repositories now import an LLM SDK (+178% YoY)
  • 693k new AI repositories were created this year
  • 6M+ monthly commits to AI repositories
  • Tools like vllm, ollama, continue, aider, cline, and ragflow dominated fastest-growing repos
  • Standards are emerging in real time, with MCP alone hitting 37k stars in under eight months

These signals tell a clear story: developers aren’t just experimenting with LLMs, they’re operationalizing them.

With hundreds of thousands of developers building AI agents, local runners, pipelines, and inference stacks, the ecosystem needs consistent ways to connect models to tools, services, and context.

MCP isn’t riding the wave. The protocol aligns with where developers already are and where the ecosystem is heading.

The Linux Foundation move: The protocol becomes infrastructure

As MCP adoption accelerated, the need for neutral governance became unavoidable. Openness is what drove its initial adoption, but that also demands shared stewardship—especially once multiple LLM providers, tool builders, and enterprise teams began depending on the protocol.

By transitioning governance to the Linux Foundation, Anthropic and the MCP steering committee are signaling that MCP has reached the maturity threshold of a true industry standard.

Open, vendor-neutral governance offers everyone:

1. Long-term stability

A protocol is only as strong as its longevity. The Linux Foundation’s backing reduces risk for teams adopting MCP for deep integrations.

2. Equal participation

Whether you’re a cloud provider, startup, or individual maintainer, Linux Foundation governance processes support equal contribution rights and transparent evolution.

3. Compatibility guarantees

As more clients, servers, and agent frameworks rely on MCP, compatibility becomes as important as the protocol itself.

4. The safety of an open standard

In an era where AI is increasingly part of regulated workloads, neutral governance makes MCP a safer bet for enterprises.

MCP is now on the same path as technologies like Kubernetes, SPDX, GraphQL, and the CNCF stack—critical infrastructure maintained in the open.

Taken together, this move aligns with the Agentic AI Foundation’s intention to bring together multiple model providers, platform teams, enterprise tool builders, and independent developers under a shared, neutral process. 

What MCP unlocks for developers today

Developers often ask: “What do I actually get from adopting MCP?”

Here’s the concrete value as I see it:

1. One server, many clients

Expose a tool once. Use it across multiple AI clients, agents, shells, and IDEs.

No more bespoke function-calling adapters per model provider.
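Here’s a minimal sketch of what that looks like in practice, assuming the official MCP Python SDK and a hypothetical issue-search tool:

# A minimal MCP server exposing one tool, built with the MCP Python SDK's
# FastMCP helper (pip install "mcp[cli]"). The tool itself is a placeholder.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("issue-tools")

@mcp.tool()
def search_issues(query: str, limit: int = 10) -> str:
    """Search the team's issue tracker and summarize the matches."""
    # A real implementation would call an internal API; this returns a stub.
    return f"Found 0 issues matching {query!r} (limit {limit})."

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio so any MCP-capable client can connect

Register this one server with an IDE, a CLI agent, or a chat client, and each of them can discover and call search_issues the same way.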

2. Predictable, testable tool invocation

MCP’s schemas make tool interaction debuggable and reliable, closer to API contracts than to prompt engineering.

3. A protocol for agent-native workloads

As Octoverse shows, agent workflows are moving into mainstream engineering:

  • 1M+ agent-authored pull requests via GitHub Copilot coding agent alone in the five months since it was released
  • Rapid growth of key AI projects like vllm and ragflow
  • Local inference tools exploding in popularity

Agents need structured ways to call tools and fetch context. MCP provides exactly that.

4. Secure, remote execution

OAuth and remote-server support mean MCP works for:

  • Enterprises
  • Regulated workloads
  • Multi-machine orchestration
  • Shared internal tools

5. A growing ecosystem of servers

With a growing set of community and vendor-maintained MCP servers (and more added weekly), developers can connect to:

  • Issue trackers
  • Code search and repositories
  • Observability systems
  • Internal APIs
  • Cloud services
  • Personal productivity tools

Soria Parra emphasized that MCP isn’t just for LLMs calling tools. It can also invert the workflow by letting developers use a model to understand their own complex systems.

6. It matches how developers already build software

MCP aligns with developer habits:

  • Schema-driven interfaces (JSON Schema–based)
  • Reproducible workflows
  • Containerized infrastructure
  • CI/CD environments
  • Distributed systems
  • Local-first testing

Most developers don’t want magical behavior—they want predictable systems. MCP meets that expectation.

MCP also intentionally mirrors patterns developers already know from API design, distributed systems, and standards evolution.

What happens next

The Linux Foundation announcement is the beginning of MCP’s next phase, and the move signals:

  • Broader contribution
  • More formal governance
  • Deeper integration into agent frameworks
  • Cross-platform interoperability
  • An expanding ecosystem of servers and clients

Given the global developer growth highlighted in Octoverse—36M new developers on GitHub alone this year—the industry needs shared standards for AI tooling more urgently than ever.

MCP is poised to be part of that future. It’s a stable, open protocol that lets developers build agents, tools, and workflows without vendor lock-in or proprietary extensions.

The next era of software will be shaped not just by models, but by how models interact with systems. MCP is becoming the connective tissue for that interaction.

And with its new home in the Linux Foundation, that future now belongs to the community.

Explore the MCP specification and the GitHub MCP Registry to join the community working on the next phase of the protocol.

The post MCP joins the Linux Foundation: What this means for developers building the next era of AI tools and agents appeared first on The GitHub Blog.

Model Context Protocol: how MCP went from blog post to the Linux Foundation

From: GitHub
Duration: 1:37
Views: 126

The Model Context Protocol (MCP) started as a small idea for an open way for AI models to connect to tools, systems, and developer workflows. It turned into one of the fastest-growing open standards in AI.

In this video, engineers and maintainers from Anthropic, GitHub, Microsoft, and OpenAI share how MCP “just clicked,” why openness was essential from day zero, and what its move to the Linux Foundation means for developers building agents and AI-powered tools.

Watch the story behind MCP’s momentum, community, and new home under the Agentic AI Foundation.

Full blog: https://github.blog/open-source/maintainers/mcp-joins-the-linux-foundation-what-this-means-for-developers-building-the-next-era-of-ai-tools-and-agents/

Explore MCP on GitHub: https://gh.io/MCP

Built to Scale: How a Config-Driven ETL Engine Is Powering Environmental, Social, and Governance…

Built to Scale: How a Config-Driven ETL Engine Is Powering Environmental, Social, and Governance Data Innovation

A reusable Python framework is simplifying data workflows, accelerating development, and supporting McDonald’s evolving cloud strategy.

by: Shruti Varma, Sr Manager, Data Engineering, and Sagar Dalvi, Data Engineer

Quick Bytes:

  • Traditional ETL development couldn’t keep pace with McDonald’s growing Environmental, Social, and Governance (ESG) data needs — manual coding, inconsistent performance, and limited scalability slowed progress
  • We built a reusable, config-driven ETL/ELT engine powered by Python and YAML, designed specifically for ESG data and aligned with our cloud strategy
  • Teams now develop pipelines faster, reduce manual effort, and gain flexibility across cloud environments — freeing up time for higher-impact work

Why modern data teams need more than speed
Today’s data teams need more than just fast pipelines — they need flexibility, reliability, and scalability. Robust ETL (Extract, Transform, Load) or ELT (Extract, Load, Transform) solutions must integrate diverse data sources, manage complex transformations, and deliver consistent performance. And they must do it all while staying cost-efficient and agile.

To meet these demands, we architected and built an enterprise-grade ETL framework purpose-built for Environmental, Social, and Governance (ESG) data processing. This reusable, config-driven engine is designed to accelerate development, reduce manual effort, and support McDonald’s evolving cloud strategy.

Why we needed a smarter ETL engine
Traditional ETL development often requires weeks of coding and testing, with inconsistent performance across platforms. As McDonald’s ESG data needs grew, so did the complexity of our workflows. We needed a solution that could scale, adapt, and accelerate development — without sacrificing reliability or flexibility.

That’s why our engineering teams built a reusable, config-driven ETL/ELT engine designed specifically for ESG data. By shifting to a YAML-based automation model, we’ve unlocked faster insights, reduced manual effort, and freed up engineering time for higher-impact work.

A smarter engine: Config-driven, Python-powered, and cloud ready
The ESG-ETL/ELT Engine is a Python-based framework that replaces custom code with configuration files. Teams define job logic in YAML, and the engine reads these files to perform data operations automatically. This design makes the engine ideal for handling diverse data sources, complex transformations, and multi-cloud deployments.

From config to execution: How the engine works
At the core of the engine is a simple yet powerful structure:

  • A common configuration file holds shared settings like database credentials, S3 buckets, and email servers
  • Job-specific YAML files define source connectors, transformations, and target destinations
  • Jobs are orchestrated using Airflow or similar tools
  • The Python engine reads the YAML, processes the data, and loads it into destinations like Redshift or S3

This structure allows teams to build reusable, scalable pipelines without writing new code for each job.
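As a rough illustration of that pattern, here’s what a job definition and the engine that interprets it might look like. The YAML keys, paths, and connector names are hypothetical, not the engine’s actual schema:

# Illustrative sketch of a config-driven job: a YAML definition plus a tiny
# engine loop that dispatches it. Requires PyYAML (pip install pyyaml).
import yaml

JOB_YAML = """
job_name: esg_energy_usage_daily
source:
  type: s3
  path: s3://example-bucket/esg/energy/raw/
  format: csv
transformations:
  - rename: {siteId: site_id}
  - cast: {kwh_used: float}
  - filter: "kwh_used > 0"
target:
  type: redshift
  table: esg.energy_usage_daily
"""

job = yaml.safe_load(JOB_YAML)

def run_job(job_config: dict) -> None:
    """Walk the declared steps; a real engine maps each one to a connector."""
    print(f"Running {job_config['job_name']}")
    print(f"  extract from {job_config['source']['path']}")
    for step in job_config["transformations"]:
        print(f"  apply {step}")
    print(f"  load into {job_config['target']['table']}")

run_job(job)

Adding a new pipeline then means writing a new YAML file and scheduling it in Airflow, not writing new Python.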

ESG Architecture: This diagram shows the overall structure of McDonald’s ESG data architecture, highlighting how the configurable ETL engine fits into the broader data ecosystem.
ETL/ELT Engine Architecture: This diagram shows the core structure of our Configurable ETL Engine (Component 3 in the ESG Architecture), which enables scalable, reusable data pipelines for McDonald’s ESG initiatives.

What makes this engine powerful
With this framework, teams benefit from:

  • Time savings: Teams have consistently experienced faster development with the configurable framework compared to traditional ETL methods
  • Scalability and flexibility: Supports small to complex jobs, large datasets, and varied transformation logic
  • Robust auditing and monitoring: Every job instance is tracked, logged, and easy to debug
  • Automated notifications: Teams receive proactive email alerts about job execution status
  • Cloud portability: Runs across virtual machines, containers, and orchestration platforms with minimal adjustments — future-proofing the solution and reducing vendor dependency

What the engine can do
The engine focuses on three types of operations:

Advanced data transformation capabilities
The engine’s true power lies in its ability to handle a wide spectrum of data transformation tasks with ease and consistency. Teams can seamlessly rename columns, convert data types, format dates, handle null values, trim strings, join and union datasets, perform ranking and aggregations, create derived columns, apply SQL functions and filters, and generate surrogate or hash keys. This flexible framework makes it simple to integrate new transformation logic as requirements evolve, ensuring that even complex ESG data workflows remain efficient, standardized, and easy to maintain for everyone involved.
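To give a feel for how a few of those declared transformations might be applied, here’s a small sketch using pandas. The step keys and column names are illustrative only; the engine’s actual internals aren’t public:

# Hypothetical dispatch of config-declared transformation steps with pandas.
import pandas as pd

df = pd.DataFrame({"siteId": ["A1", "B2"], "kwh_used": ["120.5", "-3"]})

steps = [
    {"rename": {"siteId": "site_id"}},   # rename columns
    {"cast": {"kwh_used": "float"}},     # convert data types
    {"filter": "kwh_used > 0"},          # drop rows that fail a condition
]

for step in steps:
    (operation, spec), = step.items()
    if operation == "rename":
        df = df.rename(columns=spec)
    elif operation == "cast":
        df = df.astype(spec)
    elif operation == "filter":
        df = df.query(spec)

print(df)  # one row remains: site_id A1 with kwh_used 120.5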

Building smarter pipelines with the YAML configuration generator
To make pipeline creation even easier, we developed a Visual YAML Configuration Generator — a web-based tool designed specifically for this engine.

Key highlights:

  • Visual form builder: Dynamic web forms let users define pipeline sources, transformations, and targets without writing raw YAML
  • Real-time preview and validation: Instantly see the generated YAML and catch mistakes before deployment
  • Reusable templates: Speed up pipeline creation with pre-built or custom templates
  • Multi-source and multi-target support: Easily configure pipelines for S3, Redshift, databases, and semi-structured files
  • Import/export: Edit existing YAML files directly in the tool and export new configurations ready for execution

Why it’s built to matter
The ESG Configurable ETL/ELT Engine empowers teams to reuse the same engine across multiple projects, enforce transformation logic standards, and reduce manual errors — all while freeing up developer time for more strategic work.

As our data and data sources continue to grow exponentially, agile and reusable frameworks like this one are key to keeping us ahead. By adopting a config-driven approach, we simplify operations, standardize pipelines, and deliver actionable insights faster than ever.


Built to Scale: How a Config-Driven ETL Engine Is Powering Environmental, Social, and Governance… was originally published in McDonald’s Technical Blog on Medium, where people are continuing the conversation by highlighting and responding to this story.

Encode What You Know With Neo: Custom Instructions and Slash Commands

Every organization builds up knowledge over time: naming standards, compliance requirements, patterns your team has settled on, and proven approaches to common tasks. Until now, bringing this knowledge into Neo meant repeating it manually each time - specifying preferences, describing how your team works, and recreating prompts that someone already perfected.

Two new features change this. Custom Instructions teach Neo your standards so it applies them automatically. Slash Commands capture proven prompts so anyone on your team can use them with a keystroke.

Custom Instructions: Standards Applied Automatically

Custom Instructions let you define what Neo should know about your organization and how it should behave. This includes naming conventions, required tags and compliance requirements, technology preferences, and cost guidelines - but also actions Neo should take automatically, like including a rough cost estimate whenever it proposes new infrastructure. You configure them once in your organization settings, and Neo applies them to every task from that point forward.

Consider the difference. Before Custom Instructions, a simple request required loading context:

“Neo, update our Lambda functions to Node 20. Remember, we use TypeScript exclusively, our naming convention is service-region-env, we always deploy to us-east-1 first for testing, and all resources need our standard compliance tags including CostCenter and DataClassification.”

With those details captured in Custom Instructions, the same request becomes:

“Neo, update our Lambda functions to Node 20.”

Neo already knows how your team works, so you can focus on what you’re trying to accomplish.

Slash Commands: Capture What Works

Over time, your team figures out the right way to ask Neo for certain tasks. Maybe someone wrote the perfect prompt for checking policy violations, or discovered an approach to drift detection that catches issues others miss. That knowledge tends to live in someone’s head or stay buried in a Slack thread.

Slash Commands turn these prompts into shortcuts anyone can use. When you type / in Neo, you’ll see the available commands; select one, and Neo receives the full prompt behind it.

Slash Commands in action

Neo ships with built-in commands for common tasks:

  • /get-started: Learn what Neo can do and how to structure effective requests
  • /policy-issues-report: Lists your most severe policy violations
  • /component-version-report: Lists components that are outdated in your private registry
  • /provider-version-report: Lists providers that are outdated

You can also create your own. In Pulumi Cloud, you define the prompt - no coding required. Once saved, your team can start using it immediately. If a command needs more information than what’s provided, Neo will ask follow-up questions to fill in the gaps.

Get Started

Custom Instructions and Slash Commands are available now. You can configure Custom Instructions in Neo Settings. Slash Commands come with several built-in options, and you can create custom ones tailored to your workflow.

Announcing Azure DevOps Server General Availability

We’re thrilled to announce that Azure DevOps Server is now generally available (GA)! This release marks the transition from the Release Candidate (RC) phase to full production readiness, delivering enterprise-grade DevOps capabilities for organizations that prefer self-hosted solutions.

You can upgrade directly from Azure DevOps Server RC or any supported version of Team Foundation Server (TFS 2015 and newer). Head over to the release notes for a complete breakdown of changes included with this release.

Note: Team Foundation Server 2015 reached the end of Extended Support on October 14, 2025. We strongly recommend upgrading to Azure DevOps Server to maintain security and compliance.

We’d love for you to install this release and provide any feedback at Developer Community.

The post Announcing Azure DevOps Server General Availability appeared first on Azure DevOps Blog.
