
Connect AI Agents to WordPress.com with OAuth 2.1 and MCP


In October, we announced that WordPress.com now supports MCP (Model Context Protocol), enabling AI agents to interact with your sites.

Today, WordPress.com supports OAuth 2.1, making MCP integrations simpler. 

MCP clients work natively with OAuth 2.1, so authorizing the AI tools you already use is as straightforward as adding a URL and approving access — no workarounds or manual configuration required.

With MCP, AI agents can help with everyday tasks on your WordPress.com site, such as finding posts, pulling site details, or drafting new content, while you control what they can access.

How OAuth 2.1 powers MCP integrations

When an AI assistant (like Claude Desktop, ChatGPT, or a custom AI tool) wants to access your WordPress.com content, OAuth 2.1 now handles the secure connection:

  1. The MCP client requests authorization.
  2. You’re redirected to WordPress.com to approve the connection.
  3. After approval, the client receives secure tokens.
  4. The client uses those tokens to access the WordPress.com MCP.
  5. Tokens refresh automatically as needed.

All of this is protected by PKCE (Proof Key for Code Exchange). 

Even if someone intercepts the authorization code, they can’t use it without the secret verification code that stays on your device.
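The PKCE handshake described above can be sketched in a few lines of Python. This is a minimal illustration of RFC 7636's S256 method; the `client_id` and redirect URI are placeholders, so check the WordPress.com developer documentation for the real registration values:

```python
import base64
import hashlib
import secrets
from urllib.parse import urlencode

def make_pkce_pair() -> tuple[str, str]:
    """Generate a PKCE code_verifier and its S256 code_challenge (RFC 7636)."""
    # 32 random bytes -> 43-character base64url verifier (padding stripped)
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()

# Build the authorization URL; only the challenge leaves the device,
# the verifier is kept locally and sent later with the token request.
params = urlencode({
    "response_type": "code",
    "client_id": "YOUR_CLIENT_ID",       # placeholder
    "redirect_uri": "http://localhost:8765/callback",  # placeholder
    "code_challenge": challenge,
    "code_challenge_method": "S256",
})
auth_url = f"https://public-api.wordpress.com/oauth2/authorize?{params}"
print(auth_url)
```

Because only the hashed challenge travels with the authorization request, an intercepted code is useless without the verifier that never left your machine.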

Simple setup: Just add a URL

WordPress.com provides an MCP server that AI tools can connect to using OAuth 2.1.

All you’ll need to do is:

  • Create a custom connector or app in your AI tool
  • Add the WordPress.com MCP server URL
  • Authenticate and approve access through WordPress.com using OAuth 2.1

Figure: The Apps & Connectors settings page.

That’s it. 

WordPress.com handles authentication and permissions, so there’s no manual credential setup and no passwords to share.

Tip: View the MCP connection guide in the developer documentation for instructions specific to Claude Desktop and ChatGPT.

What MCP clients can do with WordPress.com

Once authenticated, MCP clients can interact with your WordPress.com sites through the MCP API:

  • Search and retrieve posts: Find content across your sites
  • Read post details: Access full post content, metadata, and comments
  • Access site information: Get site settings, statistics, and user data

All of this happens with the permissions you’ve explicitly granted, and you can revoke MCP access at any time from your WordPress.com MCP settings.

Get started with MCP and OAuth 2.1

OAuth 2.1 is available now for all AI agents to connect to WordPress.com. 

Whether you’re building a custom integration or using existing MCP-compatible AI tools, it provides the secure authentication foundation for your work.

If you haven’t already, enable MCP on your WordPress.com account to start connecting your AI assistants.


Share Your Feedback

We’d love to hear how you’re using OAuth 2.1 and MCP with WordPress.com. Have questions or suggestions? Drop a comment below or share your experience in the developer forums.





Read the whole story
alvinashcraft
33 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Using MCP Servers: From Quick Tools to Multi-Agent Systems


Model Context Protocol (MCP) servers are a spec for exposing tools, models, or services to language models through a common interface. Think of them as smart adapters: they sit between a tool and the LLM, speaking a predictable protocol that lets the model interact with things like APIs, databases, and agents without needing to know implementation details.

But like most good ideas, the devil’s in the details.

The Promise—and the Problems of Running MCP Servers

Running an MCP server sounds simple: spin up a Python or Node server that exposes your tool. Done, right? Not quite.

You run into problems fast:

  • Runtime friction: If an MCP is written in Python, your environment needs Python (plus dependencies, plus maybe a virtualenv strategy, plus maybe GPU drivers). Same goes for Node. This multiplies fast when you’re managing many MCPs or deploying them across teams.
  • Secrets management: MCPs often need credentials (API keys, tokens, etc.). You need a secure way to store and inject those secrets into your MCP runtime. That gets tricky when different teams, tools, or clouds are involved.
  • N×N integration pain: Let’s say you’ve got three clients that want to consume MCPs, and five MCPs to serve up. Now you’re looking at 15 individual integrations. No thanks.

To make MCPs practical, you need to solve these three core problems: runtime complexity, secret injection, and client-to-server wiring. 

If you’re wondering where I’m going with all this, take a look at those problems. We already have a technology that has been used by developers for over a decade that helps solve them: Docker containers.

In the rest of this blog I’ll walk through three different approaches, going from least complex to most complex, for integrating MCP servers into your developer experience. 

Option 1 — Docker MCP Toolkit & Catalog

For the developer who already uses containers and wants a low-friction way to start with MCP.

If you’re already comfortable with Docker but just getting your feet wet with MCP, this is the sweet spot. In the raw MCP world, you’d clone Python/Node servers, manage runtimes, inject secrets yourself, and hand-wire connections to every client. That’s exactly the pain Docker’s MCP ecosystem set out to solve.

Docker’s MCP Catalog is a curated, containerized registry of MCP servers. Each entry is a prebuilt container with everything you need to run the MCP server. 

The MCP Toolkit (available via Docker Desktop) is your control panel: search the catalog, launch servers with secure defaults, and connect them to clients.

How it helps:

  • No language runtimes to install
  • Built-in secrets management
  • One-click enablement via Docker Desktop
  • Easily wire the MCPs to your existing agents (Claude Desktop, Copilot in VS Code, etc)
  • Centralized access via the MCP Gateway

Figure 1: Docker MCP Catalog: Browse hundreds of MCP servers with filters for local or remote and clear distinctions between official and community servers.

A Note on the MCP Gateway
One important piece working behind the scenes in both the MCP Toolkit and cagent (a framework for easily building multi-agent applications that we cover below) is the MCP Gateway, an open-source project from Docker that acts as a centralized frontend for all your MCP servers. Whether you’re using a GUI to start containers or defining agents in YAML, the Gateway handles all the routing, authentication, and translation between clients and tools. It also exposes a single endpoint that custom apps or agent frameworks can call directly, making it a clean bridge between GUI-based workflows and programmatic agent development.

Moving on: Using MCP servers alongside existing AI agents is often the first step for many developers. You wire up a couple of tools, maybe connect to a calendar or a search API, and use them in something like Claude, ChatGPT, or a small custom agent. For step-by-step tutorials on how to automate dev workflows with Docker’s MCP Catalog and Toolkit with popular clients, check out these guides on ChatGPT, Claude Desktop, Codex, Gemini CLI, and Claude Code.

Once that pattern clicks, the next logical step is to use those same MCP servers as tools inside a multi-agent system.

Option 2 — cagent: Declarative Multi-Agent Apps

For the developer who wants to build custom multi-agent applications but isn’t steeped in traditional agentic frameworks.

If you’re past simple MCP servers and want agents that can delegate, coordinate, and reason together, cagent is your next step. It’s Docker’s open-source, YAML-first framework for defining and running multi-agent systems—without needing to dive into complex agent SDKs or LLM loop logic.

cagent lets you describe:

  • The agents themselves (model, role, instructions)
  • Who delegates to whom
  • What tools each agent can access (via MCP or local capabilities)

Below is an example of a pirate-flavored chatbot:

agents:
  root:
    description: An agent that talks like a pirate
    instruction: Always answer by talking like a pirate.
    welcome_message: |
      Ahoy! I be yer pirate guide, ready to set sail on the seas o' knowledge! What be yer quest? 
    model: auto


Save this as agents.yaml and run:

cagent run agents.yaml

You don’t write orchestration code. You describe what you want, and cagent runs the system.

Why it works:

  • Tools are scoped per agent
  • Delegation is explicit
  • Uses the MCP Gateway behind the scenes
  • Ideal for building agent systems without writing Python

If you’d like to give cagent a try, we have a ton of examples in the project’s GitHub repository. Check out this guide on building multi-agent systems in 5 minutes. 

Option 3 — Traditional Agent Frameworks (LangGraph, CrewAI, ADK)

For developers building complex, custom, fully programmatic agent systems.

Traditional agent frameworks like LangGraph, CrewAI, or Google’s Agent Development Kit (ADK) let you define, control, and orchestrate agent behavior directly in code. You get full control over logic, state, memory, tools, and workflows.

They shine when you need:

  • Complex branching logic
  • Error recovery, retries, and persistence
  • Custom memory or storage layers
  • Tight integration with existing backend code

Example: LangGraph + MCP via Gateway


import requests
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

# Discover the MCP endpoint from the Gateway
# (the URL and response shape may vary by Gateway version)
resp = requests.get("http://localhost:6600/v1/servers")
servers = resp.json()["servers"]
duck_url = next(s["url"] for s in servers if s["name"] == "duckduckgo")

# Define a callable tool that proxies search requests through the Gateway
@tool
def web_search(query: str) -> str:
    """Search the web via the MCP DuckDuckGo server."""
    return requests.post(duck_url, json={"input": query}).json()["output"]

# create_react_agent wires up the LangGraph tool-calling loop for you
llm = ChatOpenAI(model="gpt-4")
agent = create_react_agent(llm, [web_search])

result = agent.invoke(
    {"messages": [("user", "What’s the latest in EU AI regulation?")]}
)
print(result["messages"][-1].content)

In this setup, you decide which tools are available. The agent chooses when to use them based on context, but you’ve defined the menu.
And yes, this is still true in the Docker MCP Toolkit: you decide what to enable. The LLM can’t call what you haven’t made visible.


Choosing the Right Approach

| Approach | Best For | You Manage | You Get |
| --- | --- | --- | --- |
| Docker MCP Toolkit + Catalog | Devs new to MCP, already using containers | Tool selection | One-click setup, built-in secrets, Gateway integration |
| cagent | YAML-based multi-agent apps without custom code | Roles & tool access | Declarative orchestration, multi-agent workflows |
| LangGraph / CrewAI / ADK | Complex, production-grade agent systems | Full orchestration | Max control over logic, memory, tools, and flow |

Wrapping Up
Whether you’re just connecting a tool to Claude, designing a custom multi-agent system, or building production workflows by hand, Docker’s MCP tooling helps you get started easily and securely. 

Check out the Docker MCP Toolkit, cagent, and MCP Gateway for example code, docs, and more ways to get started.


Substack is launching a TV app, and not everyone is happy


Substack announced Thursday it's launching Apple TV and Google TV apps that audiences can use for videos and livestreams, and early reactions suggest not all users are thrilled.

Subscribers can watch videos and livestreams from creators they follow, but the app will also have a recommendations-based "For You" feed that mixes in other creators' content. The TV app is available to both free and paid subscribers, and Substack says it will eventually add audio content and more discovery features.

For many, it appears this was not welcome news. In the short time since the blog post went live, it's been flooded with comments from writers and us …

Read the full story at The Verge.


How Claude Code Is Reshaping Software—and Anthropic

WIRED spoke with Boris Cherny, head of Claude Code, about how the viral coding tool is changing the way Anthropic works.

Codex Is Now Integrated Into JetBrains IDEs


OpenAI Codex is now natively integrated into the JetBrains AI chat, giving you another powerful option for tackling real development tasks right inside your IDE. 

You can use Codex with a JetBrains AI subscription, your ChatGPT account, or an OpenAI API key – all within the same AI chat interface.

See Codex in action in a JetBrains IDE, as Dominik Kundel from OpenAI and Gleb Melnikov from JetBrains walk you through real development tasks:

Get started with Codex in your IDE

Codex is available directly in the AI chat of your JetBrains IDE (starting from v2025.3). Make sure you have the latest version of the AI Assistant plugin. You will find Codex in the agent picker menu and can start using it right away.

If you’re new to the AI chat, open the JetBrains AI widget in the top-right corner of your IDE, click Let’s Go, and follow the instructions to install the AI Assistant plugin. You can find a detailed step-by-step guide in the documentation.

Flexible authentication options

There are several different ways you can authenticate Codex inside your JetBrains IDE, which means you can choose the setup that best fits your preferences:

  • JetBrains AI
    Use Codex directly as part of your JetBrains AI subscription.
  • ChatGPT
    Sign in using an existing ChatGPT account.
  • Bring Your Own Key (BYOK)
    Connect to Codex using your own OpenAI API key.

Free access for a limited time

The Codex agent is available for free for a limited time when accessed via JetBrains AI, including the free trial or free tier version. The promotion starts on January 22 and will remain available until your allocated promotional credits have been used up. We reserve the right to cancel the free promotion at any time.

This free offer does not apply when using a ChatGPT account or an OpenAI API key. Other JetBrains AI features continue to consume AI Credits as usual.

When the free period ends, OpenAI’s Codex agent will continue to be available to you, but any use of it after this point will consume AI Credits. You can track your usage of AI Credits via the JetBrains AI widget.

Working with Codex in JetBrains IDEs

In JetBrains IDEs, you can make Codex your active agent via the AI chat. With Codex, you can delegate real coding tasks from within your IDE and let the agent reason, act, and iterate alongside you. Codex supports various interaction modes, so you can decide how much autonomy to give it – from simple question-response permissions to the ability to access your network and run commands autonomously. You can also switch between supported OpenAI models and their reasoning budget directly in the AI chat, making it easy to balance reasoning depth, speed, and cost depending on the task at hand.

Looking ahead

By partnering with leading AI providers like OpenAI and integrating their technologies directly into JetBrains IDEs, we’re ensuring these tools work where developers already are, in a way that respects their preferences.

We’d love to hear how you’re using Codex and what you’d like to see next. Are there other agents or capabilities you’d like us to bring into JetBrains IDEs? Let us know and help shape what comes next for JetBrains AI.


Microsoft Security success stories: Why integrated security is the foundation of AI transformation


AI is transforming how organizations operate and how they approach security. In this new era of agentic AI, every interaction, digital or human, must be built on trust. As businesses modernize, they’re not just adopting AI tools, they’re rearchitecting their digital foundations. And that means security can’t be an afterthought. It must be woven in from the beginning into every layer of the stack—ubiquitous, ambient, and autonomous—just like the AI it protects. 

In this blog, we spotlight three global organizations that are leading the way. Each is taking a proactive, platform-first approach to security—moving beyond fragmented defenses and embedding protection across identity, data, devices, and cloud infrastructure. Their stories show that when security is deeply integrated from the start, it becomes a strategic enabler of resilience, agility, and innovation. And by choosing Microsoft Security, these customers are securing the foundation of their AI transformation from end to end.

Why security transformation matters to decision makers

Security is a board-level priority. The following customer stories show how strategic investments in security platforms can drive cost savings, operational efficiency, and business agility, not just risk reduction. Read on to learn how Ford, Icertis, and TriNet transformed their operations with support from Microsoft.

Ford builds trust across global operations

In the automotive industry, a single cyberattack can ripple across numerous aspects of the business. Ford recognized that rising ransomware and targeted cyberattacks demanded a different approach. The company made a deliberate shift away from fragmented, custom-built security tools toward a unified Microsoft security platform, adopting a Zero Trust approach and prioritizing security embedded into every layer of its hybrid environment—from endpoints to data centers and cloud infrastructure.

Unified protection and measurable impact

Partnering with Microsoft, Ford deployed Microsoft Defender, Microsoft Sentinel, Microsoft Purview, and Microsoft Entra to strengthen defenses, centralize threat detection, and enforce data governance. AI-powered telemetry and automation improved visibility and accelerated incident response, while compliance certifications supported global scaling. By building a security-first culture and leveraging Microsoft’s integrated stack, Ford reduced vulnerabilities, simplified operations, and positioned itself for secure growth across markets.

Read the full customer story to discover more about Ford’s security modernization collaboration with Microsoft.

Icertis cuts security operations center (SOC) incidents by 50%

As a global leader in contract intelligence, Icertis introduced generative AI to transform enterprise contracting, launching applications built on Microsoft Azure OpenAI and its Vera platform. These innovations brought new security challenges, including prompt injection risks and compliance demands across more than 300 Azure subscriptions. To address these, Icertis adopted Microsoft Defender for Cloud for AI posture management, threat detection, and regulatory alignment, ensuring sensitive contract data remains protected.

Driving security efficiency and resilience

By integrating Microsoft Security solutions—Defender for Cloud, Microsoft Sentinel, Purview, Entra, and Microsoft Security Copilot—Icertis strengthened governance and accelerated incident response. AI-powered automation reduced alert triage time by up to 80%, cut mean time to resolution to 25 minutes, and lowered incident volume by 50%. With Zero Trust principles and embedded security practices, Icertis scales innovation securely while maintaining compliance, setting a new standard for trust in AI-powered contracting.

Read the full customer story to learn how Icertis secures sensitive contract data, accelerates AI innovation, and achieves measurable risk reduction with Microsoft’s unified security platform.

TriNet moves to Microsoft 365 E5, achieves annual savings in security spend

Facing growing complexity from multiple point solutions, TriNet sought to reduce operational overhead and strengthen its security posture. The company’s leadership recognized that consolidating tools could improve visibility, reduce risk, and align security with its broader digital strategy. After evaluating providers, TriNet chose Microsoft 365 E5 for its integrated security platform, delivering advanced threat protection, identity management, and compliance capabilities.

Streamlined operations and improved efficiencies

By adopting Microsoft Defender XDR, Purview, Entra, Microsoft Sentinel, and Microsoft 365 Copilot, TriNet unified security across endpoints, cloud apps, and data governance. Automation and centralized monitoring reduced alert fatigue, accelerated incident response, and improved Secure Score. The platform blocked a spear phishing attempt targeting executives, demonstrating the value of Zero Trust and advanced safeguards. With cost savings from tool consolidation and improved efficiency, TriNet is building a secure foundation for future innovation.

Read the full customer story to see how TriNet consolidated its security stack with Microsoft 365 E5, reduced complexity, and strengthened defenses against advanced threats.

How to plan, adopt, and operationalize a Microsoft Security strategy 

Ford, Icertis, and TriNet each began their transformation by assessing legacy systems and identifying gaps that created complexity and risk. Ford faced fragmented tools across a global manufacturing footprint, Icertis needed to secure sensitive contract data while adopting generative AI, and TriNet aimed to reduce operational complexity caused by managing multiple point solutions, seeking a more streamlined and integrated approach. These assessments revealed the need for a unified, risk-based strategy to simplify operations and strengthen protection.

Building on Zero Trust and deploying integrated solutions

All three organizations aligned on Zero Trust principles as the foundation for modernization. They consolidated security into Microsoft’s integrated platform, deploying Defender for endpoint and cloud protection, Microsoft Sentinel for centralized monitoring, Purview for data governance, Entra for identity management, and Security Copilot for AI-powered insights. This phased rollout allowed each company to embed security into daily operations while reducing manual processes and improving visibility.

Measuring impact and sharing best practices

The results were tangible: Ford accelerated threat detection and governance across its hybrid environment, Icertis cut incident volume by 50% and reduced triage time by 80%, and TriNet improved Secure Score while achieving cost savings through tool consolidation. Automation and AI-powered workflows delivered faster response times and reduced complexity. Each organization now shares learnings internally and with industry peers—whether through executive briefings, training programs, or participation in cybersecurity forums—helping set new standards for resilience and innovation.

Working towards a more secure future

The future of enterprise security is being redefined by AI, by innovation, and by the bold choices organizations make today. Modernization, automation, and collaboration are no longer optional—they’re foundational. As AI reshapes how we work, build, and protect, security must evolve in lockstep: not as an add-on, but as a fabric woven through every layer of the enterprise. 

These customer stories show us that building a security-first approach isn’t just possible; it’s imperative. From cloud-native disruptors to global institutions modernizing complex environments, leading organizations are showing what’s possible when security and AI move together. By unifying their tools, automating what once was manual, and using AI to stay ahead of emerging cyberthreats, they’re not just protecting today, they’re securing the future and shaping what comes next. 

Share your thoughts

Are you a regular user of Microsoft Security products? Share your insights and experiences on Gartner Peer Insights™.

Learn more

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

The post Microsoft Security success stories: Why integrated security is the foundation of AI transformation appeared first on Microsoft Security Blog.
