Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Elon Musk Said Grok’s Roasts Would Be ‘Epic’ at Parties—So I Tried It on My Coworkers

1 Share
It went about as well as you’d expect.
Read the whole story
alvinashcraft
43 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Sydnee Mayers

Sydnee leads AI product initiatives at Cribl, the data engine for IT & Security. Prior to joining Cribl, Sydnee led hyperscale inference within the CoreAI group at Microsoft. Sydnee is passionate about helping individuals learn and adopt AI safely and responsibly. You can find Sydnee on the following sites:

PLEASE SUBSCRIBE TO THE PODCAST

You can check out more episodes of Coffee and Open Source on https://www.coffeeandopensource.com

Coffee and Open Source is hosted by Isaac Levin


Download audio: https://anchor.fm/s/63982f70/podcast/play/111705657/https%3A%2F%2Fd3ctxlq1ktw2nl.cloudfront.net%2Fstaging%2F2025-10-25%2F413165254-44100-2-6a3bd4d95573e.mp3

Why Your Organization Should Set Up Asset Libraries in SharePoint (and Enable Them for Copilot)

In today’s fast-paced digital workplace, brand consistency and productivity are critical. Yet, many organizations struggle with scattered images, outdated templates, and employees spending time hunting for approved assets. This is where SharePoint Organization Asset Libraries (OALs) come in—and when integrated with Microsoft Copilot, they become a game-changer.

Watch the following two videos on setup in SharePoint and Copilot. 

✅ Why Set Up Asset Libraries?

  1. Brand Consistency Across the Organization
    • Ensure everyone uses the latest logos, images, and templates.
    • Reduce risk of outdated or incorrect branding in presentations, documents, and marketing materials.
  2. Centralized Access
    • A single, secure location for approved assets.
    • Accessible from SharePoint, Office apps, and Copilot—no more digging through email attachments or old folders.
  3. Time Savings
    • Employees spend less time searching for assets and more time creating impactful content.
    • Copilot can instantly surface approved visuals and templates during content generation.
  4. Improved Governance
    • Control who can upload and approve assets.
    • Apply metadata and tagging for better searchability and compliance.

🤖 The Added Value with Copilot

When you enable your asset libraries for Copilot, you unlock:

  • AI-Powered Discovery
    Copilot can suggest brand-approved images and templates while drafting documents or creating presentations.
  • Contextual Recommendations
    Need a hero image for a slide? Copilot pulls from your organization’s library, ensuring compliance and quality.
  • Enhanced Creativity Without Risk
    Employees can confidently use Copilot knowing all visuals are on-brand and approved.

🌟 Business Impact

  • Faster Content Creation: Marketing teams, sales reps, and executives can build polished materials in minutes.
  • Reduced Errors: No more accidental use of outdated logos or templates.
  • Better ROI on Brand Assets: Maximize the value of your design investments by making them easily accessible.

🔗 Resources to Get Started


Recognizing Microsoft MVPs at ESPC25

At ESPC25 in Dublin, Microsoft MVPs will be easy to spot. They will lead tutorials, power “Ask the Experts,” mentor emerging voices, and support hands-on demos across the Innovation Hub. The Most Valuable Professionals (MVP) program recognizes exceptional community leaders for their technical expertise, leadership, public speaking and writing, online influence, and a strong focus on solving real-world problems. MVPs also receive early access to Microsoft products and direct communication channels with our product teams, which helps them bring timely, practical guidance back to the community.

Why MVPs matter to our community

MVPs are a catalyst for learning and connection. They grow technical communities by organizing meetups, welcoming new participants, and championing diversity and inclusion. They amplify expertise through talks, demos, posts, and guides that turn complex topics into repeatable paths to success. They also share product insight from the field, turning customer feedback into opportunities for improvement by telling engineering what works and what needs attention. The outcome is better products, faster adoption, and stronger communities.

How to become an MVP

You do not apply for the MVP Award directly. You are nominated by a current MVP or by a full-time Microsoft employee. If you are active in the community, someone may notice your contributions. You can also be proactive: connect with MVPs in your domain, ask for mentorship and feedback on your portfolio, and participate in MVP-led events to contribute and learn. After nomination, you will submit your community work for review. The MVP program team and the relevant product groups evaluate your impact, consistency, and alignment with the program’s values.
Learn more: What is a Microsoft MVP?

MVP presence at ESPC25

We’re excited to welcome 88 MVPs onsite at ESPC25.

You will find MVPs throughout the event experience. Visit the “Ask the Experts” tables for practical Q&A on Microsoft Copilot, Teams, SharePoint, OneDrive, Viva, and more. Join MVP meetups to reconnect across regions, share ideas, and celebrate community milestones. Stop by the Innovation Hub for Lightning Talks, Tech Demos, Ideation Lab activities, and the Feedback Lounge. These are hands-on spaces where attendees can experiment, share feedback, and turn ideas into action.

MVP sessions at a glance

Below is the full set of tutorials and breakouts. Use the conference app for rooms and live updates.

Monday — Tutorials (9:00 AM–5:00 PM)

  • Building Custom Agents for Microsoft 365 Copilot — Andrew Connell (MVP, Voitanos, USA), Mark Rackley (MVP, Protiviti, USA)
  • Building effective intranets: The Information Architecture (IA) Blueprint for AI Integration — Susan Hanley (MVP, Susan Hanley LLC, USA)
  • Build Your Own Agent: Copilot Studio in a Day — Joe Griffin (MVP, proMX UK Limited, United Kingdom)
  • Mastering SharePoint Premium from Content AI to Advanced Administration — Vlad Catrinescu (MVP, MCT, Syskit, Canada)
  • Agentageddon and How to Prevent It — Kevin McDonnell (MVP, Avanade, United Kingdom), Zoe Wilson (Microsoft RD, MVP, Kyndryl, United Kingdom)
  • Everything you Wanted to Know About Microsoft Fabric and Power BI, But Were Afraid to Ask — John White (MVP, AvePoint, Canada)
  • Microsoft Security, Compliance, and Identity Fundamentals Bootcamp — Mike Maadarani (MVP, Taiga AI, Canada), Habib Mankal (MVP, WaveCore IT Inc., Canada)
  • Getting Started with Power Platform & AI on SharePoint — Luise Freese (MVP, M365Princess, Germany)
  • Hacking and Securing Windows Infrastructure — Paula Januszkiewicz (Microsoft RD, MVP, CQURE, Poland)

Tuesday

10:15–11:15 AM

  • Building smart agents with Azure AI Foundry — Luise Freese (MVP, Germany)
  • Smarter Contract Management in M365: AI, eSignature & Knowledge Agent — Leon Armston (MVP, United Kingdom)
  • Generative AI: Shaping the future of Low-Code and Pro Code Development — Jussi Roine (MVP, Finland)
  • The Oversharing Solution Blueprint — Sara Fennah (MVP, United Kingdom)
  • The Anatomy of an Agent — Kevin McDonnell (MVP, United Kingdom)
  • Cracking the Code of Microsoft 365 Copilot’s ROI — Pieter Op De Beeck (MVP, Belgium)
  • Technical changes in M365 compliance — Tony Redmond (MVP, Ireland)
  • Fireside Chat with Two Recovering Consultants — Karoliina Kettukari (MVP, Finland), Sari Soinoja (MVP, Finland)

11:45 AM–12:45 PM

  • Scaling new heights: Mastering AI with Enterprise Scale AI Factory — Kim Berg (MVP, Sweden)
  • Case Study: 2 years with M365 Copilot — how did we succeed — Karoliina Kettukari (MVP, Finland)
  • Securing Generative AI: A Zero Trust Approach to Deploying Copilot in your Company — Seyfallah Tagrerout (Microsoft RD, MVP, Switzerland)
  • Unleashing the Power of RAG in Copilot Studio: Applications and Best Practices — Rick Van Rousselt (MVP, Belgium)
  • Model Context Protocol vs. Connectors: Rethinking Integration in the Power Platform — Mats Necker (MVP, Germany)
  • You’re Holding it Wrong — Best practices in Power BI Reporting — John White (MVP, Canada)

2:00–3:00 PM

  • How to work like a Brainiac with M365 Copilot — a no-brainer — Caroline Kallin (MVP, Sweden)

3:15–4:15 PM

  • Navigating the AI Agent Landscape: Strategy, Selection, and Use Cases — Foyin Olajide-Bello (Microsoft RD, MVP, Ireland)
  • From Assessment to Action: Strengthening Microsoft 365 Governance — Christian Buckley (Microsoft RD, MVP, USA)
  • Mastering the Microsoft Graph PowerShell SDK — Tony Redmond (MVP, Ireland)
  • Navigating Communication Channels in Microsoft 365 for Meaningful Engagement — Emily Mancini (MVP, USA)
  • Testing smarter: Bringing AI into your E2E testing workflows — Elio Struyf (MVP, Belgium)

4:45–5:45 PM

  • Keynote: Hacker’s Perspective on New Risks: Revisiting the Cybersecurity Priorities for 2025 — Paula Januszkiewicz (Microsoft RD, MVP, Poland)
  • Copilot Customization: Choosing Between Copilot Studio & Teams Toolkit — Andrew Connell (MVP, USA), Mark Rackley (MVP, USA)
  • Intune 2025: The Latest Features in Action — Florian Salzmann (MVP, Switzerland)
  • Scaling Azure Applications: From Small Projects to Global Infrastructure — Tiago Costa (MVP, MCT, Portugal)
  • Power Platform Governance: Taming the Wild West! — Agnius Bartninkas (MVP, Lithuania)

Wednesday

10:15–11:15 AM

  • Voice AI Agents with OpenAI and Microsoft Teams — Renato Romao (MVP, Ireland)
  • Developing Hybrid Cloud with Azure Local: A Synergy of Bridging On-Premises and Cloud — Jonah Andersson (MVP, Sweden)
  • Beyond the Black Box: Creating Transparent and Ethical Reporting for Microsoft Copilot — Thomas Golles (MVP, Austria), Stephan Bisser (MVP, Austria)
  • From Code Villain to SPFx Hero: SPFx Best Practices for Sustainable Solutions — Dan Toft (MVP, Denmark)
  • Deep dive into Microsoft Security Exposure Management — Jussi Roine (MVP, Finland)
  • Implementing SCIM for Multi-Tenant Identity Management in M365 and Azure — Rodrigo Pinto (MVP, Portugal)
  • Error-Free Solutions — Error Handling in Power Apps and Power Automate — Tomasz Poszytek (MVP, Poland), Krzysztof Kania (MVP, Poland)
  • Voices of Change: Real Stories of Mentoring in Action — Keith Atherton (MVP, United Kingdom)

11:45 AM–12:45 PM

  • Elevate Your Intranet: Practical Present-Day Tips for a Future-Ready Platform — Susan Hanley (MVP, USA)
  • Use Agents Toolkit to Build Solutions on Microsoft 365 — Bill Ayers (MVP, MCM, MCT, United Kingdom)
  • Accelerate Agent Building with Copilot Studio — Albert-Jan Schot (MVP, Netherlands)
  • Microsoft 365 tenant setup & configuration. Been there, done that? — Thomas Vochten (MVP, MCT, Belgium)
  • Defence Against the Dark Arts — Building Rock Solid Entra ID Security Solutions — Andy Malone (MVP, MCT, United Kingdom)
  • Microsoft 365 Copilot Extensibility: Navigating the Possibilities and Pitfalls — Thomas Golles (MVP, Austria), Stephan Bisser (MVP, Austria)

2:00–3:00 PM

  • Automating 2500 RFPs per year with Azure AI Foundry and Azure AI Search — Chris O’Brien (MVP, United Kingdom)
  • Developer challenges in GenAI projects — Stas Lebedenko (MVP, Ukraine)
  • Automating Microsoft 365 Governance — Best Practices to Ensure Compliance — Tobias Maestrini (MVP, Switzerland), Daniel Kordes (MVP, Switzerland)
  • The adventures of a Microsoft 365 Platform Owner — Jasper Oosterveld (MVP, Netherlands)
  • Canvas Apps Mastery: Create Reusable Code — Krzysztof Kania (MVP, Poland)
  • Intune-Driven Approaches to Minimize Local Admin Risks — Simon Skotheimsvik (MVP, Norway)
  • Ask Me Anything: Life at Microsoft & the MVP Journey — Susan Hanley (MVP, USA), Jonah Andersson (MVP, Sweden)

3:15–4:15 PM

  • Taming the Wild West of Generative AI with Purview and Defender — Tatu Seppala (MVP, Finland)
  • Demystifying Agent Types within the Semantic Kernel Framework — Arin Roy (MVP, Netherlands)
  • How to deal with 2.8 million files? — Sari Soinoja (MVP, Finland)
  • Deep Dive on Power BI, Teams and SharePoint — John White (MVP, Canada)
  • Prepare for GenAI and Copilot: Step up your Data Security and Information Governance game! — Bram de Jager (MVP, MCM, Netherlands)
  • Entra ID App registrations — The good, bad and the ugly — Andy Malone (MVP, MCT, United Kingdom)

4:45–5:45 PM

  • Zero Trust — Dope or Nope? — Sami Laiho (MVP, Finland)
  • Utilize a Local LLM to Chat with Your M365 and on-premises Data — Peter Paul Kirschner (MVP, Austria), Guido Zambarda (MVP, Italy)
  • Assess Power Platform Governance, from Results to Action — Edit Kapcari (MVP, Germany)
  • AI-Powered Content Management Transformation with SharePoint Premium — Eric Overfield (Microsoft RD, MVP, MCT, USA)
  • Securing Your Copilot Agents: Implementing Robust Security Controls — Reshmee Auckloo (MVP, United Kingdom)

Looking back: ESPC24 photos and highlights

ESPC24 offered a vivid snapshot of community in motion. MVP meetups felt like a reunion of peers who build together. “Ask the Experts” tables turned complex questions into practical next steps. The Microsoft booth was a high energy hub where conversations led to collaboration. The D&I luncheon brought a full room together around inclusion and belonging.

Connect with MVPs at ESPC25

Visit the Innovation Hub for Lightning Talks, Tech Demos, Ideation Lab activities, and the Feedback Lounge. Introduce yourself at MVP meetups. Make time for the “Ask the Experts” tables to get tailored guidance for your scenarios. If you are exploring the MVP journey, these touchpoints are ideal for learning more, sharing your portfolio, and asking for mentorship.


AI Skeptic to AI Pragmatist

A few months ago I was an AI skeptic. I was concerned that AI was similar to Blockchain, in that it was mostly hype, with little practical application outside of a few niche use cases.

I still think AI is overhyped, but having intentionally used LLMs and AI agents to help build software, I have moved from skeptic to pragmatist.

I still don’t know that AI is “good” in any objective sense. It consumes a lot of water and electricity, and the training data is often sourced in ethically questionable ways. This post isn’t about that, as people have written extensively on the topic.

The thing is, I have spent the past few months intentionally using AI to help me do software design and development, primarily via Copilot in VS Code and Visual Studio, but also using Cursor and a couple other AI tools.

The point of this post is to talk about my experience with actually using AI successfully, and what I’ve learned along the way.

In my view, as an industry and society we need to discuss the ethics of AI. That discussion needs to move past a starting point that says “AI isn’t useful”, because it turns out that AI can be useful, if you know how to use it effectively. Therefore, the discussion needs to acknowledge that AI is a useful tool, and then we can discuss the ethics of its use.

AI Pragmatist

The Learning Curve

The first thing I learned is that AI is not magic. You have to learn how to use it effectively, and that takes time and effort. It is also the case that the “best practices” for using AI are evolving as we use it, so it is important to interact with others who are also using AI to learn from their experiences.

For example, I started out trying to just “vibe code” with simple prompts, expecting the AI to just do the right thing. AI is non-deterministic though, and the same prompt can generate different results each time, depending on the random seed used by the AI. It is literally a crap shoot.

To get any reasonable results, it is necessary to provide context to the AI beyond expressing a simple desire. There are various patterns for doing this. The one I’ve been using is this:

  1. Who am I? (e.g. “You are a senior software engineer with 10 years of experience in C# and .NET.”)
  2. Who are you? (e.g. “You are an AI assistant that helps software engineers write high-quality code.”)
  3. Who is the end user? (e.g. “The end users are financial analysts who need to access market data quickly and reliably.”)
  4. What are we building? (e.g. “You are building a RESTful API that provides access to market data.”)
  5. Why are we building it? (e.g. “The API will help financial analysts make better investment decisions by providing them with real-time market data.”)
  6. How are we building it? (e.g. “You are using C#, .NET 8, and SQL Server to build the API.”)
  7. What are the constraints? (e.g. “The API must be secure, scalable, and performant.”)

You may provide other context as well, but this is a good starting point. It means that your initial prompt for starting any work will be fairly long: at least one sentence per item above, and in many cases a paragraph or more per item.

Subsequent prompts in a session can be shorter, because the AI will have that context. However, AI context windows are limited, so if your session gets long (enough prompts and responses), you may need to re-provide context.

To that point, it is sometimes a good idea to save your context in a document, so you can reference that file in subsequent requests or sessions. This is easy to do in VS Code or Visual Studio, where you can reference files in your prompts.

A Mindset Shift

Notice that I sometimes use the term “we” when talking to the AI. This is on purpose, because I have found that it is best to think of the AI as a collaborator, rather than a tool. This mindset shift is important, because it changes the way you interact with the AI.

Don’t get me wrong - I don’t think of the AI as a person - it really is a tool. But it is a tool that can collaborate with you, rather than just a tool that you use.

When you think of the AI as a collaborator, you are more likely to provide it with the context it needs to do its job effectively. You are also more likely to review and refine its output, rather than just accepting it at face value.

Rate of Change

Even in the short time I’ve been actively using AI, the models and tools have improved significantly. New features are being added all the time, and the capabilities of the models are expanding rapidly. If you evaluated AI a few months ago and decided it wasn’t useful for a scenario, it might well be able to handle that scenario now. Or not. My point is that you can’t base your opinion on a single snapshot in time, because the technology is evolving so quickly.

Effective Use of AI

AI itself can be expensive to use. We know that it consumes a lot of water and electricity, so minimizing its use is important from an ethical standpoint. Additionally, many AI services charge based on usage, so minimizing usage is also important from a cost standpoint.

What this means to me is that it is often best to use AI to build deterministic tools that can then be used without AI. Rather than using AI for repetitive tasks during development, I often use AI to build bash scripts or other tools that can then perform those tasks without AI.
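
As a minimal sketch of that idea (the file layout and version strings here are hypothetical), you might have the AI generate a small script once, then reuse it deterministically with no further AI calls:

```shell
# Hypothetical sketch: a one-time AI-generated helper that bumps a
# version number across project files, then runs without any AI.
mkdir -p demo
printf '<Version>1.0.0</Version>\n' > demo/App.csproj
old='1.0.0'
new='1.1.0'
# Rewrite the version tag in every .csproj under demo/ in place.
find demo -name '*.csproj' -exec sed -i "s|<Version>$old</Version>|<Version>$new</Version>|g" {} +
cat demo/App.csproj
```

The script costs nothing to run a second time, and its behavior is the same every time, which is exactly what the AI itself cannot promise.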

Also, rather than manually typing in all my AI instructions and context over and over, I store that information in files that can be referenced by the AI (and future team members who might need to maintain the software). I do find that the AI is very helpful for building these documents, especially Claude Sonnet 4.5.

GitHub Copilot will automatically use a special file you can put in your repo’s root:

/.github/copilot-instructions.md

It now turns out that you can put numerous files in an instructions folder under .github, and Copilot will use all of them. This is great for organizing your instructions into multiple files.

This file can contain any instructions you want Copilot to use when generating code. I have found this to be very helpful for providing context to Copilot without having to type it in every time. Not per-prompt instructions, but overall project instructions. It is a great place to put the “Who am I?”, “Who are you?”, “Who is the end user?”, “What are we building?”, “Why are we building it?”, “How are we building it?”, and “What are the constraints?” items mentioned above.
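A minimal sketch of such a file might look like this; the project details are made up for illustration, but the path is the one Copilot reads automatically:

```shell
# Hypothetical example: a repo-level instruction file capturing the
# context items above; the project details are invented.
mkdir -p .github
cat > .github/copilot-instructions.md <<'EOF'
# Project instructions for GitHub Copilot
- You are assisting a senior software engineer with 10 years of C#/.NET experience.
- We are building a RESTful API that provides market data to financial analysts.
- Stack: C#, .NET 8, SQL Server.
- Constraints: the API must be secure, scalable, and performant.
- Always use async/await for I/O operations; never use dynamic types.
EOF
```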

You can also use this document to tell Copilot to use specific MCP servers, or to avoid using certain ones. This is useful if you want to ensure that your code is only generated using tools and sources that you trust.

Prompt Rules

Feel free to use terms like “always” or “never” in your prompts. These aren’t foolproof, because AI is non-deterministic, but they do help guide the AI’s behavior. For example, you might say “Always use async/await for I/O operations” or “Never use dynamic types in C#”. This helps the AI understand your coding standards and preferences.

Avoid being passive or unclear in your prompts. Instead of saying “It would be great if you could…”, say “Please do X”. This clarity helps the AI understand exactly what you want.

If you are asking a question, be explicit that you are asking a question and looking for an answer, otherwise the AI (in agent mode) might just try to build code or other assets based on your question, thinking it was a request.

GitHub Copilot allows you to put markdown files with pre-built prompts into a prompts folder under .github. You can then reference these files in your prompts using the standard # file reference syntax, which is a great way to standardize prompts across your team.
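
As a sketch, a reusable prompt file might look like this (the file name and the review rules are hypothetical, chosen to echo the “always/never” rules above):

```shell
# Hypothetical reusable prompt stored under .github/prompts; the
# name and rules are made up for illustration.
mkdir -p .github/prompts
cat > .github/prompts/code-review.prompt.md <<'EOF'
Review the selected code and report, as a numbered list:
- any I/O call that does not use async/await
- any use of dynamic types
- any missing null checks
Do not modify the code; only report findings.
EOF
```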

Switch Models as Needed

You may find that different AI models work better for various purposes or tasks.

For example, I often use Claude Sonnet 4.5 for writing documentation, because it seems to produce clearer and more concise text than other models. However, I often use GPT-5-Codex for code generation, because it seems to understand code better.

That said, I know other people who do the exact opposite, so your mileage may vary. The key is to experiment with different models and see which ones work best for your specific needs.

The point is that you don’t have to stick with a single model for everything. You can switch models as needed to get the best results for your specific tasks.

In GitHub Copilot there is a cost to using premium models, with a multiplier. So (at the moment) Sonnet 4.5 is a 1x multiplier, while Haiku is a 0.33x multiplier. Some of the older and weaker models are 0x (free). So you can balance cost and quality by choosing the appropriate model for your task.

Agent Mode vs Ask Mode

GitHub Copilot has two primary modes of operation: Agent Mode and Ask Mode. Other tools often have similar concepts.

In Ask mode the AI responds to your prompts in the chat window, and doesn’t modify your code or take other actions. The VS Code and Visual Studio UIs usually allow you to apply the AI’s response to your code, but you have to do that manually.

In Agent mode the AI can modify your code, create files, and take other actions on your behalf. This is more powerful, but also more risky, because the AI might make changes that you don’t want or expect.

I’d recommend starting with Ask mode until you are comfortable with the AI’s capabilities and limitations. Once you are comfortable, you can switch to Agent mode for more complex tasks. Agent mode is a massive time saver as a developer!

By default, Agent mode does prompt you for confirmation in most cases, and you can disable those prompts over time to loosen the restrictions as you become more comfortable with the AI.

Don’t Trust the AI

The AI can and will make mistakes. Especially if you ask it to do something complex, or look across a large codebase. For example, I asked Copilot to create a list of all the classes that implemented an interface in the CSLA .NET codebase. It got most of them, but not all of them, and it included some that didn’t implement the interface. I had to manually review and correct the list.

I think it might have been better to ask the AI to give me a grep command or something that would do a search for me, rather than trying to have it do the work directly.
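
For instance, a search like the following is deterministic and easy to verify; the interface name and the sample files are made up here just to keep the example self-contained:

```shell
# Hypothetical sketch: let grep find implementers of an interface
# instead of asking the AI to scan the codebase directly.
mkdir -p src
printf 'public class OrderList : IBusinessBase { }\n' > src/OrderList.cs
printf 'public class Logger { }\n' > src/Logger.cs
# -r recurse into src/, -l print only matching file names; the
# pattern matches a class declaration listing IBusinessBase after
# the colon.
grep -rl ': *IBusinessBase' src   # prints src/OrderList.cs
```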

However, I often have the AI look at a limited set of files and it is almost always correct. For example, asking the AI for a list of properties or fields in a single class is usually accurate.

Use Git Commit like “Save Game”

I’ve been a gamer for most of my life, and one thing I’ve learned from gaming is the concept of “save games”. In many games, you can save your progress at any point, and then reload that save if you make a mistake or want to try a different approach.

This is true for working with AI as well. Before you ask the AI to make significant changes to your code, make a git commit. This way, if the AI makes changes that you don’t want or expect, you can easily revert to the previous state.

I find myself making a commit any time the code compiles, the tests pass, or I reach any other milestone, even a small one. This is how you can safely experiment with AI without fear of losing your work.

I’m not saying push to the server or do a pull request (PR) every time - just a local commit is sufficient for this purpose.

Sometimes the AI will go off the rails and make a mess of your code. Having a recent commit allows you to quickly get back to a known good state.
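
The “save game” loop can be sketched like this; the repo name and file contents are hypothetical:

```shell
# Hypothetical "save game" workflow around an AI editing session.
git init -q demo-repo
git -C demo-repo config user.email dev@example.com
git -C demo-repo config user.name Dev
echo 'working code' > demo-repo/app.cs
git -C demo-repo add -A
git -C demo-repo commit -qm 'checkpoint: compiles, tests pass'   # save point
echo 'AI made a mess of this file' > demo-repo/app.cs            # agent edit gone wrong
git -C demo-repo checkout -- app.cs                              # reload the save
cat demo-repo/app.cs
```

The final `cat` shows the original contents, because `git checkout -- <file>` discards the unwanted edit and restores the last committed state.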

Create and Use MCP Servers

As you might imagine, I use CSLA .NET a lot. Because CSLA is open-source, the Copilot AI generally knows all about CSLA because open-source code is part of the training data. The problem is that the training data covers everything from CSLA 1.0 to the current version - so decades of changes. This means that when you ask Copilot to help you with CSLA code, it might give you code that is out of date.

I’ve created an MCP server for CSLA that has information about CSLA 9 and 10. If you add this MCP server to your Copilot settings, and ask questions about CSLA, you will get answers that are specific to CSLA 9 and 10, rather than older versions. This is the sort of thing you can put into your /.github/copilot-instructions.md file to ensure that everyone on your team is using the same MCP servers.
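
As a hedged sketch of what registering such a server can look like in VS Code, a workspace-level MCP configuration might resemble the following; the server name and URL are placeholders, not the real CSLA server:

```shell
# Hypothetical workspace MCP configuration; the server name and URL
# are placeholders. VS Code reads this file from .vscode/mcp.json.
mkdir -p .vscode
cat > .vscode/mcp.json <<'EOF'
{
  "servers": {
    "csla-docs": {
      "type": "http",
      "url": "https://example.com/mcp"
    }
  }
}
EOF
```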

The results of the AI when using an MCP server like this are substantially better than without it. If you are using AI to help with a specific framework or library, consider creating an MCP server for that framework or library.

You can also build your own MCP server for your organization, project, or codebase. Such a server can provide code snippets, patterns, documentation, and other information specific to your context, which can greatly improve the quality of the AI’s output.

I wrote a blog post about building a simple MCP server.

Conclusion

AI is a useful tool for software development, but it is not magic. You have to learn how to use it effectively, and you have to be willing to review and refine its output. By thinking of the AI as a collaborator, providing it with context, and using MCP servers, you can get much better results.

As an industry, we need to move past the idea that AI isn’t useful, and start discussing how to use it ethically and effectively. Only then can we fully understand the implications of this technology and make informed decisions about its use.


One Year of MCP: November 2025 Spec Release

Today, MCP turns one year old. You can check out the original announcement blog post if you don’t believe us. It’s hard to believe that a little open-source experiment, a protocol to provide context to models, became the de facto standard for this very scenario in less than twelve months.

But not only are we hitting the first anniversary milestone today, we’re also releasing a brand-new MCP specification version. Before we get to the details of what’s new, let’s do a bit of a retrospective.
