Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Discord’s age verification data has a frontend leak — now what?

Interesting Engineering reports: A newly uncovered flaw in Discord’s age verification rollout has added fresh pressure to the company’s 2026 compliance plans. Security researchers recently found that frontend components tied to identity vendor Persona were accessible on the open web, prompting debate over how securely the platform handles sensitive age checks. The discovery surfaced on...

Source

Read the whole story
alvinashcraft
7 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

AI Context Kit, Evolved: Why I Moved to AGENTS.md + Agent Skills (and how I used the Codex macOS App for the Migration)

After releasing AI Context Kit, I learned about two emerging standards that are becoming essential for durable AI collaboration: AGENTS.md for project operation and Agent Skills for workflow authority. In this post, I explain why I transitioned to these standards, how I executed the migration using the Codex macOS App, and what the new architecture looks like.

BONUS: From Combat Pilot to Scrum Master - How Military Leadership Transforms Agile Teams With Nate Amidon



In this bonus episode, we explore a fascinating career transition with Nate Amidon, a former Air Force combat pilot who now helps software teams embed military-grade leadership principles into their Agile practices. Nate shares how the high-stakes discipline of aviation translates directly into building high-performing development teams, and why veterans make exceptional Scrum Masters.

The Brief-Execute-Debrief Cycle: Aviation Meets Agile

"We would mission brief in the morning and make sure everyone was on the same page. Then we problem-solved our way through the day, debriefed after, and did it again. When I learned about what Agile was, I realized it's the exact same thing."

 

Nate's transition from flying C-17 cargo planes to working with Agile teams wasn't as jarring as you might expect. Flying missions that lasted 2-3 weeks with a crew of 5-7 people taught him the fundamentals of iterative work: daily alignment, continuous problem-solving, and regular reflection. The brief-execute-debrief cycle that every military pilot learns mirrors the sprint cadence that Agile teams follow. Time-boxing wasn't new to him either—when you're flying, you only have so much fuel, so deadlines aren't arbitrary constraints but physical realities that demand disciplined execution.

In this episode with Christian Boucousis, we also discuss the brief-execute-debrief cycle in detail.

In this segment, we also refer to Cynefin and its classification of complexity.

Alignment: The Real Purpose Behind Ceremonies

"It's really important to make sure everyone understands why you're doing what you're doing. We don't brief, execute, debrief just because—we do it because we know that getting everybody on the same page is really important."

 

One of the most valuable insights Nate brings to his work with software teams is the understanding that Agile ceremonies aren't bureaucratic checkboxes—they're alignment mechanisms. The purpose of sprint planning, daily stand-ups, and retrospectives is to ensure everyone knows the mission and can adapt when circumstances change. Interestingly, Nate notes that as teams become more high-performing, briefings get shorter and more succinct. The discipline remains, but the overhead decreases as shared context grows.

The Art of Knowing When to Interrupt

"There are times when you absolutely should not interrupt an engineer. Every shoulder tap is a 15-minute reset for them to get back into the game. But there are also times when you absolutely should shoulder tap them."

 

High-performing teams understand the delicate balance between deep work and necessary communication. Nate shares an aviation analogy: when loadmasters are loading complex cargo like tanks and helicopters, interrupting them with irrelevant updates would be counterproductive. But if you discover that cargo shouldn't be on the plane, that's absolutely worth the interruption. This judgment—knowing what matters enough to break flow—is something veterans develop through high-stakes experience. Building this awareness across a software team requires:

 

  • Understanding what everyone is working on

  • Knowing the bigger picture of the mission

  • Creating psychological safety so people feel comfortable speaking up

  • Developing shared context through daily stand-ups and retrospectives

Why Veterans Make Exceptional Scrum Masters

"I don't understand why every junior officer getting out of the military doesn't just get automatically hired as a Scrum Master. If you were to say what we want a Scrum Master to do, and what a junior military officer does—it's line for line."

 

Nate's company, Form100 Consulting, specifically hires former military officers and senior NCOs for Agile roles, often bringing them on without tech experience. The results consistently exceed expectations because veterans bring foundational leadership skills that are difficult to develop elsewhere: showing up on time, doing what you say you'll do, taking care of team members, seeing the forest for the trees. These intangible qualities—combined with the ability to stay calm, listen actively, and maintain integrity under pressure—make for exceptional servant leaders in the software development space.

The Onboarding Framework for Veterans

"When somebody joins, we have assigned everybody a wingman—a dedicated person that they check in with regularly to bounce ideas off, to ask questions."

 

Form100's approach to transitioning veterans into tech demonstrates the same principles they advocate for Agile teams. They screen carefully for the right personality fit, provide dedicated internal training on Agile methodologies and program management, and pair every new hire with a wingman. This military unit culture helps bridge the gap between active duty service and the private sector, addressing one of the biggest challenges: the expectation gap around leadership standards that exists between military and civilian organizations.

Extreme Ownership: Beyond Process Management

"To be a good Scrum Master, you have to take ownership of the team's execution. If the product requirements aren't good, it's a Scrum Master's job to help. If QA is the problem, take ownership. You should be the vessel and owner of the entire process of value delivery."

 

One of Nate's core philosophies comes from Jocko Willink's Extreme Ownership. Too many Scrum Masters limit themselves to being "process people" who set meetings and run ceremonies. True servant leadership means owning everything that affects the team's ability to deliver value—even things technically outside your job description. When retrospectives devolve into listing external factors beyond the team's control, the extreme ownership mindset reframes the conversation: "Did we give the stakeholder the right information? Did they make a great decision based on bad information we provided?" This shift from blame to ownership drives genuine continuous improvement.

Building Feedback Loops in Complex Environments

"In the military, we talk about the OODA loop. Everything gets tighter, we get better—that's why we do the debrief."

 

Understanding whether you're operating in a complicated or complex domain (referencing the Cynefin framework) determines how tight your feedback loops need to be. In complex environments—where most software development lives—feedback loops aren't just for reacting to what happened; they're for probing and understanding what's changing. Sprint goals become essential because without knowing where you're headed, you can't detect when circumstances have shifted. The product owner role becomes critical as the voice connecting business priorities to team execution, ensuring the mission stays current even when priorities change mid-sprint.

Recommended Resources

Nate recommends the following books: 

 

About Nate Amidon

 

Nate is a former Air Force combat pilot and founder of Form100 Consulting. He helps software teams embed leadership at the ground level, translating military principles into Agile practices. With a focus on alignment, accountability, and execution, Nate empowers organizations to lead from within and deliver real results in a dynamic tech landscape.

 

You can link with Nate Amidon on LinkedIn and learn more at Form100 Consulting.





Download audio: https://traffic.libsyn.com/secure/scrummastertoolbox/20260221_Nate_Amidon_BONUS.mp3?dest-id=246429

Claude Code comes to Roadmap, OpenClaw loses its head, and AI workslop


I’m Matt Burns, Director of Editorial at Insight Media Group. Each week, I round up the most important AI developments, explaining what they mean for people and organizations putting this technology to work. The thesis is simple: workers who learn to use AI will define the next era of their industries, and this newsletter is here to help you be one of them.


This is a big week for Roadmap.sh: our site launched the Claude Code roadmap to help support the untold number of users jumping on the popular platform. It’s a comprehensive guide covering everything from vibe coding and agentic loops to MCP servers, plugins, hooks, and subagents. This is the place to start with Claude Code and AI-assisted coding, even if you’re already vibe coding elsewhere. It will help anyone jump from casual prompting to real agentic workflows. We’re proud to have it out there, and we think it speaks to how developer skill sets are being rewritten in real time; the platforms mapping the new world are the ones that will own it.

This shift showed up clearly in this Towards Data Science interview with Stephanie Kirmer, a machine learning engineer with almost a decade in the field. She describes how LLMs have reshaped her daily work: she uses code assistants to bounce ideas off, critique approaches, and handle the grunt work around unit tests. She’s also candid about their limits, noting that the real value still comes from experience applied to unusual problems. Her take on the broader AI economy is worth sitting with, too. She thinks we’re in a bubble, not because the tech isn’t useful, but because investment expectations are wildly out of proportion. If the industry were willing to accept good returns on moderate investment rather than demand immense returns on gigantic investment, she says, the current AI world could be sustainable.

Productivity Gains or AI Workslop?

AI tools are objectively improving, but are organizations actually becoming more productive? Fortune reported on a study of thousands of C-suite executives who have yet to see AI produce a productivity boom. The executives surveyed have invested heavily in AI but are struggling to translate that into measurable output gains. This is reminiscent of past technology shifts: first with PCs, then with the internet and mobile.

Another study released this week points to AI training as the missing link. According to a major CEPR study across 12,000 European firms, the organizations that train their people to use AI are the organizations that benefit from it. The study notes that AI adoption increases labor productivity by 4% on average, with no evidence of short-run employment reductions. Every additional 1% of investment in workforce training amplified AI’s productivity effect by nearly 6%.

In short, the orgs that just buy AI licenses and expect magic are the orgs that are disappointed. Organizations need to invest both in AI and in employee upskilling.

AI Gets Cheaper and More Capable

Anthropic this week released Claude Sonnet 4.6, which promises near-Opus-level performance at significantly lower pricing. It even beats Anthropic’s Opus model on several tasks, and according to some benchmarks, it matches or outperforms Google’s Gemini 3 Pro and OpenAI’s GPT-5.2 in several categories. It’s a major upgrade to a mid-tier option.

This news should help push AI adoption among organizations on the fence by offering near-peak performance at a much lower cost.

On the other side, OpenAI launched Codex Spark, a model built for raw speed, capable of 1,000 tokens per second. It’s designed for rapid prototyping and real-time collaboration, complementing the heavier Codex model for long-running agentic tasks.

Google got in on the fun, too, releasing Gemini 3.1 Pro. While this is still in preview, early benchmarks show that it’s currently far better at solving complex problems than Google’s previous mainstream model. According to Google, the ‘core intelligence’ of Gemini 3.1 Pro comes directly from the Deep Think model, which explains why 3.1 Pro performs so well on reasoning benchmarks.

The pattern is clear: performance rises, prices fall, and the barrier to entry for organizations adopting AI keeps dropping. If cost was the excuse for waiting, it shrinks with each new release.

OpenClaw: Still Fun, Still Messy, and Now Headless

OpenClaw continues to be one of the most fascinating open source projects in the AI space. If you haven’t been following it, OpenClaw is, in short, a platform that runs Claude Code autonomously to be a user’s personal AI assistant. Buy a Mac Mini, install OpenClaw, and let it run your (or its) life. Eivind Kjosbakken published this OpenClaw walkthrough on TDS, and it’s a great practical guide for personalizing its behavior with skills and connecting it to Slack, GitHub, and Gmail. He reports massive efficiency gains within a week. Projects like this reinforce that the era of the AI assistant isn’t just a pipe dream; it’s already becoming a reality.

The project took a turn on Sunday, though. OpenClaw founder Peter Steinberger announced he’s joining OpenAI to work on next-gen personal agents. The good news is OpenClaw is moving to a foundation, and Steinberger says it will remain open source. The less good news involves security. The New Stack reported that Snyk found over 7% of skills on ClawHub, the OpenClaw marketplace, contain flaws that expose sensitive credentials. Meanwhile, Anthropic caused a panic and later reaffirmed its policies, saying users can still use Claude accounts to run OpenClaw and similar tools.

The New Stack’s Frederic Lardinois interviewed the founder of NanoClaw, a lightweight alternative to OpenClaw. Gavriel Cohen built the alternative in a weekend after learning about security flaws in the popular agentic framework. NanoClaw launched on GitHub in late January and now has just under 10,000 stars. The core principle is radical minimalism: a few hundred lines of actual code, a handful of dependencies, and each agent running inside its own container.

 

India is Betting Big on AI

India this week hosted a massive AI event. At the India AI Impact Summit, Replit CEO Amjad Masad put it bluntly: “Two kids in India can now compete with Salesforce.” Also at the event, Adani Group pledged $100 billion towards building AI-ready data centers in India by 2035, partnering with Google, Microsoft, and Flipkart to build what they say will be the world’s largest integrated data center platform.

And if you needed a kicker for how seriously the world’s biggest AI players are taking India, OpenAI’s Sam Altman and Anthropic’s Dario Amodei both showed up on stage at the summit alongside PM Modi. They seemingly declined the traditional hand-in-hand unity pose, though, raising fists instead while standing side by side. It’s a small moment that captures an enormous truth: the rivalry between OpenAI and Anthropic is real, and the stakes are global.

AI’s Biggest Funding Yet

OpenAI and Anthropic are both raising capital at astronomical levels. According to Bloomberg, OpenAI’s latest round is on track to exceed $100 billion, with Amazon expected to invest up to $50 billion, SoftBank up to $30 billion, and Nvidia around $20 billion. The company’s valuation could exceed $850 billion. For context, last week, Anthropic closed a $30 billion round at a $380 billion valuation, with annualized revenue hitting $14 billion. Both companies are reportedly preparing for a potential IPO.

There’s more. Fei-Fei Li’s World Labs raised $1 billion for its “spatial intelligence” approach to AI. Their pitch involves building models that understand 3D worlds rather than just flat text and images. Investors include AMD, Autodesk, and Nvidia. In the past, a billion-dollar raise would dominate the news cycle, but this one is barely making it above the fold, suggesting that the scale of investment is radically shifting.

Ads or No Ads?

One more thread worth watching is how AI companies plan to actually make money. Perplexity announced that it’s ditching ads and going all-in on subscriptions. The reasoning is comforting: Users need to trust that every answer is the best possible answer, not one influenced by an advertiser. Valued at $20 billion with $200 million in ARR, Perplexity is betting that trust can be a moat.

Anthropic clearly feels the same way. The company ran a series of Super Bowl ads that are darkly funny, parodying what happens when your AI assistant starts serving ads mid-conversation. The tagline is sharp: “Ads are coming to AI. But not to Claude.” These spots were a direct shot at OpenAI, which recently began serving contextual ads at the bottom of ChatGPT conversations for free and Go-tier users.

As AI companies explore monetization, the lesson for users is the same: The underlying AI technology is being subsidized, competed over, and made more accessible (and cheaper) nearly every week.

The post Claude Code comes to Roadmap, OpenClaw loses its head, and AI workslop appeared first on The New Stack.


Beyond the vibe code: The steep mountain MCP must climb to reach production


One thing I didn’t do last year was go to any Model Context Protocol (MCP) conferences, largely because I couldn’t see why one very young protocol could represent the wide world of LLMs. But with the speed of AI development, MCP is now looking for a gym membership and checking its pension plan.

Going to the London MCP conference last week (they are worldwide now) let me dip into the issues developers are seeing around LLM-to-tool connections – and how tool vendors are responding to them. Obviously, some are pushing “Secure MCP Gateways” or “Governed MCP Infrastructure” for their enterprise customers, and naturally, some visitors are looking for these solutions.

Although it has yet to establish itself fully, MCP has already had a near-death experience.

OpenClaw was never directly mentioned, but it was definitely the ghost at the feast. MCP may have opened the door a little too wide – before we fully understood the security implications. (There is some irony here: OpenClaw is effectively an MCP client app – just one without opinions about invoking native MCP.)

MCP is still finding itself

Maybe it isn’t surprising that, in such a fast-moving environment, no single technology has had the time or space to establish itself properly. In terms of implementation, MCP servers are still mainly “internal,” that is, used behind a company firewall. Going from a quick vibe-coded solution to production remains a problem. The protocol was supposed to democratise access to existing services, and it has. Many companies have experimented with connecting incoming bug reports through email and creating issues in Salesforce or JIRA. After that, though, things are less clear.

I noticed that “tools” and “skills” were sometimes used interchangeably. Technically, tools are single-purpose instructions, whereas creating a Claude agent skill is just creating a folder that contains a SKILL.md file. The skill can also contain resources and tools. But I wonder whether the two terms simply stand in for “more MCP” these days.

“A2A” appeared on many slides without comment. Mentioning Google’s Agent-to-Agent Protocol is probably an offhand way to make a talk sound more general. I saw almost no mentions of the Agent Communication Protocol (ACP).

“Something, something, security”

Inevitably, questions about security have been gnawing (or clawing) at MCP servers. MCP was designed to keep the route from agent to data as friction-free as possible, with security optional, but security still matters. Duncan Doyle gave a nice, practical talk on MCP security with OAuth 2.1, the tenet of which (as with many other speakers) was “please implement security”.

As usual, the core problem is trust. While MCP can define what can be done, OAuth is left with the harder task of working out which entity is allowed access to what in a workflow. Duncan introduced the idea of security elicitation, which addresses the problems that arise when servers request additional information from users via clients mid-interaction.

The more you think about this, the worse it gets. How does an MCP server securely ask a client for your GitHub credentials? You soon realise that the MCP client must support an approval process for the user and clearly identify the intended recipient. The demo during the talk walked through the many MCP authorisation steps and just proved that a properly implemented server was no quick task. The finishing statement, “it has never been easier to get hijacked,” was as much a sign of the times as it was a direct criticism of the technology.
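
That authorisation problem can be reduced to a scope gate in front of tool dispatch. A minimal sketch (the token store, scope names, and policy table here are all hypothetical; a real server would validate a signed OAuth 2.1 access token against its authorization server, not an in-memory dict):

```python
# Illustrative scope gate in front of MCP tool dispatch.
# TOKEN_SCOPES stands in for real token introspection; in production the
# server would validate a signed JWT from an OAuth 2.1 authorization server.
TOKEN_SCOPES = {"token-abc": {"weather:read"}}   # hypothetical token store

REQUIRED_SCOPE = {                               # hypothetical policy table
    "get_weather": "weather:read",
    "ping": None,                                # health checks need no scope
}

def authorize(token: str, tool: str) -> bool:
    """Return True only if the token carries the scope the tool requires."""
    if tool not in REQUIRED_SCOPE:
        return False                 # fail closed on unknown tools
    required = REQUIRED_SCOPE[tool]
    if required is None:
        return True                  # tool explicitly requires no scope
    return required in TOKEN_SCOPES.get(token, set())
```

A gateway would call `authorize` before forwarding a tools/call request and return a JSON-RPC error on failure; failing closed on unknown tool names is the important design choice.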

Opening the window

The other big takeaway was that tools take up space in the context window, so returning 100 “useful” tools is usually a bad idea.

Simply reducing the number of tools the LLM is told about during a query saves space. This leads to the idea of progressive disclosure, which initially sounds like something from the Moulin Rouge, but is actually the practice of responding to tool requests with only the tools needed next. Similarly, overly large query results fill up the window of a long-running conversation. I also heard reference to “episodic memory”.
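
A toy illustration of progressive disclosure, assuming a hypothetical three-tool catalogue and crude keyword matching (real implementations would rank with embeddings or let the model request groups of tools on demand):

```python
# Toy progressive disclosure: surface only the tools relevant to the
# current query instead of the full catalogue, saving context-window space.
STOPWORDS = {"the", "a", "an", "in", "for", "to", "is", "what"}

TOOLS = {  # hypothetical catalogue: tool name -> description
    "get_weather": "Retrieve the weather forecast for a city",
    "create_issue": "Create an issue in the bug tracker",
    "send_email": "Send an email to a recipient",
}

def keywords(text: str) -> set[str]:
    """Lowercased content words of a string, stopwords removed."""
    return set(text.lower().split()) - STOPWORDS

def disclose(query: str, limit: int = 2) -> list[str]:
    """Rank tools by keyword overlap with the query; return the top hits."""
    q = keywords(query)
    scored = sorted(
        ((len(q & keywords(desc)), name) for name, desc in TOOLS.items()),
        reverse=True,
    )
    return [name for score, name in scored if score > 0][:limit]
```

For a query like "what is the weather forecast in Paris", only `get_weather` survives the filter, so the other tool schemas never enter the context window.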

A few talks declared “code mode” as some sort of win – when you connect to an MCP server, the Agents SDK will fetch the MCP server’s schema, and then convert it into a TypeScript API, complete with doc comments based on the schema.
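
The conversion step itself is mechanical: each tool’s JSON Schema becomes a typed function signature. A rough sketch of that idea (the schema and type mapping are deliberately simplified; the real Agents SDK handles nested schemas, required fields, and doc comments):

```python
# Sketch of "code mode": turn an MCP tool's JSON Schema into a
# TypeScript function stub the model can call as ordinary code.
TS_TYPES = {"string": "string", "number": "number", "boolean": "boolean"}

def to_ts_stub(name: str, schema: dict) -> str:
    """Render one tool schema as a TypeScript declaration string."""
    props = schema.get("properties", {})
    params = ", ".join(
        f"{p}: {TS_TYPES.get(s.get('type', 'string'), 'unknown')}"
        for p, s in props.items()
    )
    return f"declare function {name}({params}): Promise<string>;"

stub = to_ts_stub(
    "get_weather",
    {"type": "object", "properties": {"city": {"type": "string"}}},
)
```

Here `stub` becomes `declare function get_weather(city: string): Promise<string>;`, which is the shape the model then writes code against instead of emitting raw tool-call JSON.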

But as we saw with GSD, if the LLM can write enough notes that other sub-agents can pick up and read, the context window size problem can be continually averted.

Buying and selling the chat

“ChatGPT is the number one place to be right now” was not a statement to feed my developer mind, but it should attract some attention. ChatGPT apps are the settled name for the OpenAI tech I talked about at the end of last year, and are clearly intended to replace the web. The idea is that MCP servers are effectively brands or corporate services. Here, discovery is the driving force, and a tool call might result in a chance to buy some pizza. Processing payments presents an inherent dilemma: How do you reduce friction while still ensuring security? And here we bounce back to the security issues.

But there is no doubt that when a user is conversing on ChatGPT, there is no context switching, and the learning curve for most tasks is pretty short.

Conclusion

However you do it, it is still the developer’s job to build a service and expose it in an agent-friendly way. But while context windows have a finite size and security considerations are still not always implemented in full, there are clearly several paths MCP could take successfully. Maybe with some rigorous workouts, the middle-age spread can be halted.

And to end with the good advice of Teo Borschberg, here are some hints for starting your first internal MCP implementation:

  • Pick one internal system that people constantly ask questions about
  • Build one MCP server with read-only access
  • Give it to five people who aren’t engineers

The post Beyond the vibe code: The steep mountain MCP must climb to reach production appeared first on The New Stack.


Remote MCP Server using .NET SDK with VS Code Support



This project is a C# implementation of a Model Context Protocol (MCP) server that exposes weather forecast and health check tools. This server integrates with GitHub Copilot in VS Code, allowing AI assistants to call your custom tools.

Overview

This project demonstrates how to build an HTTP-based MCP server using the .NET SDK. The server exposes two tools:

  • ping: Health check with optional message echo
  • get_weather: Weather forecast retrieval for cities

Prerequisites

Project Structure

remote-MCP-servers-using-dotnet-sdk-vs-code-support/
├── src/
│   └── McpServer/
│       └── McpServer/
│           ├── McpServerTools.cs      # Tool definitions
│           ├── Program.cs             # Server entry point
│           └── McpServer.csproj       # Project file
├── .vscode/
│   └── mcp.json                       # MCP server configuration
└── README.md

Getting Started

1. Build and Run the Server

Open a terminal in the project root and run:

dotnet run --project .\src\McpServer\McpServer\McpServer.csproj

You should see output indicating the server is running:

info: Microsoft.Hosting.Lifetime[14]
      Now listening on: http://0.0.0.0:8081

2. Configure VS Code

Create or verify .vscode/mcp.json in your workspace:

{
  "servers": {
    "local-mcp-server": {
      "url": "http://localhost:8081/mcp",
      "type": "http"
    }
  }
}

3. Connect to the Server in VS Code

  1. Open Command Palette: Press Ctrl + Shift + P (Windows/Linux) or Cmd + Shift + P (Mac)

  2. List MCP Servers: Type and select List MCP Servers

  3. Select your server: Choose local-mcp-server from the list

  4. Available actions:

    • Start: Starts the MCP server connection
    • Stop: Stops the MCP server connection
    • Restart: Restarts the MCP server connection
    • Show Configuration: Displays the server configuration from mcp.json
    • Show Output: Opens the output panel showing server logs and tool discovery
    • Configure Model Access: Configures which AI models can access this MCP server

4. Verify Server Status

After starting the server, check the output panel for:

[info] Connection state: Running
[info] Discovered 2 tools

✅ If you see this, the server is ready to use!

⚠ If you see warnings about tool descriptions, ensure all methods in McpServerTools.cs have [Description] attributes.

Testing the MCP Server

Ping Test

Once the server is running, test it using the ping tool in GitHub Copilot Chat:

Open the Copilot Chat window and start a conversation.

Test 1: Basic Health Check

Ask: "Ping the server"
Expected Response: ✅ MCP server is alive.

(Screenshots: ask your question in Copilot Chat, then approve the tool call to see the result.)

Test 2: Echo Message

Ask: "Ping the server with message 'Hello MCP'"
Expected Response: ✅ MCP server received: Hello MCP

Weather Forecast Test

Test the weather forecast functionality:

Ask: "What's the weather in London?"
Expected Response: Weather forecast data for London

(Screenshots: ask your question in Copilot Chat, then approve the tool call to see the result.)

Available Tools

The local-mcp-server provides the following tools:

ping

Description: Health check tool that verifies the MCP server is running. If a message is provided, it echoes it back; otherwise, returns server health status.

Parameters:

  • message (string, optional): Optional message to echo back. If empty, returns health status.

Example Usage:

  • “Ping the server”
  • “Run a health check on the MCP server”

get_weather

Description: Retrieves the current weather forecast for a specified city.

Parameters:

  • city (string, required): The name of the city to get weather forecast for.

Example Usage:

  • “What’s the weather in Paris?”
  • “Get weather forecast for Tokyo”
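
Behind each of those chat prompts, the client issues an MCP tools/call JSON-RPC request to the server. A minimal sketch of the payload shape (method and field names follow the MCP specification; the id value is arbitrary):

```python
import json

# Shape of the JSON-RPC request an MCP client sends to invoke a tool.
def make_tool_call(name: str, arguments: dict, request_id: int = 1) -> dict:
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

payload = make_tool_call("get_weather", {"city": "London"})
print(json.dumps(payload, indent=2))
```

When Copilot Chat shows the approval prompt, this is the request it is asking you to approve; the server replies with a matching JSON-RPC response containing the tool result.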

Troubleshooting

Server Not Discovering Tools

Symptom:

[warning] Tool get_weather does not have a description
[warning] Tool ping does not have a description

Solution: Ensure all tool methods have [Description] attributes at the method level in McpServerTools.cs:

[McpServerTool]
[Description("Health check tool that verifies the MCP server is running...")]
public async Task<string> Ping([Description("Optional message...")] string message)

Server Won’t Start

Symptom: Port 8081 already in use

Solution:

  1. Stop any existing instances of the server
  2. Check for processes using port 8081: netstat -ano | findstr :8081
  3. Kill the process or change the port in Program.cs

Tools Not Visible in Copilot Chat

Symptom: Server is running but tools don’t appear in chat

Solution:

  1. Verify the server is running: Check output panel
  2. Restart VS Code: Ctrl + Shift + P → “Developer: Reload Window”
  3. Reconnect to the server: Use “List MCP Servers” command
  4. Ensure GitHub Copilot extension is up to date

Connection State: Stopped

Symptom: Server keeps stopping automatically

Solution:

  1. Check server logs for errors: “Show Output” command
  2. Verify the server is running on the correct port (8081)
  3. Test the endpoint manually: curl -k -v http://localhost:8081/api/healthz

Resources

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

License

MIT License

Repository

Repository
