
Radar Trends to Watch: May 2025


Anthropic’s Model Context Protocol (MCP) has received a lot of attention for standardizing the way models communicate with tools, making it much easier to build intelligent agents. Google’s Agent2Agent (A2A) now adds features that were left out of the original MCP specification: security, agent cards for describing agent capabilities, and more. Is A2A competitive or complementary? Is it another layer in a developing protocol stack for agentic applications? Similarly, Claude Code has been the flagship for agentic coding, the next step beyond cut-and-paste and comment completion (GitHub) models. Now, with OpenAI’s terminal-based Codex and Google’s Firebase Studio IDE, it has competition. The upside for Anthropic? These tools implicitly acknowledge that Anthropic is the AI vendor to beat.

Artificial Intelligence

  • OpenAI’s latest image generation model (gpt-image-1) is now available via the company’s API.
  • The European Space Agency and IBM have created TerraMind, a generative AI model of the Earth. Among other things, the model has been trained for climate forecasting. It’s available on Hugging Face
  • WhaleSpotter is an AI-enabled thermal camera that ships can use to spot whales in time to change course and avoid collisions. The system detects the heat from a whale’s spout.
  • Google’s latest reasoning model, Gemini 2.5 Flash, is now available in preview. Flash is a “hybrid reasoning model” that allows users to specify a “thinking budget” so they can control how much money (time, tokens) is spent on reasoning. 
  • MCP Run Python is an MCP server from Pydantic for running LLM-generated Python code in a sandbox. Simon Willison has a couple of fascinating demos
  • OpenAI has launched its o3 and o4-mini models. o3 is its most advanced reasoning model, and o4-mini is a smaller reasoning model designed to be faster and more cost-efficient. These new models replace o1 and o3-mini.
  • A model for maritime navigation has demonstrated that explaining the reason for navigational decisions increases trust and reduces human error
  • OpenAI has released GPT-4.1, including mini and nano versions. OpenAI claims that GPT-4.1 improves significantly on code generation and instruction following. All the models have a 1M token input window. The 4.1 series models are currently only available via the API. GPT-4 is slated to be retired, as is GPT-4.5 preview. 
  • A new paper from DeepMind describes some strategies for defending against prompt injection attacks. As Simon Willison writes, prompt injection has been around for two and a half years; this may be the first significant progress in defeating it.
  • ChatGPT can now reference your entire chat history. This is a significant extension of its older Memory feature, which could only remember a few pieces of information. 
  • MCP may be the basis for the next generation of AI-driven technology, but it’s important to remember security. Protocol vulnerabilities are as dangerous as SQL injection—and MCP has many of them. (No doubt A2A does too; it goes with the territory.)
  • Anthropic has announced a new Max Plan for Claude users to mitigate complaints that users are bumping into their usage limits too often. Max is $100 or $200 a month, for 5x or 20x more usage than Pro. It’s not cheap, but bumping into limits is frustrating.
  • For those of us who like keeping our AI close to home, there’s now DeepCoder, a 14B model that specializes in coding and that claims performance similar to OpenAI’s o3-mini. Dataset, code, training logs, and system optimizations are all open.
  • Two important papers from Anthropic give some clues about how agents think. And an article by Google’s Blaise Agüera y Arcas and James Manyika challenges our notions of how we think.
  • Google has announced its Agent2Agent protocol (A2A), to facilitate communications between intelligent agents. It provides communications between agents, agent discovery, and asynchronous task management. The company stresses that A2A is complementary to MCP. 
  • The Model Context Protocol is taking the AI world by storm. There are several projects listing MCP servers, including mcpservers.org, the awesome-mcp-servers GitHub repo, Glama’s list, and Cline’s MCP Marketplace (accessible through its plug-in). 
  • OpenAI is rolling out watermarks for its image generation model, possibly in response to reactions to its “Studio Ghibli” filter. Users with a paid account can apparently save images without watermarks. 
  • Meta has released the Llama 4 “herd” of open models. They’re all mixture-of-experts models with large context windows. Scout and Maverick both have 17B active parameters, with 16 and 128 “experts,” respectively; they’re available on llama.com and Hugging Face. Behemoth is a 288B active parameter (2T total) “teacher” model used to train other models. 
  • OpenAI is actually planning to release an open model? Surprise, surprise. Needless to say, it hasn’t been released yet. But they want feedback already.
  • Gemini 2.5 is now available to free users; select Gemini 2.5 Pro (Experimental) in the Gemini app. Some of its capabilities are restricted (for example, free users can’t upload documents). 
  • Can an AI be a trusted third party? Can it make a judgment based on information from two sources without revealing the information on which the judgment was based? The answer may be “yes.” It helps that models can be deleted.
  • Google’s open Gemma 3 models have taken several steps forward. They now support function calling and larger (128K) context windows. Quantization-aware training optimizes their performance to make the models accessible for less-powerful hardware: a single GPU or even a GPU-less laptop.

Programming

  • We do code reviews. Should we also do data reviews? As we become more dependent on AI and massive data pipelines, we need to know that our data is trustworthy.
  • When using Claude Code, the thinking budget is evidently controlled by using the words “think,” “think hard,” “think harder,” and “ultrathink” in prompts.
  • Kelsey Hightower sees the Nix project as a possible complement to Docker. Using Nix inside of Docker files leads to more efficient and reproducible builds.
  • OpenAI has also released Codex, a coding agent that runs in the terminal. It appears to be similar to Claude Code, but it has an open source license. 
  • The kro project (Kubernetes Resource Orchestrator) allows developers to build groups of Kubernetes resources that can be used to simplify Kubernetes cluster configurations in a vendor-independent way.
  • Python now has a tariff package to tax imports! 50% on NumPy, 200% on pandas. As in the real world, you only tax yourself.
  • Google’s Firebase Studio is a generative AI-native IDE for building full stack web applications. It’s getting good reviews online. In addition to integration with Git and GitHub, it’s integrated into Google Cloud, so it can deploy applications automatically.
  • OpenAI will require organization verification for developers to gain API access to future models. Despite the name, this status applies to individual developers and will require a valid government-issued ID; IDs from over 200 countries are acceptable.
  • Amazon’s Alexa has lost its shine, but the new Alexa+ is based on generative AI. The company is looking for developers to test its AI-native SDKs.
  • Although Rust code is still a small part of the Linux kernel, its presence is growing—and Rust’s memory safety is paying off. 
  • NVIDIA is adding native support for Python to CUDA, its toolkit for programming GPUs.
  • NVIDIA has also announced that a future version of CUDA will allow developers to treat large clusters of GPUs as a single virtual GPU. There’s no estimate for when these new features will be released.
  • Microsoft has published a paper about giving a code-generating LLM access to a Python debugger. Agentic vibe debugging, here we come!
  • Run a server in the browser? With Wasm, why not? It’s not a good production environment, but it could be ideal for development and debugging. 
  • Rust finally has a formal language specification! The spec was developed and donated to the Rust Foundation by Ferrous Systems, a company that develops Rust compilers. I’m shocked that one didn’t already exist—but apparently one didn’t.

Security

  • Policy Puppetry is a new prompt injection attack technique that works against all major LLMs. The attack works by writing the malicious prompt in a form that can be interpreted as a policy file that the LLM would be required to obey.
  • Windows Recall is back. It’s in the preview channel. Many of the problems appear to have been fixed. It’s not on by default, it can be uninstalled, and it can be used without a network connection. But it’s still creepy, and Microsoft’s reputation is a problem that remains.
  • Mitre’s CVE program (Common Vulnerabilities and Exposures) was almost defunded. Funding expired on April 15 and was only extended for 11 months on April 17. CVE has been essential in disseminating information about security weaknesses in computer systems. 
  • Google has announced end-to-end encryption (e2e) for Gmail. While this reduces the burden of implementing e2e encryption for IT departments, it’s debatable whether this is truly e2e. Recipients who don’t use Gmail can use a special subset of Gmail to read encrypted mail. 
  • OpenPubkey SSH simplifies using SSH with single sign-on. It adds SSH public keys to the ID tokens used by OpenID Connect. Short-lived SSH keypairs are created automatically when users sign in, and don’t need to be managed by users.

Infrastructure

  • Microsoft is working on a tool that automates fixing Windows 11 boot crashes. Boot crashes are typically caused by configuration errors or installing a bad device. A tool like this might have helped users to recover after the bad CrowdStrike update last year.

Web

  • Could OpenAI be the new Twitter? The company’s apparently in the early stages of creating a social network that integrates with ChatGPT.
  • xkcd’s annual belated April Fools’ joke on push notifications is a masterpiece. 
  • Mozilla is looking past its Thunderbird email client to Thundermail Pro, a full email service that’s designed to compete with Gmail. It will include a calendaring service and an AI tool for help writing messages.

Quantum Computing

  • Quantum messages have been sent over commercial communications infrastructure. The distance (254 km) almost doesn’t matter; what’s more important is that the experiment used commercial optical fiber with no cooling or other quantum-specific support.
  • An Australian company has developed an alternative to GPS that uses quantum sensors to pinpoint locations based on the Earth’s magnetic field. The device doesn’t emit signals, can filter out noise, and unlike current GPS systems, isn’t vulnerable to outages or attacks. 
  • Phasecraft has developed an algorithm that makes quantum simulations more efficient. This advance could help quantum computers to model chemical reactions and create new materials.

Robotics

  • Hugging Face has acquired Pollen Robotics and is planning to sell robots. Its first offering, Reachy 2, is a humanoid robot that can be programmed using Hugging Face’s LeRobot models.
  • RoboBee is a tiny flying robot (roughly an inch long) that can land safely on a leaf.

On May 8, O’Reilly Media will be hosting Coding with AI: The End of Software Development as We Know It—a live virtual tech conference spotlighting how AI is already supercharging developers, boosting productivity, and providing real value to their organizations. Register for free here.


MCP: What It Is and Why It Matters—Part 1


This is the first of five parts in this series.

1. ELI5: Understanding MCP

Imagine you have a single universal plug that fits all your devices—that’s essentially what the Model Context Protocol (MCP) is for AI. MCP is an open standard (think “USB-C for AI integrations”) that allows AI models to connect to many different apps and data sources in a consistent way. In simple terms, MCP lets an AI assistant talk to various software tools using a common language, instead of each tool requiring a different adapter or custom code.


So, what does this mean in practice? If you’re using an AI coding assistant like Cursor or Windsurf, MCP is the shared protocol that lets that assistant use external tools on your behalf. For example, with MCP an AI model could fetch information from a database, edit a design in Figma, or control a music app—all by sending natural-language instructions through a standardized interface. You (or the AI) no longer need to manually switch contexts or learn each tool’s API; the MCP “translator” bridges the gap between human language and software commands.

In a nutshell, MCP is like giving your AI assistant a universal remote control to operate all your digital devices and services. Instead of being stuck in its own world, your AI can now reach out and press the buttons of other applications safely and intelligently. This common protocol means one AI can integrate with thousands of tools as long as those tools have an MCP interface—eliminating the need for custom integrations for each new app. The result: Your AI helper becomes far more capable, able to not just chat about things but take actions in the real software you use.
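
The "universal remote" idea can be sketched in a few lines of Python. This is an illustrative toy, not the real MCP SDK: the `call_tool` function and the dict-based "servers" are hypothetical stand-ins showing how one request shape can reach very different tools.

```python
# Hypothetical sketch: every tool is reached through one request shape,
# so the assistant needs only a single calling convention. Names here
# (call_tool, the dict "servers") are illustrative, not the MCP SDK API.

def call_tool(server: dict, tool: str, arguments: dict) -> dict:
    """Send a uniformly shaped, JSON-RPC-style request to any MCP-like server."""
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }
    # A real client would send `request` over stdio or HTTP; here the
    # "server" is just a dict of Python callables standing in for apps.
    handler = server["tools"][request["params"]["name"]]
    return {"result": handler(**request["params"]["arguments"])}

# Two very different "apps" exposed through the same interface:
database = {"tools": {"query": lambda sql: [{"id": 1, "name": "Ada"}]}}
music = {"tools": {"play": lambda track: f"now playing {track}"}}

print(call_tool(database, "query", {"sql": "SELECT * FROM users"}))
print(call_tool(music, "play", {"track": "Clair de Lune"}))
```

The point of the sketch: the client code is identical for the database and the music player; only the server side knows the tool's internals.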

🧩 Built an MCP that lets Claude talk directly to Blender. It helps you create beautiful 3D scenes using just prompts!

Here’s a demo of me creating a “low-poly dragon guarding treasure” scene in just a few sentences👇

Video: Siddharth Ahuja

2. Historical Context: From Text Prediction to Tool-Augmented Agents

To appreciate MCP, it helps to recall how AI assistants evolved. Early large language models (LLMs) were essentially clever text predictors: Given some input, they’d generate a continuation based on patterns in training data. They were powerful for answering questions or writing text, but functionally isolated—they had no built-in way to use external tools or real-time data. If you asked a 2020-era model to check your calendar or fetch a file, it couldn’t; it only knew how to produce text.

2023 was a turning point. AI systems like ChatGPT began to integrate “tools” and plug-ins. OpenAI introduced function calling and plug-ins, allowing models to execute code, use web browsing, or call APIs. Other frameworks (LangChain, AutoGPT, etc.) emerged, enabling multistep “agent” behaviors. These approaches let an LLM act more like an agent that can plan actions: e.g., search the web, run some code, then answer. However, in these early stages each integration was one-off and ad hoc. Developers had to wire up each tool separately, often using different methods: One tool might require the AI to output JSON; another needed a custom Python wrapper; another a special prompt format. There was no standard way for an AI to know what tools are available or how to invoke them—it was all hard-coded.
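
The function-calling pattern described above can be sketched as follows. The schema and dispatch code are illustrative of the general shape, not any vendor's exact current format, and `get_weather` is a hypothetical stub.

```python
# Sketch of the function-calling pattern: the developer declares a JSON
# schema per tool, the model replies with a structured call, and the app
# parses and dispatches it by hand. Formats are illustrative.
import json

get_weather_schema = {
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stub standing in for a real weather API

# A model reply carrying a structured call instead of free text:
model_reply = '{"name": "get_weather", "arguments": {"city": "Lisbon"}}'
call = json.loads(model_reply)

# The per-tool glue code that every integration needed before a shared protocol:
dispatch = {"get_weather": get_weather}
result = dispatch[call["name"]](**call["arguments"])
print(result)  # Sunny in Lisbon
```

Note that the `dispatch` table and the schema both have to be hand-written for each tool; that per-tool wiring is exactly the fragmentation the rest of this article describes.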

By late 2023, the community realized that to fully unlock AI agents, we needed to move beyond treating LLMs as solitary oracles. This gave rise to the idea of tool-augmented agents—AI systems that can observe, plan, and act on the world via software tools. Developer-focused AI assistants (Cursor, Cline, Windsurf, etc.) began embedding these agents into IDEs and workflows, letting the AI read code, call compilers, run tests, etc., in addition to chatting. Each tool integration was immensely powerful but painfully fragmented: One agent might control a web browser by generating a Playwright script, while another might control Git by executing shell commands. There was no unified “language” for these interactions, which made it hard to add new tools or switch AI models.

This is the backdrop against which Anthropic (the creators of the Claude AI assistant) introduced MCP in late 2024. They recognized that as LLMs became more capable, the bottleneck was no longer the model’s intelligence, but its connectivity. Every new data source or app required bespoke glue code, slowing down innovation. MCP emerged from the need to standardize the interface between AI and the wide world of software—much like establishing a common protocol (HTTP) enabled the web’s explosion. It represents the natural next step in LLM evolution: from pure text prediction to agents with tools (each one custom) to agents with a universal tool interface.

3. The Problem MCP Solves

Without MCP, integrating an AI assistant with external tools is a bit like having a bunch of appliances each with a different plug and no universal outlet. Developers were dealing with fragmented integrations everywhere. For example, your AI IDE might use one method to get code from GitHub, another to fetch data from a database, and yet another to automate a design tool—each integration needing a custom adapter. Not only is this labor-intensive; it’s brittle and doesn’t scale. As Anthropic put it:

Even the most sophisticated models are constrained by their isolation from data, trapped behind information silos.… Every new data source requires its own custom implementation, making truly connected systems difficult to scale.

MCP addresses this fragmentation head-on by offering one common protocol for all these interactions. Instead of writing separate code for each tool, a developer can implement the MCP specification and instantly make their application accessible to any AI that speaks MCP. This dramatically simplifies the integration matrix: AI platforms need to support only MCP (not dozens of APIs), and tool developers can expose functionality once (via an MCP server) rather than partnering with every AI vendor separately.

Another big challenge was tool-to-tool “language mismatch.” Each software or service has its own API, data format, and vocabulary. An AI agent trying to use them had to know all these nuances. For instance, telling an AI to fetch a Salesforce report versus querying a SQL database versus editing a Photoshop file are completely different procedures in a pre-MCP world. This mismatch meant the AI’s “intent” had to be translated into every tool’s unique dialect—often by fragile prompt engineering or custom code. MCP solves this by imposing a structured, self-describing interface: Tools can declare their capabilities in a standardized way, and the AI can invoke those capabilities through natural-language intents that the MCP server parses. In effect, MCP teaches all tools a bit of the same language, so the AI doesn’t need a thousand phrasebooks.
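
The "self-describing interface" idea can be sketched like this: a server can be asked what tools it offers, in a standard shape, before any are invoked. The wire format below is a simplified approximation; consult the MCP specification for the real `tools/list` response, and note that `fetch_report` is a hypothetical example tool.

```python
# Sketch of a self-describing, MCP-style server: capabilities are declared
# in a standard shape, so a client can discover them with no per-tool code.
# The format is simplified; see the MCP spec for the real tools/list reply.
def list_tools() -> dict:
    return {
        "tools": [
            {
                "name": "fetch_report",
                "description": "Fetch a named sales report",
                "inputSchema": {
                    "type": "object",
                    "properties": {"report": {"type": "string"}},
                    "required": ["report"],
                },
            }
        ]
    }

# A client can enumerate any server's capabilities generically:
names = [t["name"] for t in list_tools()["tools"]]
print(names)  # ['fetch_report']
```

Because every server answers the same question in the same shape, an AI client needs one discovery routine rather than a thousand phrasebooks.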

The result is a much more robust and scalable architecture. Instead of building N×M integrations (N tools times M AI models), we have one protocol to rule them all. As Anthropic’s announcement described, MCP “replaces fragmented integrations with a single protocol,” yielding a simpler, more reliable way to give AI access to the data and actions it needs. This uniformity also paves the way for maintaining context across tools—an AI can carry knowledge from one MCP-enabled tool to another because the interactions share a common framing. In short, MCP tackles the integration nightmare by introducing a common connective tissue, enabling AI agents to plug into new tools as easily as a laptop accepts a USB device.


Raspberry Pi Connect is out of beta: simple remote access, now even better


It’s been just over a year since we launched the Raspberry Pi Connect beta, giving you simple, remote access to your Raspberry Pi straight out of the box, from anywhere in the world. The response from users has been fantastic, and we rapidly reached an install base of over 100,000 devices. Today we’re excited to announce that following the recent release of version 2.5, we’re dropping the “beta”.

Composite image: screen grabs of the Raspberry Pi Connect Dashboard, a connected device desktop showing Connect system tray icons, and the Connect command line interface

Smarter wake-ups: data-efficient connections in v2.5

Prior to version 2.5, the Connect client software running on a connected Raspberry Pi device would continually poll Raspberry Pi servers for requests to connect. This worked well for us because it was easy to scale – traffic was a predictable shape; there was just a lot of it. But it wasn’t ideal for users: their devices were regularly waking up to make HTTP requests, and data usage was higher than it needed to be.

Starting with version 2.5, the Connect client instead holds a single long-lived HTTP connection to a Raspberry Pi server. Now, when you click the “Connect” button on connect.raspberrypi.com, an event is broadcast to the device to wake it up and start the process of establishing a connection.
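
A back-of-the-envelope sketch shows why the change matters. The 10-second poll interval below is an assumption for illustration, not Connect's actual setting.

```python
# Request counts: periodic polling vs. one held-open connection.
# The 10-second interval is an illustrative assumption.
POLL_INTERVAL_S = 10
DAY_S = 24 * 60 * 60

polling_requests_per_day = DAY_S // POLL_INTERVAL_S
long_lived_connections = 1  # one connection, re-established only if it drops

print(polling_requests_per_day)  # 8640 wake-ups per day under polling
print(long_lived_connections)    # vs. a single mostly idle connection
```

Under those assumed numbers a device goes from thousands of wake-ups a day to effectively none, which is the data-usage win described above.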

Screen grab of a Raspberry Pi Connect dashboard showing a loading screen that reads: "Waiting for response from pitowers"

Optimised heartbeat for leaner dashboard updates

Separately from connection negotiation, the Connect client sends heartbeats to Raspberry Pi servers: periodically, on startup and shutdown of a device, and in response to changes in its internal state. For example, a user disallowing screen sharing via the CLI (command line interface) would trigger a heartbeat. This information is then used to keep your dashboard on connect.raspberrypi.com up to date.

Prior to version 2.5, the Connect client would send four heartbeats in rapid succession; this wasn’t a conscious design decision, but a side effect of how the client evolved over time. Starting with 2.5, these heartbeats are now debounced, and users should see many fewer requests to the Connect API outside of connection negotiation.

Also starting in 2.5, each individual heartbeat is now compressed before it is sent to the server, making it about 50% smaller.
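
The two 2.5 changes, collapsing bursts of heartbeats into one and compressing each payload, can be sketched in a few lines. The payload fields, the one-second window, and the use of gzip are illustrative assumptions, not details of the actual Connect client.

```python
# Sketch of debouncing heartbeat bursts and compressing payloads.
# Fields, window, and gzip choice are illustrative assumptions.
import gzip
import json

def debounce(events: list[dict], window_s: float = 1.0) -> list[dict]:
    """Collapse events firing within `window_s` of each other, keeping the last."""
    kept: list[dict] = []
    for event in events:
        if kept and event["t"] - kept[-1]["t"] < window_s:
            kept[-1] = event  # replace the pending event instead of queueing another
        else:
            kept.append(event)
    return kept

# Four heartbeats in rapid succession collapse to a single request:
burst = [{"t": 0.0}, {"t": 0.1}, {"t": 0.2}, {"t": 0.3}]
print(len(debounce(burst)))  # 1

# Compressing a heartbeat payload shrinks it substantially:
heartbeat = json.dumps({"screen_sharing": False, "state": "idle", "pad": "x" * 200})
compressed = gzip.compress(heartbeat.encode())
print(len(compressed) < len(heartbeat))  # True
```

Events spaced further apart than the window still go through individually, so genuine state changes are not lost.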

Screen grab of a Raspberry Pi Connect dashboard showing four devices and their connection status

How to update

To update to the latest version of Raspberry Pi Connect only, run the following commands (if you have installed Connect Lite, replace rpi-connect with rpi-connect-lite):

sudo apt update
sudo apt install --only-upgrade rpi-connect

This week’s other Raspberry Pi software news is that we’ve released a new version of Raspberry Pi OS; this has the latest version of Connect installed, so you might want to consider updating your OS. Read our post about the new release for instructions on how to do that.

If you haven’t tried Connect yet, check out our official guide to get it up and running on your devices.

The post Raspberry Pi Connect is out of beta: simple remote access, now even better appeared first on Raspberry Pi.


Podcast: InfoQ Culture & Methods Trends in 2025


By Charity Majors, Ben Linders, Rafiq Gemmail, Craig Smith, Shane Hastie

Official C# SDK for Model Context Protocol Announced


Compliance vs Commitment


I don't see criticism and (a certain level of) conflict as unhealthy in an organization. On the contrary! It is when people stop raising their voices and sharing their feedback that you need to start worrying. It could be a sign that people no longer care and have decayed from commitment to compliance.

Beyond following orders

As leaders we are constantly seeking ways to drive results. But there's a fundamental distinction worth understanding in how we achieve this:

If we collaborate, the result is commitment. If we coerce, the result is compliance.

Commitment comes from within, whereas compliance is forced by an external source. When people comply, they're simply following orders. They do just enough to get by, meet the bare minimum requirements, or check the box. There's no personal investment in the outcome.

Commitment, on the other hand, invites full participation, engagement, and discretionary effort.

The language of commitment

L. David Marquet states in his book ‘Leadership Is Language’ that our language reflects and reinforces this distinction in powerful ways. In the book he gives the example of how you talk to yourself when trying to change a habit:

  • "I can't eat sweets" (compliance) vs. "I don't eat sweets" (commitment)
  • "I can't miss this deadline" vs. "I don't miss deadlines"
  • "I can't spend my time that way" vs. "I don't spend my time that way"

The word "can't" suggests an external restriction—something is preventing you from doing what you actually want to do. "Don't," however, reflects an internal choice and identity. You're the kind of person who doesn't do that thing. This subtle shift in language creates a remarkable difference in results.

Choice: The foundation of commitment

For commitment to exist, there must first be choice.

If a person has no choice but to say yes, then what we have is compliance. 

This explains why common workplace initiatives to "inspire" and "empower" employees often fall flat—if people don't genuinely feel they have a choice, their response will be compliance at best.

Compliance also:

  • Gives people a pass on thinking
  • Removes personal responsibility ("I was just following orders")
  • Creates fragile operations due to lack of context

Building a culture of commitment

The key is creating conditions where people can make genuine commitments rather than merely complying with directives.

How can we foster commitment rather than settling for compliance?

  1. Provide context, not just directives. When people understand the "why" behind a request, they're more likely to commit.
  2. Create genuine choice. Even when options are limited, finding ways to give people some agency in how they approach their work fosters commitment.
  3. Use the language of commitment. Pay attention to how requests are framed and how decisions are discussed.
  4. Recognize that commitment is personal. Groups don't make commitments—individuals do. Honor the personal nature of commitment.

So the next time you're facing resistance or lackluster results, ask yourself: Am I seeking compliance or commitment? The answer might reveal exactly why you're not getting the engagement you desire.

More information

Discontinuous improvement

Appreciate, don’t evaluate

Trust & Inspire instead of Command & Control
