Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Radar Trends to Watch: March 2026


The explosion of interest in OpenClaw was one of the last items added to the February 1 trends. In February, things went crazy. We saw a social network for agents (no humans allowed, though they undoubtedly sneak on); a multiplayer online game for agents (again, no humans); many clones of OpenClaw, most of which attempt to mitigate its many security problems; and much more. Andrej Karpathy has said that OpenClaw is the next layer on top of AI agents. If the security issues can be resolved (an open question), he’s probably right.

AI

  • Alibaba has released a fleet of mid-size Qwen 3.5 models. Their theme is providing more intelligence with fewer compute cycles—something we all need to appreciate. 
  • Important advice for agentic engineering: Always start by running the tests.
  • Google has released Lyria 3, a model that generates 30-second musical clips from a verbal description. You can experiment with it through Gemini.
  • There’s a new protocol in the agentic stack. Twilio has released the Agent-2-Human (A2H) protocol, which facilitates handoffs between agents and humans as they collaborate.
  • Still more model releases: Claude Sonnet 4.6, followed quickly by Gemini 3.1 Pro. If you care, Gemini 3.1 Pro currently tops the abstract reasoning benchmarks.
  • Kimi Claw is yet another variation on OpenClaw. Kimi Claw uses Moonshot AI’s most advanced model, Kimi K2.5 Thinking, and offers one-click setup in Moonshot’s cloud.
  • NanoClaw is another OpenClaw-like AI-based personal assistant that claims to be more security conscious. It runs agents in sandboxed Linux containers with limited access to outside resources, limiting abuse. 
  • OpenAI has released a research preview of GPT-5.3-Codex-Spark, an extremely fast coding model that runs on Cerebras hardware. The company claims that it’s possible to collaborate with Codex in “real time” because it gives “near-instant” results.
  • RAG may not be the newest idea in the AI world, but text-based RAG is the basis for many enterprise applications of AI. Most enterprise data, however, includes graphs, images, and even text in formats like PDF. Is this the year for multimodal RAG?
  • Z.ai has released its latest model, GLM-5. GLM-5 is an open source “Opus-class” model. It’s significantly smaller than Opus and other high-end models, though still huge; the mixture-of-experts model has 744B parameters, with 40B active.
  • Waymo has created a World Model to model driving behavior. It’s capable of building lifelike simulations of traffic patterns and behavior, based on video collected from Waymo’s vehicles.
  • Recursive language models (RLMs) solve the problem of context rot, which happens when output from AI degrades as the size of the context increases. Drew Breunig has an excellent explanation.
  • You’ve heard of Moltbook—and perhaps your AI agent participates. Now there’s SpaceMolt—a massive multiplayer online game that’s exclusively for agents. 
  • Anthropic and OpenAI simultaneously released Claude Opus 4.6 and GPT-5.3-Codex, both of which offer improved models for AI-assisted programming. Is this “open warfare,” as AINews claims? You mean it hasn’t been open warfare prior to now?
  • If you’re excited by OpenClaw, you might try NanoBot. It has 1% of OpenClaw’s code, written so that it’s easy to understand and maintain. No promises about security—with all of these personal AI assistants, be careful!
  • OpenAI has launched a desktop app for macOS along the lines of Claude Code. It’s something that’s been missing from their lineup. Among other things, it’s intended to help programmers work with multiple agents simultaneously.
  • Pete Warden has put together an interactive guide to speech embeddings for engineers, and published it as a Colab notebook.
  • Aperture is a new tool from Tailscale for “providing visibility into coding agent usage,” allowing organizations to understand how AI is being used and adopted. It’s currently in private beta.
  • OpenAI Prism is a free workspace for scientists to collaborate on research. Its goal is to help scientists build a new generation of AI-based tooling. Prism is built on ChatGPT 5.2 and is open to anyone with a personal ChatGPT account.
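The recursive-language-model item above lends itself to a short sketch. The core move is to avoid one ever-growing prompt: split the context into chunks, compress each one, and recurse until the result fits a budget. Everything here is illustrative; `summarize` is a stub standing in for a real model call, and the chunk sizes are arbitrary.

```python
# Hedged sketch of the recursive-language-model idea: rather than stuffing an
# ever-growing context into a single call (where quality degrades as the
# context grows -- "context rot"), split it into chunks, compress each chunk,
# and recurse until the whole thing fits the budget.

def summarize(text: str, limit: int) -> str:
    # Stub: a real RLM would ask a model for a summary here.
    return text[:limit]

def fit_context(context: str, budget: int, chunk_size: int = 1000) -> str:
    if len(context) <= budget:
        return context
    chunks = [context[i:i + chunk_size] for i in range(0, len(context), chunk_size)]
    # Compress each chunk roughly 4x, then recurse on the compressed result.
    compressed = "".join(summarize(c, chunk_size // 4) for c in chunks)
    return fit_context(compressed, budget, chunk_size)

long_history = "event " * 5000  # ~30,000 characters of transcript
print(len(fit_context(long_history, budget=2000)) <= 2000)  # True
```

Because each pass shrinks the context by a constant factor, the recursion always terminates; the real design question, which the stub sidesteps, is how much each summarization pass loses.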

Programming

  • Pi is a very simple but extensible coding agent that runs in your terminal.
  • Researchers at Anthropic have vibe-coded a C compiler using a fleet of Claude agents. The experiment cost roughly $20,000 worth of tokens, and produced 100,000 lines of Rust. They are careful to say that the compiler is far from production quality—but it works. The experiment is a tour de force demonstration of running agents in parallel. 
  • I never knew that macOS had a sandboxing tool. It looks useful. (It’s also deprecated, but looks much easier to use than the alternatives.)
  • GitHub now allows pull requests to be turned off completely, or to be limited to collaborators. They’re doing this to allow software maintainers to eliminate AI-generated pull requests, which are overwhelming many developers.
  • After an open source maintainer rejected a pull request generated by an AI agent, the agent published a blog post attacking the maintainer. The maintainer responded with an excellent analysis, asking whether threats and intimidation are the future of AI.
  • As Simon Willison has written, the purpose of programming isn’t to write code but to deliver code that works. He’s created two tools, Showboat and Rodney, that help AI agents demo their software so that the human authors can verify that the software works. 
  • Anil Dash asks whether codeless programming, using tools like Gas Town, is the future.
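The macOS sandboxing item above refers to `sandbox-exec`, which reads Seatbelt profiles written in a small Scheme-like policy language. A minimal illustrative profile might look like the following; treat the exact rule names as assumptions, since the tool is deprecated and only informally documented:

```
;; read-only.sb -- start from deny-all, then allow just enough to run a command
(version 1)
(deny default)
(allow process-exec)   ; let the target binary start
(allow file-read*)     ; read-only filesystem access; writes and network stay denied
```

Invoked as `sandbox-exec -f read-only.sb ls /tmp`. In practice you iterate on the profile, since real programs usually need additional allow rules (dynamic linker paths, fork, sysctls, and so on).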

Security

  • There is now an app that alerts you when someone in the vicinity has smart glasses.
  • Agentsh provides execution layer security by enforcing policies that prevent agents from doing damage. As far as agents are concerned, it’s a replacement for bash.
  • There’s a new kind of cyberattack: attacks against time itself. More specifically, this means attacks against clocks and protocols for time synchronization. These can be devastating in factory settings.
  • “What AI Security Research Looks Like When It Works” is an excellent overview of the impact of AI on discovering vulnerabilities. AI generates a lot of security slop, but it also finds critical vulnerabilities that would have been opaque to humans, including 12 in OpenSSL.
  • Gamifying prompt injection—well, that’s new. HackMyClaw is a game (?) in which participants send email to Flu, an OpenClaw instance. The goal is to force Flu to reply with secrets.env, a file of “confidential” data. There is a prize for the first to succeed.
  • It was only a matter of time: There’s now a cybercriminal who is actively stealing secrets from OpenClaw users. 
  • Deno’s secure sandbox might provide a way to run OpenClaw safely.
  • IronClaw is a personal AI assistant modeled after OpenClaw that promises better security. It always runs in a sandbox, never exposes credentials, has some defenses against prompt injection, and only makes requests to approved hosts.
  • A fake recruiting campaign is hiding malware in programming challenges that candidates must complete in order to apply. Completing the challenge requires installing malicious dependencies that are hosted on legitimate repositories like npm and PyPI.
  • Google’s Threat Intelligence Group has released its quarterly analysis of adversarial AI use. Their analysis includes distillation, or collecting the output of a frontier AI to train another AI.
  • Google has upgraded its tools for removing personal information and images, including nonconsensual explicit images, from its search results. 
  • Tirith is a new tool that hooks into the shell to block bad commands. This is often a problem with copy-and-paste commands that use curl to pipe an archive into bash. It’s easy for a bad actor to create a malicious URL that is indistinguishable from a legitimate URL.
  • Claude Opus 4.6 has been used to discover 500 0-day vulnerabilities in open source code. Many open source maintainers have complained about AI slop, and that abuse isn’t likely to stop, but AI is also becoming a valuable tool for security work.
  • Two coding assistants for VS Code are malware that send copies of all the code to China. Unlike lots of malware, they do their job as coding assistants well, making it less likely that victims will notice that something is wrong. 
  • Bizarre Bazaar is the name for a wave of attacks against LLM APIs, including self-hosted LLMs. The attacks attempt to steal resources from LLM infrastructure, for purposes including cryptocurrency mining, data theft, and reselling LLM access. 
  • The business model for ransomware has changed. Ransomware is no longer about encrypting your data; it’s about using stolen data for extortion. Small and mid-size businesses are common targets. 
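Tirith’s target, the curl-pipe-to-bash pattern, is worth spelling out. A safer habit is to download the script, verify it against a checksum published out of band, and read it before executing. A minimal sketch, simulated here with a local file (the vendor URL and the published checksum are stand-ins):

```shell
# Risky: curl -sL https://example.com/install.sh | bash   (runs unseen code)
# Safer: fetch to disk, verify, inspect, then run.
printf 'echo installer ran\n' > install.sh          # stands in for the download
expected=$(sha256sum install.sh | cut -d' ' -f1)    # in practice, from the vendor's site
echo "$expected  install.sh" | sha256sum -c -       # prints "install.sh: OK" or fails
bash install.sh
```

The verification step is only as trustworthy as the channel the checksum arrived on, which is exactly why the malicious-URL trick the Tirith item describes works so well against the one-liner.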

Web

  • Cloudflare has a service called Markdown for Agents that converts websites from HTML to Markdown when an agent accesses them. Conversion makes the pages friendlier to AI and significantly reduces the number of tokens needed to process them.
  • WebMCP is a proposed API standard that allows web applications to become MCP servers. It’s currently available in early preview in Chrome.
  • Users of Firefox 148 (which should be out by the time you read this) will be able to opt out of all AI features.
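The Markdown for Agents item rests on a simple observation: most HTML bytes are markup and styling that an agent never needs. A toy converter using only the standard library shows the effect; this is an illustrative sketch, not Cloudflare’s implementation, and the tag handling is deliberately minimal:

```python
from html.parser import HTMLParser

class MarkdownExtractor(HTMLParser):
    """Toy HTML-to-Markdown converter: keeps text, drops markup and styling."""
    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__()
        self.out, self.skip_depth = [], 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.skip_depth += 1
        elif tag == "h1":
            self.out.append("# ")
        elif tag == "li":
            self.out.append("- ")

    def handle_endtag(self, tag):
        if tag in self.SKIP:
            self.skip_depth = max(0, self.skip_depth - 1)
        elif tag in ("h1", "p", "li"):
            self.out.append("\n")

    def handle_data(self, data):
        if not self.skip_depth and data.strip():
            self.out.append(data.strip())

page = ('<html><head><style>body{margin:0}</style></head>'
        '<body><h1>Release notes</h1><ul><li>Faster</li><li>Smaller</li></ul></body></html>')
p = MarkdownExtractor()
p.feed(page)
markdown = "".join(p.out)
print(markdown)  # "# Release notes\n- Faster\n- Smaller\n"
print(len(markdown), "<", len(page))  # far fewer characters (and tokens) to process
```

Even this crude pass drops the stylesheet and all the tags, which is where the token savings come from on real pages.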

Operations

  • Wireshark is a powerful—and complex—packet capture tool. Babyshark is a text interface for Wireshark that provides an amazing amount of information with a much simpler interface.
  • Microsoft is experimenting with using lasers to etch data in glass as a form of long-term data storage.

Things

  • You need a desk robot. Why? Because it’s there. And fun.
  • Do you want to play Doom on a Lego brick? You can.


Read the whole story
alvinashcraft
55 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Voice Clones & Phony Texts: Artificial Intelligence Fraud


In an era where technology is advancing at a breathtaking pace, so are the methods of those who seek to exploit it. This episode of Mailin’ It! pulls back the curtain on the darker side of artificial intelligence, revealing how fraudsters are creating hyper-realistic scams that can fool even the most discerning eye. From voice clones that can mimic a loved one in distress to flawless phishing emails, the line between real and fake has never been more blurred. Joined by Stephanie Glad, the U.S. Postal Inspection Service's Program Manager for Mail Fraud, Karla and Jeff explore the anatomy of these sophisticated crimes, including the terrifyingly effective "Grandparent scam." Stephanie, a security expert who’s worked for both the FBI and CIA, takes our co-hosts through the subtle red flags to watch for—like inconsistencies in video calls and requests for payment via cryptocurrency. Their discussion also touches on preventative strategies, such as multi-factor authentication and what to do if you've been targeted. This is a must-listen for all of us trying to safely navigate our increasingly complex digital world. 


Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.





Download audio: https://afp-920619-injected.calisto.simplecastaudio.com/f32cca5f-79ec-4392-8613-6b30c923629b/episodes/ccb77e5e-b87d-457e-86e3-2ee59152e482/audio/128/default.mp3?aid=rss_feed&awCollectionId=f32cca5f-79ec-4392-8613-6b30c923629b&awEpisodeId=ccb77e5e-b87d-457e-86e3-2ee59152e482&feed=bArttHdR

5 ways the AI bubble could burst


In episode 90 of The AI Fix, VC investor Mercedes Bent shares an insider's view of how venture capital is reshaping the AI race, why she believes AI will create jobs instead of destroying them, and five ways the AI bubble could burst.

Also this week: Anthropic accuses three Chinese labs of scraping its models using thousands of fake accounts, an autonomous agent trashes a researcher's inbox (then apologises), and DeepMind's CEO proposes an “Einstein test” for AGI.

Episode links:


The AI Fix

The AI Fix podcast is presented by Mark Stockley.

Grab T-shirts, hoodies, mugs and other goodies in our online store.

Learn more about the podcast at theaifix.show, and follow us on Bluesky at @theaifix.show.

Never miss another episode by following us in your favourite podcast app. It's free!

Like to give us some feedback or sponsor the podcast? Get in touch.

Support the show and gain access to ad-free episodes by becoming a supporter: Join The AI Fix Plus!



Our Sponsors:
* Check out Anthropic: https://claude.ai/aifix


Advertising Inquiries: https://redcircle.com/brands

Privacy & Opt-Out: https://redcircle.com/privacy



Download audio: https://audio3.redcircle.com/episodes/a272f022-99ae-49c7-a22b-6b3905c9cd7e/stream.mp3

Random.Code() - General Refactorings in Rocks, Part 2

From: Jason Bock
Duration: 1:06:28
Views: 107

In this stream, I'll keep working on Rocks and making changes to the code base to make things a bit better.

https://github.com/JasonBock/Rocks/issues/408

#csharp #dotnet


When Teams Slowly Decay by Anointing a Hidden Dictator | Nigel Baker


Nigel Baker: When Teams Slowly Decay by Anointing a Hidden Dictator

Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes.

 

"The world won't end with a bang, but with a whimper. My great fear is not teams exploding like a bomb—that shows they care. The big thing for me is teams that decay slowly." - Nigel Baker

 

Nigel shares a pattern he has witnessed repeatedly: teams that self-destruct not through dramatic conflict, but through a slow, quiet decay. Referencing The Five Dysfunctions of a Team by Patrick Lencioni, he points to something even more insidious than inattention to results—teams that avoid taking responsibility for decision-making. 

When teams struggle with self-organization, they often try to "self-organize themselves out of self-organization" by anointing a hidden dictator: the big brain, the big mouth, the tech lead, or the project manager who everyone secretly defers to. Nigel offers two practical tools to counter this pattern. 

First, the "yes and" technique from improv comedy—instead of taking ownership away from team members, you accept their idea and add to it, keeping the ownership where it belongs. 

Second, Socratic questioning, where instead of passing knowledge from you to them, you help them pass knowledge from themselves to themselves. But Nigel adds an important caution: the Agile community has swung too far into pure coaching mode. Sometimes people genuinely need help, not therapy—they need to know which server the files are on, not a deep coaching question about their feelings.

 

In this segment, we talk about Paul Goddard's work on improv comedy in Agile, and the power of the "yes and" technique for keeping ownership with teams.

 

Self-reflection Question: Is your team quietly deferring all decisions to one person, and if so, what practical steps can you take to redistribute that ownership?

Featured Book of the Week: Leading Self-Directed Work Teams by Kimball Fisher

Nigel's book recommendations reflect his belief that the most inspiring ideas come from adjacent fields rather than Agile literature itself. Leading Self-Directed Work Teams by Kimball Fisher stands out because it explores similar principles to the Scrum Master role but without any Agile jargon—showing how a completely different industry arrived at the same insights about empowered teams. Nigel also recommends the Strategyzer books by Alex Osterwalder, including Business Model Generation and Testing Business Ideas, for the business thinking that coaches need but rarely pick up at work. Scrum Mastery by Geoff Watts remains his go-to foundational text for new Scrum Masters. And the book he waited 4.5 years for—until Amazon cancelled the pre-order—is the latest edition of The Facilitator's Guide to Participatory Decision Making by Sam Kaner, a deeply practical reference guide that gives real people real tools for real situations.

 

[The Scrum Master Toolbox Podcast Recommends]

🔥In the ruthless world of fintech, success isn't just about innovation—it's about coaching!🔥

Angela thought she was just there to coach a team. But now, she's caught in the middle of a corporate espionage drama that could make or break the future of digital banking. Can she help the team regain their mojo and outwit their rivals, or will the competition crush their ambitions? As alliances shift and the pressure builds, one thing becomes clear: this isn't just about the product—it's about the people.

 

🚨 Will Angela's coaching be enough? Find out in Shift: From Product to People—the gripping story of high-stakes innovation and corporate intrigue.

 

Buy Now on Amazon

 

[The Scrum Master Toolbox Podcast Recommends]

 

About Nigel Baker

 

Nigel Baker is a seasoned agile coach with a keen intellect, warm creativity, and thoughtful humour. With a career spanning software engineering, consultancy and global training, he inspires teams to thrive, not just perform. Outside work, he loves bold ideas, good conversation and a life well lived.

 

You can link with Nigel Baker on LinkedIn. You can also find Nigel at AgileBear.com.





Download audio: https://traffic.libsyn.com/secure/scrummastertoolbox/20260303_Nigel_Baker_Tue.mp3?dest-id=246429

Node.js 25.8.0 (Current)
