Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Copilot CLI 0.0.421

2026-03-03

  • Autopilot permission dialog appears on first prompt submission instead of on mode switch
  • AUTO theme now reads your terminal's ANSI color palette and uses it directly, so colors match your terminal theme
  • Add structured form input for the ask_user tool using MCP Elicitations (experimental)
  • Plugin commands read extraKnownMarketplaces from project-level .claude/settings.json for Claude compatibility
  • Git hooks can detect Copilot CLI subprocesses via the COPILOT_CLI=1 environment variable to skip interactive prompts
  • Spurious "write EIO" error entries no longer appear in the timeline during session resume or terminal state transitions
  • Python-based MCP servers no longer time out due to buffered stdout
  • Error when --model flag specifies an unavailable model
  • MCP server availability correctly updates after signing in, switching accounts, or signing out
  • Display clickable PR reference next to branch name in the status bar
  • Add --plugin-dir flag to load a plugin from a local directory
  • Mouse text selection is automatically copied to the Linux primary selection buffer (middle-click to paste)
  • Fix VS Code shift+enter and ctrl+enter keybindings for multiline input
  • Use consistent ~/.copilot/pkg path for auto-update instead of XDG_STATE_HOME
  • ACP clients can configure reasoning effort via session config options
  • Click links in the terminal to open them in your default browser
  • Support repo-level config via .github/copilot/config.json for shared project settings like marketplaces and launch messages
  • Streaming output no longer truncates when running in alt-screen mode
  • Right-click paste no longer produces garbled text on Windows
  • Shell command output on Windows no longer renders as "No changes detected" in the timeline
  • GitHub API errors no longer appear as raw HTTP messages in the terminal when using the # reference picker
  • Markdown tables render with proper column widths, word wrap, and Unicode borders that adapt to terminal width
  • MCP elicitation form displays taller multi-line text input, hides tab bar for single-field forms, and fixes error flashing on field navigation
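The COPILOT_CLI variable mentioned in the changelog can be checked from any Git hook. A minimal sketch, assuming a POSIX-shell pre-commit hook — the confirmation prompt is illustrative, not part of Copilot CLI:

```shell
#!/bin/sh
# pre-commit hook sketch: Copilot CLI sets COPILOT_CLI=1 in its subprocesses,
# so hooks can skip interactive prompts when no human is driving the session.

confirm_commit() {
    # Non-interactive Copilot CLI run: skip the prompt entirely.
    if [ "${COPILOT_CLI:-0}" = "1" ]; then
        return 0
    fi
    # Interactive path (illustrative): ask the human to confirm.
    printf "Proceed with commit? [y/N] " >&2
    read -r answer
    [ "$answer" = "y" ] || [ "$answer" = "Y" ]
}

# In an actual hook file, finish with:
#   confirm_commit || exit 1
```

With this guard in place, hooks stay interactive for humans while commits made by the agent go through without hanging on a prompt that nobody will answer.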
Read the whole story
alvinashcraft
1 hour ago
reply
Pennsylvania, USA
Share this story
Delete

Copilot Java SDK 1.0.10

Installation

⚠️ Disclaimer: This is an unofficial, community-driven SDK and is not supported or endorsed by GitHub. Use at your own risk.

📦 View on Maven Central

📖 Documentation · Javadoc

Maven

<dependency>
    <groupId>io.github.copilot-community-sdk</groupId>
    <artifactId>copilot-sdk</artifactId>
    <version>1.0.10</version>
</dependency>

Gradle (Kotlin DSL)

implementation("io.github.copilot-community-sdk:copilot-sdk:1.0.10")

Gradle (Groovy DSL)

implementation 'io.github.copilot-community-sdk:copilot-sdk:1.0.10'

What's Changed

📦 Other Changes

  • Fix invalid previous_tag parameter in release workflow by @Copilot in #134
  • Remove non-existent test coverage workflow from README by @Copilot in #133
  • Upstream sync: 2 commits (8598dc3, 304d812) by @Copilot in #136
  • Upstream sync: Add clone() methods to config classes (6 commits) by @Copilot in #138
  • Update docs coverage and hooks reference by @brunoborges in #141
  • Merge upstream SDK changes (2026-02-17) by @brunoborges in #142
  • Upstream sync: clientName, deny-by-default permissions, PermissionHandler.APPROVE_ALL by @Copilot in #144
  • Upstream sync: no-op (Python-only changes, 2026-02-23) by @Copilot in #146
  • Create Architectural Decision Record regarding SemVer policy pre 1.0 for breaking changes, such as introducing Virtual Threads. by @edburns in #149
  • Upstream sync: GitHubToken rename, sendAndWait cancellation fix, Foundry Local docs by @Copilot in #148
  • Upgrade Jackson to 2.21.1 to fix async parser DoS vulnerability (GHSA-72hv-8253-57qq) by @brunoborges in #155
  • Comply with steps sized XS or S in https://github.com/github/open-source-releases/issues/667. by @edburns in #158
  • On branch edburns/update-license by @edburns in #159
  • Fix CompactionTest timeout caused by prompt mismatch with snapshot by @Copilot in #160
  • [upstream-sync] Port 28 upstream commits from github/copilot-sdk (f0909a7→b9f746a) by @Copilot in #157
  • Sync docs and samples with breaking session permission API changes by @Copilot in #164
  • Upstream sync: session.setModel() and built-in tool override support by @Copilot in #162
  • Restructure upstream-merge prompt flow and add explicit documentation-impact gate by @Copilot in #168
  • Document missing PR #162 CopilotSession APIs in advanced guide by @Copilot in #166

Full Changelog: v1.0.9...v1.0.10


Radar Trends to Watch: March 2026

The explosion of interest in OpenClaw was one of the last items added to the February 1 trends. In February, things went crazy. We saw a social network for agents (no humans allowed, though they undoubtedly sneak on); a multiplayer online game for agents (again, no humans); many clones of OpenClaw, most of which attempt to mitigate its many security problems; and much more. Andrej Karpathy has said that OpenClaw is the next layer on top of AI agents. If the security issues can be resolved (an open question), he’s probably right.

AI

  • Alibaba has released a fleet of mid-size Qwen 3.5 models. Their theme is providing more intelligence with fewer computing cycles—something we all need to appreciate.
  • Important advice for agentic engineering: Always start by running the tests.
  • Google has released Lyria 3, a model that generates 30-second musical clips from a verbal description. You can experiment with it through Gemini.
  • There’s a new protocol in the agentic stack. Twilio has released the Agent-2-Human (A2H) protocol, which facilitates handoffs between agents and humans as they collaborate.
  • Yet more model releases: Claude Sonnet 4.6, followed quickly by Gemini 3.1 Pro. If you care, Gemini 3.1 Pro currently tops the abstract reasoning benchmarks.
  • Kimi Claw is yet another variation on OpenClaw. Kimi Claw uses Moonshot AI’s most advanced model, Kimi K2.5 Thinking, and offers one-click setup in Moonshot’s cloud.
  • NanoClaw is another OpenClaw-like AI-based personal assistant that claims to be more security conscious. It runs agents in sandboxed Linux containers with limited access to outside resources, which limits the potential for abuse.
  • OpenAI has released a research preview of GPT-5.3-Codex-Spark, an extremely fast coding model that runs on Cerebras hardware. The company claims that it’s possible to collaborate with Codex in “real time” because it gives “near-instant” results.
  • RAG may not be the newest idea in the AI world, but text-based RAG is the basis for many enterprise applications of AI. But most enterprise data includes graphs, images, and even text in formats like PDF. Is this the year for multimodal RAG?
  • Z.ai has released its latest model, GLM-5. GLM-5 is an open source “Opus-class” model. It’s significantly smaller than Opus and other high-end models, though still huge; the mixture-of-experts model has 744B parameters, with 40B active.
  • Waymo has created a World Model to model driving behavior. It’s capable of building lifelike simulations of traffic patterns and behavior, based on video collected from Waymo’s vehicles.
  • Recursive language models (RLMs) solve the problem of context rot, which happens when output from AI degrades as the size of the context increases. Drew Breunig has an excellent explanation.
  • You’ve heard of Moltbook—and perhaps your AI agent participates. Now there’s SpaceMolt—a massive multiplayer online game that’s exclusively for agents. 
  • Anthropic and OpenAI simultaneously released Claude Opus 4.6 and GPT-5.3-Codex, both of which offer improved models for AI-assisted programming. Is this “open warfare,” as AINews claims? You mean it hasn’t been open warfare prior to now?
  • If you’re excited by OpenClaw, you might try NanoBot. It has 1% of OpenClaw’s code, written so that it’s easy to understand and maintain. No promises about security—with all of these personal AI assistants, be careful!
  • OpenAI has launched a desktop app for macOS along the lines of Claude Code. It’s something that’s been missing from their lineup. Among other things, it’s intended to help programmers work with multiple agents simultaneously.
  • Pete Warden has put together an interactive guide to speech embeddings for engineers, and published it as a Colab notebook.
  • Aperture is a new tool from Tailscale for “providing visibility into coding agent usage,” allowing organizations to understand how AI is being used and adopted. It’s currently in private beta.
  • OpenAI Prism is a free workspace for scientists to collaborate on research. Its goal is to help scientists build a new generation of AI-based tooling. Prism is built on ChatGPT 5.2 and is open to anyone with a personal ChatGPT account.

Programming

  • Pi is a very simple but extensible coding agent that runs in your terminal.
  • Researchers at Anthropic have vibe-coded a C compiler using a fleet of Claude agents. The experiment cost roughly $20,000 worth of tokens, and produced 100,000 lines of Rust. They are careful to say that the compiler is far from production quality—but it works. The experiment is a tour de force demonstration of running agents in parallel. 
  • I never knew that macOS had a sandboxing tool. It looks useful. (It’s also deprecated, but looks much easier to use than the alternatives.)
  • GitHub now allows pull requests to be turned off completely, or to be limited to collaborators. They’re doing this to allow software maintainers to eliminate AI-generated pull requests, which are overwhelming many developers.
  • After an open source maintainer rejected a pull request generated by an AI agent, the agent published a blog post attacking the maintainer. The maintainer responded with an excellent analysis, asking whether threats and intimidation are the future of AI.
  • As Simon Willison has written, the purpose of programming isn’t to write code but to deliver code that works. He’s created two tools, Showboat and Rodney, that help AI agents demo their software so that the human authors can verify that the software works. 
  • Anil Dash asks whether codeless programming, using tools like Gas Town, is the future.

Security

  • There is now an app that alerts you when someone in the vicinity has smart glasses.
  • Agentsh provides execution layer security by enforcing policies that prevent agents from doing damage. As far as agents are concerned, it’s a replacement for bash.
  • There’s a new kind of cyberattack: attacks against time itself. More specifically, this means attacks against clocks and protocols for time synchronization. These can be devastating in factory settings.
  • “What AI Security Research Looks Like When It Works” is an excellent overview of the impact of AI on discovering vulnerabilities. AI generates a lot of security slop, but it also finds critical vulnerabilities that would have been opaque to humans, including 12 in OpenSSL.
  • Gamifying prompt injection—well, that’s new. HackMyClaw is a game (?) in which participants send email to Flu, an OpenClaw instance. The goal is to force Flu to reply with secrets.env, a file of “confidential” data. There is a prize for the first to succeed.
  • It was only a matter of time: There’s now a cybercriminal who is actively stealing secrets from OpenClaw users. 
  • Deno’s secure sandbox might provide a way to run OpenClaw safely.
  • IronClaw is a personal AI assistant modeled after OpenClaw that promises better security. It always runs in a sandbox, never exposes credentials, has some defenses against prompt injection, and only makes requests to approved hosts.
  • A fake recruiting campaign is hiding malware in programming challenges that candidates must complete in order to apply. Completing the challenge requires installing malicious dependencies that are hosted on legitimate repositories like npm and PyPI.
  • Google’s Threat Intelligence Group has released its quarterly analysis of adversarial AI use. Their analysis includes distillation, or collecting the output of a frontier AI to train another AI.
  • Google has upgraded its tools for removing personal information and images, including nonconsensual explicit images, from its search results. 
  • Tirith is a new tool that hooks into the shell to block bad commands. This is often a problem with copy-and-paste commands that use curl to pipe an archive into bash. It’s easy for a bad actor to create a malicious URL that is indistinguishable from a legitimate URL.
  • Claude Opus 4.6 has been used to discover 500 0-day vulnerabilities in open source code. Many open source maintainers have complained about AI slop, and that abuse isn’t likely to stop, but AI is also becoming a valuable tool for security work.
  • Two coding assistants for VS Code are malware that send copies of all the code to China. Unlike lots of malware, they do their job as coding assistants well, making it less likely that victims will notice that something is wrong. 
  • Bizarre Bazaar is the name for a wave of attacks against LLM APIs, including self-hosted LLMs. The attacks attempt to steal resources from LLM infrastructure, for purposes including cryptocurrency mining, data theft, and reselling LLM access. 
  • The business model for ransomware has changed. Ransomware is no longer about encrypting your data; it’s about using stolen data for extortion. Small and mid-size businesses are common targets. 
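The curl-pipe-to-bash pattern that the Tirith entry warns about can also be blocked with a much cruder guard. This toy wrapper is only an illustration of the idea, not how Tirith actually works: it refuses to execute a script arriving on stdin and asks the user to download and review it first.

```shell
#!/bin/sh
# Toy guard (illustrative, not Tirith's mechanism): refuse to run a script
# that is being piped in, e.g. `curl -fsSL $URL | guarded_bash`.
guarded_bash() {
    if [ ! -t 0 ]; then
        echo "guarded_bash: refusing piped input; download the script and review it first" >&2
        return 1
    fi
    bash "$@"
}

# Safer alternative to `curl -fsSL https://example.com/install.sh | bash`
# (hypothetical URL):
#   curl -fsSL -o install.sh https://example.com/install.sh
#   less install.sh        # read what it actually does
#   sh install.sh
```

The point of the download-then-inspect flow is that a malicious URL can be indistinguishable from a legitimate one, so the only reliable check is reading what the script actually does before running it.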

Web

  • Cloudflare has a service called Markdown for Agents that converts websites from HTML to Markdown when an agent accesses them. Conversion makes the pages friendlier to AI and significantly reduces the number of tokens needed to process them.
  • WebMCP is a proposed API standard that allows web applications to become MCP servers. It’s currently available in early preview in Chrome.
  • Users of Firefox 148 (which should be out by the time you read this) will be able to opt out of all AI features.

Operations

  • Wireshark is a powerful—and complex—packet capture tool. Babyshark is a text interface for Wireshark that provides an amazing amount of information with a much simpler interface.
  • Microsoft is experimenting with using lasers to etch data in glass as a form of long-term data storage.

Things

  • You need a desk robot. Why? Because it’s there. And fun.
  • Do you want to play Doom on a Lego brick? You can.



Voice Clones & Phony Texts: Artificial Intelligence Fraud

In an era where technology is advancing at a breathtaking pace, so are the methods of those who seek to exploit it. This episode of Mailin’ It! pulls back the curtain on the darker side of artificial intelligence, revealing how fraudsters are creating hyper-realistic scams that can fool even the most discerning eye. From voice clones that can mimic a loved one in distress to flawless phishing emails, the line between real and fake has never been more blurred. Joined by Stephanie Glad, the U.S. Postal Inspection Service's Program Manager for Mail Fraud, Karla and Jeff explore the anatomy of these sophisticated crimes, including the terrifyingly effective "Grandparent scam." Stephanie, a security expert who’s worked for both the FBI and CIA, takes our co-hosts through the subtle red flags to watch for—like inconsistencies in video calls and requests for payment via cryptocurrency. Their discussion also touches on preventative strategies, such as multi-factor authentication and what to do if you've been targeted. This is a must-listen for all of us trying to safely navigate our increasingly complex digital world. 







Download audio: https://afp-920619-injected.calisto.simplecastaudio.com/f32cca5f-79ec-4392-8613-6b30c923629b/episodes/ccb77e5e-b87d-457e-86e3-2ee59152e482/audio/128/default.mp3?aid=rss_feed&awCollectionId=f32cca5f-79ec-4392-8613-6b30c923629b&awEpisodeId=ccb77e5e-b87d-457e-86e3-2ee59152e482&feed=bArttHdR

5 ways the AI bubble could burst

In episode 90 of The AI Fix, VC investor Mercedes Bent shares an insider's view of how venture capital is reshaping the AI race, why she believes AI will create jobs instead of destroying them, and five ways the AI bubble could burst.

Also this week: Anthropic accuses three Chinese labs of scraping its models using thousands of fake accounts, an autonomous agent trashes a researcher's inbox (then apologises), and DeepMind's CEO proposes an “Einstein test” for AGI.


The AI Fix

The AI Fix podcast is presented by Mark Stockley.

Grab T-shirts, hoodies, mugs and other goodies in our online store.

Learn more about the podcast at theaifix.show, and follow us on Bluesky at @theaifix.show.

Never miss another episode by following us in your favourite podcast app. It's free!

Like to give us some feedback or sponsor the podcast? Get in touch.

Support the show and gain access to ad-free episodes by becoming a supporter: Join The AI Fix Plus!






Download audio: https://audio3.redcircle.com/episodes/a272f022-99ae-49c7-a22b-6b3905c9cd7e/stream.mp3

Random.Code() - General Refactorings in Rocks, Part 2

From: Jason Bock
Duration: 1:06:28
Views: 107

In this stream, I'll keep working on Rocks and making changes to the code base to make things a bit better.

https://github.com/JasonBock/Rocks/issues/408

#csharp #dotnet
