Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Docker, JetBrains, and Zed: Building a Common Language for Agents and IDEs


Developers live in their editors. As agents become capable enough to write and refactor code, they should work natively inside those environments. 

That’s why JetBrains and Zed are co-developing ACP, the Agent Client Protocol. ACP gives agents and editors a shared language, so any agent can read context, take actions, and respond intelligently without bespoke wiring for every tool.

Why it matters

Every protocol that’s reshaped development (LSP for language tools, MCP for AI context) works the same way: define the standard once, unlock the ecosystem. ACP does this for the editor itself. Write an agent that speaks ACP, and it works in JetBrains, Zed, or anywhere else that adopts the protocol. 
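
To make that concrete, here is a minimal sketch of the kind of JSON-RPC exchange an ACP-style agent and editor might have over stdio. The framing, method name, and fields below are illustrative assumptions, not the actual ACP specification:

import json
import sys

def send(message: dict) -> None:
    # One newline-delimited JSON-RPC message on stdout (illustrative framing).
    sys.stdout.write(json.dumps(message) + "\n")
    sys.stdout.flush()

# Hypothetical request an editor might send to an agent: the user's prompt
# plus some file context, asking the agent to propose edits.
send({
    "jsonrpc": "2.0",
    "id": 1,
    "method": "session/prompt",  # placeholder method name
    "params": {
        "prompt": "Rename this function to parse_config",
        "context": {"uri": "file:///src/config.py", "selection": [10, 24]},
    },
})

# The agent answers on the same channel; the editor applies the proposed
# edits through the protocol instead of tool-specific plumbing.
reply_line = sys.stdin.readline()
if reply_line:
    print("agent replied:", json.loads(reply_line))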

Docker’s contribution

Docker’s cagent, an open-source multi-agent runtime, already supports ACP, alongside Claude Code, Codex CLI, and Gemini CLI. Agents built with cagent can run immediately in any ACP-compatible editor, such as JetBrains IDEs or Zed.

We’ve also shipped Dynamic MCPs, letting agents discover and compose tools at runtime, surfaced directly in the editor where developers work.

What’s next

ACP is early, but the direction is clear. As agents embed deeper into workflows, the winners will be tools that interoperate. Open standards let everyone build on shared foundations instead of custom glue.

Docker will continue investing in ACP and standards that make development faster, more open, and more secure. When code, context, and automation converge, shared protocols ensure we move forward together.


Announcing vLLM v0.12.0, Ministral 3 and DeepSeek-V3.2 for Docker Model Runner


At Docker, we are committed to making the AI development experience as seamless as possible. Today, we are thrilled to announce two major updates that bring state-of-the-art performance and frontier-class models directly to your fingertips: the immediate availability of Mistral AI’s Ministral 3 and DeepSeek-V3.2, alongside the release of vLLM v0.12.0 on Docker Model Runner.

Whether you are building high-throughput serving pipelines or experimenting with edge-optimized agents on your laptop, today’s updates are designed to accelerate your workflow.

Meet Ministral 3: Frontier Intelligence, Edge Optimized


While vLLM powers your production infrastructure, we know that development needs speed and efficiency right now. That’s why we are proud to add Mistral AI’s newest marvel, Ministral 3, to the Docker Model Runner library on Docker Hub.

Ministral 3 is Mistral AI’s premier edge model. It packs frontier-level reasoning and capabilities into a dense, efficient architecture designed specifically for local inference. It is perfect for:

  • Local RAG applications: Chat with your docs without data leaving your machine.
  • Agentic Workflows: Fast reasoning steps for complex function-calling agents.
  • Low-latency prototyping: Test ideas instantly without waiting for API calls.

DeepSeek-V3.2: The Open Reasoning Powerhouse


We are equally excited to introduce support for DeepSeek-V3.2. Known for pushing the boundaries of what open-weights models can achieve, the DeepSeek-V3 series has quickly become a favorite for developers requiring high-level reasoning and coding proficiency.

DeepSeek-V3.2 brings Mixture-of-Experts (MoE) architecture efficiency to your local environment, delivering performance that rivals top-tier closed models. It is the ideal choice for:

  • Complex Code Generation: Build and debug software with a model specialized in programming tasks.
  • Advanced Reasoning: Tackle complex logic puzzles, math problems, and multi-step instructions.
  • Data Analysis: Process and interpret structured data with high precision.

Run Them with One Command

With Docker Model Runner, you don’t need to worry about complex environment setups, Python dependencies, or weight downloads. We’ve packaged both models so you can get started immediately.

To run Ministral 3:

docker model run ai/ministral3

To run DeepSeek-V3.2:

docker model run ai/deepseek-v3.2-vllm

These commands automatically pull the model, set up the runtime, and drop you into an interactive chat session. You can also point your applications to them using our OpenAI-compatible local endpoint, making them drop-in replacements for your cloud API calls during development.
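
As a sketch of that drop-in pattern, here is how you might point the official openai Python client at the local endpoint. The base URL below assumes Docker Model Runner’s host-side TCP access is enabled on its default port; adjust it to match your setup:

# pip install openai
from openai import OpenAI

# Assumed local endpoint for Docker Model Runner; adjust host/port to your configuration.
client = OpenAI(
    base_url="http://localhost:12434/engines/v1",
    api_key="not-needed-locally",  # placeholder; assumed not to be checked by the local runner
)

response = client.chat.completions.create(
    model="ai/ministral3",  # or "ai/deepseek-v3.2-vllm"
    messages=[{"role": "user", "content": "Summarize what Docker Model Runner does."}],
)
print(response.choices[0].message.content)

Because the endpoint speaks the OpenAI API, swapping between a local model and a cloud provider during development is just a change of base URL and model name.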

vLLM v0.12.0: Faster, Leaner, and Ready for What’s Next


We are excited to highlight the release of vLLM v0.12.0. vLLM has quickly become the gold standard for high-throughput and memory-efficient LLM serving, and this latest version raises the bar again.

Version 0.12.0 brings critical enhancements to the engine, including:

  • Expanded Model Support: Day-0 support for the latest architecture innovations, ensuring you can run the newest open-weights models (like DeepSeek V3.2 and Ministral 3) the moment they drop.
  • Optimized Kernels: Significant latency reductions for inference on NVIDIA GPUs, making your containerized AI applications snappier than ever.
  • Enhanced PagedAttention: Further optimizations to memory management, allowing you to batch more requests and utilize your hardware to its full potential, as the sketch below illustrates.
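
If you also drive vLLM directly from Python rather than through Docker Model Runner, batching several prompts through a single engine instance is where those memory-management gains show up. A minimal sketch; the model ID is an assumption, so substitute whichever open-weights checkpoint you are serving:

# pip install vllm
from vllm import LLM, SamplingParams

# One engine instance; PagedAttention lets it batch these prompts efficiently.
llm = LLM(model="mistralai/Ministral-8B-Instruct-2410")  # assumed model ID; swap in your checkpoint
params = SamplingParams(temperature=0.7, max_tokens=128)

prompts = [
    "Explain container image layers in one paragraph.",
    "Write a haiku about GPU memory.",
]

for output in llm.generate(prompts, params):
    print(output.outputs[0].text.strip())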

Why This Matters

The combination of Ministral 3, DeepSeek-V3.2, and vLLM v0.12.0 represents the maturity of the open AI ecosystem.

You now have access to a serving engine that maximizes data center performance, alongside a choice of models to fit your specific needs—whether you prioritize the edge-optimized speed of Ministral 3 or the deep reasoning power of DeepSeek-V3.2. All of this is easily accessible via Docker Model Runner.

How You Can Get Involved

The strength of Docker Model Runner lies in its community, and there’s always room to grow. We need your help to make this project the best it can be. To get involved, you can:

  • Star the repository: Show your support and help us gain visibility by starring the Docker Model Runner repo.
  • Contribute your ideas: Have an idea for a new feature or a bug fix? Create an issue to discuss it. Or fork the repository, make your changes, and submit a pull request. We’re excited to see what ideas you have!
  • Spread the word: Tell your friends, colleagues, and anyone else who might be interested in running AI models with Docker.

We’re incredibly excited about this new chapter for Docker Model Runner, and we can’t wait to see what we can build together. Let’s get to work!


The Secret Trick To Keep Copilot On Track With Your C# Code




The 6th edition of our Beginner’s Guide is available now!


It was just over two years ago that we introduced the 5th edition of The Official Raspberry Pi Beginner’s Guide. That edition featured the latest and greatest of our hardware and software at the time: Raspberry Pi 5 and Raspberry Pi OS Bookworm. Since then, we’ve released many shiny new things, and the time was right for a major update. The 6th edition is available now, featuring Raspberry Pi 500 and 500+, and Raspberry Pi OS Trixie. We’ve also incorporated numerous improvements, clarifications, and corrections.

[Image: the cover of the 6th edition of the Official Raspberry Pi Beginner's Guide]

Everything you need to get started

The book begins with a guided tour of Raspberry Pi hardware, covering the features and capabilities of the latest Raspberry Pi computers. After that, you’ll learn how to set up your Raspberry Pi and prepare it for its first boot. You’ll also get to know the desktop environment and find out how to install software (including the must-play games from Code the Classics Volume I and Volume II). And that’s just the first three chapters, which feature all-new full-colour images of Raspberry Pi 500+ and Raspberry Pi OS Trixie.

Write code with Python and Scratch

Chapters 4 through 7 introduce you to programming with Scratch and Python. You’ll start by writing simple programs that don’t require any extra hardware or peripherals. After you’ve mastered the basics of the two programming languages, you’ll move on to creating programs that interact with the outside world, taking your first step into physical computing with inputs such as push buttons and outputs such as LEDs. From there, you’ll progress to programming with the Sense HAT, an optional Raspberry Pi accessory that features on-board sensors and an LED matrix for visualisation.
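
As a taste of the physical computing the book works up to, here is a minimal example using gpiozero, the Python GPIO library that ships with Raspberry Pi OS. The pin numbers are just one possible wiring, not the one the book prescribes:

# Runs on Raspberry Pi OS, which includes the gpiozero library.
from signal import pause

from gpiozero import LED, Button

led = LED(17)       # LED wired to GPIO 17 (example pin choice)
button = Button(2)  # push button wired to GPIO 2 (example pin choice)

# Light the LED while the button is held down.
button.when_pressed = led.on
button.when_released = led.off

pause()  # keep the script running and responding to button events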

Accessorise your Raspberry Pi

The rest of the book continues the theme of “things you can wire up to your Raspberry Pi.” The penultimate chapter shows you how to use a Raspberry Pi Camera Module, while the final chapter covers the Raspberry Pi Pico 2 microcontroller board, building on the Python and physical computing skills you learned earlier. Finally, the Beginner’s Guide wraps up with some useful reference appendices.

Get it today

The Official Raspberry Pi Beginner’s Guide, 6th edition, is out today. You can pick up your copy from our store for just £19.99. It’s also available from online retailers such as Amazon, and from booksellers who have exceptional taste in books. If you’re interested in an electronic version, we’ve got several ways you can get your hands on a PDF or ePUB.

After you add this new book to your shopping cart, be sure to check out the many other books we offer in our online shop.



The Download: Zork goes open source, Blender 5.0, unlocking AirPods & more

From: GitHub
Duration: 4:22

This week on The Download, Andrea Griffiths covers the release of Blender 5.0 and a major open source move from Microsoft, Xbox, and Activision involving the classic game Zork. We also look at a new proposal from Anthropic and OpenAI for the Model Context Protocol (MCP) and explore LibrePods, a project that unlocks AirPods features on non-Apple devices.

#TheDownload #DevNews #OpenSource

— CHAPTERS —

00:00 Welcome to The Download
00:30 Blender 5.0 ships with new features
01:04 Microsoft and Activision open source Zork
01:56 Anthropic and OpenAI propose MCP extension
02:50 GitHub Actions cache size increases
03:12 Project spotlight: LibrePods
03:55 Summary



OpenAI Calls a ‘Code Red’ + Which Model Should I Use? + The Hard Fork Review of Slop

“For OpenAI to realize its ambitions, it is not going to be enough for them to make a model that is as good as Gemini 3. They need to be able to leapfrog it again.”