
Develop and deploy voice AI apps using Docker


Voice is the next frontier of conversational AI. It is the most natural modality for people to chat and interact with another intelligent being. However, the voice AI software stack is complex, with many moving parts. Docker has emerged as one of the most useful tools for AI agent deployment.

In this article, we’ll explore how to use open-source technologies and Docker to create voice AI agents that use your custom knowledge base, voice style, actions, and fine-tuned AI models, and that run on your own computer. It is based on a talk I recently gave at the Docker Captains Summit in Istanbul.

Docker and AI

Most developers consider Docker the “container store” for software. The Docker container provides a reliable and reproducible environment for developing software locally on your own machine and then shipping it to the cloud. It also provides a safe sandbox to isolate, run, and scale user-submitted software in the cloud. For complex AI applications, Docker provides a suite of tools that makes it easy for developers and platform engineers to build and deploy them.

  • The Docker container is a great tool for running software components and functions in an AI agent system. It can run web servers, API servers, workflow orchestrators, LLM actions or tool calls, code interpreters, simulated web browsers, search engines, and vector databases.
  • With the NVIDIA Container Toolkit, you can access the host machine’s GPU from inside Docker containers, enabling you to run inference applications such as LlamaEdge that serve open-source AI models inside the container.
  • The Docker Model Runner runs OpenAI-compatible API servers for open-source LLMs locally on your own computer.
  • The Docker MCP Toolkit provides an easy way to run MCP servers in containers and make them available to AI agents.

The EchoKit platform provides a set of Docker images and utilizes Docker tools to simplify the deployment of complex AI workflows.


EchoKit

EchoKit consists of a server and a client. The client can be an ESP32-based hardware device that listens for the user’s voice through a microphone, streams the audio to the server, and plays the server’s voice response through a speaker. EchoKit provides the device hardware specifications and firmware under open-source licenses. To see it in action, check out the project’s video demos.

You can check out the GitHub repo for EchoKit.

The AI agent orchestrator

The EchoKit server is an open-source AI service orchestrator focused on real-time voice use cases. It starts up a WebSocket server that listens for streaming audio input and returns streaming audio responses. It ties together multiple AI models, including voice activity detection (VAD), automatic speech recognition (ASR), large language models (LLM), and text-to-speech (TTS), using one model’s output as the input for the next model.

You can start an EchoKit server on your local computer and configure the EchoKit device to access it over the local WiFi network. The “edge server” setup reduces network latency, which is crucial for voice AI applications.

The EchoKit team publishes a multi-platform Docker image that you can use directly to start an EchoKit server. The following command starts the EchoKit server with your own config.toml file and runs in the background.

docker run --rm \
  -p 8080:8080 \
  -v $(pwd)/config.toml:/app/config.toml \
  secondstate/echokit:latest-server &

The config.toml file is mapped into the container to configure how the EchoKit server utilizes various AI services in its voice response workflow. The following is an example of config.toml. It starts the WebSocket server on port 8080. That’s why in the Docker command, we map the container’s port 8080 to the same port on the host. That allows the EchoKit server to be accessible through the host computer’s IP address. The rest of the config.toml specifies how to access the ASR, LLM, and TTS models to generate a voice response for the input voice data.

addr = "0.0.0.0:8080"
hello_wav = "hello.wav"

[asr]
platform = "openai"
url = "https://api.groq.com/openai/v1/audio/transcriptions"
api_key = "gsk_XYZ"
model = "whisper-large-v3"
lang = "en"
prompt = "Hello\n你好\n(noise)\n(bgm)\n(silence)\n"

[llm]
platform = "openai_chat"
url = "https://api.groq.com/openai/v1/chat/completions"
api_key = "gsk_XYZ"
model = "openai/gpt-oss-20b"
history = 20

[tts]
platform = "elevenlabs"
url = "wss://api.elevenlabs.io/v1/text-to-speech/"
token = "sk_xyz"
voice = "VOICE-ID-ABCD"

[[llm.sys_prompts]]
role = "system"
content = """
You are a comedian. Engage in lighthearted and humorous conversation with the user. Tell jokes when appropriate.

"""

The AI services configured for the above EchoKit server are as follows.

  • It utilizes Groq for ASR (voice-to-text) and LLM tasks. You will need to fill in your own Groq API key.
  • It utilizes ElevenLabs for streaming TTS (text-to-speech). You will need to fill in your own ElevenLabs API key.

Then, in the EchoKit device setup, you just need to point your device to the local EchoKit server.

ws://local-network-ip.address:8080/ws/

For more options on the EchoKit server configuration, please refer to our documentation!

The VAD server

Voice-to-text ASR is not sufficient by itself. It can hallucinate and generate nonsensical text if the input audio is not human speech (e.g., background noise, street noise, or music). It also cannot tell when the user has finished speaking, which is the moment the EchoKit server needs to ask the LLM to start generating a response.

A VAD model is used to detect human voice and conversation turns in the voice stream. The EchoKit team has a multi-platform Docker image that incorporates the open-source Silero VAD model. The image is much larger than the plain EchoKit server, and it requires more CPU resources to run. But it delivers substantially better voice recognition results. Here is the Docker command to start the EchoKit server with VAD in the background.

docker run --rm \
  -p 8080:8080 \
  -v $(pwd)/config.toml:/app/config.toml \
  secondstate/echokit:latest-server-vad &

The config.toml file for this Docker container also needs an additional line in the ASR section so that the EchoKit server knows to stream incoming audio data to the local VAD service and act on the VAD signals. The Docker container runs the Silero VAD model as an internal service inside the container, reachable at the vad_url shown below; there is no need to expose the VAD port to the host.

addr = "0.0.0.0:8080"
hello_wav = "hello.wav"

[asr]
platform = "openai"
url = "https://api.groq.com/openai/v1/audio/transcriptions"
api_key = "gsk_XYZ"
model = "whisper-large-v3"
lang = "en"
prompt = "Hello\n你好\n(noise)\n(bgm)\n(silence)\n"
vad_url = "http://localhost:9093/v1/audio/vad"

[llm]
platform = "openai_chat"
url = "https://api.groq.com/openai/v1/chat/completions"
api_key = "gsk_XYZ"
model = "openai/gpt-oss-20b"
history = 20

[tts]
platform = "elevenlabs"
url = "wss://api.elevenlabs.io/v1/text-to-speech/"
token = "sk_xyz"
voice = "VOICE-ID-ABCD"

[[llm.sys_prompts]]
role = "system"
content = """
You are a comedian. Engage in lighthearted and humorous conversation with the user. Tell jokes when appropriate.

"""

We recommend using the VAD-enabled EchoKit server whenever possible.

MCP services

A key feature of AI agents is the ability to perform actions, such as making web-based API calls, on behalf of LLMs. For example, the “US civics test prep” example for EchoKit requires the agent to fetch exam questions from a database and then generate responses that guide the user toward the official answer.

The Model Context Protocol (MCP) is the industry standard for providing tools (function calls) to LLM agents. For example, the DuckDuckGo MCP server provides a search tool that lets LLMs search the internet when the user asks for current information that is not available in the LLM’s pre-training data. The Docker MCP Toolkit provides a set of tools that make it easy to run MCP servers that can be utilized by EchoKit.


The command below starts a Docker MCP gateway server. The MCP protocol defines several ways for agents or LLMs to access MCP tools. Our gateway server is accessible through the streaming HTTP protocol at port 8011.

docker mcp gateway run --port 8011 --transport streaming

Next, you can add the DuckDuckGo MCP server to the gateway. The search tool provided by the DuckDuckGo MCP server is now available on HTTP port 8011.

docker mcp server enable duckduckgo

You can then configure the EchoKit server to use the DuckDuckGo MCP tools in the config.toml file.

[[llm.mcp_server]]
server = "http://localhost:8011/mcp"
type = "http_streamable"
call_mcp_message = "Please hold on a few seconds while I am searching for an answer!"

Now, when you ask EchoKit a current event question, such as “What is the latest Tesla stock price?”, it will first call the DuckDuckGo MCP’s search tool to retrieve this information and then respond to the user.

The call_mcp_message field is a message the EchoKit device will read aloud when the server calls the MCP tool. It is needed since the MCP tool call could introduce significant latency in the response.

Docker Model Runner

The EchoKit server orchestrates multiple AI services. In the examples in this article so far, the EchoKit server is configured to use cloud-based AI services, such as Groq and ElevenLabs. However, many applications—especially in the voice AI area—require the AI models to run locally or on-premises for security, cost, and performance reasons.

Docker Model Runner is Docker’s solution to run LLMs locally. For example, the following command downloads and starts OpenAI’s open-source gpt-oss-20b model on your computer.

docker model run ai/gpt-oss

The Docker Model Runner starts an OpenAI-compatible API server at port 12434, which can be used directly by the EchoKit server via config.toml.

[llm]
platform = "openai_chat"
url = "http://localhost:12434/engines/llama.cpp/v1/chat/completions"
model = "ai/gpt-oss"
history = 20
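
Before pointing EchoKit at it, you can sanity-check the local endpoint with an ordinary OpenAI-style request. This is only a sketch: it assumes the Model Runner’s TCP endpoint is reachable at port 12434 (as in the config above) and reuses the ai/gpt-oss model name from the earlier example.

curl http://localhost:12434/engines/llama.cpp/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "ai/gpt-oss",
        "messages": [{"role": "user", "content": "Tell me a short joke."}]
      }'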

At the time of this writing, the Docker Model Runner only supports LLMs. The EchoKit server still relies on cloud services, or local AI solutions such as LlamaEdge, for other types of AI services.

Conclusion

The complexity of the AI agent software stack has created new challenges in software deployment and security. Docker is a proven and extremely reliable tool for delivering software to production. Docker images are repeatable and cross-platform deployment packages. The Docker container isolates software execution to eliminate large categories of security issues.

With new AI tools, such as the Docker Model Runner and MCP Toolkit, Docker continues to address emerging challenges in AI portability, discoverability, and security.

The easiest, most reliable, and most secure way to set up your own EchoKit servers is to use Docker.


Docker Model Runner now included with the Universal Blue family


Running large language models (LLMs) and other generative AI models can be a complex, frustrating process of managing dependencies, drivers, and environments. At Docker, we believe this should be as simple as docker model run.

That’s why we built Docker Model Runner, and today, we’re thrilled to announce a new collaboration with Universal Blue. Thanks to the fantastic work of the Universal Blue contributors, Docker Model Runner is now included in OSes such as Aurora and Bluefin, giving developers a powerful, out-of-the-box AI development environment.

What is Docker Model Runner?

For those who haven’t tried it yet, Docker Model Runner is our new “it just works” experience for running generative AI models.

Our goal is to make running a model as simple as running a container.

Here’s what makes it great:

  • Simple UX: We’ve streamlined the process down to a single, intuitive command: docker model run <model-name>.
  • Broad GPU Support: While we started with NVIDIA, we’ve recently added Vulkan support. This is a big deal—it means Model Runner works on pretty much any modern GPU, including AMD and Intel, making AI accessible to more developers than ever.
  • vLLM support: Perform high-throughput inference with an NVIDIA GPU.

The Perfect Home for Model Runner

If you’re new to it, Universal Blue is a family of next-generation, developer-focused Linux desktops. They provide modern, atomic, and reliable environments that are perfect for “cloud-native” workflows.

As Jorge Castro, who leads developer relations at the Cloud Native Computing Foundation, explains, “Bluefin and Aurora are reference architectures for bootc, which is a CNCF Sandbox Project. They are just two examples showing how the same container pattern used by application containers can also apply to operating systems. Working with AI models is no different – one common set of tools, built around OCI standards.”

The team already ships Docker as a core part of its developer-ready experience. By adding Docker Model Runner to the default installation (specifically in the -dx mode for developers), they’ve created a complete, batteries-included AI development environment.

There’s no setup, no config. If you’re on Bluefin/Aurora, you just open a terminal and start running models.

Get Started Today

If you’re running the latest Bluefin LTS, you’re all set when you turn on developer mode. The Docker engine and Model Runner CLI are already installed and waiting for you. Aurora’s enablement instructions are documented here.

You can run your first model in seconds:

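
The original post shows a terminal screenshot here. As a rough sketch, the command looks like the following (the model name is only an example; other models from Docker’s ai/ catalog work the same way):

docker model run ai/gpt-oss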

This command will download the model (if not already cached) and run it, ready for you to interact with.

If you’re on another Linux distribution, you can get started just as easily. Just follow the instructions on our GitHub repository.

What’s Next?

This collaboration is a fantastic example of community-driven innovation. We want to give a huge shoutout to the greater bootc enthusiast community for their forward-thinking approach and for integrating Docker Model Runner so quickly.

This is just the beginning. We’re committed to making AI development accessible, powerful, and fun for all developers.

How You Can Get Involved

The strength of Docker Model Runner lies in its community, and there’s always room to grow. We need your help to make this project the best it can be. To get involved, you can:

  • Star the repository: Show your support and help us gain visibility by starring the Docker Model Runner repo.
  • Contribute your ideas: Have an idea for a new feature or a bug fix? Create an issue to discuss it. Or fork the repository, make your changes, and submit a pull request. We’re excited to see what ideas you have!
  • Spread the word: Tell your friends, colleagues, and anyone else who might be interested in running AI models with Docker.

We’re incredibly excited about this new chapter for Docker Model Runner, and we can’t wait to see what we can build together. Let’s get to work!


Adobe Firefly now supports prompt-based video editing, adds more third-party models

Adobe is updating its AI video-generation app, Firefly, with a new video editor that supports precise prompt-based edits, as well as adding new third-party models for image and video generation, including Black Forest Labs' FLUX.2 and Topaz Astra.

What Is Microsoft Copilot Chat?


Copilot Chat is a feature of Microsoft 365 Copilot that allows users to interact with Microsoft Copilot through a conversational, chat-style interface. Users type (or speak) their queries into a chat box, and Copilot then uses an underlying large language model based on GPT-5 to parse and respond to the user’s input.

Why is Microsoft Copilot Chat useful?

Copilot Chat offers diverse functionality within the Microsoft 365 ecosystem and benefits users in a number of different ways.

Microsoft Copilot Chat provides Office application assistance

Microsoft has integrated Copilot into its various Microsoft 365 apps, such as Word, Excel, PowerPoint, Outlook, OneNote, and Microsoft Teams. In each application, Microsoft Copilot acts as an AI assistant, helping end users to work more efficiently.

Example use cases for Copilot in Office apps:

  • Word: Copilot could be used to draft a document or to help the user with proofreading.
  • Excel: Reformat data, such as breaking addresses down into separate columns for name, street address, city, state, and zip code.
  • PowerPoint: A user can use Copilot to automatically build a presentation based on a document.
  • Teams: Copilot can help to summarize a meeting or to create follow-up items.
  • Outlook: Copilot can help a user to strike the right tone in an email message before sending it.

Microsoft Copilot does a great job of helping users deal with information overload. Users might, for instance, ask Copilot to generate a summary of a lengthy document so that they do not have to read the document in its entirety. Copilot works equally well for analyzing Excel spreadsheets.

AI as a research assistant

Copilot Chat can be useful when it comes to doing research. Microsoft 365 Copilot typically has access to the same data as the user who is using it. As such, a user who is trying to do research can ask Copilot relevant questions, and it will answer based on both public data and the organization’s data. As an example, the results might reference files that are stored within a SharePoint library or perhaps notes from a Microsoft Teams meeting.

The data used when formulating a response to a user’s prompt depends on several factors, such as the user’s licenses and the mode being used.

Microsoft Copilot Chat (Image Credit: Brien Posey/Petri.com)

When a user is working in Web mode, they open Copilot Chat in Microsoft Edge or in one of the other supported tools and enter a query. Copilot responds by examining public, web-based data.

In Work mode, Copilot uses the organization’s data when formulating its response. This may include things like Outlook emails, SharePoint documents, or Microsoft Teams chats. The user also has the ability to attach a file directly to their query and have Copilot use the uploaded file as the basis for its response.

This can be extremely useful for a user who wants to create a summary of a document’s key points or who perhaps wants to create an AI-generated FAQ section to go along with a document. Copilot can even use its knowledge of the document to help a user to brainstorm ideas.

To use Work mode, an organization will need to have a Microsoft 365 subscription. Additionally, users will also require a Microsoft 365 Copilot license.

Task automation

Microsoft Copilot can also be useful for automating tasks. By using or creating agents, organizations can automate various tasks. As an example, an agent may be able to generate a report or trigger an automated workflow. Organizations that want to build their own custom copilots can do so by using Microsoft Copilot Studio.

Simplified compliance

One of the most important benefits of using Copilot Chat is that, because it is part of the Microsoft 365 ecosystem, Chat adheres to the same permissions and offers the same enterprise data protection and Microsoft security as Microsoft 365 itself. This is extremely important for protecting an organization’s data.

If a user were to use another AI chat tool, the chat experience could compromise the organization’s data. For instance, file uploads to the third-party AI chat tool could cause the AI to ingest the organization’s private data. That data may then be used to further train the AI and could be exposed to users outside of the organization as a result.

The Copilot Chat mobile experience

Copilot Chat is not designed to be used on PCs exclusively. In fact, Microsoft has created a dedicated Copilot app so that mobile users can have the Copilot experience while on the go. The Copilot Mobile app is available on both iOS and Android.

The Microsoft 365 Copilot mobile app is not the only tool for allowing users to receive the Copilot experience on their mobile devices. Copilot Chat is being integrated into the Microsoft 365 mobile apps, meaning that mobile users can use Copilot in apps such as Word, Excel, and PowerPoint.

While the Copilot mobile app might not be quite as full featured as the desktop version, it is great for helping users to perform various tasks while they are away from their computer.

Mobile users often find that the mobile version of Copilot Chat is useful for helping them catch up quickly. A user might, for instance, ask Copilot to read and summarize their Outlook emails.

The mobile version of Copilot is also great for answering questions. Users can ask Copilot general knowledge questions and the Microsoft AI will respond with answers taken from the Internet. Additionally, a properly licensed user can ask questions about specific files or messages, and receive an answer right on their device.

The mobile version can also potentially be helpful for drafting content or brainstorming ideas. Although a mobile device might not be the best platform for authoring a lengthy document, Copilot can be helpful if inspiration strikes while a user is on the go.

Copilot Chat can also help users review content on their mobile device. A user may, for instance, use their iOS or Android device to send a Word document or even a PDF file to Copilot for analysis. Copilot can then answer questions about the file or even provide the user with a summary.

New capabilities coming to Copilot Chat

More recently, Microsoft has begun to roll out text-to-speech capabilities for Copilot, which will allow users to converse verbally with Copilot as though it were a person rather than an AI-powered assistant. Such capabilities should prove extremely useful to users who primarily interact with Copilot through the Copilot app. These users will be able to perform various tasks using nothing more than their voice. Rather than relying on an on-screen keyboard, users will be able to simply tell Copilot what they want to do, with Copilot responding verbally to such prompts.



Rust vs C++: competition or evolution in systems programming for 2026


C and C++ are the backbone of modern software. Operating systems, databases, game engines, and compilers all trace their roots to these languages. They give developers low-level control over memory, hardware, and performance. It’s a level of control that has defined decades of software development. However, the landscape has shifted, and the Rust vs. C++ conversation has become central to how teams approach modern systems programming.

For a long time, choosing C++ was straightforward. It offered speed, power, and reliability. But today, developers have one more option: Rust. Rust keeps many of the benefits of C++ while addressing some of its most persistent challenges, including memory safety, undefined behavior, and concurrency issues. It also comes with a modern, integrated toolchain that can make development smoother and less error-prone.

Since its 2010 debut, Rust has evolved from a niche project to a serious contender for systems programming. Its design enforces safety at compile time through concepts like ownership, borrowing, and lifetimes. Developers can write high-performance code without risking memory leaks or undefined behavior.

TLDR: C++ and Rust are both high-performance systems programming languages with different strengths. C++ is mature and flexible, offering low-level control and a vast ecosystem, while Rust emphasizes memory safety, concurrency, and modern tooling. Benchmarks show comparable performance, but Rust reduces runtime bugs through compile-time checks, whereas C++ relies on programmer discipline. Both languages are complementary, and the best choice depends on project needs, legacy requirements, and priorities for safety or control.

Rust’s philosophy: safety and speed

Rust’s design is deliberate. Every value has a single owner. Passing it to a function either transfers ownership or borrows it temporarily. The compiler checks these rules at compile time.

This system prevents memory leaks, dangling pointers, and data races before the program even runs. Rust allows unsafe code, but it must be clearly marked.

In short, Rust gives developers control without the common pitfalls of C++. It is safe by default and fast by design. C++ can offer similar power, but its safety relies heavily on the experience and discipline of the programmer.
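
As a minimal illustration of these ownership and borrowing rules (the function and variable names here are purely illustrative):

fn consume(s: String) {       // takes ownership of the String
    println!("owned: {s}");
}                              // s is dropped here

fn inspect(s: &String) {       // borrows the String without taking ownership
    println!("borrowed: {s}");
}

fn main() {
    let msg = String::from("hello");
    inspect(&msg);             // msg is only borrowed, so it remains usable
    consume(msg);              // ownership moves into consume()
    // println!("{msg}");      // uncommenting this fails to compile: use after move
}

The commented-out line is exactly the kind of mistake the compiler rejects before the program ever runs.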

Head-to-head: C++ versus Rust performance

Look at most benchmarking tests and there’s a common theme: in a Rust vs C++ battle, Rust will win on a few measures, but C++ will win on a few more (albeit by a small margin, typically sub-10%). What matters here is that pure performance scores don’t tell the whole story. But first, some sources.

[Image: Rust vs C++ performance bar chart]
  • The Computer Languages Benchmark Game is an ongoing project that compares languages on common algorithms like binary trees, regex, and n-body simulations. C and C++ tend to win out, but Rust is often within 5–10%  – and beats the older language on some measures.
  • Nicholas Nethercote’s Rust Performance Book shows how Rust performs on real-world codebases rather than distinct microbenchmarks, and the difference is telling. On “idiomatic” tasks like parsing data and spawning threads, Rust often parallels or beats C/C++. 
  • Data from Phoronix details some cases where Rust is a clear winner – including PNG decoding, where Rust-based memory-safe decoders “vastly outperformed” libraries for C thanks to efficient concurrency and safer memory handling.
  • An arXiv paper by Patrik Karlson compared matrix multiplication, merge sort, and file I/O operations, revealing that C++ performs better at matrix math but Rust beats it in merge sort, for similar overall performance when the benchmarks balance out.

For any benchmark pitting C++ against Rust, the C++ solution is probably more optimized than the Rust version; there are simply more C++ programmers out there with more years of experience. And straight benchmarks don’t show any of the sweat, toil, and tears behind the algorithm, such as how many times the C++ developer had to recompile versus a Rust counterpart. Coding isn’t just about benchmark scores; it’s about the process and the reliability of the code.

The pure benchmarks show roughly equivalent performance numbers, with Rust a few percent behind most of the time. So in domains demanding the last drop of low‑latency speed, C++ is marginally ahead. But here’s the insight: C++’s lead comes from “lab conditions” tests, and that lead disappears in “real world” tests. 

In other words, in the messy reality of a real coding team solving real problems, Rust draws level with C++ and is often ahead. And when you stir Rust’s strengths into the mix – like memory and thread safety guarantees – there are further surprises in store.

Quick Summary:

  • Rust: slightly behind in pure benchmarks, safer concurrency
  • C++: slightly ahead in microbenchmarks, requires high skill for safety

Taking out the trash: memory safety in C++ and Rust

C++ relies on manual memory management. Developers allocate and free memory with new and delete, or manage object lifetimes with raw pointers and smart pointers like std::unique_ptr and std::shared_ptr. (Dev tools can catch many issues, but not 100% reliably.) So the C++ coder spends countless hours dealing with memory leaks, dangling pointers, and buffer overflows.

Rust enforces strict rules at compile time through a system of ownership, borrowing, and lifetimes. Every value in Rust has a single owner; if that ownership moves – as in passing a variable to a function – the original binding can no longer be used unless the value was deliberately borrowed instead, with a “borrow checker” in the Rust compiler catching violations before the code ever runs.

Here’s a handy table of the key differences.

Feature | Rust | C++
Memory allocation | Automatic via ownership and lifetimes | Manual via raw pointers and smart pointers
Deallocation | Handled deterministically when the owner goes out of scope | Must be made explicit or managed with smart pointers
Dangling pointer protection | Guaranteed at compile time by the borrow checker | No compile-time guarantees; a common source of bugs
Use-after-free | Caught at compile time | Undefined behaviour
Double free | Prevented by the ownership model | Must be manually avoided
Concurrency safety | Enforced at compile time, so no data races unless explicitly marked | No automatic protection, with data races common
Null pointer issues | Option<T> replaces nullable pointers in safe code | Null dereferencing is often a cause of crashes
Unsafe code | Allowed but must be clearly marked | The usual case, with all code potentially unsafe
Tooling | Compiler enforces safety before runtime | Tools needed for runtime detection

Rust’s system reduces runtime bugs and improves reliability. C++ offers maximum freedom, but safety depends entirely on the developer’s discipline.
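
To make the Option<T> row concrete, here is a small, hypothetical sketch of how a lookup that might fail is expressed in safe Rust instead of returning a nullable pointer:

// Hypothetical lookup that may not find a result
fn find_user(id: u32) -> Option<String> {
    if id == 1 {
        Some(String::from("alice"))
    } else {
        None
    }
}

fn main() {
    match find_user(2) {
        Some(name) => println!("found {name}"),
        None => println!("no such user"), // the "absent" case must be handled explicitly
    }
}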

Addressing complexity: the systems programmer’s learning curve

Neither C++ nor Rust is an easy language for coding newbies. Both are industrial-strength tools, used to build operating systems, graphics engines, embedded systems, and other critical software. Writing reliable code in these languages requires focus, discipline, and careful planning.

Rust can feel particularly challenging at first. Its borrow checker, ownership model, and lifetimes introduce concepts that many developers have not encountered before. For programmers coming from Python or Java, the compiler may seem strict, but this strictness prevents many types of bugs before the code runs. Certain patterns, such as shared mutable state or cyclic data structures, work differently in Rust, which can make initial prototyping slower.

Some coders have compared it to flying a plane: ultimately you’ll travel faster, but there are far more checks and tests to do before you get off the ground. 

C++ is not easier. Its large feature set, templates, and legacy complexity can be overwhelming, even for experienced programmers. Reading and maintaining complex C++ code is notoriously difficult, and many developers cite this as a persistent challenge.

The key difference is that Rust catches many potential issues at compile time. Unsafe operations are still possible, but the compiler clearly flags them. Over the life of a project, this leads to fewer runtime crashes, more predictable performance, and safer, more maintainable code.

To help developers navigate Rust’s learning curve, JetBrains offers several educational resources:

  • Rust exercises in RustRover: Vitaly Bragilevsky’s blog post introduces RustRover’s built-in practice experience and explains how the environment guides learners through Rust concepts with real-time feedback. The post will give you an overview of how the exercises work inside the IDE.
  • Learn Rust plugin (compatible with RustRover and CLion): A guided learning plugin that teaches Rust fundamentals through interactive lessons, editor hints, and instant feedback. It works in both RustRover and CLion, so developers can learn inside the IDE while writing real code.
  • 100 Exercises to Learn Rust: Based on 100 Exercises to Learn Rust by Mainmatter’s Luca Palmieri, this course gives you a hands-on, test-driven path through Rust, starting with your first println! and progressing to advanced concepts like ownership, lifetimes, pattern matching, and generics.

With these resources, developers can build confidence in Rust gradually. While the learning curve is initially steep, the payoff is significant: safer code, fewer bugs, and more predictable development outcomes.

Tooling up: assessing C++ and Rust toolchain philosophies

While they’re far more accessible today than 20 years ago, low-level languages can make taking applications from beta to gold a Hard Problem. This means tooling is critical to productivity. C++ has long been the power player, but Rust, with a more modern design philosophy and a greater focus on developer experience, gives a strong showing.

Here’s our head-to-head.

C++: it’s powerful, but fragmented

  • C++ has a rich ecosystem, but it’s decentralized, with tools often optimized for a specific environment or purpose. 
  • Diverse build systems like Make, CMake, Meson, Bazel, and Ninja mean C++ developers spend a lot of time getting builds to work consistently across platforms. The State of C++ 2025 report provides a clear overview of how these tools are used in practice across many teams and platforms.
  • Package managers, including vcpkg, conan, hunter, and others, compete vigorously, but there’s no consensus or standards informing the contest, increasing complexity.
  • The compilers – gcc, clang, MSVC, and more – each have their own quirks, flags, and extension sets; this means writing portable C++ needs ultra-deep toolchain awareness.
  • Static analysis tools like clang-tidy, cppcheck, and Coverity can detect issues, but they’re hard to set up, especially for newcomers learning the C family.
  • IDEs like Microsoft’s Visual Studio and JetBrains’ own CLion, Rider, and ReSharper C++ (the JetBrains extension for Visual Studio) offer very mature support, but some advanced features still require additional plugins, which adds setup work for developers.

Summing up: while C++ offers great freedom and maturity in its tool space, the developer experience is inconsistent and often demands in-depth knowledge to navigate effectively.


You can try Rust right inside CLion. The Rust plugin is free for everyone and works seamlessly with your existing C++ setup. Use both languages in one IDE and switch between them whenever you need.

Rust: the “all batteries included” option

Rust ships with a unified, opinionated toolchain that just works:

  • cargo, Rust’s combined package manager and build system, handles compilation, dependencies, testing, benchmarking, documentation, and publishing in one box. Compared to C++, there’s no need to configure makefiles or wrangle libraries by hand.
  • Rust’s toolchain installer and updater, rustup, lets coders switch between toolchain versions or targets seamlessly, while IDEs such as JetBrains’ RustRover offer smart autocomplete, inline type hints, and compiler-powered refactorings.
  • With built-in formatting and linting, rustfmt and clippy let the Rust pro enforce style and catch common pitfalls before runtime, without needing third-party tools.
  • Excellent documentation tooling: cargo doc generates browsable HTML docs automatically from inline comments. It’s not perfect, but it works (a sketch of a typical cargo workflow follows below).
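
To make those bullets concrete, here is a minimal sketch of a typical cargo workflow (the project name is made up, and clippy and rustfmt are assumed to be installed as rustup components):

cargo new hello_echo     # scaffold a new binary crate
cd hello_echo
cargo build              # resolve dependencies and compile
cargo test               # run the test suite
cargo clippy             # lint for common pitfalls
cargo fmt                # apply standard formatting
cargo doc --open         # generate and open HTML docs built from /// comments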

Rust shows a clear advantage here: its newer tools are well integrated, fast, and largely standardized across the ecosystem. Once coders join the ranks of the Rustaceans (often from a C++ background), they will find sharper tools in the toolbox.

Communities, ecosystems, and demographics

C++ has a long history, with roots reaching back to the late 1970s, and it remains a cornerstone of modern software. Its community is global, with a strong presence in North America, Europe, China, and India. C++ is widely taught at universities, and there are over 13 million C/C++ developers worldwide.

Many are experienced professionals, with a significant portion over the age of 35. This depth of expertise and documentation means that for systems-level programming, almost any problem has been solved before, providing a strong support network for developers.

Rust, by contrast, belongs to a younger generation of languages. Its community is enthusiastic, fast-growing, and increasingly influential. Around 46% of Rust developers are under 30, while more than a quarter are in their 40s. Two-thirds have less than ten years of coding experience. Despite its relative youth, Rust has consistently been named the “most loved language” by Stack Overflow for nine years in a row. Most Rust developers use the language for hobbies or side projects, but professional adoption is steadily increasing.

(Source: Stack Overflow Developer Survey 2025, https://survey.stackoverflow.co/2025/technology)

Both communities face similar demographic challenges: fewer than 6% of developers are female. Beyond demographics, the communities differ philosophically. C++ reflects decades of established practices and legacy systems, while Rust emphasizes modern safety, concurrency, and developer ergonomics.

C++ will remain a dominant force in systems programming for years due to its installed base and mature ecosystem. Rust, however, is gaining traction rapidly, attracting developers with its safety-first approach, modern tooling, and growing library ecosystem. Both communities provide strong support for developers, but each reflects the priorities and challenges of the era in which the language has evolved.

Some use cases: the best fit for C/C++ and Rust

Choosing between C++, Rust, and their differing ecosystems isn’t just about syntax or performance. There’s a long list of other things to consider: the demands of the project, the maturity of the team, the environment in which the code will live, the toolchains available and understood. 

Both languages offer their users incredible power. They allow precise control and high performance. But they approach safety, tooling, and ergonomics very differently. Let’s summarize the differences with their strengths, weaknesses, and ideal use cases.

Short overview of C++

Key strengths: Broad platform support across industries; extremely high performance; mature ecosystem with deep domain libraries; rich selection of compilers and toolchains; large and experienced talent pool.
Key weaknesses: Manual memory management risks; fragmented tooling; undefined behavior that surfaces as runtime issues; accumulated complexity from decades of language evolution.
Best use cases: Real-time and performance-critical systems; extending or maintaining legacy C/C++ codebases; game development; embedded systems in automotive, industrial, consumer, and IoT environments.

Summarizing the Rustacean worldview: here are the pluses and minuses for the newer language. 

Short overview of Rust

Key strengths: Memory and thread safety enforced at compile time; modern and unified tooling (cargo, clippy, rust-analyzer); safe concurrency; strong documentation and package ecosystem; eliminates broad classes of C/C++-style bugs.
Key weaknesses: A steeper learning curve and smaller talent pool; slower compile times; a less developed ecosystem for narrow or highly specialized domains.
Best use cases: Security-sensitive software; safety-critical embedded systems; new systems programming, such as kernels, drivers, and file systems; concurrency-heavy services; greenfield infrastructure where maintainability and safety must be engineered from the start.

Wrapping it all up: a fair comparison with bright futures for both

C++ and Rust are less competitors and more companions. C++ is the battle-hardened veteran: proven, powerful, and deeply embedded in industrial software, game engines, and high-performance computing. Rust is the insurgent: modern, safe by default, and designed to prevent the mistakes C++ leaves to the programmer’s discipline.

This comparison isn’t about choosing a winner. Each language excels in different contexts. C++ gives expert developers maximum control across any platform. Rust trades some flexibility for reliability, catching many issues at compile time and making projects safer in the long run.

Both languages continue evolving. C++ adds modern features like ranges, concepts, and improved memory models. Rust improves compilation speed, performance, and ecosystem depth while gaining adoption across industries.

Rust didn’t come to replace C++. It provides another option: safe, fast, and enjoyable once you’re on top of the learning curve. The future isn’t C++ or Rust. It’s both, used where each makes the most sense. Let us know what you think in the comments.

Explore Rust for your next project with resources from JetBrains: JetBrains Academy and Rust exercises in RustRover.


Agents, Protocols, and Why We’re Not Playing Favorites


Over the past few weeks, after we announced ACP protocol support, I’ve gotten lots of messages asking something along these lines: “Microsoft just launched AgentHQ for GitHub and VS Code. So, does this mean JetBrains will drop ACP? Will you only support your own thing now?”

The quick answer: We’re not picking sides.

Here’s why – and what it means for you.

ACP and AgentHQ: What’s the difference?

First, let’s clear up a common confusion:

  • ACP (Agent Client Protocol) is something JetBrains is working on together with Zed. It’s an open, neutral way for IDEs and editors (like IntelliJ IDEA, PyCharm, Zed) to talk directly to coding agents – letting them open files, suggest edits, run tests, and more. Think of it like LSP (Language Server Protocol), but designed specifically for AI agents. Cool, right?
  • AgentHQ, launched by GitHub, is different. It’s a centralized system – like mission control – for managing AI agents inside the GitHub ecosystem and VS Code. It handles tasks, permissions, governance, and integration deeply tied to GitHub services. Also cool, but a different kind of cool, right?

In short: ACP is an open “language” any IDE or agent can speak. AgentHQ is a GitHub-specific platform for managing agents.

Our stance at JetBrains

Very simply: We’re committed to openness.

We want you to use any agent you prefer, whether it comes from us, GitHub, or anywhere else. That means:

  • We’ll support multiple protocols. ACP is open and neutral, but if another standard (like those from Microsoft or others) becomes popular with developers, we’ll happily integrate it too.
  • No lock-in. We built ACP because we believe in portable agents. Your agent shouldn’t be trapped in one IDE or cloud – it should work everywhere you need it to.
  • No secrets. We have no inside track on Microsoft’s future plans regarding ACP or their own protocols. We’re making our choices based on what’s available now, not speculation.

What this means for agent builders

If you’re currently building agents, here’s the practical advice:

  • Use whatever works best for your users. If they’re heavy GitHub users, integrate with AgentHQ. If they use multiple IDEs and editors (like JetBrains IDEs, Zed, or others), ACP is a straightforward way to ensure your agent works everywhere.
  • Don’t wait around for “one protocol to rule them all.” Technology evolves quickly. ACP is stable, open, and usable now. Microsoft’s stack is also real and viable now. Support both if needed – do what serves your users best.

The future we’re building

Our vision at JetBrains is straightforward:

  • Agents should be portable. Your agents should easily move between JetBrains IDEs, VS Code, and beyond.
  • You shouldn’t be locked into a single vendor. Competing should be about creating great user experiences, not proprietary walls – and we’re committed to supporting what the market adopts.

ACP is our way to make sure agents and IDEs communicate openly. GitHub’s AgentHQ contributes to managing agents at scale. These things don’t conflict – they complement each other.

We’re here to help you do your best work, wherever and however you want to do it.

We’ll keep our protocols open, our integrations flexible, and our focus on what’s best for developers – no exceptions.

Denis, Head of AI DevTools Ecosystem
