Welcome to the Cloud Wars Minute — your daily cloud news and commentary show. Each episode provides insights and perspectives around the “reimagination machine” that is the cloud.
In today’s Cloud Wars Minute, I explore how Microsoft is using Copilot Studio and multi-agent orchestration to dramatically improve customer support performance.
Highlights
00:09 — Now, one of the best ways to assess the impact of Microsoft Copilot is to examine case studies of the technology in action. Microsoft has announced details of a recent project aimed at enhancing the customer support experience on microsoft.com, building on the Ask Microsoft web agent created with Microsoft Copilot Studio.
00:51 — This new approach resulted in a 61% reduction in latency and up to 70% fewer human escalations. The Microsoft team tested and refined the original web assistant, getting it live within just a few weeks using Copilot Studio tools.
01:11 — However, it was Copilot Studio’s multi-agent orchestration feature that truly elevated this project, enabling the team to connect the main agent to sub-agents with domain-specific knowledge in areas such as Azure or Microsoft 365.
01:34 — Firstly, Microsoft is presenting a very tangible use case for Copilot Studio here. Secondly, it highlights the speed at which Copilot Studio can be used to rapidly deploy and easily edit agentic workflows. And finally, it serves as a really good advertisement for multi-agent architecture and orchestration, which I believe unlocks the most capable AI performance.
A few years into the AI shift, the gap between engineers is not talent. It’s coordination: shared norms and a shared language for how AI fits into everyday engineering work. Some teams are already getting real value. They’ve moved beyond one-off experiments and started building repeatable ways of working with AI. Others haven’t, even when the motivation is there. The reason is often simple: The cost of orientation has exploded. The landscape is saturated with tools and advice, and it’s hard to know what matters, where to start, and what “good” looks like once you care about production realities.
The missing map
What’s missing is a shared reference model. Not another tool. A map. Which engineering activities can AI responsibly support? What does quality mean for those outputs? What changes when part of the workflow becomes probabilistic? And what guardrails keep integration safe, observable, and accountable? Without that map, it’s easy to drown in novelty, and easy to confuse widespread experimentation with reliable integration. Teams with the least time, budget, and local support pay the highest price, and the gap compounds.
That gap is now visible at the organizational level. More organizations are trying to turn AI into business value, and the difference between hype and integration is showing up in practice. It’s easy to ship impressive demos. It’s much harder to make AI-assisted work reliable under real-world constraints: measurable quality, controllable failure modes, clear data boundaries, operational ownership, and predictable cost and latency. This is where engineering discipline matters most. AI does not remove the need for it; it amplifies the cost of missing it. The question is how we move from scattered experimentation to integrated practice without burning cycles on tool churn. To do that at scale, we need shared scaffolding: a public model and shared language for what “good” looks like in AI-native engineering.
We have seen why this kind of shared scaffolding matters before. In the early internet era, promise and noise moved faster than standards and shared practice. What made the internet durable was not a single vendor or methodology but a cultural infrastructure: open knowledge sharing, global collaboration, and shared language that made practices comparable and teachable. AI-native engineering needs the same kind of cultural infrastructure, because integration only scales when the industry can coordinate on what “good” means. AI does not remove the need for careful engineering. On the contrary, it punishes the absence of it.
A public scaffold for AI-native engineering
In the second half of 2025, I began to notice growing unease among engineers I worked with and friends in IT. There was a clear sense that AI would change our work in profound ways, but far less clarity on what that actually meant for a person’s role, skills, and daily practice. There was no shortage of training courses, guides, blogs, or tools, but the more resources appeared, the harder it became to judge what was relevant, what was useful, and where to begin. It felt overwhelming. How do you know which topics truly matter to you when suddenly everything is labeled AI? How do you move from hype to useful integration?
I was feeling much of that same uncertainty myself. I was trying to make sense of the shift too, and for a while I think I was waiting for a clearer structure to emerge from elsewhere. It was only when friends started reaching out to me for help and guidance that I realized I might have something meaningful to contribute. I do not consider myself an AI expert. I am finding my way through these changes just like many other engineers. But over the years, I had become known for my work in IT workforce development, skill and capability frameworks, and engineering excellence and enablement. I know how to help people navigate complexity in a practical and sustainable way, and I enjoy bringing clarity to chaos.
That is what led me to start working on the AI Flower as a hobby project in early October 2025, building on frameworks and methods I already had experience with.
When I began sharing it with friends in IT to gather feedback, I saw how much it resonated. It helped them make sense of the complexity around AI, think more clearly about their own upskilling, and begin shaping AI adoption strategies of their own. That is when I realized this casual experiment held real value, and decided I wanted to publish it so it could help empower other engineers and IT organizations in the same way it had helped my friends.
With the AI Flower, I’m offering a public scaffold for AI-native engineering work: a shared reference model that helps engineers, teams, and organizations adopt and integrate AI sustainably and reliably. It’s meant to steer and organize the conversation around AI-assisted engineering, and to invite targeted feedback on what breaks, what’s missing, and what “good” should mean in real production contexts. It’s not meant to be perfect. It’s meant to be useful, freely available, open to contribution, and shaped by the strongest resource our industry has: collective intelligence.
Open knowledge sharing and collaboration cannot be optional. If AI is becoming part of how we design, build, operate, secure, and govern systems, we need more than tools and enthusiasm. Many of us work on systems people rely on every day. When those systems fail, the impact is real. That’s why we owe it to the people who depend on these systems to do this with care, and why we won’t get there in isolation. We need the industry, globally, to converge on shared standards for dependable practice.
The AI Flower visualized: Petals represent engineering disciplines, and each encompasses core engineering activities, best practices, learning resources, AI risk and considerations, and AI guidance per activity.
About the AI Flower
The AI Flower maps the core activities that make up engineering work across the main engineering disciplines. For each activity, it defines what good looks like, based on practices that should already feel familiar to engineers. It then helps people explore how AI can support those activities in practice, providing guidance on how to begin using AI in that work, sharing links to useful learning resources, and outlining the main risks, trade-offs, and mitigations.
The AI landscape is changing quickly, though. This activity-based approach helps engineers understand how AI can support core engineering tasks, where risks may arise, and how to start building practical experience, but on its own, it isn’t enough as a long-term model for AI adoption.
As AI capabilities evolve, many engineering activities will become more abstracted, more automated, or absorbed into the infrastructure layer. That means engineers will need to do more than learn how to use AI within today’s activities. They will also need to work with emerging approaches such as context engineering and agentic workflows, which are already reshaping what we consider core engineering work. A concept I call the Skill Fossilization Model captures that progression. It shows how both engineering skills and AI-related skills evolve over time, and how some of them become less visible as work moves to a higher level of abstraction. Together, the AI Flower and the Skill Fossilization Model are meant to help engineers stay adaptable as the field continues to shift.
The main purpose of the AI Flower is to help engineers find their way through these rapid changes and grow with them. While I provide content for each section and activity, the real value lies in the framework and structure itself. To become truly valuable, it will need the insight, care, and contribution of engineers across disciplines, perspectives, and regions.
I genuinely believe the AI Flower, as an open and freely available framework, can serve as a scaffold for that work. This is my contribution to a changing industry. But it will only be useful—it will only “bloom”—if the community tests it, challenges it, and improves it over time.
And if any industry can turn open critique and contribution into shared standards at a global scale, it’s ours, isn’t it?
Join me at AI Codecon to learn more
If the AI Flower resonates and you want the full walkthrough, I’ll be presenting it at O’Reilly’s upcoming AI Codecon. (Registration is free and open to all.)
If you’re concerned about how quickly AI engineering patterns are evolving, that concern is valid. We’ve already seen the center of gravity shift from ad hoc prompt work, to context engineering, to increasingly agentic workflows, and there is more coming. A core design goal of the AI Flower is to stay stable across those shifts by focusing on underlying capabilities rather than specific techniques. I’ll go deeper on that stability principle, including the Skill Fossilization Model, at AI Codecon as well.
Even seemingly simple engineering tasks — like updating an API — can become monumental undertakings when you’re dealing with millions of lines of code and thousands of engineers, especially if the changes are security-related. Nowhere is this more apparent than in mobile security, where a single class of vulnerability can be replicated across hundreds of call sites scattered throughout a sprawling, multi-app codebase serving billions of users.
Meta’s Product Security team has developed a two-pronged strategy to address this:
Designing secure-by-default frameworks that wrap potentially unsafe Android OS APIs and make the secure path the easiest path for developers, and
Leveraging generative AI to automate the migration of existing code to those frameworks at scale.
The result is a system that can propose, validate, and submit security patches across millions of lines of code with minimal friction for the engineers who own them.
On this episode of the Meta Tech Podcast, Pascal Hartig talks to Alex and Tanu from Meta’s Product Security team about the challenges and learnings from the journey of making Meta’s mobile frameworks more secure at a scale few companies ever experience. Tune in and join us as we explore the compelling crossroads of security, automation, and AI within mobile development.
You can find the episode wherever you get your podcasts.
The Meta Tech Podcast is brought to you by Meta and highlights the work Meta’s engineers are doing at every level, from low-level frameworks to end-user features.
Claude Code is quickly becoming a go-to AI coding assistant for developers and increasingly for non-developers who want to build with code. But to truly unlock its potential, it needs the right local infrastructure, tool access, and security boundaries.
In this blog, we’ll show you how to run Claude Code with Docker to gain full control over your models, securely connect it to real-world tools using MCP servers, and safely give it autonomy inside isolated sandboxes. Read on for practical resources to help you build a secure, private, and cost-efficient AI-powered development workflow.
Run Claude Code Locally with Docker Model Runner
This post walks through how to configure Claude Code to use Docker Model Runner, giving you full control over your data, infrastructure, and spend. Claude Code supports custom API endpoints through the ANTHROPIC_BASE_URL environment variable. Since Docker Model Runner exposes an Anthropic-compatible API, integrating the two is simple. This allows you to run models locally while maintaining the Claude Code experience.
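As a minimal sketch, assuming the Claude Code CLI is installed locally and Docker Model Runner is serving its Anthropic-compatible API on a local port (the URL, key, and model name below are placeholders for your own setup), the wiring can be as simple as setting the environment variables before launching the CLI:

```python
import os
import subprocess

# Minimal sketch: point the Claude Code CLI at a local Anthropic-compatible
# endpoint via environment variables. The URL, key, and model name are
# placeholders -- substitute the address Docker Model Runner exposes on your
# machine and a model you have pulled locally.
env = os.environ.copy()
env["ANTHROPIC_BASE_URL"] = "http://localhost:12434/engines/v1"  # placeholder
env["ANTHROPIC_API_KEY"] = "local-no-key-required"               # placeholder
env["ANTHROPIC_MODEL"] = "ai/qwen3-coder"                        # placeholder

# Requires the Claude Code CLI ("claude") to be installed and on PATH.
subprocess.run(["claude"], env=env, check=True)
```

Because only environment variables change, switching back to the hosted API is just a matter of unsetting them.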
With your model running under your control, it’s time to connect Claude Code to tools to expand its capabilities.
How to Add MCP Servers to Claude Code with Docker MCP Toolkit
MCP is becoming the de facto standard for connecting coding agents like Claude Code to your real tools: databases, repositories, browsers, and APIs. With more than 300 pre-built, containerized MCP servers, one-click deployment in Docker Desktop, and automatic credential handling, developers can connect Claude Code to trusted environments in minutes, not hours. No dependency issues, no manual configuration, just a consistent, secure workflow across Mac, Windows, and Linux. The tutorial walks through how to:
Set up Claude Code and connect it to Docker MCP Toolkit.
Configure the Atlassian MCP server for Jira integration.
Configure the GitHub MCP server to access repository history and run git commands.
Configure the Filesystem MCP server to scan and read your local codebase.
Automate tech debt tracking by converting 15 TODO comments into tracked Jira tickets.
See how Claude Code can query git history, categorize issues, and create tickets — all without leaving your development environment.
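Once the Toolkit’s gateway is available, registering it with Claude Code is a one-time step. Here is a minimal sketch, assuming the `claude mcp add` subcommand and the Docker MCP gateway command; confirm the exact invocation for your versions in the tutorial:

```python
import subprocess

# Minimal sketch: register the Docker MCP Toolkit gateway as an MCP server
# in Claude Code. Both commands are assumptions based on the documented
# `claude mcp add` and Docker MCP gateway flows -- verify the exact
# invocation in the tutorial linked below.
subprocess.run(
    ["claude", "mcp", "add", "docker-toolkit", "--",
     "docker", "mcp", "gateway", "run"],
    check=True,
)

# List configured servers to verify the registration took effect.
subprocess.run(["claude", "mcp", "list"], check=True)
```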
Prefer a video walkthrough? Check out our tutorial on how to add MCP servers to Claude Code with Docker MCP Toolkit.
Connecting tools unlocks powerful automation, but with greater capability comes greater responsibility. If you’re going to let agents take action, you need to run them safely.
Docker Sandboxes: Run Claude Code and Other Coding Agents Unsupervised (but Safely)
As Claude Code moves from suggestions to real-world actions like installing packages and modifying files, isolation becomes critical.
Sandboxes provide disposable, isolated environments purpose-built for coding agents. Each agent runs in an isolated version of your development environment, so when it installs packages, modifies configurations, deletes files, or runs Docker containers, your host machine remains untouched.
This isolation lets you run agents like Claude Code with autonomy. Since they can’t harm your computer, you can let them run free. Check out our announcement on more secure, easier-to-use, and more powerful Docker Sandboxes.
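As a minimal sketch of what that looks like in practice, assuming the `docker sandbox` CLI from the Docker Sandboxes beta (the subcommand may differ across versions):

```python
import subprocess

# Minimal sketch: launch Claude Code inside a disposable Docker Sandbox
# instead of directly on the host. The `docker sandbox` subcommand is taken
# from the Docker Sandboxes announcement and may differ across versions --
# treat it as illustrative rather than canonical. Anything the agent
# installs, modifies, or deletes stays inside the sandbox's filesystem.
subprocess.run(["docker", "sandbox", "run", "claude"], check=True)
```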
Summary
Claude Code is powerful on its own, but when used with Docker, it becomes a secure, extensible, and fully controlled AI development environment.
In this post, you learned how to:
Run Claude Code locally using Docker Model Runner with an Anthropic-compatible API endpoint, giving you full control over your data, infrastructure, and cost.
Connect Claude Code to tools using the Docker MCP Toolkit, with 300+ containerized MCP servers for services like Jira, GitHub, and local filesystems, all deployable in one click.
Run Claude Code autonomously inside Docker Sandboxes: disposable, isolated environments that keep your host machine untouched.
By combining local model execution, secure tool connectivity, and isolated runtime environments, Docker enables you to run AI coding agents like Claude Code with both autonomy and control, making them practical for real-world development workflows.
Agents have enormous potential to power secure, personal AI assistants that automate complex tasks and workflows. Realizing that potential, however, requires strong isolation, a codebase that teams can easily inspect and understand, and clear control boundaries they can trust.
Today, NanoClaw, a lightweight agent framework, is integrating with Docker Sandboxes to deliver secure-by-design agent execution. With this integration, every NanoClaw agent runs inside a disposable, MicroVM-based Docker Sandbox that enforces strong operating-system-level isolation. Combined with NanoClaw’s minimal attack surface and fully auditable open-source codebase, the stack is purpose-built to meet enterprise security standards from day one.
From Powerful Agents to Trusted Agents
The timing reflects a broader shift in the agent landscape. Agents are no longer confined to answering prompts. They are becoming operational systems.
Modern agents connect to live data sources, execute code, trigger workflows, and operate directly within collaboration platforms such as Slack, Discord, WhatsApp, and Telegram. They are evolving from conversational interfaces into active participants in real work.
That shift from prototype to production introduces two critical requirements: transparency and isolation.
First, transparency.
Organizations need agents built on code they can inspect and understand, with clear visibility into dependencies, source files, and core behavior. NanoClaw delivers exactly that. Its agent behavior is powered by just 15 core source files, a codebase up to 100 times smaller, measured in lines of code, than many alternatives. That simplicity makes it dramatically easier to evaluate risk, understand system behavior, and build with confidence.
Second, isolation.
Agents must run within restricted environments, with tightly controlled filesystems and limited host access. Through the Docker Sandbox integration, each NanoClaw agent runs inside a dedicated MicroVM that mirrors your development environment, with only your project workspace mounted in. Agents can install packages, modify configurations, and even run Docker itself, while your host machine remains untouched.
In traditional environments, enabling more permissive agent modes can introduce significant risk. Inside a Docker Sandbox, that risk is contained within an isolated MicroVM that can be discarded instantly. This makes advanced modes such as --dangerously-skip-permissions practical in production because their impact is fully confined.
The result is greater autonomy without greater exposure.
Agents no longer require constant approval prompts to move forward. They can install tools, adapt their environment, and iterate independently. Because their actions are contained within secure, disposable boundaries, they can safely explore broader solution spaces while preserving enterprise-grade safeguards.
Powerful agents are easy to prototype. Trusted agents are built with isolation by design.
Together, NanoClaw and Docker make secure-by-default the standard for agent deployment.
“Infrastructure needs to catch up to the intelligence of agents. Powerful agents require isolation,” said Mark Cavage, President and Chief Operating Officer at Docker, Inc. “Running NanoClaw inside Docker Sandboxes gives the agent a secure, disposable boundary, so it can run freely, safely.”
“Teams trust agents to take on increasingly complex and valuable work, but securing agents cannot be based on trust,” said Gavriel Cohen, CEO and co-founder of NanoCo and creator of NanoClaw. “It needs to be based on a provably secure hard boundary, scoped access to data and tools, and control over the actions agents are allowed to take. The security model should not limit what agents can accomplish. It should make it safe to let them loose. NanoClaw was built on that principle, and Docker Sandboxes provides the enterprise-grade infrastructure to enforce it.”
Get Started
Ready to try it out? Deploy NanoClaw in Docker Sandboxes today:
The first Datalore release of the year delivers several new features that make working with data even easier. These updates are already available to Datalore Cloud users. For Datalore On-Premises, instance administrators can enable them by updating their Datalore instance.
Let’s dive in!
Data explorer
Datalore 2026.1 introduces data explorer cells, a new way to explore and visualize data directly from dataframes without writing additional code. You can quickly inspect datasets, filter results, and generate charts from a single interactive cell.
In Table mode, you can search and filter your data, exclude duplicates or missing values, and control which columns are displayed. You can also create new columns using SQL expressions, allowing you to derive new values from existing data without modifying your dataframe.
Visualization mode allows you to build charts such as line, bar, area, scatter, and box plots. Configure axes, apply aggregations, and adjust chart settings directly in the UI. Once your chart is ready, you can download it as a PNG or SVG file for use in reports or presentations. Learn more
Bring Your Own Key (BYOK) for AI
Starting with this release, Datalore On-Premises administrators can choose between JetBrains AI and other providers for all AI features.
If your company has strict security and data governance policies, using instance-wide BYOK for AI enables you to align AI usage with these policies and maintain explicit control over which external services are accessed. It can also be useful if you have specific pricing agreements or consumption commitments with AI providers.
Datalore On-Premises supports OpenAI, Azure OpenAI, and other providers through OpenAI-compatible APIs. This includes self-hosted models running in your environment. Learn more
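Before pointing Datalore at a self-hosted model, it can be worth sanity-checking that the endpoint really speaks the OpenAI-compatible protocol. Here is a minimal sketch using the official OpenAI Python SDK; the base URL and model name are placeholders standing in for your deployment:

```python
from openai import OpenAI

# Minimal sketch: verify that a self-hosted endpoint speaks the
# OpenAI-compatible chat protocol before configuring it in Datalore.
# The base URL and model name are placeholders for your deployment.
client = OpenAI(
    base_url="http://models.internal.example:8000/v1",  # placeholder
    api_key="not-needed-for-local",                     # placeholder
)

response = client.chat.completions.create(
    model="my-self-hosted-model",  # placeholder
    messages=[{"role": "user", "content": "Reply with the single word: ok"}],
)
print(response.choices[0].message.content)
```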
Sidecar containers
When deploying Datalore On-Premises on Kubernetes, agents can be configured to run as a pod with two containers that share a filesystem: an unprivileged agent container and a privileged sidecar container.
In this architecture, the privileged sidecar container is responsible for mounting external resources. It uses FUSE to mount WebDAV and other data sources as local filesystems, which are then exposed to the notebook agent container through shared volumes.
Because the mounting logic is isolated in the sidecar container, the container running the notebook agent typically does not require elevated privileges, helping maintain a more secure setup. Learn more
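To make the shape of this pod concrete, here is a sketch using the official Kubernetes Python client. All names and images are hypothetical; in practice, Datalore’s own deployment tooling generates the equivalent configuration:

```python
from kubernetes import client

# Sketch of the two-container pod shape described above. All names and
# images are hypothetical; Datalore's deployment tooling generates the
# real configuration for you.

# Privileged sidecar: performs the FUSE mounts and shares them outward.
sidecar = client.V1Container(
    name="mount-sidecar",
    image="example/fuse-sidecar:latest",  # hypothetical image
    security_context=client.V1SecurityContext(privileged=True),
    volume_mounts=[client.V1VolumeMount(
        name="mounts", mount_path="/data",
        mount_propagation="Bidirectional",  # pushes FUSE mounts into the volume
    )],
)

# Unprivileged agent: only consumes the mounts through the shared volume.
agent = client.V1Container(
    name="notebook-agent",
    image="example/datalore-agent:latest",  # hypothetical image
    security_context=client.V1SecurityContext(privileged=False),
    volume_mounts=[client.V1VolumeMount(
        name="mounts", mount_path="/data",
        mount_propagation="HostToContainer",  # receives mounts, cannot create them
    )],
)

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="datalore-agent"),
    spec=client.V1PodSpec(
        containers=[sidecar, agent],
        volumes=[client.V1Volume(
            name="mounts",
            empty_dir=client.V1EmptyDirVolumeSource(),
        )],
    ),
)
```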
For more details about the new features, see What’s new in the Datalore documentation.