Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Should enterprises run containers in VMs or bare metal?

"Should you run containers on VMs or bare metal" featured image

The wonders of containers — their flexibility, portability, and logical encapsulation of everything needed to run an application — have made them a mainstream method of developing and deploying modern apps. Deployment in particular is enhanced by orchestration platforms like Kubernetes, especially at scale.

Nonetheless, there are still questions about how to maximize containers’ utility. Should you run them on bare metal servers or on virtual machines (VMs)? Is there even a need for virtualization in the era of containers, Kubernetes, and cloud native technologies?

For the lone developer, the answers to these questions are immaterial. But for enterprises with strict requirements for operational flexibility, scalability, performance, service-level agreements (SLAs), compliance, and security, the answers are clear.

“Running containers and virtual machines in a virtualized environment is a much more modern approach, as compared to a bare metal approach that comes from the legacy days of 20, 30 years ago,” says Mark Chuang, head of product marketing, VMware Cloud Foundation at Broadcom.

The operational flexibility of virtualized containers

Bare metal container deployments typically involve large physical clusters running one version of Kubernetes that multiple applications depend on. Containers running in VMs, by contrast, are typically organized into multiple smaller clusters that can stretch across the underlying physical infrastructure. Each cluster has the flexibility to run a different version of Kubernetes, attuned to the needs of the specific applications on that cluster.

“If you want to update a version of Kubernetes, you can just do it for that cluster,” Chuang says. “You’re not impacting all the other applications at the same time.”

Conversely, in large bare metal clusters with multiple running applications, if one application needs the latest version of Kubernetes, all the applications in that cluster must adopt it. Updating every application this way is time-consuming and compromises productivity, and Chuang warns that it may result in compatibility issues and be disruptive for application owners.

Enhancing security for containerized applications

Users can fortify security isolation at multiple layers by running containers in virtualized environments. This preserves data integrity while enhancing data privacy and regulatory compliance.

It also minimizes the impact of expensive security breaches by minimizing the blast radius. Because deploying containers in VMs provides multiple layers of isolation, it can reduce an organization’s attack surface.

“If any of those layers is compromised, [the attack’s] not going to get out and spread,” Chuang says.

Security isolation occurs at multiple levels in a virtualized environment.

“In a VMware-based environment, there’s isolation at the vSphere cluster, the workload control plane, vSphere namespace, Kubernetes cluster, and Kubernetes namespace levels,” Chuang explains. “Additionally, you can establish microsegmentation at the networking layer on top of that.”

In bare metal container deployments, Kubernetes namespaces are typically the only point of isolation, because applications are usually mapped 1:1 to namespaces within a single Kubernetes cluster.

If a breach occurs, “all the applications on that host share that same Linux kernel, so if that kernel is ever impacted, you’ve got significant security concerns,” Chuang mentions.

Optimizing performance and meeting SLAs

A bare metal approach struggles to match the performance SLAs that virtualization delivers when deploying many containers, Chuang says. Both resource load balancing and performance SLAs are better on VMs than with containers on bare metal. This is especially true when encountering a “noisy neighbor” in a multiapplication or multitenant environment.

The noisy neighbor problem occurs when multiple apps run on the same host or cluster and one of them experiences a spike in demand for resources (network, compute, memory, or storage). That surge can reduce the availability of those resources for the other applications, hindering their performance.

Virtualization allows users to uphold SLAs by specifying ahead of time “clear policies about how much of a resource a particular application is going to get,” Chuang says.
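Chuang is describing VM-level controls, but the same principle applies on the container side. As a hypothetical sketch (the application name and image are illustrative, not from the article), a Kubernetes pod spec can declare up front how much of each resource an application is guaranteed and the ceiling it may not exceed:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: billing-api            # illustrative application name
spec:
  containers:
    - name: billing-api
      image: registry.example.com/billing-api:1.0   # placeholder image
      resources:
        requests:              # guaranteed minimum the scheduler reserves
          cpu: "500m"
          memory: "512Mi"
        limits:                # hard ceiling enforced at runtime
          cpu: "1"
          memory: "1Gi"
```

A pod that tries to exceed its CPU limit is throttled, and one that exceeds its memory limit is terminated, which is one way the noisy neighbor effect can be contained.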

Additionally, technologies like live migration and advanced resource scheduling can move workloads to hosts that are not experiencing a surge in resource demand.

“In a virtualized cluster, you can nondisruptively migrate a workload from one physical host to another to give it the compute, storage, or networking performance it needs to run effectively, or [you can] let the platform perform those tasks automatically,” Chuang says.

This capability, along with specifying policies for resource allocation for applications, improves organizations’ ability to meet performance SLAs. It also reduces the overhead for spinning up and running those applications.

“If you’re not effectively getting the application the resources it needs, then you may have to procure and stand up more servers and more hosts,” Chuang says. “That increases costs.”

Minding the total cost of ownership

A single platform for running virtualized and container workloads improves deployment efficiency — with attendant cost advantages — when deploying and scaling applications. Users can implement consistent processes when virtualized and container workloads run on the same underlying infrastructure, compared to running them in silos.

“Most organizations are running a mixture of containerized applications and ones on virtual machines,” Chuang says.

Simplifying the underlying architecture by running both workloads through virtualization provides numerous cost benefits. “You can get very high levels of utilization because you can mix and match; you don’t have any stranded capacity,” Chuang says. Those advantages are magnified for deployments at scale and positively impact cost, particularly at higher utilization rates.

“If you’re effectively maximizing usage of your underlying infrastructure, then you don’t have to purchase as much infrastructure because less of it is sitting idle in stranded siloes,” Chuang says.

Additionally, because deploying containers in virtualization environments can minimize security breaches from spreading, organizations may reduce exposure to costly penalties, regulatory woes, and litigation.

Why virtualization is key for enterprise-grade production

Much of the value of deploying containers — and container orchestration platforms like Kubernetes — with virtualization pertains to the enterprise. Flexibility for scaling operations is a pivotal concern in production settings. Organizations also require dependable security to meet data privacy, regulatory compliance, and data integrity requirements. Finally, high levels of performance are necessary for mission-critical workloads across industries and use cases.

Virtualized environments facilitate each of these benefits in a cost-effective way, making them worthy of your consideration. The ability to do this on a single platform for virtualized and container applications can reduce overall costs — which is why the hyperscalers (AWS, Google Cloud, and Microsoft Azure) function this way for the majority of customers’ workloads in their environments.

The post Should enterprises run containers in VMs or bare metal? appeared first on The New Stack.

Read the whole story
alvinashcraft
21 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Announcing GitHub Copilot code completions in SQL Server Management Studio 22.2.1


Hey folks!  Welcome to a new year, and a new release of SQL Server Management Studio (SSMS).  That’s right, we’re starting off 2026 with SSMS 22.2.1, which may look like “just another” minor release, but it’s not…it’s definitely not.

Fixes

SSMS 22.2.1 contains a set of important bug fixes, thanks to our engineers. You can find the full list of fixes in the release notes, and I’ll note that all but one are linked to an item on the feedback site.  That’s my not-so-subtle way of reminding you that logging issues on the site matters 😊

As always, please search for an existing issue before you create one…and if you do find that someone else has logged the same issue/problem/error, please upvote it so we understand how many folks are affected.

We updated our roadmap when SSMS 22 became generally available, but we didn’t specifically call out that in December and January we would primarily focus on fundamentals work. In addition to some bug fixes, the engineering team is also making improvements internally, related to pipelines, testing, etc.  There are a lot of behind-the-scenes improvements that will help with quality and reliability, though they may not be immediately obvious.

Features

We’ve added support for GitHub Copilot code completions in the query editor.  Thanks to those who waited patiently for this functionality!  While GitHub Copilot in SSMS is based on the Visual Studio implementation, we had to add database context to the completions functionality.  For completions to behave in the manner you expect, we had to give the model information about the database and make sure it provided suggestions quickly.

Code completions for GitHub Copilot in SSMS are not the same as IntelliSense, and I’d argue they exceed the concept of “IntelliSense on steroids”.  The more T-SQL that exists in the editor window, the more powerful you’ll find the suggestions to be.  If you find that completions and IntelliSense are competing, you may want to try turning off IntelliSense. I’m extremely pleased with the code completion capabilities, and I’m excited for you all to give it a try.

 

What's next?

I mentioned the SSMS Roadmap earlier, and we’ll be updating the AI experiences section to note that Agent mode is on the roadmap for GitHub Copilot.  We’re also working on improvements related to instructions, which is a top request specific to AI Assistance.  If that’s something you’d like to see, feel free to vote on the feedback item and add a comment if there’s something specific in which you’re interested.  Friendly reminder that votes are the quickest and easiest way for us to understand interest.  It's what we check first.  The comments help capture more details about what SSMS users want to see and why.

If you’re interested in other requests for GitHub Copilot in SSMS, please search the feedback site for open suggestions. I'll be adding "GHCP" to the title of all GitHub Copilot-related feedback items - both suggestions and issues - to make it easier for everyone to find. You can filter by “copilot” right now but that doesn’t quite catch everything. While Agent mode is a big space, I’d love to see more ideas about what folks want to see/do with it.

For those in the northern hemisphere, hope you’re staying warm and safe.  For those in the southern hemisphere, please enjoy some sunshine for the rest of us today.  Looking forward to hearing from you all about the 22.2.1 release!


Getting Started with the Blazor Skeleton Component


This simple component can improve the user experience of your Blazor app, giving users a hint of feedback that they aren’t waiting in vain.

Creating smooth, interactive experiences should always be a consideration when developing Blazor applications.

One of these experiences is the feedback given to the user when processing happens behind the scenes, such as when information is being retrieved to fill a page. A component that can help with this task is the Progress Telerik UI for Blazor Skeleton component, which allows you to create loader-like placeholders in the spaces where information will appear. Let’s look at its features and how to integrate it into a Blazor application.

Exploring the Skeleton Component for Blazor

Let’s start by analyzing how the Skeleton component works. First of all, you need to configure the Blazor project to work with Telerik components, according to the installation guides.

The next step is to go to the page where you want to use the component, where you can add it via the TelerikSkeleton tag as shown below:

<div style="min-height:100vh; display:flex; align-items:center; justify-content:center;">
    <div style="width:240px; height:48px;">
        <TelerikSkeleton />
    </div>
</div>

In the code above, I have added some styles to center the component on the page, resulting in the following:

The Telerik Skeleton component for Blazor, displayed in its simplest form

The component has the following parameters that you can use to configure it:

  • ShapeType (enum): Allows you to select a predefined shape
  • AnimationType (enum): Allows you to select a predefined animation
  • Visible (bool): Specifies whether the component should be visible on the page or not
  • Width (string) and Height (string): Specify the width and height of the component
  • Class (string): Allows rendering a custom class in the component

Among these parameters, ShapeType accepts the values SkeletonShapeType.Text (the default), SkeletonShapeType.Rectangle and SkeletonShapeType.Circle, while AnimationType accepts SkeletonAnimationType.None, SkeletonAnimationType.Pulse (the default) and SkeletonAnimationType.Wave.

An example of the Skeleton component with these applied properties is as follows:

<TelerikSkeleton 
    ShapeType="SkeletonShapeType.Circle"
    AnimationType="SkeletonAnimationType.Pulse"
    Width="100px" Height="100px" Visible="true">
</TelerikSkeleton>

When visualizing the component, it looks as follows:

The Skeleton component displaying a pulsing circular animation

Although the control is quite simple to use, its power lies in combining several of these components to recreate interfaces that give the user feedback while information loads, as we will see next.

Adding the Skeleton Component to a Real Case

So far we have seen the features of the Skeleton component. Now, you may be wondering how to integrate it into your application—that is, how to build an interface using several Skeleton components while data is being loaded, and then show the real data once it has been loaded.

To make this more realistic, let’s assume we have created an application using several Blazor components that simulate a social network. This is the homepage:

@page "/feed"

<div class="feed-container">
    <h3 class="mb-3">Feed</h3>

    <div class="composer card p-3 mb-4">
        <div class="d-flex align-items-start gap-3">
            <TelerikAvatar Type="AvatarType.Text" Width="48px" Height="48px" Rounded="@ThemeConstants.Avatar.Rounded.Full">HP</TelerikAvatar>
            <div class="flex-grow-1">
                <TelerikTextArea Rows="3" Placeholder="What's on your mind?" Width="100%" />
                <div class="mt-2 d-flex gap-2 justify-content-end">
                    <TelerikButton ThemeColor="primary">
                        <TelerikSvgIcon Icon="@SvgIcon.PaperPlane"></TelerikSvgIcon> Post
                    </TelerikButton>
                </div>
            </div>
        </div>
    </div>

    <div class="feed">
        @foreach (var post in Posts)
        {
            <TelerikCard Class="mb-4">
                <CardHeader>
                    <div class="d-flex align-items-center gap-3">
                        <TelerikAvatar Type="AvatarType.Text" Rounded="@ThemeConstants.Avatar.Rounded.Full" Width="48px" Height="48px">@GetInitials(post.UserName)</TelerikAvatar>
                        <div class="flex-grow-1 w-100">
                            <div class="fw-semibold">@post.UserName</div>
                            <div class="text-muted small">@post.PostedAt.ToLocalTime().ToString("g")</div>
                        </div>
                    </div>
                </CardHeader>
                <CardBody>
                    <p class="mb-3">@post.Content</p>
                    @if (!string.IsNullOrWhiteSpace(post.ImageUrl))
                    {
                        <img src="@post.ImageUrl" alt="post image" class="img-fluid rounded" />
                    }
                </CardBody>
                <CardFooter>
                    <div class="d-flex gap-2">
                        <TelerikButton ThemeColor="primary" FillMode="Telerik.Blazor.ThemeConstants.Button.FillMode.Outline">
                            <TelerikSvgIcon Icon="@SvgIcon.Heart"></TelerikSvgIcon> Like
                        </TelerikButton>
                        <TelerikButton FillMode="Telerik.Blazor.ThemeConstants.Button.FillMode.Outline">
                            <TelerikSvgIcon Icon="@SvgIcon.Comment"></TelerikSvgIcon> Comment
                        </TelerikButton>
                        <TelerikButton FillMode="Telerik.Blazor.ThemeConstants.Button.FillMode.Outline">
                            <TelerikSvgIcon Icon="@SvgIcon.Share"></TelerikSvgIcon> Share
                        </TelerikButton>
                    </div>
                </CardFooter>
            </TelerikCard>
        }
    </div>

</div>

@code {    

    private List<Post> Posts { get; set; } = new();

    protected override async Task OnInitializedAsync()
    {
        await Task.Delay(5000); // Simulate a slow data fetch
        LoadPosts();
    }

    private void LoadPosts()
    {
        Posts = new()
        {
            new Post
            {
                Id = Guid.NewGuid(),
                UserName = "Bot Doe",
                Content = "What a great day to try the Telerik UI for Blazor Skeleton. The UX feels great while data loads!",
                ImageUrl = "https://images.unsplash.com/photo-1500530855697-b586d89ba3ee?q=80&w=1200&auto=format&fit=crop",
                PostedAt = DateTimeOffset.UtcNow.AddMinutes(-35)
            },
            new Post
            {
                Id = Guid.NewGuid(),
                UserName = "Link",
                Content = ".NET 10 migration done — everything feels snappier.",
                ImageUrl = null,
                PostedAt = DateTimeOffset.UtcNow.AddHours(-2)
            },
            new Post
            {
                Id = Guid.NewGuid(),
                UserName = "Ada Lovelace",
                Content = "Pro tip: leverage high-level components to speed up demos.",
                ImageUrl = "https://images.unsplash.com/photo-1518837695005-2083093ee35b?q=80&w=1200&auto=format&fit=crop",
                PostedAt = DateTimeOffset.UtcNow.AddDays(-1)
            }
        };
    }

    private static string GetInitials(string name)
    {
        if (string.IsNullOrWhiteSpace(name)) return "?";
        var parts = name.Trim().Split(' ', StringSplitOptions.RemoveEmptyEntries);
        if (parts.Length == 1) return parts[0].Substring(0, Math.Min(1, parts[0].Length)).ToUpperInvariant();
        return (parts[0][0].ToString() + parts[^1][0].ToString()).ToUpperInvariant();
    }

    private sealed class Post
    {
        public Guid Id { get; set; }
        public string UserName { get; set; } = string.Empty;
        public string Content { get; set; } = string.Empty;
        public string? ImageUrl { get; set; }
        public DateTimeOffset PostedAt { get; set; }
    }
}

<style>
    .feed-container {
        max-width: 820px;
        margin: 0 auto;
        padding: 0 1rem;
    }

    .card {
        box-shadow: var(--kendo-box-shadow, 0 1px 3px rgba(0,0,0,0.08));
        border: 1px solid rgba(0,0,0,0.06);
        border-radius: .5rem;
    }

    .composer .k-textarea {
        width: 100%;
    }

    .feed img {
        max-height: 420px;
        object-fit: cover;
    }

    .gap-2 {
        gap: .5rem;
    }

    .gap-3 {
        gap: 1rem;
    }
</style>

If we run the application right now, the user experience is poor: the app gives no feedback that it is fetching information to fill the UI, and you have to wait until loading finishes to see anything on the screen:

The application is running with a poor user interface, as it fails to inform the user in the UI that data is being fetched

Let’s solve this issue by adding the Skeleton component. The idea is to create a copy of the final graphical interface, replacing each control with a Skeleton component whose shape corresponds to the final component.

For example, a circular shape could be used for the profile picture, a rectangular shape for photographs, and the default shape for text. The following is an example of this replacement in the CardHeader section of the card.

The CardHeader with the final components rendered:

<CardHeader>
    <div class="d-flex align-items-center gap-3">
        <TelerikAvatar Type="AvatarType.Text" Rounded="@ThemeConstants.Avatar.Rounded.Full" Width="48px" Height="48px">@GetInitials(post.UserName)</TelerikAvatar>
        <div>
            <div class="fw-semibold">@post.UserName</div>
            <div class="text-muted small">@post.PostedAt.ToLocalTime().ToString("g")</div>
        </div>
    </div>
</CardHeader>

The CardHeader using TelerikSkeleton components to show the loading effect:

<CardHeader>
    <div class="d-flex align-items-center gap-3">
        <TelerikSkeleton ShapeType="SkeletonShapeType.Circle" Width="48px" Height="48px" />
        <div class="flex-grow-1 w-100">
            <TelerikSkeleton ShapeType="SkeletonShapeType.Text" Width="35%" Height="18px" Class="mb-1" />
            <TelerikSkeleton ShapeType="SkeletonShapeType.Text" Width="20%" Height="14px" />
        </div>
    </div>
</CardHeader>

In the previous code, notice that each type of element has been replaced by a Skeleton of the appropriate shape. In this specific example, I was able to reuse the same container divs to hold the Skeleton components, but you can restructure them if that works better for you.

Following this same logic, I am going to create a property called IsLoading, which controls when to show and hide the loading sections through a conditional. When the information has finished loading, the conditional renders the components with the real data as follows:

@page "/feed"

<div class="feed-container">
    <h3 class="mb-3">Feed</h3>

    ...
    
    @if (IsLoading)
    {
        <div class="feed">
            @for (int i = 0; i < 3; i++)
            {
                <TelerikCard Class="mb-4">
                    <CardHeader>
                        <div class="d-flex align-items-center gap-3">
                            <TelerikSkeleton ShapeType="SkeletonShapeType.Circle" Width="48px" Height="48px" />
                            <div class="flex-grow-1 w-100">
                                <TelerikSkeleton ShapeType="SkeletonShapeType.Text" Width="35%" Height="18px" Class="mb-1" />
                                <TelerikSkeleton ShapeType="SkeletonShapeType.Text" Width="20%" Height="14px" />
                            </div>
                        </div>
                    </CardHeader>
                    <CardBody>
                        <TelerikSkeleton ShapeType="SkeletonShapeType.Text" Width="100%" Height="14px" Class="mb-1" />
                        <TelerikSkeleton ShapeType="SkeletonShapeType.Text" Width="90%" Height="14px" Class="mb-1" />
                        <TelerikSkeleton ShapeType="SkeletonShapeType.Text" Width="80%" Height="14px" />
                        <TelerikSkeleton ShapeType="SkeletonShapeType.Rectangle" Width="100%" Height="220px" />
                    </CardBody>
                    <CardFooter>
                        <div class="d-flex gap-2">
                            <TelerikSkeleton ShapeType="SkeletonShapeType.Rectangle" Width="80px" Height="32px" />
                            <TelerikSkeleton ShapeType="SkeletonShapeType.Rectangle" Width="90px" Height="32px" />                            
                        </div>
                    </CardFooter>
                </TelerikCard>
            }
        </div>
    }
    else
    {
        <div class="feed">
            @foreach (var post in Posts)
            {
                <TelerikCard Class="mb-4">
                    <CardHeader>
                        <div class="d-flex align-items-center gap-3">
                            <TelerikAvatar Type="AvatarType.Text" Rounded="@ThemeConstants.Avatar.Rounded.Full" Width="48px" Height="48px">@GetInitials(post.UserName)</TelerikAvatar>
                            <div>
                                <div class="fw-semibold">@post.UserName</div>
                                <div class="text-muted small">@post.PostedAt.ToLocalTime().ToString("g")</div>
                            </div>
                        </div>
                    </CardHeader>
                    <CardBody>
                        <p class="mb-3">@post.Content</p>
                        @if (!string.IsNullOrWhiteSpace(post.ImageUrl))
                        {
                            <img src="@post.ImageUrl" alt="post image" class="img-fluid rounded" />
                        }
                    </CardBody>
                    <CardFooter>
                        <div class="d-flex gap-2">
                            <TelerikButton ThemeColor="primary" FillMode="Telerik.Blazor.ThemeConstants.Button.FillMode.Outline">
                                <TelerikSvgIcon Icon="@SvgIcon.Heart"></TelerikSvgIcon> Like
                            </TelerikButton>
                            <TelerikButton FillMode="Telerik.Blazor.ThemeConstants.Button.FillMode.Outline">
                                <TelerikSvgIcon Icon="@SvgIcon.Comment"></TelerikSvgIcon> Comment
                            </TelerikButton>
                            <TelerikButton FillMode="Telerik.Blazor.ThemeConstants.Button.FillMode.Outline">
                                <TelerikSvgIcon Icon="@SvgIcon.Share"></TelerikSvgIcon> Share
                            </TelerikButton>
                        </div>
                    </CardFooter>
                </TelerikCard>
            }
        </div>
    }
</div>

@code {    
    private bool IsLoading { get; set; } = true;
    ...

    protected override async Task OnInitializedAsync()
    {        
        ...
        IsLoading = false;
    }
}

In the previous code, you can see that once the information has finished loading in the OnInitializedAsync method, IsLoading is set to false, which causes the controls with the final information to be rendered. Also, note that the skeleton version of the interface displays only three placeholder items. The result of the execution is as follows:

Using the Skeleton component in Blazor to enhance user feedback during data loading in the UI

With this, you can see how much the data-loading experience in the graphical interface has improved.
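One refinement worth considering (my suggestion, not part of the original sample): if the data fetch can throw, assigning IsLoading = false inside a finally block guarantees the skeletons are removed even when loading fails. A sketch, assuming the same LoadPosts helper from the article:

```csharp
protected override async Task OnInitializedAsync()
{
    try
    {
        await Task.Delay(5000); // Simulated data fetch, as in the article
        LoadPosts();
    }
    finally
    {
        // Runs whether the fetch succeeded or threw, so the user
        // is never left staring at skeleton placeholders forever.
        IsLoading = false;
    }
}
```

In a real application you might pair this with an error state so the user also learns when loading failed.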

Conclusion

In this article, you have learned what the Telerik Skeleton control for Blazor is and how to use it. It is very useful for giving users feedback during a process that retrieves information for display in the graphical interface. You have seen its different configuration options, as well as an example of implementing the component in a Blazor app.

Now it’s your turn to enhance the user experience in your applications by implementing the Skeleton component.

The whole Telerik UI for Blazor library is available to test in a 30-day free trial.

Try Now


This midrange Android phone also runs Windows and Linux

Nex Computer runs both Android and this Windows Phone-esque mobile UI that they made in-house.

Nex Computer, a company that makes hardware designed to turn your phone into a laptop, is working on something new: the NexPhone. It's a midrange phone that's designed to double as your computer and comes with Android and Linux installed, both of which will offer desktop experiences when plugged into a monitor.

But the NexPhone's best trick is that it can dual-boot into Windows 11, essentially becoming a full Windows PC when hooked up to a display - and also offers a mobile UI that pays tribute to Windows Phone when it's unplugged. It's a delightfully geeky attempt to answer the age-old question: Why can't your smartphone just be your whol …

Read the full story at The Verge.


A new era of agents, a new era of posture


The rise of AI agents marks one of the most exciting shifts in technology today. Unlike traditional applications or cloud resources, these agents are not passive components: they reason, make decisions, invoke tools, and interact with other agents and systems on behalf of users. This autonomy brings powerful opportunities, but it also introduces a new set of risks, especially given how easily AI agents can be created, even by teams who may not fully understand the security implications.

This fundamentally changes the security equation, making securing AI agents a uniquely complex challenge – and this is where AI agent posture becomes critical. The goal is not to slow innovation or restrict adoption, but to enable the business to build and deploy AI agents securely by design.

A strong AI agent posture starts with comprehensive visibility across all AI assets and goes further by providing contextual insights – understanding what each agent can do and what it is connected to, the risks it introduces, how it can be hardened, and how to prioritize and mitigate issues before they turn into incidents.

In this blog, we’ll explore the unique security challenges introduced by AI agents and how Microsoft Defender helps organizations reduce risk and attack surface through AI security posture management across multi-cloud environments. 

Understanding the unique challenges  

The attack surface of an AI agent is inherently broad. By design, agents are composed of multiple interconnected layers – models, platforms, tools, knowledge sources, guardrails, identities, and more. 

Across this layered architecture, threats can emerge at multiple points, including prompt-based attacks, poisoning of grounding data, abuse of agent tools, manipulation of coordinating agents, etc. As a result, securing AI agents demands a holistic approach. Every layer of this multi-tiered ecosystem introduces its own risks, and overlooking any one of them can leave the agent exposed. 

Let’s explore several unique scenarios where Defender’s contextual insights help address these challenges across the entire AI agent stack. 

Scenario 1: Finding agents connected to sensitive data 

Agents are often connected to data sources, and sometimes – whether by design or by mistake – they are granted access to sensitive organizational information, including PII. Such agents are typically intended for internal use – for example, processing customer transaction records or financial data. While they deliver significant value, they also represent a critical point of exposure. If an attacker compromises one of these agents, they could gain access to highly sensitive information that was never meant to leave the organization. Moreover, unlike direct access to a database – which can be easily logged and monitored – data exfiltration through an agent may blend in with normal agent activity, making it much harder to detect. This makes data-connected agents especially important to monitor, protect, and isolate, as the consequences of their misuse can be severe.

Microsoft Defender provides visibility into agents connected to sensitive data and helps security teams mitigate such risks. In the example shown in Figure 1, the attack path demonstrates how an attacker could leverage an Internet-exposed API to gain access to an AI agent grounded with sensitive data. The attack path highlights the source of the agent’s sensitive data (e.g., a blob container) and outlines the steps required to remediate the threat. 

Figure 1 – The attack path illustrates how an attacker could leverage an Internet-exposed API to gain access to an AI agent grounded with sensitive data.
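The logic behind such an attack path can be sketched as a simple reachability check: an agent is a finding when it is both reachable from an internet-facing entry point and grounded with a sensitive source. The inventory format below is invented for illustration; Defender derives real paths from actual cloud configuration:

```python
# Hypothetical sketch of the attack-path idea: flag agents that are both
# reachable from an internet-exposed entry point and grounded with
# sensitive data. Inventory schema is invented for this example.
agents = [
    {"name": "billing-agent", "entry": "internet-api",
     "grounding": {"blob:txns"}, "sensitive_sources": {"blob:txns"}},
    {"name": "faq-agent", "entry": "internal-only",
     "grounding": {"wiki"}, "sensitive_sources": set()},
]

def attack_paths(inventory):
    """Return (agent, sensitive source) pairs reachable from the internet."""
    paths = []
    for a in inventory:
        exposed = a["entry"] == "internet-api"
        for src in a["grounding"] & a["sensitive_sources"]:
            if exposed:
                paths.append((a["name"], src))
    return paths

print(attack_paths(agents))  # [('billing-agent', 'blob:txns')]
```

Each resulting pair corresponds to one attack path like the one in Figure 1: an exposed entry point, an agent, and the sensitive source it is grounded with.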

Scenario 2: Identifying agents with indirect prompt injection risk 

AI agents regularly interact with external data – user messages, retrieved documents, third-party APIs, and various data pipelines. While these inputs are usually treated as trustworthy, they can become a stealthy delivery mechanism for Indirect Prompt Injection (XPIA), an emerging class of AI-specific attacks. Unlike direct prompt injection, where an attacker issues harmful instructions straight to the model, XPIA occurs when malicious instructions are hidden in an external data source that an agent processes, such as a webpage fetched through a browser tool or an email being summarized. The agent unknowingly ingests this crafted content, which embeds hidden or obfuscated commands that are executed simply because the agent trusts the source and operates autonomously. 

This makes XPIA particularly dangerous for agents performing high-privilege operations – modifying databases, triggering workflows, accessing sensitive data, or performing autonomous actions at scale. In these cases, a single manipulated data source can silently influence an agent’s behavior, resulting in unauthorized access, data exfiltration, or internal system compromise. This makes identifying agents susceptible to XPIA a critical security requirement. 

By analyzing an agent’s tool combinations and configurations, Microsoft Defender identifies agents that carry elevated exposure to indirect prompt injection, based on both the functionality of their tools and the potential impact of misuse. Defender then generates tailored security recommendations for these agents and assigns them a dedicated Risk Factor that helps prioritize them. 
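The core intuition is that XPIA exposure comes from a combination: tools that ingest untrusted content paired with tools that can act on the environment. The sketch below is an illustrative scoring heuristic, not Defender’s actual algorithm; the tool names and risk levels are invented:

```python
# Illustrative scoring sketch (not Defender's real logic): XPIA exposure
# rises when tools that ingest untrusted content are combined with tools
# that perform privileged actions, and falls when guardrails are in place.
UNTRUSTED_INPUT_TOOLS = {"web_browser", "email_reader", "doc_retrieval"}
PRIVILEGED_TOOLS = {"db_write", "workflow_trigger", "code_exec"}

def xpia_risk(tools, has_guardrails):
    ingests_untrusted = bool(tools & UNTRUSTED_INPUT_TOOLS)
    can_act = bool(tools & PRIVILEGED_TOOLS)
    if not (ingests_untrusted and can_act):
        return "low"
    # Untrusted input can steer privileged actions: guardrails are the
    # main mitigating control.
    return "medium" if has_guardrails else "high"

print(xpia_risk({"web_browser", "db_write"}, has_guardrails=False))  # high
```

Note how the browser-plus-database combination alone is what elevates the risk: neither tool is dangerous in isolation.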

In Figure 2, we can see a recommendation generated by Defender for an agent with indirect prompt injection risk that lacks proper guardrails – controls that are essential for reducing the likelihood of an XPIA event. 

Figure 2 – Recommendation generated by Defender for an agent with indirect prompt injection risk and lacking proper guardrails.

In Figure 3, we can see a recommendation generated by Defender for an agent with both high autonomy and a high risk of indirect prompt injection, a combination that significantly increases the probability of a successful attack. 

In both cases, Defender provides detailed and actionable remediation steps. For example, adding human-in-the-loop control is recommended for an agent with both high autonomy and a high indirect prompt injection risk, helping reduce the potential impact of XPIA-driven actions. 

Figure 3 – Recommendation generated by Defender for an agent with both high autonomy and a high risk of indirect prompt injection.

Scenario 3: Identifying coordinator agents 

In a multi-agent architecture, not every agent carries the same level of risk. Each agent may serve a different role – some handle narrow, task-specific functions, while others operate as coordinator agents, responsible for managing and directing multiple sub-agents. These coordinator agents are particularly critical because they effectively act as command centers within the system. A compromise of such an agent doesn’t just affect a single workflow – it cascades into every sub-agent under its control. Unlike sub-agents, coordinators might also be customer-facing, which further amplifies their risk profile. This combination of broad authority and potential exposure makes coordinator agents more powerful and more attractive targets for attackers, making comprehensive visibility and dedicated security controls essential for their safe operation. 

Microsoft Defender accounts for the role of each agent within a multi-agent architecture, providing visibility into coordinator agents and dedicated security controls. Defender also leverages attack path analysis to identify how agent-related risks can form an exploitable path for attackers, mapping weak links with context. 

For example, as illustrated in Figure 4, an attack path can demonstrate how an attacker might exploit an Internet-exposed API to gain access to an Azure AI Foundry coordinator agent. This visualization helps security teams take preventive action, safeguarding the AI agents from potential breaches. 

Figure 4 – The attack path illustrates how an attacker could leverage an Internet-exposed API to gain access to a coordinator agent.
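Identifying coordinators in a delegation graph can be sketched as a fan-out check: agents that direct several sub-agents are coordinators, and an exposed coordinator is the highest-value target. The graph and agent names below are invented for illustration:

```python
# Hypothetical sketch: coordinators are agents with high delegation
# fan-out; an exposed coordinator is flagged because compromising it
# cascades into every sub-agent it controls. Data is invented.
delegations = {            # agent -> sub-agents it directs
    "orchestrator": ["search-agent", "billing-agent", "email-agent"],
    "search-agent": [],
    "billing-agent": [],
    "email-agent": [],
}
exposed = {"orchestrator"}  # reachable via an internet-facing API

def coordinators(graph, min_fanout=2):
    return [a for a, subs in graph.items() if len(subs) >= min_fanout]

def high_risk(graph, exposed_set):
    # Broad authority plus external exposure = priority target.
    return [a for a in coordinators(graph) if a in exposed_set]

print(high_risk(delegations, exposed))  # ['orchestrator']
```

The same role-aware view explains why a coordinator’s compromise is worse than a sub-agent’s: its blast radius is the sum of everything beneath it.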

Hardening AI agents: reducing the attack surface 

Beyond addressing individual risk scenarios, Microsoft Defender offers broad, foundational hardening guidance designed to reduce the overall attack surface of any AI agent. In addition, a new set of dedicated, agent-specific Risk Factors further helps teams prioritize which weaknesses to mitigate first, ensuring the right issues receive the right level of attention. 

Together, these controls significantly limit the blast radius of any attempted compromise. Even if an attacker identifies a manipulation path, a properly hardened and well-configured agent will prevent escalation. 
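One concrete hardening pattern that limits blast radius is least privilege over tools: compare each agent’s granted tools against the minimal set its task requires, and strip the excess. The policy format below is an assumption made for this sketch, not a Defender configuration:

```python
# Illustrative least-privilege check (assumed policy format): surface
# tool grants beyond what each agent's task requires, since every excess
# grant widens the blast radius of a compromise.
required = {"summarizer": {"doc_retrieval"}}
granted = {"summarizer": {"doc_retrieval", "db_write", "code_exec"}}

def excess_grants(required_map, granted_map):
    # Map each agent to the sorted list of tools it holds but doesn't need.
    return {agent: sorted(tools - required_map.get(agent, set()))
            for agent, tools in granted_map.items()
            if tools - required_map.get(agent, set())}

print(excess_grants(required, granted))  # {'summarizer': ['code_exec', 'db_write']}
```

Even if an attacker manipulates this summarizer via crafted input, removing `db_write` and `code_exec` means there is nothing privileged for the injection to invoke.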

By adopting Defender’s general security guidance, organizations can build AI agents that are not only capable and efficient, but resilient against both known and emerging attack techniques. 

Figure 5 – Example of an agent’s recommendations.

Build AI agent security from the ground up 

To address these challenges across the different AI agent layers, Microsoft Defender provides a suite of security tools tailored for AI workloads. By enabling AI Security Posture Management (AI-SPM) within the Defender CSPM plan in Microsoft Defender for Cloud, organizations gain comprehensive multi-cloud posture visibility and risk prioritization across platforms such as Microsoft Foundry, AWS Bedrock, and GCP Vertex AI. This multi-cloud approach ensures critical vulnerabilities and potential attack paths are effectively identified and mitigated, creating a unified and secure AI ecosystem. 

Together, these integrated solutions empower enterprises to build, deploy, and operate AI technologies securely, even within a diverse and evolving threat landscape. 

To learn more about Security for AI with Defender for Cloud, visit our website and documentation. 

This research is provided by Microsoft Defender Security Research with contributions by Hagai Ran Kestenberg. 

The post A new era of agents, a new era of posture appeared first on Microsoft Security Blog. 
