Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

VS Code and Visual Studio - Better Together with Copilot | Visual Studio Live! Las Vegas 2026

From: VisualStudio
Duration: 1:09:58
Views: 382

Trying to decide between Visual Studio and VS Code in 2026? You don’t have to. In this @VisualStudioLive session from Visual Studio Live! Las Vegas 2026, Brian Randell shows how VS Code and Visual Studio work better together (especially with GitHub Copilot) to maximize developer productivity across platforms.

Learn when to use each tool, how Copilot transforms your workflow, and why combining both environments can give you the best of speed, flexibility, and power.

🔑 What You’ll Learn
• When to use Visual Studio vs. VS Code (and why you often need both)
• Key differences in performance, debugging, and developer experience
• How GitHub Copilot works across both IDEs and what’s included
• Copilot pricing tiers, models, and “premium requests” explained
• Using AI agents, chat, and code generation in real workflows
• How Visual Studio’s debugger and profiler stand out
• Working with SQL projects (SSDT) in Visual Studio vs VS Code
• Dev containers, Codespaces, and cross-platform development
• Using local AI models (Ollama) and bring-your-own-key setups
• Security considerations with VS Code extensions and supply chain risks

⏱️ Chapters
01:01 The “VS Code vs Visual Studio” dilemma
04:18 What makes VS Code unique (open source, extensions, cross-platform)
07:12 Visual Studio 2026 strengths (performance, debugger, enterprise features)
12:59 GitHub Copilot overview, pricing, and model usage
19:02 Copilot settings, models, and enterprise controls
29:15 Demo: Copilot in Visual Studio (debugger, profiler, AI workflows)
44:41 SQL development: Visual Studio vs VS Code (SSDT vs extensions)
52:32 Dev containers and cross-platform workflows in VS Code
1:00:26 Using custom AI models, APIs, and local models (Ollama)
1:06:36 VS Code extension security and risks
1:08:48 Final recommendations

👤 Speaker
Brian Randell (@brianrandell)
Partner, MCW Technologies

🔗 Links
• Download Visual Studio 2026: http://visualstudio.com/download
• Explore more VS Live! Las Vegas sessions: https://aka.ms/VSLiveLV26
• Join upcoming VS Live! events: https://aka.ms/VSLiveEvents

#vscode #visualstudio2026 #githubcopilot #copilot #vslive

Read the whole story
alvinashcraft
just a second ago
reply
Pennsylvania, USA
Share this story
Delete

Build Your First AI Search in .NET with Progress Agentic RAG Part 2

From: Fritz's Tech Tips and Chatter
Duration: 11:12
Views: 57

Turn a console prototype into a full Blazor web application with document upload, semantic search, and a streaming AI chat UI — all powered by the Progress.Nuclia SDK.

📺 Series: Building AI-Powered Search with Progress Agentic RAG and .NET
🎬 Part 2 of 3

In this video you'll learn how to:
• Register NucliaDbClient with ASP.NET Core dependency injection using AddNucliaDb()
• Upload files and ingest web links into a Knowledge Box
• Browse indexed documents with catalog and resource listing APIs
• Build paragraph-level semantic search with FindAsync and autocomplete with SuggestAsync
• Create a streaming chat UI using AskStreamAsync, IAsyncEnumerable, and Blazor's StateHasChanged()
• Display citations and sources so users know where answers come from

📌 Previous: Video 1 — Console app, streaming answers, and citations

Resources & Links:
📦 Progress Agentic RAG: https://www.progress.com/agentic-rag
📚 NuGet Package: https://www.nuget.org/packages/Progress.Nuclia
🔗 SDK Repository: https://github.com/telerik/nuclia-dotnet-sdk

Find me online:
🎥 YouTube: https://youtube.com/csharpfritz
𝕏 X/Twitter: https://x.com/csharpfritz
💻 GitHub: https://github.com/csharpfritz
📺 Twitch: https://twitch.tv/csharpfritz

#dotnet #blazor #ai #rag #semanticsearch #progress #nuclia


Private subnets by default in Azure Virtual Networks: What changed and how to use NAT Gateway


Azure is evolving to better support secure-by-default cloud architectures.

Starting with API version 2025-07-01 (released after March 31, 2026), newly created virtual networks now default to using private subnets. This change removes the longstanding platform behavior of automatically enabling outbound internet access through implicit public IPs, also known as default outbound access (DOA).

As a result: newly deployed virtual machines will not have public outbound connectivity unless explicitly configured. 

What changed?

Previously, Azure automatically assigned a hidden Microsoft-owned public IP to virtual machines deployed without an explicit outbound method (such as NAT Gateway, Load Balancer outbound rules, or instance-level public IPs). This allowed public outbound connectivity without requiring customer configuration.

While convenient, this model introduced challenges: 

  • Security – Implicit internet access conflicts with Zero Trust principles. 
  • Reliability – Platform-managed outbound IPs can change unexpectedly. 
  • Operational consistency – VMSS instances or multi-NIC VMs may egress using different default outbound IPs.

With API version 2025-07-01 and later: 

  • Subnets in newly created VNets are private by default. 
  • The subnet property `defaultOutboundAccess` is set to false. 
  • Azure no longer assigns implicit outbound public IPs.

This applies across deployment methods including Portal, ARM/Bicep, CLI, and PowerShell. The Portal began using the new model on April 1, 2026.
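To confirm which model a given subnet uses, you can query the defaultOutboundAccess property directly. A minimal sketch with the Azure CLI, assuming an authenticated session and placeholder resource names:

```shell
# Placeholder resource names; requires an authenticated Azure CLI session.
# A private subnet reports false; a subnet still using DOA reports true (or no value).
az network vnet subnet show \
  --resource-group rgname \
  --vnet-name vnetname \
  --name subnetname \
  --query defaultOutboundAccess
```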

Note: This change does not yet apply to Terraform.

Am I impacted by this change?

Deployment scenario and resulting behavior:

  • Existing VNets or VMs using DOA – Unchanged 
  • New VMs in existing VNets – Unchanged 
  • Subnets already using explicit outbound – Continue using the configured outbound method 
  • New VMs in new VNets (with subnets created using API 2025-07-01 or later) – 🔒 Subnets private by default 
  • New VMs in private subnets without explicit outbound configured – No public outbound connectivity 

Existing workloads are not impacted. 

If required, you can still create new subnets without the private setting by choosing the appropriate configuration option during creation. See the FAQ section of this blog for more information. However, we strongly recommend transitioning to an explicit outbound method so that: 

  • Your workloads won’t be affected by public IP address changes. 
  • You have greater control over how your VMs connect to public endpoints. 
  • Your VMs use traceable IP resources that you own. 

When is outbound connectivity required?

If your virtual network contains virtual machines, you must configure explicit outbound connectivity. Here are common scenarios that require it: 

  • Operating system activation and updates for Windows or Linux virtual machines. 
  • Pulling container images from public registries (Docker Hub or Microsoft Container Registry). 
  • Accessing third-party SaaS or public APIs. 
  • Virtual machine scale sets using Flexible orchestration mode, which are always secure by default and therefore require an explicit outbound method. 

Private subnets don’t apply to delegated or managed subnets that host PaaS services. In these cases, the service handles outbound connectivity—see the service-specific documentation for details. 

Recommended outbound connectivity method: StandardV2 NAT Gateway

Azure now recommends using an explicit outbound connectivity method such as: 

  • NAT Gateway 
  • Load Balancer outbound rules 
  • Public IP assigned to the VM 
  • Network Virtual Appliance (NVA) / Firewall 

Among these, Azure StandardV2 NAT Gateway is the recommended method for scalable and resilient outbound connectivity. 

StandardV2 NAT Gateway:

  • Provides zone redundancy by default in supported regions 
  • Supports up to 100 Gbps throughput 
  • Provides dual-stack support with IPv4 and IPv6 public IPs 
  • Uses customer-owned static public IPs 
  • Enables outbound connectivity without allowing inbound internet access 
  • Requires no route table configuration when associated to a subnet 

When configured, NAT Gateway automatically becomes the subnet’s default outbound path and takes precedence over: 

  • Load Balancer outbound rules 
  • VM instance-level public IPs 

Note: UDRs directing 0.0.0.0/0 traffic to virtual appliances/firewalls take precedence over NAT Gateway. 

Flow chart showing priority order for different outbound methods

Migrate from Default Outbound Access to NAT Gateway

To transition from DOA to Azure’s recommended method of outbound, StandardV2 NAT Gateway: 

  1. Go to your virtual network in the portal, and select the subnet you want to modify. 
  2. In the Edit subnet menu, select the ‘Enable private subnet’ checkbox under the Private subnet section.

     Enabling private subnet can also be done through other supported clients. Below is an example for the CLI, in which the default-outbound parameter is set to false:

     az network vnet subnet update \
       --resource-group rgname \
       --name subnetname \
       --vnet-name vnetname \
       --default-outbound false

  3. Deploy a StandardV2 NAT gateway resource. 
  4. Associate one or more StandardV2 public IP addresses or prefixes.
  5. Attach the NAT gateway to the target subnet.

Once associated: 

  • All new outbound traffic from that subnet uses NAT Gateway automatically 
  • VM-level public IPs are no longer required 
  • Existing outbound connections are not interrupted 

Note: Enabling private subnet on an existing subnet will not affect any VMs already using default outbound IPs. It only ensures that new VMs don’t receive a default outbound public IP. 

For step-by-step guidance, see migrate default outbound access to NAT Gateway.
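As a declarative alternative to the portal and CLI steps above, the same end state can be sketched in Bicep. All names, address ranges, and the API version below are placeholder assumptions; verify the current API version and the StandardV2 SKU availability in your region against the documentation before use:

```bicep
// Placeholder names and ranges; API version and SKU values are assumptions.
resource natIp 'Microsoft.Network/publicIPAddresses@2024-05-01' = {
  name: 'natgw-pip'
  location: resourceGroup().location
  sku: { name: 'StandardV2' } // per this post, StandardV2 public IPs pair with StandardV2 NAT Gateway
  properties: { publicIPAllocationMethod: 'Static' }
}

resource natGw 'Microsoft.Network/natGateways@2024-05-01' = {
  name: 'natgw'
  location: resourceGroup().location
  sku: { name: 'StandardV2' }
  properties: {
    publicIpAddresses: [ { id: natIp.id } ]
  }
}

resource vnet 'Microsoft.Network/virtualNetworks@2024-05-01' = {
  name: 'vnetname'
  location: resourceGroup().location
  properties: {
    addressSpace: { addressPrefixes: [ '10.0.0.0/16' ] }
    subnets: [
      {
        name: 'subnetname'
        properties: {
          addressPrefix: '10.0.0.0/24'
          defaultOutboundAccess: false // private subnet: no implicit outbound IP
          natGateway: { id: natGw.id } // explicit outbound path via NAT Gateway
        }
      }
    ]
  }
}
```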

FAQ

1. Will my existing workloads lose outbound connectivity?

No. Workloads currently using default outbound IPs are not impacted by this change. The private-subnets-by-default update only affects:

  • Newly created VNets 
  • New subnets created using the updated API, 2025-07-01 
  • New virtual machines deployed into those subnets using the updated API 

VMs and subnets using an explicit outbound connectivity method, such as a NAT gateway, NVA/firewall, VM instance-level public IP, or Load Balancer outbound rules, are likewise not impacted by this change. 
2. Why can’t my new VM reach the internet or other public endpoints within Microsoft (e.g. VM activation, updates)? 

New subnets are private by default. If your deployment does not include an explicit outbound method, such as a NAT Gateway, Public IP, Load Balancer outbound rule, or NVA/Firewall, outbound connectivity is not automatically enabled. 

3. My workload depends on default outbound IPs and isn’t ready to move to private subnets. What should I do? 

You can opt-out of the default private subnet setting by disabling the private subnet feature. You can do this in the portal by unselecting the private subnet checkbox: 

Disabling private subnet can also be done through other supported clients. Below is an example for the CLI, in which the default-outbound parameter is set to true: 

az network vnet subnet update \
  --resource-group rgname \
  --name subnetname \
  --vnet-name vnetname \
  --default-outbound true
4. Why do I see an alert showing that I have a default outbound IP on my VM? 

There's a NIC-level parameter `defaultOutboundConnectivityEnabled` that tracks whether a default outbound IP is allocated to a VM or Virtual Machine Scale Set instance. If detected, the Azure portal displays a notification banner and generates Azure Advisor recommendations about disabling default outbound connectivity for your VMs/VMSS. 

5. How do I clear this alert?

To remove the default outbound IP and clear the alert:

  1. Configure a StandardV2 NAT gateway (or other explicit outbound method).
  2. Set your subnet to private by setting the subnet property defaultOutboundAccess = false using one of the supported clients.
  3. Stop and deallocate any applicable virtual machines (this will remove the default outbound IP currently associated with the VM).  
6. I have a NAT gateway (or UDR pointing to an NVA) configured for my private subnet, why do I still see this alert? 

In some cases, a default outbound IP is still assigned to virtual machines in a non-private subnet, even when an explicit outbound method—such as a NAT gateway or a UDR directing traffic to an NVA/firewall—is configured.  

This does not mean that the default outbound IP is used for egress traffic. 

To fully remove the assignment (and clear the alert): 

  • Set the subnet to private 
  • Stop and deallocate the affected virtual machines 

Summary

The move to private subnets by default improves the security posture of Azure networking deployments by removing implicit outbound internet access. 

Customers deploying new workloads must now explicitly configure outbound connectivity. 
StandardV2 NAT Gateway provides a scalable, resilient method for enabling outbound internet access without exposing workloads to inbound connections or relying on platform-managed IPs. 

Learn more 


From Demo to Production: Building Microsoft Foundry Hosted Agents with .NET


The Gap Between a Demo and a Production Agent

Let's be honest. Getting an AI agent to work in a demo takes an afternoon. Getting it to work reliably in production (tested, containerised, deployed, observable, and maintainable by a team) is a different problem entirely.

Most tutorials stop at the point where the agent prints a response in a terminal. They don't show you how to structure your code, cover your tools with tests, wire up CI, or deploy to a managed runtime with a proper lifecycle. That gap between prototype and production is where developer teams lose weeks.

Microsoft Foundry Hosted Agents close that gap with a managed container runtime for your own custom agent code. And the Hosted Agents Workshop for .NET gives you a complete, copy-paste-friendly path through the entire journey, from local run to deployed agent to chat UI, in six structured labs using .NET 10.

This post walks you through what the workshop delivers, what you will build, and why the patterns it teaches matter far beyond the workshop itself.

What Is a Microsoft Foundry Hosted Agent?

Microsoft Foundry supports two distinct agent types, and understanding the difference is the first decision you will make as an agent developer.

  • Prompt agents are lightweight agents backed by a model deployment and a system prompt. No custom code required. Ideal for simple Q&A, summarisation, or chat scenarios where the model's built-in reasoning is sufficient.
  • Hosted agents are container-based agents that run your own code (.NET, Python, or any framework you choose) inside Foundry's managed runtime. You control the logic, the tools, the data access, and the orchestration.

When your scenario requires custom tool integrations, deterministic business logic, multi-step workflow orchestration, or private API access, a hosted agent is the right choice. The Foundry runtime handles the managed infrastructure; you own the code.

For the official deployment reference, see Deploy a hosted agent to Foundry Agent Service on Microsoft Learn.


What the Workshop Delivers

The Hosted Agents Workshop for .NET is a beginner-friendly, hands-on workshop that takes you through the full development and deployment path for a real hosted agent. It is structured around a concrete scenario: a Hosted Agent Readiness Coach that helps delivery teams answer questions like:

  • Should this use case start as a prompt agent or a hosted agent?
  • What should a pilot launch checklist include?
  • How should a team troubleshoot common early setup problems?

The scenario is purposefully practical. It is not a toy chatbot. It is the kind of tool a real team would build and hand to other engineers, which means it needs to be testable, deployable, and extensible.

The workshop covers:

  • Local development and validation with .NET 10
  • Copilot-assisted coding with repo-specific instructions
  • Deterministic tool implementation with xUnit test coverage
  • CI pipeline validation with GitHub Actions
  • Secure deployment to Azure Container Registry and Microsoft Foundry
  • Chat UI integration using Blazor

What You Will Build

By the end of the workshop, you will have a code-based hosted agent that exposes an OpenAI Responses-compatible /responses endpoint on port 8088.

The agent is backed by three deterministic local tools, implemented in WorkshopLab.Core:

  • RecommendImplementationShape — analyses a scenario and recommends a hosted or prompt agent based on its requirements
  • BuildLaunchChecklist — generates a pilot launch checklist for a given use case
  • TroubleshootHostedAgent — returns structured troubleshooting guidance for common setup problems

These tools are deterministic by design: no LLM call is required to return a result. That choice makes them fast, predictable, and fully testable, which is the right architecture for business logic in a production agent.

End to end, the Blazor chat UI calls the agent's /responses endpoint on the Foundry-managed runtime, which invokes the deterministic tools in WorkshopLab.Core.

The Hands-On Journey: Lab by Lab

The workshop follows a deliberate build → validate → ship progression. Each lab has a clear outcome. You do not move forward until the previous checkpoint passes.

Lab 0 — Setup and Local Run

Open the repo in VS Code or a GitHub Codespace, configure your Microsoft Foundry project endpoint and model deployment name, then run the agent locally. By the end of Lab 0, your agent is listening on http://localhost:8088/responses and responding to test requests.

dotnet restore
dotnet build
dotnet run --project src/WorkshopLab.AgentHost

Test it with a single PowerShell call:

Invoke-RestMethod -Method Post `
    -Uri "http://localhost:8088/responses" `
    -ContentType "application/json" `
    -Body '{"input":"Should we start with a hosted agent or a prompt agent?"}'

Lab 0 instructions →

Lab 1 — Copilot Customisation

Configure repo-specific GitHub Copilot instructions so that Copilot understands the hosted-agent patterns used in this project. You will also add a Copilot review skill tailored to hosted agent code reviews. This step means every code suggestion you receive from Copilot is contextualised to the workshop scenario rather than giving generic .NET advice.

Lab 1 instructions →

Lab 2 — Tool Implementation

Extend one of the deterministic tools in WorkshopLab.Core with a real feature change. The suggested change adds a stronger recommendation path to RecommendImplementationShape for scenarios that require all three hosted-agent strengths simultaneously.

// In RecommendImplementationShape — add before the final return:
if (requiresCode && requiresTools && requiresWorkflow)
{
    return string.Join(Environment.NewLine,
    [
        $"Recommended implementation: Hosted agent (full-stack)",
        $"Scenario goal: {goal}",
        "Why: the scenario requires custom code, external tool access, and " +
        "multi-step orchestration — all three hosted-agent strengths.",
        "Suggested next step: start with a code-based hosted agent, register " +
        "local tools for each integration, and add a workflow layer."
    ]);
}

You then write an xUnit test to cover it, run dotnet test, and validate the change against a live /responses call. This is the workshop's most important teaching moment: every tool change is covered by a test before it ships.

Lab 2 instructions →

Lab 3 — CI Validation

Wire up a GitHub Actions workflow that builds the solution, runs the test suite, and validates that the agent container builds cleanly. No manual steps — if a change breaks the build or a test, CI catches it before any deployment happens.
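The workflow described in Lab 3 can be sketched like this. The file path, action versions, .NET version string, and image tag below are assumptions, not the workshop's actual configuration:

```yaml
# Sketch of a build/test/container-validation workflow (assumed layout).
# Would typically live at .github/workflows/ci.yml.
name: ci
on: [push, pull_request]

jobs:
  build-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-dotnet@v4
        with:
          dotnet-version: '10.0.x'
      - run: dotnet restore
      - run: dotnet build --no-restore
      - run: dotnet test --no-build
      # Validate that the agent container builds cleanly before any deployment.
      - name: Container build check
        run: docker build -t workshoplab-agent:ci .
```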

Lab 3 instructions →

Lab 4 — Deployment to Microsoft Foundry

Use the Azure Developer CLI (azd) to provision an Azure Container Registry, publish the agent image, and deploy the hosted agent to Microsoft Foundry. The workshop separates provisioning from deployment deliberately: azd owns the Azure resources; the Foundry control plane deployment is an explicit, intentional final step that depends on your real project endpoint and agent.yaml manifest values.
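That separation maps to two distinct azd commands. A minimal sketch, assuming the repository is already azd-initialised and you are logged in:

```shell
# Provision the long-lived Azure resources (Container Registry, etc.) once.
azd provision

# Build, publish, and deploy the agent image as a repeatable lifecycle step.
azd deploy
```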

Lab 4 instructions →

Lab 5 — Chat UI Integration

Connect a Blazor chat UI to the deployed hosted agent and validate end-to-end responses. By the end of Lab 5, you have a fully functioning agent accessible through a real UI, calling your deterministic tools via the Foundry control plane.

Lab 5 instructions →


Key Concepts to Take Away

The workshop teaches concrete patterns that apply well beyond this specific scenario.

Code-first agent design

Prompt-only agents are fast to build but hard to test and reason about at scale. A hosted agent with code-backed tools gives you something you can unit test, refactor, and version-control like any other software.

Deterministic tools and testability

The workshop explicitly avoids LLM calls inside tool implementations. Deterministic tools return predictable outputs for a given input, which means you can write fast, reliable unit tests for them. This is the right pattern for business logic. Reserve LLM calls for the reasoning layer, not the execution layer.

CI/CD for agent systems

AI agents are software. They deserve the same build-test-deploy discipline as any other service. Lab 3 makes this concrete: you cannot ship without passing CI, and CI validates the container as well as the unit tests.

Deployment separation

The workshop's split between azd provisioning and Foundry control-plane deployment is not arbitrary. It reflects the real operational boundary: your Azure resources are long-lived infrastructure; your agent deployment is a lifecycle event tied to your project's specific endpoint and manifest. Keeping them separate reduces accidents and makes rollbacks easier.

Observability and the validation mindset

Every lab ends with an explicit checkpoint. The culture the workshop builds is: prove it works before moving on. That mindset is more valuable than any specific tool or command in the labs.


Why Hosted Agents Are Worth the Investment

The managed runtime in Microsoft Foundry removes the infrastructure overhead that makes custom agent deployment painful. You do not manage Kubernetes clusters, configure ingress rules, or handle TLS termination. Foundry handles the hosting; you handle the code.

This matters most for teams making the transition from demo to production. A prompt agent is an afternoon's work. A hosted agent with proper CI, tested tools, and a deployment pipeline is a week's work done properly once, instead of several weeks of firefighting done poorly repeatedly.

The Foundry agent lifecycle (create, update, version, deploy) also gives you the controls you need to manage agents in a real environment: staged rollouts, rollback capability, and clear separation between agent versions. For the full deployment guide, see Deploy a hosted agent to Foundry Agent Service.


From Workshop to Real Project

This workshop is not just a learning exercise. The repository structure, the tooling choices, and the CI/CD patterns are a reference implementation.

The patterns you can lift directly into a production project include:

  • The WorkshopLab.Core / WorkshopLab.AgentHost separation between business logic and agent hosting
  • The agent.yaml manifest pattern for declarative Foundry deployment
  • The GitHub Actions workflow structure for build, test, and container validation
  • The azd + ACR pattern for image publishing without requiring Docker Desktop locally
  • The Blazor chat UI as a starting point for internal tooling or developer-facing applications

The scenario itself, a readiness coach for hosted agents, is something teams evaluating Microsoft Foundry will find genuinely useful. It answers exactly the questions that come up when onboarding a new team to the platform.


Common Mistakes When Building Hosted Agents

Having run workshops and spoken with developer teams building on Foundry, a few patterns come up repeatedly:

  • Skipping local validation before containerising. Always validate the /responses endpoint locally first. Debugging inside a container is slower and harder than debugging locally.
  • Putting business logic inside the LLM call. If the answer to a user query can be determined by code, use code. Reserve the model for reasoning, synthesis, and natural language output.
  • Treating CI as optional. Agent code changes break things just like any other code change. If you do not have CI catching regressions, you will ship them.
  • Conflating provisioning and deployment. Recreating Azure resources on every deploy is slow and error-prone. Provision once with azd; deploy agent versions as needed through the Foundry control plane.
  • Not having a rollback plan. The Foundry agent lifecycle supports versioning. Use it. Know how to roll back to a previous version before you deploy to production.

Get Started

The workshop is open source, beginner-friendly, and designed to be completed in a single day. You need a .NET 10 SDK, an Azure subscription, access to a Microsoft Foundry project, and a GitHub account.

Clone the repository, follow the labs in order, and by the end you will have a production-ready reference implementation that your team can extend and adapt for real scenarios.

Clone the workshop repository →

Here is the quick start to prove the solution works locally before you begin the full lab sequence:

git clone https://github.com/microsoft/Hosted_Agents_Workshop_dotNET.git
cd Hosted_Agents_Workshop_dotNET

# Set your Foundry project endpoint and model deployment
$env:AZURE_AI_PROJECT_ENDPOINT = "https://<resource>.services.ai.azure.com/api/projects/<project>"
$env:MODEL_DEPLOYMENT_NAME     = "gpt-4.1-mini"

# Build and run
dotnet restore
dotnet build
dotnet run --project src/WorkshopLab.AgentHost

Then send your first request:

Invoke-RestMethod -Method Post `
    -Uri "http://localhost:8088/responses" `
    -ContentType "application/json" `
    -Body '{"input":"Should we start with a hosted agent or a prompt agent?"}'

When the agent answers as a Hosted Agent Readiness Coach, you are ready to begin the labs.


Key Takeaways

  • Hosted agents in Microsoft Foundry let you run custom .NET code in a managed container runtime — you own the logic, Foundry owns the infrastructure.
  • Deterministic tools are the right pattern for business logic in production agents: fast, testable, and predictable.
  • CI/CD is not optional for agent systems. Build it in from the start, not as an afterthought.
  • Separate your provisioning (azd) from your deployment (Foundry control plane) — it reduces accidents and simplifies rollbacks.
  • The workshop is a reference implementation, not just a tutorial. The patterns are production-grade and ready to adapt.

References


Now generally available: License usage insights in Microsoft Entra


Organizations rely on Microsoft Entra to secure access in an ever‑changing identity threat landscape without sacrificing workforce productivity. As organizations adopt advanced identity and access capabilities, IT admins often ask for greater transparency into how those capabilities are being used, particularly around licensing.

Today, I’m excited to announce the general availability of Microsoft Entra license usage insights: a redesigned experience in the Microsoft Entra admin center that helps you better understand your license entitlements and how premium capabilities are being used across your organization.

Why it matters

License clarity matters for more than compliance. With license usage insights, you can:

  • Maximize your existing investment by identifying which premium capabilities are included and actively used.
  • Boost adoption of security features you already own but haven't fully deployed.
  • Plan renewals and budgets confidently with six months of usage trend data.
  • Spot potential compliance gaps early as organizational needs evolve.

 

What’s new in the GA release

Since public preview, we’ve introduced several enhancements to make license entitlement and feature usage data easier to find and act on:

  • Six-month usage trends: Understand historical patterns for better forecasting.
  • Clear differentiation between active and guest users for precise reporting.
  • Copilot prompt suggestions to help you explore license usage insights faster.

 

 

The main license usage report view (Billing > Licenses).

 

Where to find the license usage insights

Navigate to Billing > Licenses in the Microsoft Entra admin center. You’ll see two key widgets: License entitlements and Product usage insights.

License entitlements: Displays your total Entra license entitlements, such as Microsoft Entra ID P1, P2, Microsoft Entra Suite, and standalone SKUs. For example, 250 Microsoft Entra Suite licenses entitle your organization to 250 each of Private Access, Internet Access, ID Governance, and Verified ID.

 

 

The License entitlements view.

 

Product usage insights: Shows product and feature usage over the past six months. Quickly see how many licenses are in use versus available and identify adoption trends. Hover over the bar chart for additional details or contact your Microsoft representative for guidance.

 

 

 

The Product usage insights view.

Next steps

Visit Billing > Licenses in the Microsoft Entra admin center to explore license usage insights today. We’d love your feedback on how this visibility into your Microsoft Entra usage supports your workflows and what additional insights would be most valuable. Share your thoughts in the comments or through the Feedback portal.

 

-Joseph Dadzie, Vice President of Product Management

 

Additional resources

Learn more about Microsoft Entra

Prevent identity attacks, ensure least privilege access, unify access controls, and improve the experience for users with comprehensive identity and network access solutions across on-premises and clouds.

 


Microsoft 365 Copilot readiness and resiliency with SharePoint and M365 Backup/Archive

1 Share

As AI becomes embedded in everyday work, governance shifts from being a back-office function to a strategic enabler. Organizations that succeed with Copilot and agents aren’t just moving fast (70% of early Copilot users said they were more productive) – they are prepared, resilient, and deliberate in balancing AI-driven innovation with security and relevance. AI-first modern governance delivers real value across three dimensions: building readiness before AI is turned on, ensuring resilience when the unexpected happens, and driving relevance so people and AI can focus on what matters. Today, we are excited to announce many content governance innovations that move organizations forward in their frontier transformation.

Innovations across 3Rs Framework: Readiness, Relevance, Resiliency

As your tenant’s digital estate grows exponentially in this AI era, content governance becomes even more important to get it right.

Readiness covers content permissions and oversharing, content lifecycle, and content sprawl. These elements take center stage in boardrooms because they can become an inhibitor for an organization’s AI transformation.

Relevance touches on a key aspect of successful Copilot and agent deployments: how good their responses are. The quality of the data AI can access directly determines the quality of its outputs. Dormant and inactive content must be put in cold storage and excluded from AI's visibility, which increases the quality of responses.

Resiliency is all about business continuity, especially in an era where agents are part of the workforce and organizations must be prepared for unforeseen situations, such as an agent going rogue and accidentally deleting content. We are excited to share new capabilities that help your organization stay resilient.

With this backdrop, let us dive into innovations across these three pillars and additional areas:

Readiness

Enhancements in content oversharing controls

SharePoint Admin Agent

Agent governance

Relevance and Resiliency

Microsoft 365 Archive

Microsoft 365 Backup

Inactive site policy

Security and Compliance capabilities

Enterprise Lifecycle

Readiness

Readiness is about ensuring your content is permissioned correctly, organized in manageable clusters, and protected by lifecycle guardrails before AI uses it. We are bringing many innovations to Copilot native governance capabilities powered by SAM (SharePoint Advanced Management). Let's dive in.

1. Enhancements in content oversharing controls

New Admin role: SharePoint Advanced Management Admin role

We heard your feedback that Microsoft 365 and SharePoint admins need granular insights, for example a file-level view of permission scopes, right within the SharePoint admin center. Today, we are thrilled to announce a new admin role, SharePoint Advanced Management Admin, to empower you to achieve just that.

Catalog management – Generally Available

As your tenant grows, managing the increasing number of sites and OneDrives can quickly become overwhelming. Catalog Management brings order to this complexity by organizing content into meaningful groups aligned to how your business operates, creating a centralized catalog of sites tailored to your needs. With that foundation in place, you can connect governance tools and actions to specific content groups, enabling more targeted oversight and improvements across your environment.

Data Access Governance – Site permissions report for Users and Groups - Generally Available by June 2026

Site Permissions reports now extend to users and to Microsoft 365 and Security Groups, enabling admins to instantly identify every site a user or group can access and the scope of that access, closing a critical blind spot in oversharing governance. Clear remediation pathways are available through Restricted Access Control (RAC) and Restricted Content Discovery (RCD) policies.

Data Access Governance - Detailed reports for Everyone and EEEU – Generally available

The new Data Access Governance detailed reports deliver complete, item-level visibility for "Everyone" and "Everyone except external users" permissions across your entire tenant, empowering admins to identify and remediate oversharing risks at scale, especially as organizations prepare for secure Copilot adoption.
In addition, admins can now customize emails for site access reviews (SAR) with their own messaging and assign review requests directly to site owners or site admins, ensuring clear accountability. They can choose between sending one combined review email to all recipients or individual emails, improving control over urgency and responses. Email delivery status and recipient lists are visible, removing uncertainty about notifications.

Governance hub for Site Owners – Private Preview

Site owners have a critical role to play in ensuring accuracy of site information, including the site's necessity, its owners, members, permissions, and sharing settings. We’re introducing the Governance Reviews Dashboard in Private Preview, a centralized, actionable view where site owners can see all pending site governance reviews (inactivity, ownership, attestation, access reviews) across the sites they own.

The dashboard shows required actions, due timelines, and site enforcement status and allows site owners to act directly from one place. Combined with consolidated policy notifications, it replaces multiple emails with a single, actionable governance surface.

For example, the dashboard can highlight a Group-based Permissions Review initiated for the owners of the Contoso Finance site.

Restricted access control (RAC) policy for Site Owners – Generally Available

We are bringing content oversharing controls to site admins!

Site owners/admins can now independently manage the RAC (Restricted Access Control) policy on a site by providing appropriate justification for why the policy is being applied or updated.

RAC policy ensures that only users specified in the control groups will be allowed to access content. It also prevents oversharing of content with users outside of the control groups. Copilot honors RAC policy and thus prevents oversharing. 

Restricted content discovery (RCD) policy for Site Owners – Generally Available

The Restricted Content Discovery (RCD) policy is a powerful control that prevents content from being discovered by Microsoft 365 Copilot as well as by declarative agents. Site owners/admins can now independently manage the RCD policy on a site by providing appropriate justification for why the policy is being applied or updated.

2. SharePoint Admin Agent

Introducing the SharePoint Admin Agent, Microsoft’s first-party AI assistant for managing your rapidly growing digital estate across Microsoft 365.

As the content backbone of Microsoft 365 – powering Teams, OneDrive, Loop, Copilot, and more - SharePoint sits at the center of content governance, spanning permissions, lifecycle, resilience, and relevance across users, apps, and now AI agents.

The SharePoint Admin Agent brings these capabilities together in one simple conversational experience, with powerful skills across Storage, Lifecycle, Catalog, Oversharing Management, and Multi-Geo. Admins can ask questions in natural language, gain actionable insights, and take meaningful action without switching portals or writing scripts.

Together, these capabilities help organizations drive the 3Rs of content governance for the AI era: Readiness, Relevance, and Resilience.

SharePoint Admin Agent’s Starter Prompts:

You can try various starter prompts categorized under various skills – Permissions, Lifecycle, Storage, and Multi-Geo.

Permissions and Oversharing Management skill – Generally Available

Empowers admins to use natural language to uncover who has access to what, detect oversharing risks, and take targeted action to better protect sensitive content.

Lifecycle skill – Generally Available

Helps admins stay in control of content from creation to end of life by identifying inactive, outdated, and ownerless sites and enabling actions like reviews and archival.

Storage skill – Generally Available

Gives admins a clear view of storage consumption across the tenant so they can spot content sprawl, optimize capacity, and reduce unnecessary costs with confidence.

Multi-Geo skill – Private Preview

Microsoft 365 Multi-Geo enables customers to comply with data residency requirements by ensuring that a user's data (OneDrive, mailbox, Teams channels and chats, and so on) is hosted within the specific geo where the user is located. The new Multi-Geo skill in the SharePoint Admin Agent helps admins get insights into the location status of OneDrive for Business data for users and groups.

Recovery Skill – Private Preview

The Recovery skill helps admins leverage the Microsoft 365 Backup tool to find and understand available restore points, so that you can more easily and quickly recover from events that require point-in-time recovery of your OneDrive or SharePoint data. The initial release focuses on restore point availability if you have enabled Microsoft 365 Backup and have onboarded OneDrive accounts or SharePoint sites. In the future, it will provide recommendations based on activity.

3. Agent governance in SharePoint, Teams, and OneDrive

Microsoft Agent 365 is a unified control-plane for agents that enables you to observe, govern and secure agents across your organization – including agents built with Microsoft AI platform and from other partners. Read more.

Specifically, the Agent Registry and Agent Map capabilities provide a breadth view of agents in your organization. To complement that breadth view, we are introducing a depth view of agents through the lens of Microsoft 365 resources, specifically SharePoint sites, Teams, and OneDrive. We are thrilled to introduce Agent Access Insights and the Agent Access Heatmap.

Agent Access Insights for SharePoint and OneDrive – Generally Available

Agent Access Insights provide rich information on the agents accessing SharePoint sites and OneDrive within your tenant. They show how much traffic these agents bring to your sites and, in cases of accidental oversharing, help you take necessary actions such as setting up a RAC (Restricted Access Control) or RCD (Restricted Content Discovery) policy.

Agent Access Heatmap for SharePoint and OneDrive – Private Preview in May’26

Agents are most effective when working on your organization’s relevant data. Agent Access Heatmap offers rich visual insights on agent access with additional dimensions like sensitivity and agent platform. This gives admins visibility into both the sensitivity of the content being accessed and the frequency of access. Admins now have effective tools for pausing agentic access to a site or scoping more granular access.

Enterprise App Insights – Generally Available

Enterprise Application Insights helps you gain visibility into how non-Microsoft (third-party) enterprise applications are accessing SharePoint and OneDrive content. It provides details on which apps are accessing sites, how frequently they access content, the permission scope (for example, Files.Read), and the call pattern they exhibit (Application Only, User and Application, and so on), enabling you to take measures to enhance the security of your tenant.

Restrict access for high privileged applications – Private Preview

Third-party applications with broad-reaching permissions across all files or all sites in your organization can increase the risk of data disclosure. This new setting allows OneDrive and SharePoint administrators to enforce finer-grained permissions for third-party applications, or to define which specific applications are allowed to hold these broadly scoped permissions. Via Baseline Security Mode, admins can also gain rich insights into which applications in their organization hold these broad permissions and which resources are being accessed.

Relevance and Resiliency

Relevance is about making sure people and AI are working from the right information at the right time. By reducing noise, retiring stale content, and focusing Copilot and agents on high-value data, governance helps every insight stay accurate, actionable, and meaningful.

Resiliency, on the other hand, is all about business continuity, so that in unexpected situations, like ransomware incidents or agents accidentally deleting content, you can recover quickly.

Microsoft 365 Archive

File level archive for SharePoint – Public Preview

File-level archive for SharePoint is in public preview, with controls spanning the admin, site, and file-level UI, as well as Graph APIs and PowerShell. File-level archive gives you a way to more surgically file away inactive content, saving you 75% off the list price of storage. That content remains searchable by end users and admins, is easily reactivated in place by any user with file read permissions, and continues to work seamlessly with Purview functionality. Best of all, you are only charged for usage if your total storage consumption is greater than your pre-allocated quota, and there are no reactivation fees (just like site archiving)!

Purview data lifecycle management retention policy integration is now in private preview. With it, you can leverage retention policies to move files into the archive before they are deleted, as part of a more holistic retain-archive-delete data lifecycle management workflow.

Here are a couple of recent customer examples highlighting the adoption of Microsoft 365 Archive:

*Stats based on customers’ testimonials. Read more at aka.ms/M365Archive/Story/Kantar and aka.ms/M365Archive/Story/Dentsu

SharePoint Extra Storage PAYG meter – Public Preview in June 2026

We’re also making it easier than ever to manage and pay for your active data.

The SharePoint Extra Storage PAYG meter will roll out to public preview in June 2026. This PAYG meter allows you to grow your active storage usage with very little operational overhead. It works seamlessly with the Archive PAYG meter so that you are charged the lowest possible blended rate, only for the storage you are using on a daily basis (charged monthly). And you are only charged if total active plus archive storage usage exceeds your pre-allocated quota. There is no need to prepay up to a year in advance for storage packs you'll grow into later, no need to precisely manage that prepaid level through predictive efforts, and no risk of breaching storage payment compliance. Pricing will be shared when the product reaches public preview.
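The charging rule described above (pay only for usage beyond your pre-allocated quota, blended across active and archive storage) can be sketched in a few lines. This is an illustrative model only: the function name and the order in which overage is attributed to each meter are assumptions, not the actual billing implementation, and real rates are not yet published.

```python
def monthly_overage_gb(active_gb, archive_gb, quota_gb):
    """Illustrative split of storage above the pre-allocated quota
    between active and archive meters (attribution order is assumed)."""
    total = active_gb + archive_gb
    overage = max(0.0, total - quota_gb)
    # Assumption: the quota is consumed by active storage first, so any
    # overage is billed at the cheaper archive rate whenever possible.
    archive_overage = min(overage, archive_gb)
    active_overage = overage - archive_overage
    return {"active_overage_gb": active_overage,
            "archive_overage_gb": archive_overage}
```

Under this sketch, a tenant with a 1,000 GB quota using 800 GB active plus 400 GB archive storage would be charged for only 200 GB of overage, and a tenant whose total usage stays within quota would be charged nothing.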

Tenant-wide Version Trimming and What-if Analysis – Private Preview in June 2026

The what-if analysis capability gives you a way to test which versions the trimming feature will delete and what the impact of that action will be. You can then run the actual tenant-level trimming job to free up storage space. This is very similar to the existing site- and document library-level trimming capabilities, expanded to a tenant-wide job. The initial release will be via PowerShell only. Please request to join the private preview by filling out this form.

SharePoint Admin Agent Storage skill – Generally Available. As noted above, the Storage skill, now rolling into general availability, helps with understanding and optimizing SharePoint usage. Stay tuned for more intelligence and automation coming to this skill this year and beyond, including integration of version trimming reports and execution.

2. SAM (SharePoint Advanced Management) Inactive site policy – Generally Available

As part of Copilot native governance capabilities, admins can trigger an inactive site policy using tailored criteria, say, any sites that have been untouched for two years, and then take automatic actions such as setting the site to read-only or archiving it. Learn more here.
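To make the policy mechanics concrete, here is a minimal Python sketch of the evaluation logic just described. Everything here is hypothetical (the function name, the site names, the fixed evaluation date); the real policy is configured in the SharePoint admin center, not written as code.

```python
from datetime import datetime, timedelta

def evaluate_inactive_sites(sites, inactivity_days=730, action="archive"):
    """Return the action per site: sites untouched for the configured
    window (e.g. two years = 730 days) get 'archive' (or 'read_only')."""
    now = datetime(2026, 6, 1)  # fixed 'today' for a reproducible example
    cutoff = now - timedelta(days=inactivity_days)
    decisions = {}
    for name, last_activity in sites.items():
        decisions[name] = action if last_activity < cutoff else "no_action"
    return decisions
```

In this sketch, a site last touched in January 2023 would be flagged for archiving, while a site active in March 2026 would be left alone.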

3. Microsoft 365 Backup

In the AI era, the speed at which individuals get work done is greatly accelerated. So too is the speed at which bad actors can take advantage of weaknesses in cyber-attack scenarios, and the speed at which an inadvertent agentic action can take place. Given this, it's more important than ever to put proper resilience capabilities in place. Microsoft 365 Backup was built to do just that, in a uniquely secure, scalable, and performant manner.

For those not familiar, Microsoft 365 Backup provides ultra-fast, large-scale backup (near day-zero protection) and recovery capabilities. The solution is a critical component of any M365 customer's cyber-resiliency story. You can onboard the native M365 Backup application via the M365 admin center, or you can leverage a Microsoft 365 Backup Storage app built by one of the recognized backup partners (just be sure to ask for the version of the partner app built on M365 Backup Storage to get the performance and security benefits of the platform), for example Veeam Data Cloud Premium, AvePoint Confide Express, or Cohesity with M365 Backup Storage. These vendors deliver an extended, fuller set of capabilities so you never have to trade off speed for feature or service coverage. You can get everything you need and more with one of their better-together M365 Backup Storage hybrid solutions.

We heard from our customers and partners about the need to expand and extend native platform capabilities.

Full Workload Backup - Rolling out to public preview

This gives you the ability to fully protect all your SharePoint sites, OneDrive accounts, and/or Exchange Online mailboxes dynamically with a few simple clicks. With full workload backup, you can ensure your tenant is fully protected without any operational overhead to detect and add new sites or users; the tool does that for you automatically.

Feature Discovery experience is now generally available

With this experience, you will be notified within the Backup tool whenever we launch a new capability for you to adopt. This way, you can benefit from the manageability and enhanced features we release at regular intervals, all within the Backup tool.

Easier discovery of the daily fast restore points is rolling out to general availability

With this update, we're making it easier to find and leverage the roughly daily preloaded restore points that provide the fastest restore performance in the Backup tool. These faster restore points for OneDrive and SharePoint are your optimized path to recovering large quantities of sites at speeds of up to and beyond 3 TB/hr, with most tenants completing full tenant restores in a matter of hours.
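As a rough sanity check on those numbers (the tenant sizes below are hypothetical examples, and real-world throughput varies):

```python
def restore_hours(tenant_tb, throughput_tb_per_hr=3.0):
    """Estimated full-tenant restore time at the quoted ~3 TB/hr rate."""
    return tenant_tb / throughput_tb_per_hr
```

At 3 TB/hr, a 9 TB tenant would complete in about three hours and a 30 TB tenant in about ten, consistent with the "matter of hours" claim for most tenants.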

2-year Backup Restore Point Retention in Private Preview

With this capability, you can choose to keep backup recovery points for up to 2 years after they are created. That means you can use Microsoft 365 Backup to go back in time up to 2 years after a restore point is created.

M365 Backup in the government community cloud (aka GCC) is generally available

Now customers that operate in GCC can benefit from the operationally fast and secure capabilities delivered by Backup. Some of the newest features mentioned in this blog are still in the process of reaching GCC but will be there soon.

Granular file/folder browse/restore is generally available

With this, the Backup tool provides a specialized recovery method to find and restore folders/files within a site in a matter of minutes.

Departmental billing is generally available

Departmental billing gives admins the ability to attach distinct Azure billing policies to distinct Backup policies. Now each department in your organization can control and pay for its own backups.

Security & Compliance capabilities in SharePoint & OneDrive

Extended SharePoint Permissions (ESP) – General Availability

Extended SharePoint Permissions (ESP) extends SharePoint’s existing site permission model so that document libraries can use sensitivity labels to ensure access controls follow files when they’re downloaded from SharePoint. Files downloaded from ESP‑enabled libraries can only be opened by users who have access in SharePoint, and access is evaluated dynamically—changing or revoking SharePoint permissions, deleting the file, or deactivating the site automatically removes access to the downloaded copy.

At the same time, files stored within SharePoint libraries that use ESP continue to behave like first‑class SharePoint content. They remain governed by standard Microsoft 365 experiences—including enterprise search, discovery, auditing, and Copilot interactions. Protection is applied without breaking how documents are stored, discovered, or reasoned over inside SharePoint.

To learn more, check out the Configure SharePoint with a sensitivity label to extend permissions to downloaded documents page.

OneNote support for Sensitivity Labels - General Availability

Organizations can now classify and protect OneNote sections using the same sensitivity labels they already rely on across Word, Excel, Outlook, and SharePoint, ensuring sensitive notes are encrypted and access‑controlled according to organizational policy. Labels are applied at the section level, making it easy to protect sensitive content while still allowing teams to collaborate securely across Windows, Mac, Web, iOS, and Android.

Copilot continues to respect user permissions, and with sensitivity labels applied to OneNote sections, organizations gain an additional layer of classification and control—helping ensure AI‑assisted interactions with notes remain compliant with security and privacy expectations.

To learn more, check out Sensitivity Labels in OneNote Now Generally Available | Microsoft Community Hub

Sensitivity label support for videos – Public Preview in May’26

Video has become a core medium for collaboration—capturing meetings, training, product demos, and internal communications. With sensitivity label support for video files, organizations can now begin extending their information protection strategy to video content stored in SharePoint and Microsoft Stream, bringing videos into the same classification framework used for documents.

Users can continue to watch and share videos as they do today, with labeling applied as governance metadata rather than as a disruptive enforcement layer. By introducing labeling for video files, Microsoft lays the groundwork for more comprehensive protection and compliance scenarios over time—helping organizations understand, manage, and govern video content alongside documents, without changing user workflows on day one.

Label inheritance for Teams Meeting Recordings – Public Preview in May’26

Protecting meeting content shouldn’t require extra steps from users. With this update, Teams meeting recordings will automatically inherit the sensitivity label of the meeting itself, ensuring protection is applied by default and stays consistent from the moment a meeting is recorded.

When a meeting is labeled, that label now flows seamlessly to the recording—without any user action. For meetings protected with encryption, recordings are securely playable online but cannot be downloaded, significantly reducing the risk of data exfiltration while still enabling access for authorized participants. This approach balances strong protection with usability, especially for highly sensitive meetings.

Beyond recordings, the inherited label is consistently enforced across all related meeting artifacts, including transcripts, shared documents, and notes. This delivers a more cohesive, end‑to‑end compliance experience for meetings, closing long‑standing gaps where different artifacts could be protected inconsistently.

Microsoft Baseline Security Mode (BSM) 2026 – Private Preview

Security is no longer a point-in-time deployment; it's a continuous journey. At Ignite '25 we introduced Microsoft Baseline Security Mode 2025.

Today, we're announcing Microsoft Baseline Security Mode (BSM) 2026, a major step forward in helping organizations operationalize secure-by-default protections across their Microsoft environments. BSM 2026 builds on the foundation introduced in BSM 2025 by expanding workload coverage, deepening integration across services, and applying the security learnings Microsoft has gained through our own internal transformation under the Secure Future Initiative (SFI).

BSM 2026 expands beyond identity and collaboration workloads to include:

  • Microsoft Intune
  • Microsoft Purview
  • Dynamics 365

This expansion also adds a new pillar, "Applications," enabling organizations to enforce consistent baseline protections across even more of their Microsoft cloud estate, reducing configuration drift and ensuring security standards extend beyond productivity into device management, data governance, and business applications.

To join the preview, go here.

Microsoft 365 Information Barriers (IB) enhancements

Microsoft 365 Information Barriers (IB) is a compliance control that prevents specific people or groups in an organization from communicating or collaborating with each other across Microsoft 365 services. It’s mainly used to avoid conflicts of interest and meet FINRA regulatory requirements.

We are happy to announce the general availability of the following enhancements to Microsoft 365 Information barriers:

  • IB Insights: Insights into the usage of IB modes across SharePoint sites and OneDrive are now generally available. These rich insights help you identify IB usage patterns across SharePoint sites and OneDrive.
  • IB support in Gallatin: Information barriers in Microsoft Teams, SharePoint Online, OneDrive, and Microsoft 365 Groups will be generally available in the Gallatin environment in June 2026.

Learn about Information Barriers | Microsoft Learn

Microsoft 365 Enterprise lifecycle

Cross-tenant sites migration – Generally Available

Microsoft 365 provides enterprise customers with capabilities to support the full lifecycle of their business, whether mergers, acquisitions, divestitures, or consolidation. The M365 Cross-Tenant Data Migration features allow admins to manage migrations at scale, securely, with deep product integration to ensure minimal impact to your daily business.

We're happy to announce the general availability of cross-tenant SharePoint site migration, allowing admins to quickly migrate communication sites, team sites, and group-connected sites from one tenant to another with full fidelity and features such as automatic redirection of sharing links.

M365 Cross-tenant user migration with Teams Chat and Meetings – Private Preview

Adding to the Microsoft 365 cross-tenant migration capabilities, we're excited to announce the preview of Cross-Tenant User Data Migration with Orchestrator, featuring Teams chat and meetings migration. This preview brings meetings, 1:1 chat, and group chat migration securely and at scale, with built-in product integrations to ensure minimal business disruption.

It also simplifies your role in managing migrations by orchestrating across OneDrive, mailboxes, and Teams, and it enhances the experience for migrating users by making sure that everything they need to be productive is ready and waiting for them when the migration is complete. Read more.

Multi-Geo: Default geo move – Private Preview

Microsoft 365 Multi-Geo provides enterprise customers with the ability to expand their Microsoft 365 presence to multiple geographic regions within a single existing Microsoft 365 tenant. A Multi-Geo tenant has a primary geo and one or more satellite geos where tenant data (files, mailboxes, Teams chats) is hosted.

As Microsoft opens new datacenters in new countries, some customers wish to relocate their primary geo to the new go-local location; for example, a customer may choose to relocate their primary M365 geo from the broader EUR geo to the Germany geo to ensure that their data resides within Germany. Sometimes enterprises change their primary business location to another country and are then required to change the primary geo of their M365 tenant.

We are now enabling customers to change their primary geo for SharePoint and OneDrive from a broader region like EUR or APAC to a more specific geo like Germany or Korea, respectively. For such a primary geo move, the tenant must not already have a satellite in the target geo. This capability is currently in private preview.

Restricted site provisioning for Apps – Generally Available

Manage the sprawl and structure of data in your organization with this new control, which lets you govern which third-party applications or agents can create OneDrives or SharePoint sites in your organization. Using restricted OneDrive and SharePoint site creation for apps, you can apply an allow- or deny-based policy and manage which types of SharePoint site can be created on a per-application basis. It is available via the SharePoint Online Management Shell, using the Set-SPORestrictedSiteCreationForApps command.
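The allow/deny semantics of such a policy can be sketched as follows. This Python model is purely illustrative; the actual control is applied through the SharePoint Online Management Shell cmdlet named above, and the policy shape and app IDs here are hypothetical.

```python
def can_app_create_site(app_id, policy):
    """Evaluate a hypothetical allow/deny policy for app-driven site
    creation. policy = {"mode": "allow" or "deny", "apps": set of IDs}."""
    listed = app_id in policy["apps"]
    if policy["mode"] == "allow":
        return listed       # allow-list: only listed apps may create sites
    return not listed       # deny-list: listed apps are blocked
```

With an allow-list containing only one app, every other application or agent is blocked from provisioning sites; with a deny-list, only the listed apps are blocked.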

Get started now!

If you are new to Microsoft 365, learn how to try or buy a Microsoft 365 subscription.

For all the private previews mentioned in this blog, you can sign up for the private preview program.

M365 Conference – Related Sessions

Watch the featured sessions at the M365 Community Conference in Orlando:

 

  1. Agent 365: The Control Plane for All Agents, Tue, April 21, 4.15 pm EST
  2. Microsoft Baseline Security Mode: Simplify, Secure, Succeed, Tue, April 21, 4.15 pm EST
  3. New Security and Compliance Features and Reporting for SharePoint Admin, Site Owners and More, Thurs, April 23, 9 am EST
  4. What's New in Security & Compliance for SharePoint, OneDrive, and Teams, Wed, April 22, 2.30 pm EST
  5. Microsoft 365 Multi Geo and Mergers & Acquisitions, Tue, April 21, 1.45 pm EST
  6. From Chaos to AI-Ready in 30 Days: Meet the SharePoint Governance Agent, Grounded by SAM, Tue, April 21, 11.30 am EST
  7. Secure and Govern Microsoft 365 Copilot - What Every IT Pro Needs to Know, Wed, April 22, 1.30 pm EST

Resources

To learn more about these features in detail, check out the product capability documentation below.

  1. To learn more about Microsoft 365 Backup and the ISV solutions built on the M365 Backup Storage platform, check out this recent webinar: Resiliency in the age of AI with Microsoft 365 Backup