Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.
152797 stories · 33 followers

The End of NGINX Ingress on AKS: What You Need to Know


If you run workloads on Azure Kubernetes Service, you may have recently received an email from Microsoft notifying you that support for NGINX Ingress on AKS is coming to an end. This post explains what is happening, why it is happening, and what it means for your specific situation.

Why This Is Happening: A Bit of Background

First, it helps to understand what actually triggered Microsoft's decision.

Bye Bye Ingress API

An Ingress resource is a Kubernetes configuration file that describes how external traffic should reach services inside a cluster. The spec was always intentionally minimal: just host and path rules. But cloud load balancers (from AWS, Azure, GCP and others) are capable of far more: timeouts, retries, authentication, canary routing, header manipulation. Since the Ingress spec couldn’t express any of this, vendors (the companies building the ingress controllers that power those load balancers) had to expose these capabilities through annotations, essentially freeform configuration bolted onto the side of the resource. Every vendor did it differently, and the result was a fragmented, unportable mess.
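To make the annotation sprawl concrete, here is a minimal sketch of an Ingress (hostnames and service names are hypothetical). The spec itself can only express the host and path rules; the canary and timeout behaviour has to ride along as ingress-nginx-specific annotations that no other controller understands:

```yaml
# Host/path routing is all the Ingress spec covers natively.
# Anything richer (here, a timeout and a 10% canary split) must go
# through vendor annotations that only ingress-nginx interprets.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: shop
  annotations:
    nginx.ingress.kubernetes.io/proxy-read-timeout: "30"
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"
spec:
  ingressClassName: nginx
  rules:
    - host: shop.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: shop-frontend
                port:
                  number: 80
```

Port the same resource to a different controller and the annotations are silently ignored, which is exactly the portability problem described above.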

Gateway API is the community’s answer to this. It replaces Ingress with structured, typed resources that can express rich routing behaviour natively, with a clean separation between platform teams and application teams. It has been stable since Kubernetes 1.28 and is the direction the entire ecosystem is moving.
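For comparison, a sketch of similar routing expressed as a Gateway API HTTPRoute (resource names are illustrative). Note that traffic splitting is a first-class, typed field rather than an annotation, and the route attaches to a Gateway owned by the platform team:

```yaml
# Gateway API: canary weighting is part of the typed HTTPRoute spec,
# not a controller-specific annotation.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: shop
spec:
  parentRefs:
    - name: shared-gateway   # Gateway owned by the platform team
  hostnames:
    - shop.example.com
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: shop-frontend
          port: 80
          weight: 90
        - name: shop-frontend-canary
          port: 80
          weight: 10
```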

NGINX Ingress Controller Retired

With Ingress on its way out, it was only natural that ingress-nginx, the most widely deployed Ingress controller in the Kubernetes ecosystem, would follow. The project had been running on a handful of volunteers, accumulating technical debt and serious security vulnerabilities for years. In November 2025, the Kubernetes SIG Network and Security Response Committee made it official: maintenance ends in March 2026, after which there will be no releases, no bug fixes, and no security patches.

Microsoft had no real choice but to follow suit. They set their own end date of November 2026 to give AKS customers extra runway, and began investing in what comes next.

What Does This Mean For You

If your AKS clusters use NGINX Ingress today, whether self-managed or through the Application Routing add-on, migration is unavoidable.

If you self-installed ingress-nginx via Helm, you are directly exposed to the upstream retirement in March 2026. After that date, any new vulnerability discovered in NGINX Ingress will remain unpatched indefinitely. Running it in production becomes an increasing security liability.

If you use the AKS Application Routing add-on (the managed NGINX option enabled with --enable-app-routing), you have until November 2026. Microsoft will keep patching critical security issues until then. This buys time to plan, but is not a long-term solution.

If you are not using NGINX Ingress at all (for example, if you are already on Traefik, Istio, or another controller) this announcement does not directly affect you, though the broader shift toward Gateway API is worth keeping an eye on.

At Trailhead, we have already been engaged by multiple clients navigating exactly this transition. Whether you are assessing the scope of your migration, need help translating complex annotation configurations to Gateway API resources, or want a structured migration plan that minimizes production risk, we can help. Reach out to our team and we will get you moving in the right direction.

The post The End of NGINX Ingress on AKS: What You Need to Know appeared first on Trailhead Technology Partners.

Read the whole story
alvinashcraft
11 seconds ago
reply
Pennsylvania, USA
Share this story
Delete

AI-Powered Navigation in Shiny MAUI Shell


What if your app could understand “My furnace is broken — it’s urgent!” and automatically open the right form with the description filled in and the priority set to Urgent? That’s exactly what the new AI integration in Shiny MAUI Shell does.

The Idea

Mobile apps have dozens of pages. Users have to know where things are, tap through menus, and manually fill in fields. But with AI chat becoming the norm, we asked: what if the AI could navigate your app for you?

Shiny Shell’s source generator already knows every route in your app and every parameter each page accepts. We just needed to make that metadata available to an AI model — and give it a way to act on what it discovers.

Two Tools, Any Number of Pages

Instead of registering a separate AI tool for every page (which doesn’t scale), we generate just two:

  1. GetAiToolApplicableGeneratedRoutes() — returns all routes that have intent descriptions and parameters. The AI calls this to discover what pages exist and what they do.
  2. NavigateToRoute() — accepts a route name and a Dictionary<string, string> of parameters. The AI calls this to navigate and pre-fill the form.

That’s it. Add a new page with [ShellMap] descriptions and [ShellProperty] inference hints, and the AI automatically discovers it. No tool registration changes needed.

Describe Intent, Not Pages

The key insight is that descriptions should express user intent, not page names:

// Good — the AI matches "my pipe burst" to this route
[ShellMap<WorkOrderPage>(description:
    "Use when the user reports something broken, malfunctioning, needing repair, maintenance, or service")]

// Bad — the AI has to guess what "Work order page" means
[ShellMap<WorkOrderPage>(description: "Work order page")]

Similarly, property descriptions tell the AI how to infer values from natural language. Properties can use real types — enums, ints, bools — and the generator handles conversion automatically:

public enum WorkOrderPriority { Low, Medium, High, Urgent }

[ShellProperty("Summarize what is broken based on what the user said", required: true)]
public string Description { get; set; } = string.Empty;

[ShellProperty("Infer urgency from the user's tone. Must be: Low, Medium, High, or Urgent", required: true)]
public WorkOrderPriority Priority { get; set; } = WorkOrderPriority.Medium;

The AI sends "Urgent" as a string, and the generated NavigateToRoute converts it to WorkOrderPriority.Urgent via case-insensitive Enum.Parse. The same works for int, bool, double, DateTime, Guid, and other common types.

Note that AI-compatible ViewModels do not need to implement IQueryAttributable. The generated NavigateToRoute sets [ShellProperty] properties directly on the ViewModel instance — no query attribute plumbing required.

What Gets Generated

The source generator produces GeneratedRouteInfo metadata with full parameter schemas:

public record GeneratedRouteInfo(
    string Route,
    string Description,
    GeneratedRouteParameter[] Parameters
);

public record GeneratedRouteParameter(
    string ParameterName,
    string Description,
    string TypeName,
    bool IsRequired
);

The AI model sees the route descriptions, parameter names, types, requirements, and inference hints — everything it needs to match intent and extract values.

Wiring It Up

AI extensions are now enabled by default — just install Microsoft.Extensions.AI:

dotnet add package Microsoft.Extensions.AI

Register the generated AiMauiShellTools class via the AddAiTools() extension:

builder.UseShinyShell(x => x
    .AddGeneratedMaps()
    .AddAiTools() // registers AiMauiShellTools as singleton
);

Then inject AiMauiShellTools wherever you need AI-powered navigation. It provides a Prompt property (pre-formatted route descriptions for seeding system messages) and a Tools property (ready-to-use AITool[]):

public class ChatViewModel(AiMauiShellTools aiTools)
{
    // Seed the system prompt
    history.Add(new ChatMessage(ChatRole.System, aiTools.Prompt));

    // Use the tools
    var options = new ChatOptions { Tools = [.. aiTools.Tools] };
}

The class name is customizable via the ShinyMauiShell_AiToolsClassName MSBuild property if AiMauiShellTools doesn’t fit your naming conventions.

Try It in the Sample

The sample app includes a full working demo with GitHub Copilot authentication. Users authenticate with their own GitHub account through the OAuth device flow, and the app uses the Copilot API as the chat backend. Try saying things like:

  • “My furnace is not working! URGENT” — opens the work order form with description and priority filled in
  • “I’d like to discuss a partnership. My name is Allan, email allan@test.com” — opens the contact form with fields populated

Get Started

Check out the AI Integration documentation for the full setup guide, or browse the sample code on GitHub.


One Contract, Three Transports — Mediator AI Tooling


What if you could write a single C# record and have it automatically become a fully typed AI tool — with zero adapter code? That’s what Shiny Mediator 6.3 delivers.

The Problem

Building AI tool calling today means writing repetitive adapter code. You define a JSON schema by hand, parse arguments from the LLM response, validate them, call your business logic, and serialize the result back. If you already have a mediator contract for the same operation, you’re duplicating intent across two representations. Multiply that by every tool your agent needs — ten, twenty, fifty tools — and it becomes a real maintenance problem.

Worse, the schema and the code drift apart. You rename a property in your contract but forget to update the JSON schema. You add a new required parameter but the tool adapter still treats it as optional. The LLM hallucinates a parameter name that used to exist, and your hand-written parser silently swallows the error. These bugs are subtle, hard to test, and only surface at runtime.

The Contract-First Approach

In Shiny Mediator, a contract is a plain record that describes an operation:

[Description("Get the current weather forecast for a given city")]
public record GetWeather(
    [property: Description("The city name to get weather for")]
    string City,
    [property: Description("Temperature unit: 'celsius' or 'fahrenheit'")]
    string Unit = "celsius"
) : IRequest<WeatherResult>;

public record WeatherResult(string City, double Temperature, string Unit, string Condition);

And a handler implements the logic:

[MediatorSingleton]
public partial class GetWeatherHandler : IRequestHandler<GetWeather, WeatherResult>
{
    public async Task<WeatherResult> Handle(
        GetWeather request, IMediatorContext context, CancellationToken ct)
    {
        // your logic here
    }
}

That’s the only code you write. From here, source generators take over.

AI Tool Generation

Add a [Description] attribute to your contract and set ShinyMediatorGenerateAITools=true in your project:

<PropertyGroup>
<ShinyMediatorGenerateAITools>true</ShinyMediatorGenerateAITools>
</PropertyGroup>

The source generator produces a fully typed AIFunction subclass compatible with Microsoft.Extensions.AI:

// auto-generated
internal sealed class GetWeatherAIFunction : AIFunction
{
    private readonly IMediator _mediator;

    private static readonly JsonElement _jsonSchema =
        JsonDocument.Parse("""
        {
            "type": "object",
            "properties": {
                "city": { "description": "The city name to get weather for", "type": "string" },
                "unit": { "description": "Temperature unit", "type": "string", "default": "celsius" }
            },
            "required": ["city"]
        }
        """).RootElement.Clone();

    public override string Name => "GetWeather";
    public override string Description => "Get the current weather forecast for a given city";
    public override JsonElement JsonSchema => _jsonSchema;

    protected override async ValueTask<object?> InvokeCoreAsync(
        AIFunctionArguments arguments, CancellationToken cancellationToken)
    {
        var json = JsonSerializer.SerializeToElement(arguments);
        var contract = new GetWeather(
            City: json.GetProperty("city").GetString()!,
            Unit: json.TryGetProperty("unit", out var u) && u.ValueKind != JsonValueKind.Null
                ? u.GetString()! : "celsius"
        );
        var (_, result) = await _mediator.Request<WeatherResult>(contract, cancellationToken);
        return result;
    }
}

A registration extension is also generated:

builder.Services.AddShinyMediator(x => x
    .AddMediatorRegistry()
    .AddGeneratedAITools() // registers every [Description] contract as an AITool
);

Then pass the tools to any IChatClient:

var tools = services.GetServices<AITool>().ToList();
var options = new ChatOptions { Tools = tools };
var response = await chatClient.GetResponseAsync(history, options);

Middleware Runs on AI Tool Calls Too

Because the generated AI tools dispatch through the mediator pipeline, every middleware you’ve already configured applies to AI tool calls automatically. Logging, validation, authorization, exception handling, caching — all of it fires without any extra wiring.

This is a significant advantage over hand-rolled AIFunction implementations. When you write a tool adapter manually, it typically calls your service layer directly, bypassing cross-cutting concerns. With the mediator approach, an AI tool call follows the same pipeline as a UI-triggered action or an API call. Your audit log captures it. Your validation middleware rejects bad input before the handler runs. Your error handling middleware catches exceptions and returns structured errors the LLM can interpret.

You can even write middleware that targets AI calls specifically — for example, injecting a MediatorContext value that tells the handler the call originated from an LLM, so you can apply tighter authorization or rate limiting for AI-initiated operations.

Scaling to Many Tools

The real power shows when your agent needs many tools. Instead of maintaining dozens of AIFunction subclasses with hand-written schemas, you just add [Description] to your existing contracts. Every contract with a description attribute becomes a tool at the next build.

Adding a new tool to your agent is the same workflow as adding any new mediator operation:

  1. Define the contract record with [Description]
  2. Implement the handler
  3. Done — the tool is registered automatically

No schema files to maintain. No adapter classes to write. No registration code to update. The source generator handles the JSON schema, argument parsing, DI wiring, and AIFunction implementation.

This also means removing a tool is just deleting the [Description] attribute (or the contract itself). There are no orphaned adapters or stale schema definitions to clean up.

Beyond AI: The Same Contract Powers HTTP Too

The same contract-first approach extends beyond AI tooling. Shiny Mediator also generates HTTP clients and ASP.NET endpoints from your contracts — meaning a single record and handler can serve as an AI tool, a typed HTTP client, and a REST endpoint simultaneously. The transports are generated; you write the logic once.

Why This Matters

Traditional tool-calling setups require you to maintain parallel definitions:

Layer                   Without Mediator             With Mediator
Business logic          Handler class                Handler class
AI tool schema          Manual JSON schema           Generated from contract
AI tool adapter         Manual AIFunction subclass   Generated
Argument parsing        Manual deserialization       Generated
DI registration         Manual for each tool         Generated
Middleware/validation   Manual per tool              Automatic via pipeline

With the contract-first approach, adding a new capability to your application — whether it’s exposed as an AI tool, an HTTP endpoint, or both — is one record and one handler.

Full AOT Compliance

The generated AIFunction classes are fully Native AOT compatible. Here’s what makes that possible:

No reflection. The generator reads [Description] attributes, property types, nullability, and default values at compile time. It emits direct property access code — json.GetProperty("city").GetString()! — instead of relying on JsonSerializer.Deserialize<T>() or reflection-based binding.

Static JSON schema. The schema is a compile-time constant string parsed once into a JsonElement on first use. There’s no runtime schema construction, no JsonSerializerOptions configuration, and no dynamic type inspection.

Constructor-based hydration. The generated code constructs the contract using its primary constructor with named arguments. No Activator.CreateInstance, no FormatterServices, no property setters via reflection.

Concrete types throughout. Each generated class is a sealed, non-generic concrete type. The DI registrations are explicit AddSingleton<AITool>(sp => new GetWeatherAIFunction(...)) calls — no open generics or service descriptor scanning at runtime.

This means your AI tools work in trimmed, ahead-of-time compiled applications — including .NET MAUI apps targeting iOS and Android — without linker warnings or runtime failures. The same tools that power your cloud API also run on-device in a fully native binary.

Supported Type Mappings

The generator handles the full range of C# types in your contracts:

C# Type                       JSON Schema                       Notes
string, Guid, Uri, DateTime   "string"
bool                          "boolean"
int, long, short, byte        "integer"
float, double, decimal        "number"
enum                          "string" with "enum" array        All values listed for the LLM
T[], IEnumerable<T>           "array"
Nullable types (T?)           Omitted from "required"
Default values                Included as "default" in schema   Fallback used when LLM omits the parameter
ICommand contracts are also supported — the generated tool returns a success message string instead of a typed result.

Getting Started

  1. Add the [Description] attribute to your contracts and their properties
  2. Set <ShinyMediatorGenerateAITools>true</ShinyMediatorGenerateAITools> in your project file
  3. Reference Microsoft.Extensions.AI
  4. Call .AddGeneratedAITools() during mediator setup
  5. Resolve IEnumerable<AITool> from DI and pass to your chat client

Every contract with a [Description] attribute automatically becomes a tool. Add a new contract, and the next build picks it up — no registration changes, no schema files, no adapter classes.

Check out the Sample.CopilotConsole for a working example that wires up AI tools with a chat loop, or browse the Mediator documentation for the full setup guide.


Turn Any Interface Into an AI Tool — Shiny DI 3.0


What if every service interface you already have could become an AI tool with a single attribute? Shiny Extensions DI 3.0 makes that happen — no adapter classes, no hand-rolled schemas, no registration boilerplate. Mark your interface with [Tool], add [Description] to the methods that matter, and the source generator handles the rest.

The Problem

You’ve built your services. Clean interfaces, proper DI registration, everything wired up. Now someone asks you to expose a few of those operations as AI tools for an LLM agent. Suddenly you’re writing AIFunction subclasses by hand — one per operation — each with a constructor that takes the service, a metadata property with hand-written parameter schemas, and an InvokeCoreAsync override that extracts arguments from a dictionary and forwards them to your service method.

For one or two tools, it’s fine. For ten or twenty, it’s tedious. And every time you change a method signature, you have to remember to update the corresponding tool class. The schema drifts, the argument parsing breaks, and the bugs only show up when the LLM calls the tool at runtime.

The Solution: [Tool] + [Description]

[Tool]
[Description("Manages customer orders")]
public interface IOrderService
{
    [Description("Places a new order for a customer")]
    Task<OrderResult> PlaceOrderAsync(
        [Description("The customer identifier")] Guid customerId,
        [Description("The product SKU")] string sku,
        [Description("Number of units to order")] int quantity
    );

    [Description("Cancels an existing order")]
    Task CancelOrderAsync(
        [Description("The order to cancel")] Guid orderId,
        [Description("Reason for cancellation")] string reason
    );

    // No [Description] — not exposed as a tool
    Task<List<Order>> GetInternalAuditLogAsync();
}

That’s it. The source generator produces a fully typed AIFunction subclass for each described method, wires up the parameter metadata, and generates a registration extension — all at compile time.

What Gets Generated

For PlaceOrderAsync above, the generator emits a class like this:

public class IOrderServicePlaceOrderAsyncAITool : AIFunction
{
    private readonly IOrderService _service;

    private static readonly AIFunctionMetadata _metadata =
        new AIFunctionMetadata("IOrderServicePlaceOrderAsync")
        {
            Description = "Places a new order for a customer",
            Parameters = new AIFunctionParameterMetadata[]
            {
                new("customerId")
                {
                    Description = "The customer identifier",
                    ParameterType = typeof(Guid),
                    IsRequired = true
                },
                new("sku")
                {
                    Description = "The product SKU",
                    ParameterType = typeof(string),
                    IsRequired = true
                },
                new("quantity")
                {
                    Description = "Number of units to order",
                    ParameterType = typeof(int),
                    IsRequired = true
                }
            }
        };

    public Guid CustomerId { get; set; }
    public string Sku { get; set; }
    public int Quantity { get; set; }

    public IOrderServicePlaceOrderAsyncAITool(IOrderService service)
    {
        _service = service;
    }

    public override AIFunctionMetadata Metadata => _metadata;

    protected override async Task<object?> InvokeCoreAsync(
        IEnumerable<KeyValuePair<string, object?>>? arguments,
        CancellationToken cancellationToken)
    {
        // argument extraction and service call
        return await _service.PlaceOrderAsync(
            this.CustomerId, this.Sku, this.Quantity);
    }
}

A second class is generated for CancelOrderAsync. The GetInternalAuditLogAsync method is skipped because it has no [Description].

Registration

All generated tools are registered with a single call:

services.AddGeneratedAITools();

This registers each tool as Transient<AITool, GeneratedToolClass>. You can then resolve all tools and pass them to any IChatClient:

var tools = serviceProvider.GetServices<AITool>().ToList();
var options = new ChatOptions { Tools = tools };
var response = await chatClient.GetResponseAsync(messages, options);

Conditional Generation

The AI tool code is only generated when Microsoft.Extensions.AI is referenced in your project. If you don’t reference it, the [Tool] attribute still compiles (it’s just an attribute), but no AIFunction classes or registration code are emitted. This means existing projects that add the DI package won’t get unexpected dependencies.

AOT-Safe Argument Extraction

The generated InvokeCoreAsync handles the JsonElement-vs-already-deserialized argument problem that trips up most hand-written AI tools. For every standard type, the generator emits a direct JsonElement accessor:

Type                          Extraction                               Reflection-free
string                        GetString()                              Yes
int, long, short, byte        GetInt32(), GetInt64(), etc.             Yes
bool                          GetBoolean()                             Yes
double, float, decimal        GetDouble(), GetSingle(), GetDecimal()   Yes
Guid                          GetGuid()                                Yes
DateTime                      GetDateTime()                            Yes
DateTimeOffset                GetDateTimeOffset()                      Yes
DateOnly, TimeOnly, TimeSpan  Parse(GetString())                       Yes
Enums                         Enum.Parse<T>(GetString())               Yes
Complex types                 JsonSerializer.Deserialize<T>()          Needs JsonSerializerContext
If the argument arrives as a JsonElement (common when the framework hasn’t pre-deserialized), the correct accessor is used. If it arrives already typed (some frameworks do this), a direct cast is used. Both paths are handled with a single is JsonElement check — no try/catch, no Convert.ChangeType.

CancellationToken Handling

If your service method accepts a CancellationToken, the generator does the right thing automatically:

[Description("Searches products")]
Task<List<Product>> SearchAsync(
    [Description("Search query")] string query,
    CancellationToken cancellationToken // not exposed as a tool parameter
);

The CancellationToken is excluded from the tool’s parameter metadata and properties. In InvokeCoreAsync, it’s passed through from the framework’s cancellation token — not extracted from the argument dictionary.

Methods Without [Description] Are Skipped

Only methods with [Description] become tools. This gives you fine-grained control over what’s exposed to the LLM. Internal methods, admin operations, or anything you don’t want an AI agent calling — just don’t add the attribute.

Works With Your Existing DI Setup

The [Tool] attribute goes on interfaces, while [Singleton] / [Scoped] / [Transient] go on implementation classes — same as before. You keep using AddGeneratedServices() for your service registrations and add AddGeneratedAITools() alongside it:

services.AddGeneratedServices();
services.AddGeneratedAITools(); // only if M.E.AI is referenced

The two generators are independent. AI tool generation doesn’t affect or depend on your service registrations.

Getting Started

  1. Add [Tool] to the interface
  2. Add [Description] to the interface and the methods you want exposed
  3. Add [Description] to parameters (optional but recommended — it helps the LLM)
  4. Reference Microsoft.Extensions.AI in your project
  5. Call services.AddGeneratedAITools() at startup
  6. Resolve IEnumerable<AITool> and pass to your chat client

Check the DI documentation for the full setup guide and the release notes for the complete changelog.


Introducing the Secret Guard Plugin


Secret Guard cover

Hardcoded secrets still show up in Jenkins for very ordinary reasons.

A token is pasted into a job field during a quick test. A webhook URL with a secret query parameter stays in config.xml. An inline Pipeline header works once and is never revisited. These cases are easy to introduce and easy to overlook.

Once a secret is stored in job configuration or a Jenkinsfile, it becomes harder to rotate and easier to expose through exports, backups, logs, or screenshots.

The Secret Guard Plugin was created to help Jenkins administrators and job authors catch those patterns earlier.

What it checks

Secret Guard is a Jenkins plugin that checks Jenkins jobs and Pipeline definitions for hardcoded secret leakage risks.

It scans common high-risk locations such as:

  • Job config.xml
  • inline Pipeline scripts
  • Pipeline-from-SCM Jenkinsfiles when lightweight SCM access is available
  • multibranch Pipeline Jenkinsfiles when lightweight SCM access is available
  • parameter default values
  • environment variable definitions
  • command content such as sh, bat, powershell, and HTTP-style request usage

It can be used in several practical ways:

  • save-time enforcement for job configuration changes
  • build-time scanning
  • job-level Scan Now
  • global Scan All Jobs

The plugin stores masked results only, so administrators can review findings without persisting raw secret values in plugin reports.

The global Secret Guard page gives administrators a single place to review the latest scan results and run an on-demand scan across jobs.

Secret Guard root action page


One year on: Progress on our European digital commitments


Europe is moving fast to capture the benefits of artificial intelligence, recognizing its potential to raise productivity, strengthen competitiveness, and help modernize public services. At the same time, organizations across Europe are focused on digital sovereignty and resilience: retaining control over their data and critical operations in a period of geopolitical volatility.

These priorities go together. That is why one year ago, we announced a set of European digital commitments to respond to these expectations. They focused on five areas:

  1. Help build a broad AI and cloud ecosystem across Europe
  2. Uphold Europe’s digital resilience even when there is geopolitical volatility
  3. Continue to protect the privacy of European data
  4. Help protect and defend Europe’s cybersecurity
  5. Help strengthen Europe’s economic competitiveness, including for open source

Together, they reflect a simple principle: Europe should be able to use global technology at scale, under European rules, with confidence that it will remain available, secure, and under customer control.

One year on, we take stock of how we’ve put those commitments into practice.

1. Building a broad AI and cloud ecosystem across Europe

A year ago, we detailed plans to increase our European datacenter capacity by 40%, expand cloud operations across 16 European countries, and reach more than 200 datacenters on the continent by 2027. Since then, we have announced new multi-billion euro investments in Portugal, Norway, and the UK, adding to earlier investments in countries including Switzerland. We also launched new cloud regions in Austria, Denmark, and Belgium. Together, this growing capacity is helping European organizations access cloud and AI capabilities closer to home while supporting sustainable growth through investments such as matching 100% of our annual global electricity consumption with renewable energy.

We emphasize now, as we did when first announcing our digital commitments, that European laws apply to our business practices in Europe, just as local laws govern local practices elsewhere in the world. We remain committed not only to building digital infrastructure for Europe, but also to respecting the role that laws across Europe play in regulating our products and services.

2. Upholding Europe’s digital resilience in a volatile geopolitical environment

For many customers, digital sovereignty is now about more than where data is stored. Institutions and businesses across Europe also want to know whether they can rely on critical digital services when geopolitical pressures rise, and whether they can adopt advanced AI capabilities without losing control.

We have made our digital resilience commitments binding with European national governments and the European Commission, including a commitment to promptly and vigorously contest in court any order by any government to suspend or cease cloud operations in Europe.

We also committed to continuity measures, including expanded partnerships with European cloud partners that can support our customers’ operational continuity in extreme scenarios. Reinforcing this approach, we launched a European resiliency partnership with Delos Cloud to safeguard business continuity in Europe in times of crisis. This work also supports closer cooperation among Europe’s sovereign cloud providers, including crisis response coordination and continuity options designed to help customers maintain operations even in the event of geopolitical disruptions.

We also expanded our strategic partnership with Capgemini to offer fully integrated, managed sovereign cloud services. In addition, we are deepening our collaboration with Accenture to help organizations design and implement sovereign cloud and AI solutions, supporting customers in highly regulated sectors as they balance innovation with control, compliance, and resilience.

To further strengthen governance and operational oversight in Europe, Microsoft’s European activities are now overseen by a board of directors composed exclusively of European nationals, reinforcing regional accountability and our commitments to cybersecurity, resilience, and compliance under European law.

3. Protecting the privacy of European data

Privacy, transparency, and customer control remain central to Europe’s expectations for cloud and AI. That’s why over the past year we have built a portfolio of sovereign cloud options, spanning public cloud, private cloud, and national partner solutions, so that customers can choose the level of control and oversight that best fits their legal, operational, and risk requirements. This portfolio spans infrastructure, productivity, and AI workloads across cloud, hybrid, or fully local deployments.

We have continued to implement our Defending Your Data Initiative, including our commitment to challenge government data requests for EU public‑sector or commercial customers where we have a lawful basis to do so.

We also completed the EU Data Boundary, enabling European customer data to be stored and processed within the EU and EFTA regions.

In order to further reinforce transparency and oversight, we announced Data Guardian, which ensures that all remote access by Microsoft engineers to systems that store and process customer data in Europe is approved and monitored by personnel residing in Europe and logged in a tamper-evident ledger.

Over the past year, we have strengthened our sovereign solutions through new contractual assurances, closer partnerships with European providers, and expanded customer support.

The Microsoft Sovereign Cloud has been enhanced to help customers meet Europe’s growing expectations for control, resilience, and compliance without slowing down innovation. Recent updates add new governance and operational controls, expand productivity options for regulated environments, and strengthen encryption, while making it easier to use advanced AI capabilities that are fully customer-controlled. This includes solutions where AI models can run on customer-owned infrastructure with limited connectivity or even in fully disconnected environments. Earlier this week, we added new capabilities to our private cloud offering allowing organizations to run much larger workloads locally.

Sovereign Landing Zone provides a cloud architecture that embeds governance, compliance, and sovereign controls, helping European organizations deploy cloud environments that align with European regulatory requirements, with less complexity.

External validation of this approach continues to grow. Microsoft was named a leader in Forrester’s latest assessment of sovereign cloud platforms, recognizing the strength of our public cloud, private cloud, and partner-operated approach.

To help customers put this into practice, we opened our first three European Sovereignty and Resilience Studios in Munich, Brussels, and Amsterdam, where governments and enterprises work side by side with Microsoft’s engineers, policy experts, and security teams to capture the full promise of cloud and AI. Additional studios are planned to open in Microsoft’s nine other Innovation Hubs across Europe.

4. Helping protect and defend Europe’s cybersecurity

Cyber threats don’t stop at national borders, and Europe’s security depends on strong public‑private cooperation. During the last year, we have rolled out our European Security Program (ESP), an offering available at no cost to governments across the UK, EU, EFTA, and EU accession countries. It expands threat intelligence sharing and prioritizes new partnerships and investments to help protect critical infrastructure, disrupt cybercrime, and strengthen Europe’s collective ability to respond to attacks.

This program is live in 27 countries across Europe, providing support at no cost within a clear scope through structured briefings, early warnings, and tailored information sharing relevant to each country's environment.

We have provided cybersecurity support to NATO, Ukraine, and other European governments, including threat intelligence, election protection, and disrupting attacks targeting European governments, companies, and citizens.

Since the start of Russia’s full-scale invasion of Ukraine in 2022, when we helped move critical data and services to secure datacenters across Europe and defend against sustained cyberattacks and eventual kinetic attacks, Microsoft has continued to support the country without interruption, providing more than $600 million in free technology, security, and financial assistance.

We have also expanded collaboration by embedding investigators with Europol’s European Cybercrime Centre (EC3). Together, we are translating technical threat intelligence into coordinated operational action, linking visibility into cybercriminal infrastructure with law enforcement’s ability to investigate, coordinate, and disrupt. This model underpinned recent cybercrime takedowns, including Tycoon 2FA, Lumma Stealer, and RedVDS. And, through our partnership with CyberPeace Institute, more than 300 European nonprofits are receiving cybersecurity support.

All of this work was reinforced in July with the appointment of Freddy Dezeure as Deputy Chief Information Security Officer, a European national based in Europe, who is coordinating Microsoft’s compliance with European cybersecurity regulations. Our European executive cybersecurity presence and oversight are closely aligned with Microsoft’s broader cybersecurity governance, combining European guidelines with globally consistent security practices.

5. Strengthening Europe’s economic competitiveness, including for open source

We continue to support open ecosystems, including open source, to keep our AI and cloud platforms accessible and interoperable, and to give customers deployment options that fit their needs. There are almost 25 million European software developers active on GitHub, making more than 155 million contributions to public projects in the last year alone. Through Microsoft Foundry, customers can choose from more than 11,000 AI models, both open source and commercial, and run them in sovereign public or private clouds, from the cloud to the edge. This enables customers to deploy the same Microsoft Foundry model catalog within sovereignty‑aligned infrastructure.

But it is also vital that we support AI solutions that are more multilingual and attuned to cultural context. As part of our commitment to advance European commerce and culture, we launched LINGUA in September 2025 to support projects that collect high‑quality speech and text datasets for Europe’s underrepresented languages. Following an open call, we selected 12 projects spanning 16 languages and dialects across 10 countries, bringing together universities, nonprofits, a government language center, and a public broadcaster to create and digitize open datasets, preserve heritage languages, and develop new evaluation resources for multilingual AI.

We have new AI for Culture projects to digitally preserve iconic European sites and artifacts, including a digital replica of Notre Dame with the French Institut du Patrimoine and Iconem, and we are working with leading institutions to digitize historic cinematic model opera sets and enable access to metadata associated with millions of artifacts. We are also working with the Vatican Library on digitization and AI analysis of historic documents. All of this builds on preservation efforts underway since 2019 for landmarks such as St. Peter’s Basilica in Rome, Mont Saint Michel in France, and Ancient Olympia in Greece.

Relatedly, Céline Geissmann was chosen to lead our Microsoft Open Innovation Center in Strasbourg to work at the intersection of AI, languages, culture, open data, and innovation.

Staying accountable as Europe’s digital landscape evolves

These commitments are our North Star for how we engage in Europe, grounded in European law and values, shaped by European priorities, and designed to progress over time.

As Europe’s digital and geopolitical context continues to evolve, we will keep engaging with policymakers, regulators, customers, and partners to test whether what we are delivering matches what Europe needs. Where it does not, we will adapt.

Trust cannot be claimed. It needs to be earned through our actions, day by day. We are committed to earning that trust by listening, acting, and delivering for Europe.

The post One year on: Progress on our European digital commitments appeared first on Microsoft On the Issues.
