It’s that time of year: A new version of the .NET platform has shipped. .NET 10 landed last month as an LTS release, with support through November 2028.
To complement Jon Hilton’s .NET 10 Has Arrived—Here’s What Changed for Blazor article and Assis Zang’s What’s New in .NET 10 for ASP.NET Core, let’s look at .NET 10 improvements from the viewpoint of an API developer.
Throughout this post, we’ll walk through updates using a real-world example: an order management API with validation, OpenAPI docs and Entity Framework Core.
NOTE: The examples use Minimal APIs for brevity, but most of these improvements can be used for controller-based APIs as well.
Before .NET 10, teams building Minimal APIs ended up rolling their own validation. The result? Endpoint code that was more about policing inputs than implementing business logic.
Here’s a simplified example that shows the problem.
public static class OrderEndpoints
{
    public static void MapOrderEndpoints(this WebApplication app)
    {
        var group = app.MapGroup("/api/orders");
        group.MapPost("/", CreateOrder);
        group.MapGet("/{id}", GetOrder);
    }

    private static async Task<IResult> CreateOrder(
        CreateOrderRequest request,
        OrderDbContext db)
    {
        if (string.IsNullOrWhiteSpace(request.CustomerEmail))
            return Results.BadRequest("Customer email is required");

        if (request.Items is null || request.Items.Count == 0)
            return Results.BadRequest("Order must contain at least one item");

        foreach (var item in request.Items)
        {
            if (item.Quantity < 1)
                return Results.BadRequest("Quantity must be at least 1");

            if (item.ProductId <= 0)
                return Results.BadRequest("Invalid product ID");
        }

        var order = new Order
        {
            CustomerEmail = request.CustomerEmail,
            Items = request.Items.Select(i => new OrderItem
            {
                ProductId = i.ProductId,
                Quantity = i.Quantity
            }).ToList(),
            CreatedAt = DateTime.UtcNow
        };

        db.Orders.Add(order);
        await db.SaveChangesAsync();

        return Results.Created($"/api/orders/{order.Id}", order);
    }

    private static async Task<IResult> GetOrder(int id, OrderDbContext db)
    {
        var order = await db.Orders.FindAsync(id);
        return order is null ? Results.NotFound() : Results.Ok(order);
    }
}
There were ways around this: you could use filters, helper methods or third-party validators. Even so, it was frustrating that Minimal APIs didn’t have the baked-in validation experience you have with controller-based APIs.
.NET 10 adds built-in validation support for Minimal APIs. You can enable it with one registration call:
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddDbContext<OrderDbContext>();
// Enables built-in validation for Minimal APIs
builder.Services.AddValidation();
var app = builder.Build();
Once enabled, ASP.NET Core automatically applies DataAnnotations validation to Minimal API parameters. This includes query, header and request body binding.
You can also disable validation for a specific endpoint using DisableValidation(), which is handy for internal endpoints or partial updates where you intentionally accept incomplete payloads.
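For instance, a hypothetical draft-save endpoint that intentionally accepts partial data could opt out like this (the route and handler name are illustrative):

group.MapPatch("/{id}/draft", SaveDraftOrder)
    .DisableValidation();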
With validation handled by the framework and the attributes added to our models, endpoints can focus on business logic.
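Here’s a rough sketch of what those annotated request models might look like; the exact ranges and messages are illustrative, chosen to line up with the validation response shown next:

using System.ComponentModel.DataAnnotations;

public class CreateOrderRequest
{
    [Required, EmailAddress]
    public string CustomerEmail { get; set; } = string.Empty;

    [MinLength(1, ErrorMessage = "Order must contain at least one item")]
    public List<OrderItemRequest> Items { get; set; } = new();
}

public class OrderItemRequest
{
    [Range(1, int.MaxValue, ErrorMessage = "Invalid product ID")]
    public int ProductId { get; set; }

    [Range(1, 1000, ErrorMessage = "Quantity must be between 1 and 1000")]
    public int Quantity { get; set; }
}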
When validation fails, ASP.NET Core returns a standardized ProblemDetails response with an errors dictionary.
A typical response looks like this.
{
  "type": "https://tools.ietf.org/html/rfc9110#section-15.5.1",
  "title": "One or more validation errors occurred.",
  "status": 400,
  "errors": {
    "CustomerEmail": [
      "The CustomerEmail field is required."
    ],
    "Items[0].Quantity": [
      "Quantity must be between 1 and 1000"
    ]
  }
}
.NET 10’s built-in OpenAPI document generation supports OpenAPI 3.1 and JSON Schema 2020-12. The default OpenAPI version for generated documents is now 3.1.
OpenAPI 3.1 aligns better with modern JSON schema expectations and improves how tools interpret your schemas.
using Microsoft.AspNetCore.OpenApi;
using Microsoft.OpenApi.Models;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddOpenApi(options =>
{
    // OpenAPI 3.1 is the default, but you can be explicit
    options.OpenApiVersion = Microsoft.OpenApi.OpenApiSpecVersion.OpenApi3_1;

    options.AddDocumentTransformer((document, context, cancellationToken) =>
    {
        document.Info = new OpenApiInfo
        {
            Title = "Order Management API",
            Version = "v1",
            Description = "Enterprise order processing system",
            Contact = new OpenApiContact
            {
                Name = "API Support",
                Email = "api-support@company.com"
            }
        };
        return Task.CompletedTask;
    });
});

var app = builder.Build();

if (app.Environment.IsDevelopment())
{
    // JSON at /openapi/v1.json
    app.MapOpenApi();

    // YAML at /openapi/v1.yaml
    app.MapOpenApi("/openapi/{documentName}.yaml");
}

app.Run();
Note the YAML route: in .NET 10, you generate YAML by using a route ending in .yaml or .yml, typically with {documentName} in the path.
OpenAPI 3.0 often expressed nullability using nullable: true.
components:
  schemas:
    ShippingAddress:
      type: object
      nullable: true
      properties:
        street:
          type: string
          nullable: true
        city:
          type: string
OpenAPI 3.1 allows us to use union types:
components:
  schemas:
    ShippingAddress:
      type: ["object", "null"]
      properties:
        street:
          type: ["string", "null"]
        city:
          type: string
This tends to play nicer with tooling that relies heavily on JSON Schema semantics, such as OpenAPI Generator and NSwag.
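As a minimal sketch, a C# model along these lines would produce that kind of schema when nullable reference types are enabled (the property names match the YAML above; the class itself is illustrative):

public sealed class ShippingAddress
{
    // Nullable reference type -> type: ["string", "null"] in OpenAPI 3.1
    public string? Street { get; set; }

    // Non-nullable -> type: string
    public string City { get; set; } = string.Empty;
}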
Global query filters are a staple for multi-tenant apps and soft deletes. The classic problem was granularity: IgnoreQueryFilters() disabled all filters at once.
EF Core 10 introduces named query filters, so you can selectively disable one filter while keeping another.
public class OrderDbContext : DbContext
{
    private readonly int _tenantId;

    public OrderDbContext(DbContextOptions<OrderDbContext> options, ITenantProvider tenant)
        : base(options)
        => _tenantId = tenant.TenantId;

    public DbSet<Order> Orders => Set<Order>();

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Order>()
            .HasQueryFilter("SoftDelete", o => !o.IsDeleted)
            .HasQueryFilter("TenantIsolation", o => o.TenantId == _tenantId);
    }
}
Now an admin endpoint can disable soft delete without disabling tenant isolation:
public static class AdminEndpoints
{
    public static void MapAdminEndpoints(this WebApplication app)
    {
        var group = app.MapGroup("/api/admin")
            .RequireAuthorization("Admin");

        group.MapGet("/orders/deleted", GetDeletedOrders)
            .WithSummary("Gets deleted orders for the current tenant only");
    }

    private static async Task<IResult> GetDeletedOrders(OrderDbContext db)
    {
        var deletedOrders = await db.Orders
            .IgnoreQueryFilters(new[] { "SoftDelete" })
            .Where(o => o.IsDeleted)
            .Select(o => new { o.Id, o.CustomerEmail, o.DeletedAt })
            .ToListAsync();

        return Results.Ok(deletedOrders);
    }
}
This capability is small, but it’s exactly the kind of real-world safety improvement that helps prevent cross-tenant data leaks.
.NET 10 ships alongside C# 14. For API developers, a few features immediately reduce boilerplate and improve readability.
C# 14 introduces field-backed properties, where you can reference the compiler-generated backing field directly using the field keyword.
public sealed class Order
{
    public string CustomerEmail
    {
        get;
        set
        {
            if (string.IsNullOrWhiteSpace(value))
                throw new ArgumentException("Email cannot be empty.", nameof(value));

            field = value.Trim().ToLowerInvariant();
        }
    }
}
C# 14 allows null-conditional operators (?. and ?[]) on the left-hand side of assignments and compound assignments. The right-hand side is evaluated only when the receiver isn’t null. This is great for processing patches.
public sealed record OrderPatchRequest(string? NewStatus, int? NewPriority, string? NewCity);

public static class OrderPatchService
{
    public static void ApplyPatch(Order? order, OrderPatchRequest? patch)
    {
        if (patch is null) return;

        order?.Status = patch.NewStatus ?? order?.Status;
        order?.Priority = patch.NewPriority ?? order?.Priority;
        order?.Shipping?.City = patch.NewCity ?? order?.Shipping?.City;
    }
}
Note: Increment/decrement operators (++, --) aren’t allowed with null-conditional assignments. Compound assignments like += are supported.
For more details on this feature, check out the post Write Cleaner Code with C# 14’s Null-Conditional Assignment Operator.
C# 14’s headline feature is extension members, which add extension properties, static extension members and even operators using the new extension block syntax.
Don’t you just love the syntax?
public static class OrderExtensions
{
    extension(Order source)
    {
        public decimal TotalValue =>
            source.Items.Sum(i => i.Quantity * i.UnitPrice);

        public bool IsHighValue => source.TotalValue > 1000m;

        public string Summary =>
            $"Order #{source.Id}: {source.Items.Count} items, ${source.TotalValue:F2} total";
    }

    extension(Order)
    {
        public static Order CreateEmpty(int tenantId) => new Order
        {
            TenantId = tenantId,
            CreatedAt = DateTime.UtcNow,
            Items = new List<OrderItem>(),
            Status = "Draft"
        };
    }
}
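Call sites read as if these members were declared on Order itself. A quick usage sketch (OrderItem’s UnitPrice is assumed from the extension block above):

var order = Order.CreateEmpty(tenantId: 42);   // static extension member, called on the type
order.Items.Add(new OrderItem { ProductId = 1, Quantity = 3, UnitPrice = 499.99m });

Console.WriteLine(order.Summary);              // extension property
if (order.IsHighValue)                         // extension property
    Console.WriteLine("Flag for manual review");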
For more details on extension members, check out Extension Properties: C# 14’s Game-Changing Feature for Cleaner Code. (I don’t get paid by the click, I swear.)
ASP.NET Core in .NET 10 adds a built-in ServerSentEvents result for Minimal APIs, so you can stream updates over a single HTTP connection without manually formatting frames.
using System.Runtime.CompilerServices;
using Microsoft.AspNetCore.Http.HttpResults;
using Microsoft.EntityFrameworkCore;

public record OrderStatusUpdate(int OrderId, string Status, string Message, DateTime Timestamp);

public static class OrderStreamingEndpoints
{
    public static void MapStreamingEndpoints(this WebApplication app)
    {
        app.MapGet("/api/orders/{id:int}/status-stream", StreamOrderStatus)
            .WithSummary("Stream real-time order status updates");
    }

    private static ServerSentEventsResult<OrderStatusUpdate> StreamOrderStatus(
        int id,
        OrderDbContext db,
        CancellationToken ct)
    {
        async IAsyncEnumerable<OrderStatusUpdate> GetUpdates(
            [EnumeratorCancellation] CancellationToken cancellationToken)
        {
            string? last = null;

            while (!cancellationToken.IsCancellationRequested)
            {
                var order = await db.Orders.AsNoTracking()
                    .FirstOrDefaultAsync(o => o.Id == id, cancellationToken);

                if (order is null)
                {
                    yield return new(id, "ERROR", "Order not found", DateTime.UtcNow);
                    yield break;
                }

                if (!string.Equals(order.Status, last, StringComparison.Ordinal))
                {
                    yield return new(order.Id, order.Status, $"Order is now {order.Status}", DateTime.UtcNow);
                    last = order.Status;
                }

                if (order.Status is "Delivered" or "Cancelled")
                    yield break;

                await Task.Delay(TimeSpan.FromSeconds(5), cancellationToken);
            }
        }

        return TypedResults.ServerSentEvents(GetUpdates(ct), eventType: "order-status");
    }
}
Client-side consumption is quite simple, too:
<script>
  function trackOrderStatus(orderId) {
    const es = new EventSource(`/api/orders/${orderId}/status-stream`);

    // The server emits named "order-status" events, so listen for that event
    // type; the default onmessage handler only fires for unnamed events.
    es.addEventListener("order-status", (event) => {
      const update = JSON.parse(event.data);
      console.log(update);

      if (update.status === "Delivered" || update.status === "Cancelled") {
        es.close();
      }
    });

    es.onerror = () => console.error("SSE connection error");

    return es;
  }
</script>
.NET 10 is an LTS release. If you’re starting a new project, it’s a no-brainer.
If you’re upgrading from .NET 8 or .NET 9, a few things to keep in mind:
- Built-in AddValidation support can remove a surprising amount of custom validation code.
- OpenAPI documents now default to version 3.1, and YAML output is available via .yaml/.yml routes.
- C# 14 features such as field and null-conditional assignment will show up in your codebase quickly.
- The upgrade path is generally smooth: change the target framework in your .csproj (as shown below), run tests, fix warnings and ship.
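As a minimal sketch, retargeting usually comes down to bumping the target framework moniker in the project file (the other properties shown are just common defaults):

<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>net10.0</TargetFramework>
    <Nullable>enable</Nullable>
    <ImplicitUsings>enable</ImplicitUsings>
  </PropertyGroup>
</Project>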
.NET 10 delivers meaningful improvements for API developers through thoughtful enhancements rather than revolutionary changes. The combination of built-in Minimal API validation, OpenAPI 3.1 and C# 14 quality-of-life features adds up to a more productive and safer development experience.
Happy coding!
High availability (HA) is a core pillar of Azure SQL Database. Unlike traditional SQL Server deployments—where availability architectures must be designed, implemented, monitored, and maintained manually—Azure SQL Database delivers built‑in high availability by design.
By abstracting infrastructure complexity while still offering enterprise‑grade resilience, Azure SQL Database enables customers to achieve strict availability SLAs with minimal operational overhead.
In this article, we’ll look at what high availability means in Azure SQL Database, how each service tier implements it, and how zone redundancy and application-side resilience fit into the picture.
High availability in Azure SQL Database ensures that databases remain accessible during hardware failures, planned maintenance and software upgrades, with recovery handled automatically by the platform.
This is possible through the separation of the compute layer (the database engine) from the storage layer (data and log), so each can be repaired or failed over independently.
Azure SQL Database continuously monitors health signals across these layers and automatically initiates recovery or failover when required.
Regardless of service tier, Azure SQL Database relies on common high-availability principles: redundant copies of data, automatic failure detection and transparent failover behind a stable connection endpoint.
Applications connect through the Azure SQL Gateway, which automatically routes traffic to the current primary replica—shielding clients from underlying failover events.
The General-Purpose tier uses a compute–storage separation model, relying on Azure Premium Storage for data durability.
Architecture Diagram – General Purpose Tier
Description:
Clients connect via the Azure SQL Gateway, which routes traffic to the primary compute node. The compute layer is stateless, while Azure Premium Storage provides triple‑replicated durable storage.
📚 Reference:
https://learn.microsoft.com/azure/azure-sql/database/service-tier-general-purpose
The Business-Critical tier is designed for mission‑critical workloads requiring low latency and fast failover.
Architecture Diagram – Business Critical Tier
Description:
The primary replica synchronously replicates data to secondary replicas. Read‑only replicas can offload read workloads. Azure SQL Gateway transparently routes traffic to the active primary replica.
📚 Reference:
https://learn.microsoft.com/azure/azure-sql/database/service-tier-business-critical
The Hyperscale tier introduces a distributed storage and compute architecture, optimized for very large databases and rapid scaling scenarios.
Architecture Diagram – Hyperscale Tier
Description:
The compute layer processes queries, while durable log services and distributed page servers manage data storage independently, enabling rapid failover and scaling.
📚 Reference:
https://learn.microsoft.com/azure/azure-sql/database/service-tier-hyperscale
Azure SQL Database continuously monitors critical health signals, such as engine process health, node availability and storage responsiveness, and initiates recovery or failover as soon as a problem is detected.
Applications should implement retry logic and transient‑fault handling to fully benefit from these mechanisms.
📚 Reference:
https://learn.microsoft.com/azure/architecture/best-practices/transient-faults
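As one possible approach, EF Core’s built-in SQL Server execution strategy can absorb most transient Azure SQL errors; this is a sketch only, and the context type and connection string name are placeholders:

using Microsoft.EntityFrameworkCore;

// Program.cs (sketch): enable EF Core's retrying execution strategy for SQL Server
builder.Services.AddDbContext<AppDbContext>(options =>
    options.UseSqlServer(
        builder.Configuration.GetConnectionString("AzureSql"),
        sqlOptions => sqlOptions.EnableRetryOnFailure(
            maxRetryCount: 5,
            maxRetryDelay: TimeSpan.FromSeconds(10),
            errorNumbersToAdd: null)));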
Azure SQL Database can be configured with zone redundancy, distributing replicas across Availability Zones in the same region.
📚 Reference:
https://learn.microsoft.com/azure/azure-sql/database/high-availability-sla
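If you manage databases through the Azure CLI, zone redundancy can typically be toggled with the --zone-redundant flag; the resource names below are placeholders, and you should check the current CLI reference for your tier:

az sql db update \
  --resource-group rg-orders-prod \
  --server sql-orders-prod \
  --name OrdersDb \
  --zone-redundant true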
Azure SQL Database delivers high availability by default, removing the traditional operational burden associated with SQL Server HA designs.
| Service Tier | HA Model | Typical Failover |
|---|---|---|
| General Purpose | Storage‑based durability | Minutes |
| Business Critical | Multi‑replica, synchronous | Seconds |
| Hyperscale | Distributed compute & storage | Seconds |
By selecting the appropriate service tier and enabling zone redundancy where required, customers can meet even the most demanding availability and resilience requirements with minimal complexity.
This is the second of two parts. Read Part 1:
Monitoring DNS is not simply a matter of checking whether a record resolves. A comprehensive approach follows four key principles: continuous measurement, distributed visibility, a focus on how users actually experience resolution, and disciplined change management.
DNS monitoring is most effective when it targets specific signals that reveal problems with record integrity, server behavior and real-world performance.
The key groups of tests are mapping tests, record-level tests, performance tests and server health checks.
Mapping tests verify that users are directed to an appropriate DNS server based on location. This matters because the closest healthy server usually provides the fastest response. If a user’s request is sent across a country or to another continent, latency increases and resilience decreases.
Different managed DNS providers use different methods to determine which server responds to a query. Many compare the geographic location of the querying IP address to the locations of the available servers.
Some DNS providers and public resolvers use the EDNS (Extension Mechanisms for DNS) Client Subnet extension, which includes part of the requester’s subnet in the query. This can help the provider return a geographically appropriate answer, although support for this feature varies due to privacy considerations.
The purpose of this DNS mapping test is to confirm that queries from different regions are answered by the nearest server and that this behavior is consistent. It can also reveal Anycast drift, where some regions are unexpectedly routed to distant or unhealthy POPs due to Border Gateway Protocol path changes. A fast local resolution is often expected to complete within a few tens of milliseconds on major networks.
Record-level tests verify that the data used to resolve a domain name is accurate, consistent and uncompromised. These checks help detect misconfiguration, operational drift and signs of tampering.
Delegation checks confirm that each step in the DNS hierarchy is correct. The test walks from the root to the top-level domain and then to the authoritative zone. For example, it verifies that the nameservers listed for a domain, such as example.com, match what the .com zone expects and that those servers provide correct answers. It also catches common failure modes such as mismatched NS records between parent and child zones.
Once delegation is confirmed, each nameserver should respond reliably over both UDP and TCP. A failure to answer over TCP may indicate a configuration error or a firewall blocking traffic.
It is also useful to verify that the root hints file, when applicable, contains accurate information about root server names and IP addresses. This file is usually preconfigured by providers but should not be assumed infallible.
Start of Authority records contain the serial number and timing values for a zone. Changes to these values give context to shifts in DNS behavior. Sudden differences in serial numbers across nameservers may indicate incomplete zone transfers or unintended updates. In environments where zone files rarely change, any unexpected serial change warrants investigation.
Mail exchange and service records play a central role in email delivery and service discovery. Attackers sometimes target MX records to intercept sensitive communications, so it is important to verify that these records resolve correctly and point to the intended mail or service hosts.
These checks also confirm that record priorities are correct. Misconfigured preference values may send traffic to the wrong server, including servers without proper filtering or authentication controls. For SRV records, verifying that the target hosts actually exist and have matching A/AAAA records helps catch common operational errors.
Primary and secondary nameservers must hold identical zone data. Zone transfer tests verify that secondary servers have received the most recent updates and that no transfer failures or mismatches exist. If a transfer does not complete or if servers fall out of sync, queries may fail or return inconsistent data.
DNSSEC (Domain Name System Security Extensions) provide cryptographic verification for DNS data. Monitoring ensures that DNSSEC is enabled where intended, that the necessary key and signature records are present, and that signatures have not expired. Missing or outdated DNSSEC records can cause validation failures at resolvers. It is also important to track DS records at the parent zone, as mismatched or stale DS entries are a leading cause of DNSSEC-related outages.
Performance tests measure how quickly and consistently a domain resolves and whether recent changes have propagated across global resolvers.
Propagation refers to how long it takes for a record change to reach resolvers worldwide. Until propagation is complete, some users will continue receiving old answers. Depending on TTLs and caching behavior, global propagation may take up to several days. Monitoring helps confirm when changes have fully taken effect.
Experience tests run recursive queries from multiple points along the DNS path. These tests show end-to-end resolution time and reveal patterns in resolver load, cache efficiency and upstream performance. Elevated memory usage, CPU spikes or increased QPS (queries per second) on authoritative servers can also be identified through sustained testing.
For internal zones, experience tests may highlight heavy disk activity that indicates frequent zone transfers. Experience tests can also reveal intermittent TLD or root server delays, which often go unnoticed without continuous measurement.
A and AAAA records may occasionally diverge in unexpected ways. Comparing cached answers to freshly queried answers helps identify mismatches, missing IPv6 records or configurations that favor one address family. This also helps detect scenarios where content delivery networks (CDNs) return different addresses than expected based on geography or policy.
Latency can be influenced by resolver load, network capacity, cache misses, delays at the top-level domain layer or slow authoritative servers. Performance tests should measure both the latency from the user to the resolver and the latency incurred during the resolver’s lookup chain.
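A minimal sketch of this kind of probe in C#, using only the built-in resolver: it compares A and AAAA answers and times each lookup. The host name is a placeholder, and a production monitor would query specific resolvers from multiple vantage points rather than the system default:

using System.Diagnostics;
using System.Linq;
using System.Net;
using System.Net.Sockets;

const string host = "www.example.com";   // placeholder zone to probe

foreach (var family in new[] { AddressFamily.InterNetwork, AddressFamily.InterNetworkV6 })
{
    var sw = Stopwatch.StartNew();
    try
    {
        // Resolves A or AAAA records via the system-configured resolver
        var addresses = await Dns.GetHostAddressesAsync(host, family);
        sw.Stop();
        Console.WriteLine($"{family}: {addresses.Length} record(s) in {sw.ElapsedMilliseconds} ms " +
                          $"[{string.Join(", ", addresses.Select(a => a.ToString()))}]");
    }
    catch (SocketException ex)
    {
        sw.Stop();
        Console.WriteLine($"{family}: lookup failed after {sw.ElapsedMilliseconds} ms ({ex.SocketErrorCode})");
    }
}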
Packet loss and network instability between nameservers and resolvers may cause intermittent failures. Connectivity tests identify when issues are rooted in the network rather than in the DNS configuration itself. This is especially relevant for Anycast deployments, where a single unhealthy path can create regional failures while the global service appears healthy.
Teams that operate their own DNS infrastructure should also monitor the health of the servers themselves, tracking metrics such as CPU load, memory usage, queries per second and zone transfer activity.
Server-level visibility helps identify when performance issues stem from hardware limits or software constraints.
Monitoring DNS is complicated by the fact that many testing tools operate within cloud provider environments. Tests run from within the same cloud region as the authoritative server or application may show near-zero latency that does not reflect the wider internet.
This effect can create misleading results, suggesting that DNS performance is better than what end users actually experience. For an accurate view, monitoring should occur from diverse, internet-connected vantage points rather than solely from cloud-hosted agents.
It is also important to separate your DNS and CDN providers. If both services are tied to the same provider, an outage in the CDN can take your DNS offline as well, making the failure far more widespread and difficult to diagnose. Keeping these layers independent reduces the chance that a single provider outage can disrupt your entire digital footprint.
DNS reliability depends on continuous measurement, distributed visibility and a clear understanding of how users experience resolution across networks. By monitoring mapping, record integrity and performance, teams can detect problems early and maintain dependable digital experiences.
A thoughtful monitoring program does not require complex tooling. It requires awareness, consistent testing and disciplined change management. Start with the essentials outlined here and expand as your services and traffic grow.
During the December holiday season, the Svelte team shared 24 days of Svelte knowledge, tips, and insights to unwrap in 2025’s Advent of Svelte. Learn more about under-utilized Svelte features through a series of fun and interactive videos!
There were also a bunch of improvements and showcase items across the Svelte ecosystem this last month. So let’s have a look...
- The hydratable option (which adds an inline <script> block to the head) now has a csp option in render to support Content Security Policies (svelte@5.46.0, Docs, #17338)

For a full list of changes - including all the important bugfixes that went into the releases this month - check out the Svelte compiler’s CHANGELOG and the SvelteKit / Adapter CHANGELOGs.
That’s it for this month! Let us know if we missed anything on Reddit or Discord.
Until next time 👋🏼!