Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Loris Cro on the Rise of Zig





Download audio: https://r.zen.ai/r/cdn.simplecast.com/audio/24832310-78fe-4898-91be-6db33696c4ba/episodes/e63ea224-7022-4861-b6a4-5ca9227a15b5/audio/eee48143-1c4f-4f73-9c5e-543c103ce816/default_tc.mp3?aid=rss_feed&feed=gvtxUiIf

What’s New with APIs in .NET 10: Taking a Look at Real Improvements


This post walks through .NET 10 and C# 14 updates from the perspective of an API developer using a real-world example: an order management API with validation, OpenAPI docs and Entity Framework Core.

It’s that time of year: A new version of the .NET platform has shipped. .NET 10 landed last month as an LTS release, with support through November 2028.

To complement Jon Hilton’s .NET 10 Has Arrived—Here’s What Changed for Blazor article and Assis Zang’s What’s New in .NET 10 for ASP.NET Core, let’s look at .NET 10 improvements from the viewpoint of an API developer.

Throughout this post, we’ll walk through updates using a real-world example: an order management API with validation, OpenAPI docs and Entity Framework Core.

NOTE: The examples use Minimal APIs for brevity, but most of these improvements can be used for controller-based APIs as well.

Built-in Validation for Minimal APIs

Before .NET 10, teams building Minimal APIs ended up rolling their own validation. The result? Endpoint code that was more about policing inputs than implementing business logic.

Here’s a simplified example that shows the problem.

public static class OrderEndpoints
{
    public static void MapOrderEndpoints(this WebApplication app)
    {
        var group = app.MapGroup("/api/orders");
        group.MapPost("/", CreateOrder);
        group.MapGet("/{id}", GetOrder);
    }

    private static async Task<IResult> CreateOrder(
        CreateOrderRequest request,
        OrderDbContext db)
    {
        if (string.IsNullOrWhiteSpace(request.CustomerEmail))
            return Results.BadRequest("Customer email is required");

        if (request.Items is null || request.Items.Count == 0)
            return Results.BadRequest("Order must contain at least one item");

        foreach (var item in request.Items)
        {
            if (item.Quantity < 1)
                return Results.BadRequest("Quantity must be at least 1");
            if (item.ProductId <= 0)
                return Results.BadRequest("Invalid product ID");
        }

        var order = new Order
        {
            CustomerEmail = request.CustomerEmail,
            Items = request.Items.Select(i => new OrderItem
            {
                ProductId = i.ProductId,
                Quantity = i.Quantity
            }).ToList(),
            CreatedAt = DateTime.UtcNow
        };

        db.Orders.Add(order);
        await db.SaveChangesAsync();

        return Results.Created($"/api/orders/{order.Id}", order);
    }

    private static async Task<IResult> GetOrder(int id, OrderDbContext db)
    {
        var order = await db.Orders.FindAsync(id);
        return order is null ? Results.NotFound() : Results.Ok(order);
    }
}

There were ways around this: you could use filters, helper methods or third-party validators. Even so, it was frustrating that Minimal APIs didn’t have the baked-in validation experience you have with controller-based APIs.

.NET 10 adds built-in validation support for Minimal APIs. You can enable it with one registration call:

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddDbContext<OrderDbContext>();

// Enables built-in validation for Minimal APIs
builder.Services.AddValidation();

var app = builder.Build();

Once enabled, ASP.NET Core automatically applies DataAnnotations validation to Minimal API parameters. This includes query, header and request body binding.

You can also disable validation for a specific endpoint using DisableValidation(), which is handy for internal endpoints or partial updates where you intentionally accept incomplete payloads.
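
For example, an internal endpoint that intentionally accepts partial payloads might opt out like this (a sketch; the route and handler name are hypothetical):

group.MapPost("/internal/orders/import", ImportOrders)
    .DisableValidation(); // skip built-in validation for this endpoint only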

With validation handled by the framework and the attributes added to our models, endpoints can focus on business logic.
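
Here’s roughly what the annotated request model could look like (a sketch: the attributes and limits are assumptions, chosen to line up with the validation messages shown in the next section):

using System.ComponentModel.DataAnnotations;

public class CreateOrderRequest
{
    [Required]
    [EmailAddress]
    public string CustomerEmail { get; set; } = string.Empty;

    [Required]
    [MinLength(1, ErrorMessage = "Order must contain at least one item")]
    public List<OrderItemRequest> Items { get; set; } = new();
}

public class OrderItemRequest
{
    // Hypothetical limits for illustration
    [Range(1, int.MaxValue, ErrorMessage = "Invalid product ID")]
    public int ProductId { get; set; }

    [Range(1, 1000, ErrorMessage = "Quantity must be between 1 and 1000")]
    public int Quantity { get; set; }
}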

Validation Error Responses

When validation fails, ASP.NET Core returns a standardized ProblemDetails response with an errors dictionary.

A typical response looks like this.

{
  "type": "https://tools.ietf.org/html/rfc9110#section-15.5.1",
  "title": "One or more validation errors occurred.",
  "status": 400,
  "errors": {
    "CustomerEmail": [
      "The CustomerEmail field is required."
    ],
    "Items[0].Quantity": [
      "Quantity must be between 1 and 1000"
    ]
  }
}

OpenAPI 3.1: Modernizing API Documentation

.NET 10’s built-in OpenAPI document generation supports OpenAPI 3.1 and JSON Schema 2020-12. The default OpenAPI version for generated documents is now 3.1.

OpenAPI 3.1 aligns better with modern JSON schema expectations and improves how tools interpret your schemas.

using Microsoft.AspNetCore.OpenApi;
using Microsoft.OpenApi.Models;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddOpenApi(options =>
{
    // OpenAPI 3.1 is the default, but you can be explicit
    options.OpenApiVersion = Microsoft.OpenApi.OpenApiSpecVersion.OpenApi3_1;

    options.AddDocumentTransformer((document, context, cancellationToken) =>
    {
        document.Info = new OpenApiInfo
        {
            Title = "Order Management API",
            Version = "v1",
            Description = "Enterprise order processing system",
            Contact = new OpenApiContact
            {
                Name = "API Support",
                Email = "api-support@company.com"
            }
        };

        return Task.CompletedTask;
    });
});

var app = builder.Build();

if (app.Environment.IsDevelopment())
{
    // JSON at /openapi/v1.json
    app.MapOpenApi();

    // YAML at /openapi/v1.yaml
    app.MapOpenApi("/openapi/{documentName}.yaml");
}

app.Run();

Note the YAML route: in .NET 10, you generate YAML by using a route ending in .yaml or .yml, typically with {documentName} in the path.

Schema Improvements in Practice

OpenAPI 3.0 often expressed nullability using nullable: true.

components:
  schemas:
    ShippingAddress:
      type: object
      nullable: true
      properties:
        street:
          type: string
          nullable: true
        city:
          type: string

OpenAPI 3.1 allows us to use union types:

components:
  schemas:
    ShippingAddress:
      type: ["object", "null"]
      properties:
        street:
          type: ["string", "null"]
        city:
          type: string

This tends to play nicer with tooling that relies heavily on JSON Schema semantics such as OpenAPI Generator and NSwag.

EF Core 10: Named Query Filters

Global query filters are a staple for multi-tenant apps and soft deletes. The classic problem was granularity: IgnoreQueryFilters() disabled all filters at once.

EF Core 10 introduces named query filters, so you can selectively disable one filter while keeping another.

public class OrderDbContext : DbContext
{
    private readonly int _tenantId;

    public OrderDbContext(DbContextOptions<OrderDbContext> options, ITenantProvider tenant)
        : base(options)
        => _tenantId = tenant.TenantId;

    public DbSet<Order> Orders => Set<Order>();

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Order>()
            .HasQueryFilter("SoftDelete", o => !o.IsDeleted)
            .HasQueryFilter("TenantIsolation", o => o.TenantId == _tenantId);
    }
}

Now an admin endpoint can disable soft delete without disabling tenant isolation:

public static class AdminEndpoints
{
    public static void MapAdminEndpoints(this WebApplication app)
    {
        var group = app.MapGroup("/api/admin")
            .RequireAuthorization("Admin");

        group.MapGet("/orders/deleted", GetDeletedOrders)
            .WithSummary("Gets deleted orders for the current tenant only");
    }

    private static async Task<IResult> GetDeletedOrders(OrderDbContext db)
    {
        var deletedOrders = await db.Orders
            .IgnoreQueryFilters(new[] { "SoftDelete" }) 
            .Where(o => o.IsDeleted)
            .Select(o => new { o.Id, o.CustomerEmail, o.DeletedAt })
            .ToListAsync();

        return Results.Ok(deletedOrders);
    }
}

This capability is small, but it’s exactly the kind of real-world safety improvement that helps prevent cross-tenant data leaks.

C# 14 Improvements

.NET 10 ships alongside C# 14. For API developers, a few features immediately reduce boilerplate and improve readability.

The field Keyword: Eliminate Backing-field Boilerplate

C# 14 introduces field-backed properties, where you can reference the compiler-generated backing field directly using the field keyword.

public sealed class Order
{
    public string CustomerEmail
    {
        get;
        set
        {
            if (string.IsNullOrWhiteSpace(value))
                throw new ArgumentException("Email cannot be empty.", nameof(value));

            field = value.Trim().ToLowerInvariant();
        }
    }
}

Null-conditional Assignments

C# 14 allows null-conditional operators (?. and ?[]) on the left-hand side of assignments and compound assignments. The right-hand side is evaluated only when the receiver isn’t null. This is great for processing patches.

public sealed record OrderPatchRequest(string? NewStatus, int? NewPriority, string? NewCity);

public static class OrderPatchService
{
    public static void ApplyPatch(Order? order, OrderPatchRequest? patch)
    {
        if (patch is null) return;

        order?.Status = patch.NewStatus ?? order?.Status;
        order?.Priority = patch.NewPriority ?? order?.Priority;
        order?.Shipping?.City = patch.NewCity ?? order?.Shipping?.City;
    }
}

Note: Increment/decrement operators (++, --) aren’t allowed with null-conditional assignments. Compound assignments like += are supported.

For more details on this feature, check out the post Write Cleaner Code with C# 14’s Null-Conditional Assignment Operator.

Extension Members: Properties, Static Members and Operators

C# 14’s headline feature is extension members, which add extension properties, static extension members and even operators using the new extension block syntax.

Don’t you just love the syntax?

public static class OrderExtensions
{
    extension(Order source)
    {
        public decimal TotalValue =>
            source.Items.Sum(i => i.Quantity * i.UnitPrice);

        public bool IsHighValue => source.TotalValue > 1000m;

        public string Summary =>
            $"Order #{source.Id}: {source.Items.Count} items, ${source.TotalValue:F2} total";
    }

    extension(Order)
    {
        public static Order CreateEmpty(int tenantId) => new Order
        {
            TenantId = tenantId,
            CreatedAt = DateTime.UtcNow,
            Items = new List<OrderItem>(),
            Status = "Draft"
        };
    }
}

For more details on extension members, check out Extension Properties: C# 14’s Game-Changing Feature for Cleaner Code. (I don’t get paid by the click, I swear.)

Server-Sent Events: Simplifying Real-Time Updates

ASP.NET Core in .NET 10 adds a built-in ServerSentEvents result for Minimal APIs, so you can stream updates over a single HTTP connection without manually formatting frames.

using System.Runtime.CompilerServices;
using Microsoft.AspNetCore.Http.HttpResults;
using Microsoft.EntityFrameworkCore; // for FirstOrDefaultAsync

public record OrderStatusUpdate(int OrderId, string Status, string Message, DateTime Timestamp);

public static class OrderStreamingEndpoints
{
    public static void MapStreamingEndpoints(this WebApplication app)
    {
        app.MapGet("/api/orders/{id:int}/status-stream", StreamOrderStatus)
           .WithSummary("Stream real-time order status updates");
    }

    private static ServerSentEventsResult<OrderStatusUpdate> StreamOrderStatus(
        int id,
        OrderDbContext db,
        CancellationToken ct)
    {
        async IAsyncEnumerable<OrderStatusUpdate> GetUpdates(
            [EnumeratorCancellation] CancellationToken cancellationToken)
        {
            string? last = null;

            while (!cancellationToken.IsCancellationRequested)
            {
                var order = await db.Orders.AsNoTracking()
                    .FirstOrDefaultAsync(o => o.Id == id, cancellationToken);

                if (order is null)
                {
                    yield return new(id, "ERROR", "Order not found", DateTime.UtcNow);
                    yield break;
                }

                if (!string.Equals(order.Status, last, StringComparison.Ordinal))
                {
                    yield return new(order.Id, order.Status, $"Order is now {order.Status}", DateTime.UtcNow);
                    last = order.Status;
                }

                if (order.Status is "Delivered" or "Cancelled")
                    yield break;

                await Task.Delay(TimeSpan.FromSeconds(5), cancellationToken);
            }
        }

        return TypedResults.ServerSentEvents(GetUpdates(ct), eventType: "order-status");
    }
}

Client-side consumption is quite simple, too:

<script>
  function trackOrderStatus(orderId) {
    const es = new EventSource(`/api/orders/${orderId}/status-stream`);

    // The server tags events with eventType "order-status", so listen for that
    // named event; onmessage only fires for unnamed "message" events.
    es.addEventListener("order-status", (event) => {
      const update = JSON.parse(event.data);
      console.log(update);
      if (update.status === "Delivered" || update.status === "Cancelled") {
        es.close();
      }
    });

    es.onerror = () => console.error("SSE connection error");
    return es;
  }
</script>

Should You Upgrade to .NET 10?

.NET 10 is an LTS release. If you’re starting a new project, it’s a no-brainer.

If you’re upgrading from .NET 8 or .NET 9, a few things to keep in mind:

  • Minimal API validation: If you’ve been hand-rolling validation, .NET 10’s AddValidation support can remove a surprising amount of custom code.
  • OpenAPI: Built-in OpenAPI generation defaults to 3.1 and supports YAML endpoints via .yaml/.yml routes.
  • EF Core: Named query filters are a real safety upgrade for multi-tenant apps, and JSON column support continues to improve (including bulk update support).
  • C# 14: You can adopt new features incrementally. Even if you ignore extension members entirely, field and null-conditional assignment will show up in your codebase quickly.

The upgrade path is generally smooth: change the target in your .csproj, run tests, fix warnings and ship.
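
For a typical API project, retargeting really is a one-line change in the project file:

<!-- .csproj -->
<PropertyGroup>
  <TargetFramework>net10.0</TargetFramework>
</PropertyGroup>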

Wrapping Up

.NET 10 delivers meaningful improvements for API developers through thoughtful enhancements rather than revolutionary changes. The combination of built-in Minimal API validation, OpenAPI 3.1 and C# 14 quality-of-life features adds up to a more productive and safer development experience.

Happy coding!



bit Obscene: When Will AI Fix The Query Store GUI?



Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 25% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that, and need to solve database performance problems quickly. You can also get a quick, low cost health check with no phone time required.

The post bit Obscene: When Will AI Fix The Query Store GUI? appeared first on Darling Data.


Azure SQL Database High Availability: Architecture, Design, and Built‑in Resilience


High availability (HA) is a core pillar of Azure SQL Database. Unlike traditional SQL Server deployments—where availability architectures must be designed, implemented, monitored, and maintained manually—Azure SQL Database delivers built‑in high availability by design.

By abstracting infrastructure complexity while still offering enterprise‑grade resilience, Azure SQL Database enables customers to achieve strict availability SLAs with minimal operational overhead.

In this article, we’ll cover:

  • Azure SQL Database high‑availability design principles
  • How HA is implemented across service tiers:
    • General Purpose
    • Business Critical
    • Hyperscale
  • Failover behavior and recovery mechanisms
  • Architecture illustrations explaining how availability is achieved
  • Supporting Microsoft Learn and documentation references

What High Availability Means in Azure SQL Database

High availability in Azure SQL Database ensures that:

  • Databases remain accessible during infrastructure failures
  • Hardware, software, and network faults are handled automatically
  • Failover occurs without customer intervention
  • Data durability is maintained using replication, quorum, and consensus models

This is possible through the separation of:

  • Compute
  • Storage
  • Control plane orchestration

Azure SQL Database continuously monitors health signals across these layers and automatically initiates recovery or failover when required.

Azure SQL Database High Availability – Shared Concepts

Regardless of service tier, Azure SQL Database relies on common high‑availability principles:

  • Redundant replicas
  • Synchronous and asynchronous replication
  • Automatic failover orchestration
  • Built‑in quorum and consensus logic
  • Transparent reconnect via the Azure SQL Gateway

Applications connect through the Azure SQL Gateway, which automatically routes traffic to the current primary replica—shielding clients from underlying failover events.

High Availability Architecture – General Purpose Tier

The General-Purpose tier uses a compute–storage separation model, relying on Azure Premium Storage for data durability.

Key Characteristics

  • Single compute replica
  • Storage replicated three times using Azure Storage
    • Read‑Access Geo‑Redundant Storage (RA‑GRS) optional
  • Stateless compute that can be restarted or moved
  • Fast recovery using storage reattachment

Architecture Diagram – General Purpose Tier

Description:
Clients connect via the Azure SQL Gateway, which routes traffic to the primary compute node. The compute layer is stateless, while Azure Premium Storage provides triple‑replicated durable storage.

Failover Behavior

  • Compute failure triggers creation of a new compute node
  • Database files are reattached from storage
  • Typical recovery time: seconds to minutes

📚 Reference:
https://learn.microsoft.com/azure/azure-sql/database/service-tier-general-purpose 

High Availability Architecture – Business Critical Tier

The Business-Critical tier is designed for mission‑critical workloads requiring low latency and fast failover.

Key Characteristics

  • Multiple replicas (1 primary + up to 3 secondaries)
  • Always On availability group–like architecture
  • Local SSD storage on each replica
  • Synchronous replication
  • Automatic failover within seconds

Architecture Diagram – Business Critical Tier

Description:
The primary replica synchronously replicates data to secondary replicas. Read‑only replicas can offload read workloads. Azure SQL Gateway transparently routes traffic to the active primary replica.

Failover Behavior

  • If the primary replica fails, a secondary is promoted automatically
  • No storage reattachment is required
  • Client connections are redirected automatically
  • Typical failover time: seconds

📚 Reference:
https://learn.microsoft.com/azure/azure-sql/database/service-tier-business-critical 

High Availability Architecture – Hyperscale Tier

The Hyperscale tier introduces a distributed storage and compute architecture, optimized for very large databases and rapid scaling scenarios.

Key Characteristics

  • Decoupled compute and page servers
  • Multiple read replicas
  • Fast scale‑out and fast recovery
  • Durable log service ensures transaction integrity

Architecture Diagram – Hyperscale Tier

Description:
The compute layer processes queries, while durable log services and distributed page servers manage data storage independently, enabling rapid failover and scaling.

Failover Behavior

  • Compute failure results in rapid creation of a new compute replica
  • Page servers remain intact
  • Log service ensures zero data loss

📚 Reference:
https://learn.microsoft.com/azure/azure-sql/database/service-tier-hyperscale 

How Azure SQL Database Handles Failures

Azure SQL Database continuously monitors critical health signals, including:

  • Heartbeats
  • IO latency
  • Replica health
  • Storage availability

Automatic Recovery Actions

  • Restarting failed processes
  • Promoting secondary replicas
  • Recreating compute nodes
  • Redirecting client connections

Applications should implement retry logic and transient‑fault handling to fully benefit from these mechanisms.

📚 Reference:
https://learn.microsoft.com/azure/architecture/best-practices/transient-faults 
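
As one example, applications that access Azure SQL Database through Entity Framework Core can turn on the built-in SQL Server retrying execution strategy, which retries on known transient error codes (a minimal sketch; AppDbContext and the connection string name are placeholders):

var builder = WebApplication.CreateBuilder(args);

// Retry automatically on transient SQL errors instead of failing the request
builder.Services.AddDbContext<AppDbContext>(options =>
    options.UseSqlServer(
        builder.Configuration.GetConnectionString("AzureSql"),
        sqlOptions => sqlOptions.EnableRetryOnFailure(
            maxRetryCount: 5,
            maxRetryDelay: TimeSpan.FromSeconds(10),
            errorNumbersToAdd: null)));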

Zone Redundancy and High Availability

Azure SQL Database can be configured with zone redundancy, distributing replicas across Availability Zones in the same region.

Benefits

  • Protection against datacenter‑level failures
  • Increased SLA
  • Transparent resilience without application changes

📚 Reference:
https://learn.microsoft.com/azure/azure-sql/database/high-availability-sla 

Summary

Azure SQL Database delivers high availability by default, removing the traditional operational burden associated with SQL Server HA designs.

Service Tier       | HA Model                       | Typical Failover
General Purpose    | Storage‑based durability       | Minutes
Business Critical  | Multi‑replica, synchronous     | Seconds
Hyperscale         | Distributed compute & storage  | Seconds

By selecting the appropriate service tier and enabling zone redundancy where required, customers can meet even the most demanding availability and resilience requirements with minimal complexity.



Getting DNS Right: Principles for Effective Monitoring


This is the second of two parts. Read Part 1:

Monitoring DNS is not simply a matter of checking whether a record resolves. A comprehensive approach follows four key principles:

  1. Test from multiple networks and regions to avoid blind spots.
  2. Validate both correctness and speed, since slow answers can harm user flows even when technically valid.
  3. Measure continuously, not periodically, because many issues manifest as short-lived or regionalized incidents.
  4. Compare control plane changes to real-world propagation patterns to ensure updates are applied as intended.

DNS monitoring is most effective when it targets specific signals that reveal problems with record integrity, server behavior and real-world performance.

The key groups of tests:

  • DNS mapping.
  • DNS record validation.
  • DNS performance measurements.

DNS Mapping

Mapping tests verify that users are directed to an appropriate DNS server based on location. This matters because the closest healthy server usually provides the fastest response. If a user’s request is sent across a country or to another continent, latency increases and resilience decreases.

Different managed DNS providers use different methods to determine which server responds to a query. Many compare the geographic location of the querying IP address to the locations of the available servers.

Some DNS providers and public resolvers use the EDNS (Extension Mechanisms for DNS) Client Subnet extension, which includes part of the requester’s subnet in the query. This can help the provider return a geographically appropriate answer, although support for this feature varies due to privacy considerations.

The purpose of this DNS mapping test is to confirm that queries from different regions are answered by the nearest server and that this behavior is consistent. It can also reveal Anycast drift, where some regions are unexpectedly routed to distant or unhealthy POPs due to Border Gateway Protocol path changes. A fast local resolution is often expected to complete within a few tens of milliseconds on major networks.

DNS Records

Record-level tests verify that the data used to resolve a domain name is accurate, consistent and uncompromised. These checks help detect misconfiguration, operational drift and signs of tampering.

Test DNS Delegation

Delegation checks confirm that each step in the DNS hierarchy is correct. The test walks from the root to the top-level domain and then to the authoritative zone. For example, it verifies that the nameservers listed for a domain, such as example.com, match what the .com zone expects and that those servers provide correct answers. It also catches common failure modes such as mismatched NS records between parent and child zones.

Test Nameserver Records and Root Server References

Once delegation is confirmed, each nameserver should respond reliably over both UDP and TCP. A failure to answer over TCP may indicate a configuration error or a firewall blocking traffic.

It is also useful to verify that the root hints file, when applicable, contains accurate information about root server names and IP addresses. This file is usually preconfigured by providers but should not be assumed infallible.

Monitor SOA Records

Start of Authority records contain the serial number and timing values for a zone. Changes to these values give context to shifts in DNS behavior. Sudden differences in serial numbers across nameservers may indicate incomplete zone transfers or unintended updates. In environments where zone files rarely change, any unexpected serial change warrants investigation.

Check MX and SRV Records

Mail exchange and service records play a central role in email delivery and service discovery. Attackers sometimes target MX records to intercept sensitive communications, so it is important to verify that these records resolve correctly and point to the intended mail or service hosts.

These checks also confirm that record priorities are correct. Misconfigured preference values may send traffic to the wrong server, including servers without proper filtering or authentication controls. For SRV records, verifying that the target hosts actually exist and have matching A/AAAA records helps catch common operational errors.

Check Zone Transfers

Primary and secondary nameservers must hold identical zone data. Zone transfer tests verify that secondary servers have received the most recent updates and that no transfer failures or mismatches exist. If a transfer does not complete or if servers fall out of sync, queries may fail or return inconsistent data.

Verify DNSSEC Configurations

DNSSEC (Domain Name System Security Extensions) provide cryptographic verification for DNS data. Monitoring ensures that DNSSEC is enabled where intended, that the necessary key and signature records are present, and that signatures have not expired. Missing or outdated DNSSEC records can cause validation failures at resolvers. It is also important to track DS records at the parent zone, as mismatched or stale DS entries are a leading cause of DNSSEC-related outages.

DNS Performance

Performance tests measure how quickly and consistently a domain resolves and whether recent changes have propagated across global resolvers.

Track DNS Propagation

Propagation refers to how long it takes for a record change to reach resolvers worldwide. Until propagation is complete, some users will continue receiving old answers. Depending on TTLs and caching behavior, global propagation may take up to several days. Monitoring helps confirm when changes have fully taken effect.

Use DNS Experience Tests

Experience tests run recursive queries from multiple points along the DNS path. These tests show end-to-end resolution time and reveal patterns in resolver load, cache efficiency and upstream performance. Elevated memory usage, CPU spikes or increased QPS (queries per second) on authoritative servers can also be identified through sustained testing.

For internal zones, experience tests may highlight heavy disk activity that indicates frequent zone transfers. Experience tests can also reveal intermittent TLD or root server delays, which often go unnoticed without continuous measurement.

Monitor IP Addresses

A and AAAA records may occasionally diverge in unexpected ways. Comparing cached answers to freshly queried answers helps identify mismatches, missing IPv6 records or configurations that favor one address family. This also helps detect scenarios where content delivery networks (CDNs) return different addresses than expected based on geography or policy.

Measure DNS Latency

Latency can be influenced by resolver load, network capacity, cache misses, delays at the top-level domain layer or slow authoritative servers. Performance tests should measure both the latency from the user to the resolver and the latency incurred during the resolver’s lookup chain.
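
As a rough illustration, the standard .NET resolver API can be used to sample end-to-end resolution latency from a single vantage point (a sketch assuming a modern SDK project with implicit usings; the domain and sample count are arbitrary, and the numbers reflect the full resolver chain as seen from the machine running the test):

using System.Diagnostics;
using System.Net;

const string domain = "example.com";
var timings = new List<double>();

for (var i = 0; i < 10; i++)
{
    var sw = Stopwatch.StartNew();
    IPAddress[] addresses = await Dns.GetHostAddressesAsync(domain);
    sw.Stop();

    // Note: the local OS resolver cache can make repeat lookups artificially fast.
    timings.Add(sw.Elapsed.TotalMilliseconds);
    Console.WriteLine($"Attempt {i + 1}: {addresses.Length} address records in {sw.Elapsed.TotalMilliseconds:F1} ms");

    await Task.Delay(TimeSpan.FromSeconds(1));
}

Console.WriteLine($"Median: {timings.OrderBy(t => t).ElementAt(timings.Count / 2):F1} ms");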

Verify Connectivity

Packet loss and network instability between nameservers and resolvers may cause intermittent failures. Connectivity tests identify when issues are rooted in the network rather than in the DNS configuration itself. This is especially relevant for Anycast deployments, where a single unhealthy path can create regional failures while the global service appears healthy.

Monitor DNS Servers

Teams that operate their own DNS infrastructure should monitor the health of the servers themselves. Important metrics include:

  • Queries per second.
  • CPU and memory usage.
  • Cache hit rates.
  • Disk I/O, especially during zone transfers.
  • Network throughput and dropped packets.

Server-level visibility helps identify when performance issues stem from hardware limits or software constraints.

Complexities of DNS Monitoring

Monitoring DNS is complicated by the fact that many testing tools operate within cloud provider environments. Tests run from within the same cloud region as the authoritative server or application may show near-zero latency that does not reflect the wider internet.

This effect can create misleading results, suggesting that DNS performance is better than what end users actually experience. For an accurate view, monitoring should occur from diverse, internet-connected vantage points rather than solely from cloud-hosted agents.

It is also important to separate your DNS and CDN providers. If both services are tied to the same provider, an outage in the CDN can take your DNS offline as well, making the failure far more widespread and difficult to diagnose. Keeping these layers independent reduces the chance that a single provider outage can disrupt your entire digital footprint.

DNS Monitoring and Reliability Checklist

  • Test DNS from multiple networks and regions, not only cloud data centers.
  • Monitor the full path, including routing and reachability, not just DNS servers.
  • Use more than one recursive resolver to avoid single points of failure.
  • Keep DNS and CDN providers separate to reduce cascading outages.
  • Verify that all authoritative nameservers respond over UDP and TCP.
  • Confirm SOA serial alignment and consistent zone data across servers.
  • Track DNS propagation time after changes.
  • Monitor latency trends and resolver behavior over time.
  • Use alerts that require persistent, multi-region issues before firing.
  • Review routing security measures, such as Resource Public Key Infrastructure (RPKI) adoption, where available.
  • Validate DNSSEC signing and DS record correctness to prevent resolver-based outages.

Conclusion

DNS reliability depends on continuous measurement, distributed visibility and a clear understanding of how users experience resolution across networks. By monitoring mapping, record integrity and performance, teams can detect problems early and maintain dependable digital experiences.

A thoughtful monitoring program does not require complex tooling. It requires awareness, consistent testing and disciplined change management. Start with the essentials outlined here and expand as your services and traffic grow.

The post Getting DNS Right: Principles for Effective Monitoring appeared first on The New Stack.


What’s new in Svelte: January 2026


During the December holiday season, the Svelte team shared 24 days of Svelte knowledge, tips, and insights to unwrap in 2025’s Advent of Svelte. Learn more about under-utilized Svelte features through a series of fun and interactive videos!

There were also a bunch of improvements and showcase items across the Svelte ecosystem this last month. So let’s have a look...

What’s new in Svelte & SvelteKit

  • The hydratable option (which adds an inline <script> block to the head) now has a csp option in render to support Content Security Policies (svelte@5.46.0, Docs, #17338)
  • The Vercel adapter now supports Node 24 (adapter-vercel@6.2.0/adapter-auto@7.0.0, #14982 and #14737)
  • The Svelte CLI is now able to fully setup a SvelteKit project for Cloudflare Workers/Pages development (sv@0.11.0, Docs, #851)
  • The Svelte MCP now exposes tools as both a JS API and CLI (mcp@0.1.16, #128)
  • A huge amount of performance improvements were completed this month in the language-tools so make sure your extensions are up to date!

For a full list of changes - including all the important bugfixes that went into the releases this month - check out the Svelte compiler’s CHANGELOG and the SvelteKit / Adapter CHANGELOGs.


Community Showcase

Apps & Sites built with Svelte

  • GCal Wrapped lets students see how they spent their time this semester, wrapped up beautifully
  • Text Processing Studio is a comprehensive text processing web application that lets you process, compare, transform, and analyze text
  • Flumio is a drag-and-drop workflow automation tool
  • Statue is a markdown-based static site generator designed for performance, flexibility, and developer experience
  • sveltemark is a privacy-first, open-source, local-only markdown editor
  • Lovely Docs provides hierarchically optimized documentation for AI coding agents
  • Ikaw bahala is a Manila-based food platform made for couples and friends who want to search for specific foods locally


Libraries, Tools & Components

  • SvelteDoc is a VS Code extension that shows Svelte component props on hover
  • pocket-mocker is an in-page HTTP controller for frontend development to intercept, modify, and simulate API responses directly in the browser
  • Avatune is a production-ready avatar system with AI-powered generation and framework-native components
  • svelte-image-input is a component for loading, scaling and adjusting profile pictures
  • Mint is a digital compositing tool that can be used to crop and resize images, create collages, build mockups, or otherwise complete basic compositing tasks
  • SvelteKit Auto OpenAPI is a type-safe OpenAPI generation and runtime validation for SvelteKit
  • Svelte Drawer is a drawer component for Svelte 5, inspired by Vaul
  • trioxide is a set of customizable components, focused on non-trivial UI pieces that are tedious to reimplement
  • svelte-asciiart is a Svelte 5 component for rendering ASCII art as scalable SVG with optional grid overlay and frame
  • svelte-bash is a fully typed, lightweight, and customizable terminal emulator component with a virtual file system, custom commands, themes, and autoplay mode for demos
  • SvelTTY provides a runtime that allows you to render and interact with Svelte apps in the terminal
  • svelte-tablecn is a powerful data grid and port of tablecn.com
  • Svelte runtime components enables compiling Svelte components from text at runtime, allowing dynamic, user-provided svelte component code to be compiled and mounted in the browser

That’s it for this month! Let us know if we missed anything on Reddit or Discord.

Until next time 👋🏼!
