Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.
149674 stories
·
33 followers

Overview of Azure Workload Modernization

1 Share

 

Azure workload modernization generally means shifting from traditional deployment options, such as running a workload within a VM, to cloud-native building blocks such as functions, PaaS services, and other cloud architecture components.

  • Shift from VMs to PaaS and Cloud-Native Services: By replatforming to services like Azure App Service for web apps, managed databases (e.g. Azure SQL Database), or container platforms (e.g. Azure Kubernetes Service (AKS)), you offload infrastructure management to Azure. Azure handles patches, scaling, and high availability, so your team can focus on code and features. (Learn more: https://learn.microsoft.com/azure/app-modernization-guidance/plan/plan-an-application-modernization-strategy#iaas-vs-paas)
  • Immediately Leverage Azure’s Built-in Capabilities: You can light up Azure’s ecosystem features for security, compliance, monitoring, and more. For example, without changing any code you can enable Azure Monitor for telemetry and alerting, use Azure’s compliance certifications to meet regulatory needs, and turn on governance controls. Modernizing a workload is about unlocking things like auto-scaling, backup/DR, and patch management that will be handled for you as platform features. (See: https://learn.microsoft.com/azure/well-architected/framework/platform-automation)
  • Treat Modernization as a Continuous Journey: Modernizing isn’t a single “big bang” rewrite; it’s an ongoing process. Once on Azure, plan to iteratively improve your applications as new services and best practices emerge. Implement DevOps pipelines (CI/CD) to regularly deliver updates and refactor parts of the system over time. This allows you to adopt new Azure capabilities (such as improved instance types, updated frameworks, or new managed services) with minimal disruption. By continually integrating improvements – from code enhancements to architecture changes – you ensure your workloads keep getting more efficient, secure, and scalable. (See: https://learn.microsoft.com/azure/app-modernization-guidance/get-started/application-modernization-life-cycle – continuous improvement approach)
  • Use Containers and Event-Driven Architectures to Evolve Legacy Apps: Breaking apart large, tightly-coupled applications into smaller components can drastically improve agility and resilience. Containerize parts of your app and deploy them to a managed orchestrator like Azure Kubernetes Service (AKS) for better scalability and fault isolation. In an AKS cluster, each microservice or module runs independently, so you can update or scale one component without impacting the whole system. In addition, consider introducing serverless functions (via Azure Functions) or event-driven services for specific tasks and background jobs. These approaches enable on-demand scaling and cost efficiency – Azure only runs your code when triggered by events or requests. Adopting microservices and serverless architectures helps your application become more modular, easier to maintain, and automatically scalable to meet demand. (Learn more: https://learn.microsoft.com/azure/architecture/guide/architecture-styles/microservices and https://learn.microsoft.com/azure/azure-functions/functions-overview)
  • Modernize Security and Identity: Update your application’s security posture to align with cloud best practices. Integrate your apps with Microsoft Entra ID for modern authentication and single sign-on, rather than custom or legacy auth methods. This provides immediate enhancements like multi-factor authentication, token-based access, and easier user management across cloud services. Additionally, take advantage of Azure’s global networking and security services, for example, use Azure Front Door to improve performance for users worldwide and add a built-in Web Application Firewall to protect against DDoS and web attacks. By using cloud-native security services (such as Azure Key Vault to manage app secrets and certificates, or Microsoft Defender for Cloud for threat protection), you can significantly strengthen your workload’s security while reducing the operational burden on your team. (See: https://learn.microsoft.com/entra/identity/intro and https://learn.microsoft.com/azure/frontdoor/front-door-overview)
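As a concrete illustration of the event-driven, serverless style described above, here is a minimal Azure Functions sketch using the isolated worker model; the queue name, function name, and message shape are hypothetical, and the project would need the Microsoft.Azure.Functions.Worker packages:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class OrderProcessor
{
    private readonly ILogger<OrderProcessor> _logger;

    public OrderProcessor(ILogger<OrderProcessor> logger) => _logger = logger;

    // Azure runs this only when a message lands on the (hypothetical) "orders" queue,
    // scaling instances up and down with demand — you pay only for executions.
    [Function("ProcessOrder")]
    public void Run([QueueTrigger("orders")] string orderJson)
    {
        _logger.LogInformation("Processing order: {Order}", orderJson);
    }
}
```

The point is the trigger binding: no host process, listener loop, or scaling logic lives in your code.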

 

Read the whole story
alvinashcraft
31 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Autopilot Mode with Justin Chen


Justin and James deep-dive into Autopilot and the evolving VS Code chat UX—why shimmers and collapsed containers declutter conversations, and why the input bar split and new permissions picker matter. Learn how Autopilot (Insiders preview) can auto-approve tools, answer prompts, iterate until a task_complete signal or max retries, and when to use default vs bypass approvals; practical tips for safe, hands-off workflows and feedback.

Follow VS Code:

Special Guest: Justin Chen.





Download audio: https://aphid.fireside.fm/d/1437767933/fc261209-0765-4b49-be13-c610671ae141/4ad3d932-a795-4fba-8c48-b0c3e1a1dec0.mp3

506: We have no skills


James and Frank unpack the exploding world of AI coding agents—covering instructions, MCP tools, custom agents, hooks, plugins and why “skills” matter. They walk through the new .NET Skills repo (P/Invoke, MSBuild, diagnostics, binlogs), show how skills act like practical, on‑demand tutorials for niche tasks, and sketch how tooling will soon auto-load the right skills so agents can just do the thing for you.

Follow Us

⭐⭐ Review Us ⭐⭐

Machine transcription available on http://mergeconflict.fm

Support Merge Conflict





Download audio: https://aphid.fireside.fm/d/1437767933/02d84890-e58d-43eb-ab4c-26bcc8524289/29637723-f04b-4c01-b651-8c415ea135d1.mp3

records ToString and inheritance


What is the output of this code snippet?

Console.Write(new DerivedRecord("Test"));

public abstract record Base(string Key)
{
    public override string ToString() => Key;
}

public sealed record DerivedRecord(string Key) : Base(Key);

Probably "Test"?
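For what it’s worth, a stand-alone sketch of why the answer is surprising: a derived record re-synthesizes ToString() unless the base record’s override is marked sealed (allowed since C# 10):

```csharp
using System;

public abstract record Base(string Key)
{
    public override string ToString() => Key;
}

// The compiler synthesizes a fresh ToString() here because Base's override
// isn't sealed, so this prints "DerivedRecord { Key = Test }", not "Test".
public sealed record DerivedRecord(string Key) : Base(Key);

public abstract record SealedBase(string Key)
{
    // Marking the override sealed suppresses synthesis in derived records.
    public sealed override string ToString() => Key;
}

public sealed record SealedDerived(string Key) : SealedBase(Key);

public static class Program
{
    public static void Main()
    {
        Console.WriteLine(new DerivedRecord("Test"));  // DerivedRecord { Key = Test }
        Console.WriteLine(new SealedDerived("Test"));  // Test
    }
}
```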


Validation Options in Wolverine


Wolverine — the event-driven messaging and HTTP framework for .NET — provides a rich, layered set of options for validating incoming data. Whether you are building HTTP endpoints or message handlers, Wolverine meets you where you are: from zero-configuration inline checks to full Fluent Validation or Data Annotation middleware support for both command handlers and HTTP endpoints.

Let’s maybe oversimplify validation scenarios and say they’ll fall into two buckets:

  1. Run of the mill field level validation rules like required fields or value ranges. These rules are the bread and butter of dedicated validation frameworks like Fluent Validation or Microsoft’s Data Annotations markup.
  2. Custom validation rules that are specific to your business domain and might involve checks against the existing state of your system beyond the command message itself.

Let’s first look at Wolverine’s Data Annotation integration, which is completely baked into the core WolverineFx NuGet package. To get started, just opt into the Data Annotations middleware for message handlers like this:

using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // Apply the validation middleware
        opts.UseDataAnnotationsValidation();
    }).StartAsync();

In message handlers, this middleware will kick in for any message type that has validation attributes, as in this example:

public record CreateCustomer(
    // you can use the attributes on a record, but you need to
    // add the `property` modifier to the attribute
    [property: Required] string FirstName,
    [property: MinLength(5)] string LastName,
    [property: PostalCodeValidator] string PostalCode
) : IValidatableObject
{
    public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
    {
        // you can implement `IValidatableObject` for custom
        // validation logic
        yield break;
    }
}

public class PostalCodeValidatorAttribute : ValidationAttribute
{
    public override bool IsValid(object? value)
    {
        // custom attributes are supported
        return true;
    }
}

public static class CreateCustomerHandler
{
    public static void Handle(CreateCustomer customer)
    {
        // do whatever you'd do here, but this won't be called
        // at all if the DataAnnotations Validation rules fail
    }
}

By default for message handlers, any validation errors are logged, then the current execution is stopped through the usage of the HandlerContinuation value we’ll discuss later.
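To see what the middleware is actually checking, the same attributes can be exercised outside of Wolverine with the BCL’s Validator class — a stand-alone sketch mirroring the CreateCustomer shape above (minus the custom postal code attribute):

```csharp
using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

// Mirrors the CreateCustomer example, without the custom attribute
public record CreateCustomer(
    [property: Required] string? FirstName,
    [property: MinLength(5)] string LastName);

public static class Program
{
    public static void Main()
    {
        var command = new CreateCustomer(null, "Doe");
        var results = new List<ValidationResult>();
        var context = new ValidationContext(command);

        // validateAllProperties: true evaluates every property-level attribute
        bool valid = Validator.TryValidateObject(command, context, results,
            validateAllProperties: true);

        Console.WriteLine(valid);                    // False
        foreach (var failure in results)
            Console.WriteLine(failure.ErrorMessage); // Required + MinLength failures
    }
}
```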

For Wolverine.HTTP integration with Data Annotations, use:

app.MapWolverineEndpoints(opts =>
{
    // Use Data Annotations that are built
    // into the Wolverine.HTTP library
    opts.UseDataAnnotationsValidationProblemDetailMiddleware();
});

Likewise, this middleware will only apply to HTTP endpoints that have a request input model that contains data annotation attributes. In this case though, Wolverine is using the ProblemDetails specification to report validation errors back to the caller with a status code of 400 by default.

Fluent Validation Middleware

Similarly, the Fluent Validation integration works more or less the same, but requires the WolverineFx.FluentValidation package for message handlers and the WolverineFx.Http.FluentValidation package for HTTP endpoints. Wolverine also ships helpers for discovering and registering Fluent Validation validators, with a Wolverine-specific performance optimization: most validators are registered with a Singleton lifetime so that Wolverine can generate more optimized code.

It is possible to override how Wolverine handles validation failures, but I’d personally recommend just using the ProblemDetails default in most cases.

I would like to note that the way that Wolverine generates code for the Fluent Validation middleware is generally going to be more efficient at runtime than the typical IoC dependent equivalents you’ll frequently find in the MediatR space.
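For reference, the validator itself is just plain Fluent Validation; the Wolverine registration call shown in the trailing comment is from the WolverineFx.FluentValidation package as I remember its docs, so double-check the exact name against the package:

```csharp
using FluentValidation;

public record CreateCustomer(string FirstName, string LastName, string PostalCode);

// An ordinary FluentValidation validator; Wolverine discovers these and,
// where possible, registers them as Singletons for faster generated code
public class CreateCustomerValidator : AbstractValidator<CreateCustomer>
{
    public CreateCustomerValidator()
    {
        RuleFor(x => x.FirstName).NotEmpty();
        RuleFor(x => x.LastName).MinimumLength(5);
    }
}

// Registration sketch for message handlers (WolverineFx.FluentValidation):
// builder.UseWolverine(opts => opts.UseFluentValidation());
```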

Explicit Validation

Let’s move on to validation rules that are more specific to your own problem domain, and especially the type of validation rules that would require you to examine the state of your system through some kind of data access. These kinds of rules certainly can be done with custom Fluent Validation validators, but I strongly recommend putting that kind of validation directly into your message handlers or HTTP endpoints, to colocate business logic with the happy path of the actual handler or endpoint.

One of the unique features of Wolverine in comparison to the typical “IHandler of T” application frameworks in .NET is Wolverine’s built-in support for a low-ceremony form of railway programming, and this turns out to be perfect for one-off validation rules.

In message handlers we’ve long had support for returning the HandlerContinuation enum from Validate() or Before() methods as a way to signal to Wolverine to conditionally stop all additional processing:

public static class ShipOrderHandler
{
    // This would be called first
    public static async Task<(HandlerContinuation, Order?, Customer?)> LoadAsync(ShipOrder command, IDocumentSession session)
    {
        var order = await session.LoadAsync<Order>(command.OrderId);
        if (order == null)
        {
            return (HandlerContinuation.Stop, null, null);
        }

        var customer = await session.LoadAsync<Customer>(command.CustomerId);
        return (HandlerContinuation.Continue, order, customer);
    }

    // The main method becomes the "happy path", which also helps simplify it
    public static IEnumerable<object> Handle(ShipOrder command, Order order, Customer customer)
    {
        // use the command data, plus the related Order & Customer data to
        // "decide" what action to take next
        yield return new MailOvernight(order.Id);
    }
}

But of course, with the example above, you could also write that with Wolverine’s declarative persistence like this:

public static class ShipOrderHandler
{
    // The main method becomes the "happy path", which also helps simplify it
    public static IEnumerable<object> Handle(
        ShipOrder command,

        // This is loaded by the OrderId on the ShipOrder command
        [Entity(Required = true)]
        Order order,

        // This is loaded by the CustomerId value on the ShipOrder command
        [Entity(Required = true)]
        Customer customer)
    {
        // use the command data, plus the related Order & Customer data to
        // "decide" what action to take next
        yield return new MailOvernight(order.Id);
    }
}

In the code above, Wolverine would stop the processing if either the Order or Customer entity referenced by the command message is missing. Similarly, if this code were in an HTTP endpoint instead, Wolverine would emit a ProblemDetails with a 400 status code and a message stating the data that is missing.

If you were using the code above with the integration with Marten or Polecat, Wolverine can even emit code that uses Marten or Polecat’s batch querying functionality to make your system more efficient by eliminating database round trips.

Likewise in the HTTP space, you could also return a ProblemDetails object directly from a Validate() method like:

public class ProblemDetailsUsageEndpoint
{
    public ProblemDetails Validate(NumberMessage message)
    {
        if (message.Number > 5)
        {
            return new ProblemDetails
            {
                Detail = "Number is bigger than 5",
                Status = 400
            };
        }

        // All good — continue!
        return WolverineContinue.NoProblems;
    }

    [WolverinePost("/problems")]
    public static string Post(NumberMessage message) => "Ok";
}

Even More Lightweight Validation!

When reviewing client code that uses the HandlerContinuation or ProblemDetails syntax, I definitely noticed the code can become verbose and noisy, especially compared to just embedding throw new InvalidOperationException("something is not right here"); code directly in the main methods — which isn’t something I’d like to see people tempted to do.

Instead, Wolverine 5.18 added a more lightweight approach that allows you to just return a collection of strings from a Before()/Validate() method:

    public static IEnumerable<string> Validate(SimpleValidateEnumerableMessage message)
    {
        if (message.Number > 10)
        {
            yield return "Number must be 10 or less";
        }
    }

    // or

    public static string[] Validate(SimpleValidateStringArrayMessage message)
    {
        if (message.Number > 10)
        {
            return ["Number must be 10 or less"];
        }

        return [];
    }

At runtime, Wolverine will stop the message handler if any validation messages are returned, or emit a ProblemDetails response in HTTP endpoints.

Summary

Hopefully, Wolverine has you covered with options no matter your scenario. A few practical takeaways:

  • Reach for Validate() / ValidateAsync() first whenever IoC services or database queries are involved or the validation logic is just specific to your message handler or HTTP endpoint.
  • Use Data Annotations middleware when your model types are already decorated with attributes and you want zero validator classes.
  • Use Fluent Validation middleware when you want reusable, composable validators shared across multiple handlers or endpoints.

All three strategies generate efficient, ahead-of-time compiled middleware via Wolverine’s code generation engine, keeping the runtime overhead minimal regardless of which path you choose.




Calculate Rolling Average with SQL DATE_BUCKET Function


Understand how to use the DATE_BUCKET SQL function to group dates and calculate averages in T-SQL for your sales data analysis.

The post Calculate Rolling Average with SQL DATE_BUCKET Function appeared first on MSSQLTips.com.
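As a rough sketch of the technique the post covers — assuming a hypothetical dbo.Sales(SaleDate, Amount) table; DATE_BUCKET requires SQL Server 2022 or Azure SQL:

```sql
-- Bucket sales into days, then average each day's figure over the
-- trailing 7 buckets to get a rolling average
WITH Daily AS (
    SELECT DATE_BUCKET(DAY, 1, SaleDate) AS SaleDay,
           AVG(Amount) AS DailyAvg
    FROM dbo.Sales
    GROUP BY DATE_BUCKET(DAY, 1, SaleDate)
)
SELECT SaleDay,
       AVG(DailyAvg) OVER (ORDER BY SaleDay
                           ROWS BETWEEN 6 PRECEDING AND CURRENT ROW) AS Rolling7DayAvg
FROM Daily
ORDER BY SaleDay;
```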
