
Autofac ComponentRegistryBuilder in ASP.NET Core – How To Register Dependencies (Part 3)


In this article, we’ll be exploring how to use Autofac ComponentRegistryBuilder in ASP.NET Core. Prior articles in this series have highlighted some challenges for getting set up to do C# plugin architectures — at least for my own standards. I’ll be walking us through how this approach can help overcome some of the challenges that have previously been highlighted.

This will be part of a series where I explore dependency resolution with Autofac inside of ASP.NET Core. I'll link the rest of the series here as the articles are published.

At the end of this series, you'll be able to more confidently explore plugin architectures inside of ASP.NET Core and Blazor -- which will be even more content for you to explore. Keep an eye on my Dometrain courses for a more guided approach to these topics later in 2023.




Where Did We Leave Off With Autofac in ASP.NET Core?

The previous two articles looked at the following scenarios:

  • Setting up using the AutofacServiceProviderFactory as the standard recommended approach
  • Skipping AutofacServiceProviderFactory and using Autofac ContainerBuilder directly

In both cases, we were able to get a web application up and running using Autofac for dependency injection. However, both of these had limitations around:

  • Accessing the WebApplication instance on the container
  • Weird nuances with minimal API support

While both options are absolutely viable — and may be great for you given your constraints — I wanted to push a bit further to see if the wrinkles could be ironed out. I want to strive towards having configuration done in separate Autofac modules and to push towards a C# plugin architecture for the majority of my application development.


Exploring A Sample ASP.NET Core Application

This one is going to be different than the previous articles — we’ve achieved plugin status. I want to show you how the code from the previous examples can now be broken out into more dedicated pieces. Most of what we’ve gone over before is the same concept, but I’ve reduced the weather route to something more contrived just to eliminate the waste.

The Entry Point Configuration

Here’s how simple our Program.cs file is now:

await new FullResolveWebApi().RunAsync(CancellationToken.None);

That’s right — one line of code. But okay, you’re probably curious where all the setup actually takes place. Let’s go a bit deeper:

using Autofac;

internal sealed class FullResolveWebApi
{
    public async Task RunAsync(CancellationToken cancellationToken)
    {
        var containerBuilder = new MyContainerBuilder();
        using var container = containerBuilder.Build();
        using var scope = container.BeginLifetimeScope();
        
        var app = scope
            .Resolve<ConfiguredWebApplication>()
            .WebApplication;
        await app.RunAsync(cancellationToken).ConfigureAwait(false);
    }
}

This looks similar to what we saw in the previous example! We’re able to get the goodness of that really lean startup configuration. But wait! What’s that custom MyContainerBuilder class?!

using Autofac;

using System.Reflection;

internal sealed class MyContainerBuilder
{
    public IContainer Build()
    {
        ContainerBuilder containerBuilder = new();

        // TODO: do some assembly scanning if needed
        var assembly = Assembly.GetExecutingAssembly();
        containerBuilder.RegisterAssemblyModules(assembly);

        var container = containerBuilder.Build();
        return container;
    }
}

This class is a new addition compared to the previous article, and it can also be extended to do assembly scanning if that’s a requirement; see the sketch below.
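
For illustration, here’s a rough sketch of what directory-based scanning could look like. The "plugins" folder name, the *.dll filter, and loading via Assembly.LoadFrom are assumptions for this example, not part of the original code:

using System;
using System.IO;
using System.Linq;
using System.Reflection;

using Autofac;

internal sealed class MyContainerBuilder
{
    public IContainer Build()
    {
        ContainerBuilder containerBuilder = new();

        // register modules from the running assembly, as before
        containerBuilder.RegisterAssemblyModules(Assembly.GetExecutingAssembly());

        // hypothetical: also scan a "plugins" folder next to the executable
        // and register any Autofac modules found in those assemblies
        var pluginDirectory = Path.Combine(AppContext.BaseDirectory, "plugins");
        if (Directory.Exists(pluginDirectory))
        {
            var pluginAssemblies = Directory
                .EnumerateFiles(pluginDirectory, "*.dll")
                .Select(Assembly.LoadFrom)
                .ToArray();
            containerBuilder.RegisterAssemblyModules(pluginAssemblies);
        }

        return containerBuilder.Build();
    }
}

So far, so good. We have one more piece though, and that’s ConfiguredWebApplication: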

internal sealed class ConfiguredWebApplication(
    WebApplication _webApplication,
    IReadOnlyList<PreApplicationConfiguredMarker> _markers)
{
    public WebApplication WebApplication => _webApplication;
}

internal sealed record PreApplicationBuildMarker();

internal sealed record PreApplicationConfiguredMarker();

These marker records might seem a bit confusing, but we’ll tie this all together in a dedicated section below.

WebApplicationBuilder Autofac Module

Now that we’ve seen how our initial ASP.NET Core application bootstrap code looks, it’s time to dig into the core dependency registration, which is going to be a union of what we saw in the previous articles AND some new behavior:

using Autofac;
using Autofac.Extensions.DependencyInjection;

internal sealed class WebApplicationBuilderModule : global::Autofac.Module
{
    protected override void Load(ContainerBuilder builder)
    {
        builder
            .Register(ctx =>
            {
                // use a distinct local name so we don't shadow the
                // ContainerBuilder parameter of Load()
                var webApplicationBuilder = WebApplication.CreateBuilder(Environment.GetCommandLineArgs());
                return webApplicationBuilder;
            })
            .SingleInstance();
        builder
            .Register(ctx =>
            {
                var config = ctx.Resolve<WebApplicationBuilder>().Configuration;
                return config;
            })
            .As<IConfiguration>()
            .SingleInstance();

        WebApplication? cachedWebApplication = null;
        builder
            .Register(ctx =>
            {
                if (cachedWebApplication is not null)
                {
                    return cachedWebApplication;
                }

                var webApplicationBuilder = ctx.Resolve<WebApplicationBuilder>();

                // resolving the marker collection forces every registration
                // that must run before the WebApplication is built to execute
                ctx.Resolve<IReadOnlyList<PreApplicationBuildMarker>>();

                webApplicationBuilder.Host.UseServiceProviderFactory(new AutofacServiceProviderFactory(containerBuilder =>
                {
                    foreach (var registration in ctx.ComponentRegistry.Registrations)
                    {
                        containerBuilder.ComponentRegistryBuilder.Register(registration);
                    }

                    containerBuilder
                        .RegisterInstance(webApplicationBuilder)
                        .SingleInstance();
                }));

                cachedWebApplication = webApplicationBuilder.Build();
                return cachedWebApplication;
            })
            .SingleInstance();
        builder
            .Register(ctx =>
            {
                // configure the HTTP pipeline; returning a marker lets other
                // components declare that this must run before they resolve
                var app = ctx.Resolve<WebApplication>();
                app.UseHttpsRedirection();
                return new PreApplicationConfiguredMarker();
            })
            .SingleInstance();
        builder
            .RegisterType<ConfiguredWebApplication>()
            .SingleInstance();
    }
}

You’ll notice in the code snippet above that we’re now mixing the AutofacServiceProviderFactory in alongside our standalone Autofac ContainerBuilder approach. This lets us re-register some of the dependency registrations from the first container onto the second Autofac ContainerBuilder:

foreach (var registration in ctx.ComponentRegistry.Registrations)
{
    containerBuilder.ComponentRegistryBuilder.Register(registration);
}

By duplicating our registrations this way, the registrations from the parent container, including the WebApplication itself, are carried over onto the dedicated ContainerBuilder backing the WebApplication. But there are two things we should note:

  • We need to cache the WebApplication instance. Later on, when the WebApplication itself resolves dependencies that depend on an instance of WebApplication, it will re-run the registration delegate *even though it’s registered as a single instance*! That’s because the duplicated registration in the second container has never technically been executed, so no singleton exists there yet. We may need to pay special attention to this sort of thing going forward to avoid expensive re-resolution of types.
  • We see another marker type: PreApplicationConfiguredMarker. What’s with these markers?!

Marker Classes for Controlling Dependency Ordering and Requirements

So far we’ve seen two marker types. These marker types are a way to force certain registration code to execute before some other registration code runs. It’s a flexible way of saying: “I don’t care which types specifically get registered or which registration code runs, but anything that needs to happen before this checkpoint must return one of these.” In other words, it lets us force code to execute before a checkpoint.

If we consider the code in the example above, we see that the ConfiguredWebApplication instance requires the full collection of PreApplicationConfiguredMarker instances. This means we can’t even create an instance of ConfiguredWebApplication until all of the prerequisite code, as indicated by our marker type, has finished executing. This essentially forces Autofac to run that code for us, because resolving the collection causes every registration that provides one of these marker instances to execute.

The two markers we see in this example code are very naive/primitive — however, this concept can be expanded to provide more robust checkpoints in your dependency registration process.
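
To see the pattern in isolation, here’s a minimal sketch. The marker, module, and class names below are made up purely for illustration:

using Autofac;

// hypothetical marker used only for this illustration
internal sealed record CheckpointMarker();

internal sealed class MustRunBeforeCheckpointModule : Module
{
    protected override void Load(ContainerBuilder builder)
    {
        builder
            .Register(ctx =>
            {
                // ... work that must happen before the checkpoint ...
                return new CheckpointMarker();
            })
            .SingleInstance();
    }
}

// anything that takes IReadOnlyList<CheckpointMarker> cannot be constructed
// until Autofac has executed every registration that returns a CheckpointMarker
internal sealed class Checkpoint(IReadOnlyList<CheckpointMarker> _markers)
{
    public int MarkerCount => _markers.Count;
}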


C# Plugin Architecture in ASP.NET Core Unlocked!

The cat’s out of the bag! We can now successfully create an Autofac module for a plugin! The code below demonstrates a C# plugin architecture in ASP.NET Core, where we’re able to add a new discoverable module that registers its own dependencies and its own API endpoints:

using Autofac;

namespace Plugins;

internal sealed class PluginModule : Module
{
    protected override void Load(ContainerBuilder builder)
    {
        // load whatever dependencies you want in your plugin
        // taking note that they will be able to have access
        // to services/dependencies in the main application
        // by default
        builder.RegisterType<DependencyA>().SingleInstance();
        builder.RegisterType<DependencyB>().SingleInstance();
        builder.RegisterType<DependencyC>().SingleInstance();

        // minimal APIs can resolve dependencies from the
        // method signature itself
        builder
            .Register(ctx =>
            {
                var app = ctx.Resolve<WebApplication>();

                app.MapGet(
                    "/hello",
                    (
                        DependencyA dependencyA
                      , DependencyB dependencyB
                      , DependencyC dependencyC
                    ) =>
                    {
                        return new
                        {
                            AAA = dependencyA.ToString(),
                            BBB = dependencyB.ToString(),
                            CCC = dependencyC.ToString(),
                        };
                    });

                // this is a marker to signal a dependency
                // before the application can be considered
                // configured
                return new PreApplicationConfiguredMarker();
            })
            .SingleInstance();
    }
}

internal sealed class DependencyA(
    WebApplicationBuilder _webApplicationBuilder)
{
    public override string ToString() => _webApplicationBuilder.ToString();
}

internal sealed class DependencyB(
    Lazy<WebApplication> _webApplication)
{
    public override string ToString() => _webApplication.Value.ToString();
}

internal sealed class DependencyC(
    IConfiguration _configuration)
{
    public override string ToString() => _configuration.ToString();
}

If you’d like a fuller explanation of what you’re seeing, I highly recommend reading the articles linked at the top of this one first, just to get an idea of why we have some dependencies set up like this. The TL;DR is that this code demonstrates that we can access some dependencies that are of interest to us when building plugin architectures.

What Benefits For C# Plugin Architecture Have Been Unlocked?

The code examples in this article are essentially marrying approaches from two of the previous articles… so hopefully we have the best of both worlds! Let’s have a look:

  • Can access the WebApplicationBuilder instance from the WebApplication’s dependency container.
  • Can access the IConfiguration instance from the WebApplication’s dependency container.
  • Can access the WebApplication instance from the WebApplication’s dependency container.
  • Can resolve dependencies registered on the Autofac ContainerBuilder directly from minimal API method signatures.
  • Can create separate Autofac modules (i.e. for plugin usage) that register minimal APIs directly onto the WebApplication instance.
  • Can get an extremely lightweight (one line!) entry point to our application. The core “skeleton” application code is fundamentally just setting up dependencies to be resolved.

All of these are boxes that I wanted to check before continuing to build plugins. With this infrastructure in place, I feel much more confident!

What Gaps Are Left For C# Plugin Architecture?

Of course, we need to look at both pros AND cons when we analyze things. Let’s dive in:

  • We have two dependency containers to worry about. In theory, the usage of Autofac ComponentRegistryBuilder in ASP.NET Core should allow us to clone registrations between the two, but more complexity can potentially arise from this as we continue.
  • We saw an interesting need to cache the WebApplication instance to avoid dependency recreation — are there other scenarios like this we haven’t hit yet?
  • One of the general plugin architecture concerns: Being able to configure EVERYTHING with plugins can make some things harder to find and structure. Do we really need these little marker types to help organize ourselves?

Wrapping Up Autofac ComponentRegistryBuilder in ASP.NET Core

Overall, I’m quite happy with how using Autofac ComponentRegistryBuilder in ASP.NET Core has allowed us to progress our dependency injection patterns. The approach I’ve highlighted in this article has made it significantly easier for me to structure plugins the way that I’d like to in a C# plugin architecture. It’s not without tradeoffs, but I feel this pattern fits my needs.

If you found this useful and you’re looking for more learning opportunities, consider subscribing to my free weekly software engineering newsletter and check out my free videos on YouTube! Meet other like-minded software engineers and join my Discord community!




    From Construction Worker to Teaching Millions of Developers with John Smilga [Podcast #122]

    On this week's episode of the podcast, I interview prolific programming teacher John Smilga. John grew up in the Soviet Union. He worked construction for 5 years before becoming a developer. Today he has taught millions of fellow devs through his man...


    Read Satya Nadella’s Microsoft memo on putting security first

    Image: Illustration of Microsoft CEO Satya Nadella (Laura Normand / The Verge)

    Microsoft is overhauling its security processes after a series of high-profile attacks in recent years. Security is now Microsoft’s “top priority,” the company outlined today in response to ongoing questions about its security practices and the US Cyber Safety Review Board’s labeling of Microsoft’s security culture as “inadequate.”

    Microsoft CEO Satya Nadella is now making it clear to every employee that security should be prioritized above all else. The Verge has obtained a memo from Nadella to Microsoft’s more than 200,000 employees, where he discusses the new security overhaul and how the company is learning from attackers to improve its security processes. Nadella also makes it explicitly clear that employees should not make...



    Microsoft will base part of senior exec comp on security, add deputy CISOs to product groups

    Charlie Bell, executive vice president of Microsoft security, speaks at the GeekWire Summit in 2022. (GeekWire Photo / Dan DeLong)

    Microsoft is changing its security practices, organizational structure, and executive compensation in an attempt to address a series of major security breaches, under growing pressure from government leaders and big customers.

    The company said Friday morning that it will base a portion of senior executive compensation on progress toward security goals, install deputy chief information security officers (CISOs) in each product group, and bring together teams from its major platforms and product teams in “engineering waves” to overhaul security.

    “We will take our learnings from security incidents, feed them back into our security standards, and operationalize these learnings as ‘paved paths’ that can enable secure design and operations at scale,” wrote Charlie Bell, the Microsoft Security executive vice president, in a blog post outlining the changes.

    Bell said the changes build on the Secure Future Initiative (SFI), introduced last fall.

    “Ultimately, Microsoft runs on trust and this trust must be earned and maintained,” he wrote. “As a global provider of software, infrastructure, and cloud services, we feel a deep responsibility to do our part to keep the world safe and secure.”

    The changes follow a critical report by the Cyber Safety Review Board (CSRB) that described Microsoft’s security culture as “inadequate” and called on the company to make security its top priority, effectively reviving the spirit of the Trustworthy Computing initiative that Microsoft co-founder Bill Gates instituted in 2002.

    The report called for security initiatives to be “overseen directly and closely” by Microsoft’s CEO and board, and said “all senior leaders should be held accountable for implementing all necessary changes with utmost urgency.”

    After the CSRB report’s release, Sen. Ron Wyden of Oregon introduced legislation designed in part to reduce the U.S. government’s reliance on Microsoft software, citing the company’s “shambolic cybersecurity practices.”

    Bell wrote that Microsoft is “integrating the recent recommendations from the CSRB” as part of the changes announced Friday, in addition to lessons learned from high-profile cyberattacks.

    The compensation changes announced Friday will apply to Microsoft’s senior leadership team, the top executives who report to CEO Satya Nadella. The company did not say how much of their compensation will be based on security.

    Nadella hinted at these changes last week on the company’s quarterly earnings call when he said the company would be “putting security above all else — before all other features and investments.”

    In an internal memo Friday morning, obtained by GeekWire, Nadella delivered a mandate to employees, expanding on the themes outlined in Bell’s public blog post.

    “If you’re faced with the tradeoff between security and another priority, your answer is clear: Do security,” the Microsoft CEO told employees. “In some cases, this will mean prioritizing security above other things we do, such as releasing new features or providing ongoing support for legacy systems.”

    Bell wrote in his post that the company’s new “security governance framework” will be overseen by Microsoft’s Chief Information Security Office, which is led by Igor Tsyganskiy as Microsoft’s CISO following an executive shakeup in December.

    The deputy CISOs in product teams will report directly to Tsyganskiy, according to the company. This change in organizational and reporting structure was first reported by Bloomberg News on Thursday.

    “This framework introduces a partnership between engineering teams and newly formed Deputy CISOs, collectively responsible for overseeing SFI, managing risks and reporting progress directly to the Senior Leadership Team,” Bell wrote. “Progress will be reviewed weekly with this executive forum and quarterly with our Board of Directors.”

    Microsoft revealed in January of this year that a Russian state-sponsored actor known as Nobelium or Midnight Blizzard accessed its internal systems and executive email accounts. More recently, the company said the same attackers were able to access some of its source code repositories and internal systems.

    In another high-profile incident, in May and June 2023, the Chinese hacking group known as Storm-0558 is believed to have compromised the Microsoft Exchange Online mailboxes of more than 500 people and 22 organizations worldwide, including senior U.S. government officials.


    Microsoft overhaul treats security as ‘top priority’ after a series of failures

    Image: Vector collage of the Microsoft logo among arrows and lines going up and down (The Verge)

    Microsoft is making security its number one priority for every employee, following years of security issues and mounting criticisms. After a scathing report from the US Cyber Safety Review Board recently concluded that “Microsoft’s security culture was inadequate and requires an overhaul,” it’s doing just that by outlining a set of security principles and goals that are tied to compensation packages for Microsoft’s senior leadership team.

    Last November, Microsoft announced a Secure Future Initiative (SFI) in response to mounting pressure on the company to respond to attacks that allowed Chinese hackers to breach US government email accounts. Just days after announcing this initiative, Russian hackers managed to breach Microsoft’s...



    Clean Data, Trusted Model: Ensure Good Data Hygiene for Your LLMs


    Large language models (LLMs) have emerged as powerful engines of creativity, transforming simple prompts into a world of possibilities.

    But beneath their potential lies a critical challenge. The data that flows into LLMs touches countless enterprise systems, and this interconnectedness poses a growing data security threat to organizations.

    LLMs are nascent and not always completely understood. Depending on the model, their inner workings may be a black box, even to their creators — meaning that we can’t fully understand what will happen to the data we put in, and how or where it may come out.

    To stave off risks, organizations will need to build infrastructure and processes that perform rigorous data sanitization of both inputs and outputs, and that can monitor and canvass every LLM on an ongoing basis.

    Model Inventory: Take Stock of What You’re Deploying

    As the old saying goes, “You can’t secure what you can’t see.” Maintaining a comprehensive inventory of models throughout both production and development phases is critical to achieving transparency, accountability and operational efficiency.

    In production, tracking each model is crucial for monitoring performance, diagnosing issues and executing timely updates. During development, inventory management helps track iterations, facilitating the decision-making process for model promotion.

    To be clear, this is not a “record-keeping task” — a robust model inventory is absolutely essential in building reliability and trust in AI-driven systems.
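
    As a rough illustration, here’s a small sketch of the kind of record such an inventory might track. The fields are assumptions for illustration, not a prescribed schema:

    // a minimal, illustrative model inventory entry
    public sealed record ModelInventoryEntry(
        string ModelName,       // deployment or registry name
        string Version,         // which iteration of the model this is
        string Stage,           // e.g. "development" or "production"
        string Owner,           // team accountable for the model
        DateOnly LastReviewed); // when performance and risk were last assessed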

    Data Mapping: Know What Data You’re Feeding Models

    Data mapping is a critical component of responsible data management. It involves a meticulous process to comprehend the origin, nature and volume of data that feeds into these models.

    It’s imperative to know where the data originates and whether it contains sensitive information like personally identifiable information (PII) or protected health information (PHI), especially given the sheer quantity of data being processed.

    Understanding the precise data flow is a must; this includes tracking which data goes into which models, when this data is utilized and for what specific purposes. This level of insight not only enhances data governance and compliance but also aids in risk mitigation and the preservation of data privacy. It ensures that machine learning operations remain transparent, accountable and aligned with ethical standards while optimizing the utilization of data resources for meaningful insights and model performance improvements.

    Data mapping bears striking resemblance to compliance efforts often undertaken for regulations like the General Data Protection Regulation (GDPR). Just as GDPR mandates a thorough understanding of data flows, the types of data being processed and their purpose, the data mapping exercise extends these principles to the realm of machine learning. By applying similar practices to both regulatory compliance and model data management, organizations can ensure that their data practices adhere to the highest standards of transparency, privacy and accountability across all facets of operations, whether it’s meeting legal obligations or optimizing the performance of AI models.
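
    To make that concrete, here’s a small sketch of what a per-model data-flow mapping record might capture. The fields are illustrative assumptions, in the spirit of a GDPR-style record of processing:

    // an illustrative data-flow mapping record for data feeding a model
    public sealed record DataFlowMapping(
        string SourceSystem,         // where the data originates
        string DataDescription,      // the nature of the data
        bool ContainsPii,            // personally identifiable information?
        bool ContainsPhi,            // protected health information?
        long ApproximateRecordCount, // volume flowing into the model
        string TargetModel,          // which model consumes this data
        string Purpose);             // the specific purpose it is used for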

    Data Input Sanitation: Weed out Risky Data

    “Garbage in, garbage out” has never rung truer than with LLMs. Just because you have vast troves of data to train a model doesn’t mean you should do so. Whatever data you use should have a reasonable and defined purpose.

    The fact is, some data is just too risky to input into a model; it can carry significant risks, such as privacy violations or biases.

    It is crucial to establish a robust data sanitization process to filter out such problematic data points and ensure the integrity and fairness of the model’s predictions. In this era of data-driven decision-making, the quality and suitability of the inputs are just as vital as the sophistication of the models themselves.
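
    As a rough sketch of what a first-pass input filter might look like, here’s a minimal example. The patterns are illustrative assumptions and nowhere near a complete PII/PHI detector:

    using System.Text.RegularExpressions;

    // a minimal, illustrative input sanitizer; real PII/PHI detection would
    // use far more robust tooling than a couple of regular expressions
    public static class PromptSanitizer
    {
        private static readonly Regex EmailPattern =
            new(@"[\w.+-]+@[\w-]+\.[\w.]+", RegexOptions.Compiled);
        private static readonly Regex SsnPattern =
            new(@"\b\d{3}-\d{2}-\d{4}\b", RegexOptions.Compiled);

        public static string Redact(string input)
        {
            // redact matches rather than rejecting the prompt outright, so the
            // prompt stays usable while sensitive values never reach the model
            var redacted = EmailPattern.Replace(input, "[REDACTED EMAIL]");
            redacted = SsnPattern.Replace(redacted, "[REDACTED SSN]");
            return redacted;
        }
    }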

    One method rising in popularity is adversarial testing on models. Just as selecting clean and purposeful data is vital for model training, assessing the model’s performance and robustness is equally crucial in the development and deployment stages. These evaluations help detect potential biases, vulnerabilities or unintended consequences that may arise from the model’s predictions.

    There’s already a growing market of startups specializing in providing services for precisely this purpose. These companies offer invaluable expertise and tools to rigorously test and challenge models, ensuring they meet ethical, regulatory and performance standards.

    Data Output Sanitation: Ensure Trust and Coherence

    Data sanitation isn’t limited to just the inputs in the context of large language models; it extends to what’s generated as well. Given the inherently unpredictable nature of LLMs, the output data requires careful scrutiny in order to establish effective guard rails.

    The outputs should not only be relevant but also coherent and sensible within the context of their intended use. Failing to ensure this coherence can swiftly erode trust in the system, as nonsensical or inappropriate responses can have detrimental consequences.

    As organizations continue to embrace LLMs, they will need to pay close attention to the sanitation and validation of model outputs in order to maintain the reliability and credibility of any AI-driven systems.

    Including a diverse set of stakeholders and experts when creating and maintaining the rules for outputs, and when building tools to monitor those outputs, is a key step toward successfully safeguarding models.
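
    Here’s a minimal sketch of what one such output guard rail might look like. The checks, terms, and limits are illustrative assumptions; production guard rails would be far more thorough:

    using System;

    // an illustrative output guard rail: cheap structural checks applied to a
    // model response before it is shown to a user or passed downstream
    public static class OutputGuardRail
    {
        // assumption: terms that should never appear in a customer-facing answer
        private static readonly string[] BlockedTerms = { "internal use only", "password" };

        public static bool IsAcceptable(string modelOutput, int maxLength = 2000)
        {
            if (string.IsNullOrWhiteSpace(modelOutput))
            {
                return false; // empty or whitespace responses are not useful
            }

            if (modelOutput.Length > maxLength)
            {
                return false; // suspiciously long output for this use case
            }

            foreach (var term in BlockedTerms)
            {
                if (modelOutput.Contains(term, StringComparison.OrdinalIgnoreCase))
                {
                    return false; // contains content we never want to surface
                }
            }

            return true;
        }
    }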

    Putting Data Hygiene into Action

    Using LLMs in a business context is no longer an option; it’s essential to stay ahead of the competition. This means organizations will have to establish measures to ensure model safety and data privacy. Data sanitization and meticulous model monitoring are a good start, but the landscape of LLMs evolves quickly. Staying abreast of the latest and greatest, as well as regulations, will be key to making continuous improvements to your processes.

