Content Developer II at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Microsoft unveils Copilot Lab to help users get the most out of its AI assistant

Find videos, articles, sample prompts, and Copilot tips and tricks in this handy resource hub.

#451: Djangonauts, Ready for Blast-Off

Are you interested in contributing to Django? Then there is an amazing mentorship program that helps Python and Django enthusiasts become contributors, and potentially core developers, of Django. It's called Djangonauts, and their slogan is "where contributors launch." On this episode, we have Sarah Boyce from the Django team, and former Djangonaut and now Djangonaut mentor, Tushar Gupta. Not only is this excellent for the Django community; many other open source communities would do well to keep an eye on how this creative project is working.

Episode sponsors

Neo4j
Posit
Talk Python Courses

Links from the show

Sarah on Mastodon: @sarahboyce@mastodon.social
Sarah on LinkedIn: linkedin.com
Tushar on Twitter: @tushar5526
Djangonaut Space on Mastodon: @djangonaut@indieweb.social
Djangonaut Space on Twitter: @djangonautspace
Djangonaut Space on LinkedIn: linkedin.com

Website: djangonaut.space
Djangonaut Space Launch Video: youtube.com
Sessions: djangonaut.space
Djangonaut Space Interest Form: google.com/forms
Program: github.com
Watch this episode on YouTube: youtube.com
Episode transcripts: talkpython.fm

--- Stay in touch with us ---
Subscribe to us on YouTube: youtube.com
Follow Talk Python on Mastodon: talkpython
Follow Michael on Mastodon: mkennedy




Download audio: https://talkpython.fm/episodes/download/451/djangonauts-ready-for-blast-off.mp3

Is Efficiency A Good Thing? Part II: All the Things That Can Go Wrong


In part I of this blog, I discussed what efficiency even is, and explored the question of whether we’re any good at it. Although in some ways we’re getting more efficient, many of our organisations and software systems are still shockingly inefficient.

[Image: a pile of garbage]

I get intensely annoyed by waste. All of us should take it as a personal mission to eliminate waste. As techies, we have a brilliant opportunity to do this by streamlining processes and optimising software. We should also go further; sometimes, instead of being streamlined, a process should just be eliminated. The management consultant Peter Drucker once said “there’s nothing quite so useless as doing with great efficiency that which should not be done at all.”

My team used to have a regular meeting in which we set the priority of incoming defects. For every defect, we would discuss the potential impact, and then set the priority to “medium”. Every. Single. Time. (We had another field, Severity, which had more meaningful variation.) Eventually, someone suggested that perhaps we could save some effort by writing a shell script which automated setting the priority to “medium”.

[Image: a group of people around a table]

This was a good idea, but there was a better one. If the field had so little value that we could manage it with a shell script, why did we even have the field? Why not just remove it from the tooling, and eliminate some clutter in our UI?

I sometimes have the same thought about some of the code and verbiage generated by gen AI tools. If the code, or comment, or words, are so predictable that ChatGPT can write them, do we really want that code? AI is good at writing boilerplate, but the best thing to do with boilerplate is eliminate it. A GitHub study found that Copilot users accepted 30% of its suggestions, and Copilot produced 40% of the code. These figures seem impressive on the surface, but the more you unpack them, the less good they are. First of all, a 30% success rate means that 70% of the time, Copilot suggested the wrong thing. But what about that 40% of the code that was written by the AI? I suspect such a codebase will have a low information density – lots of boilerplate, lots of code comments 'documenting' parameters whose purpose is self-evident from the name, and so on.

[Image: a cat being sick]

I’ve certainly seen this kind of bloat in my own experiments with ChatGPT. When I asked ChatGPT to write me a Quarkus PanacheEntity, I was initially delighted with how much code it produced. “Wow,” I thought, “it would have taken me ages to write that much code!” But when I looked closer at the code, I realised I wouldn’t have written that much code, because the code didn’t need to be there.

70% of what the AI produced was waste. It used a bad, outdated design, where it explicitly filled in getters and setters that would normally be generated by the Panache framework. One of the design features of Quarkus is how it eliminates boilerplate, but ChatGPT had carefully put all the boilerplate right back in.

This kind of pointless code may seem unnecessarily flabby, but otherwise harmless. But superfluous code and comments are noise. They interfere with our ability to understand code and distract us from the important parts. They’re also a maintenance liability. For example, generic, value-free comments can easily go out of sync with the code. When that happens, they’re no longer value-free, they’re value-negative.

Our industry learned, long ago, that measuring developers by the lines of code they write is a bad idea. We now need to learn not to measure our AIs by that criterion.

Measuring productivity

All organisations want to be data-driven. What does productivity data look like? How do you measure productivity, if it’s not by lines of code? Some people joke that you should measure junior developers by how much code they write, and senior developers by how much code they delete. This is a nice way of thinking about the difference in roles, but it’s not a proper productivity metric.

[Image: a set of calipers]

You get what you measure, so make sure what you measure is the thing you really care about. This is harder than it sounds, because often the thing we really care about is hard to measure. For example, an important role of senior developers is to produce other senior developers, but how do you measure the subtle chains of influence involved in that process?

Often, we resort to proxy measurements. This can be dangerous, if we’re unwise in our choice of proxies. I saw a comically awful social media post recently, in which a VC boasted that his founders were measuring their sleep using a Whoop band. The goal wasn’t for the founders to sleep more so they could make good decisions; it was for the founders to sleep less so they could demonstrate … commitment, or drive, or something equally nebulous. The founders were averaging 5.5 hours of sleep a night, and this was judged to be a great result. This is not a great result. The effects of sleep deprivation are similar to the effects of alcohol; being tired makes people careless and stupid and causes all sorts of bad decisions and workplace-related accidents.

Being sleep-deprived is an illusion of efficiency, not real efficiency. Using AI to generate reams of value-free code is an illusion of efficiency, not real efficiency.

Why inefficiency is efficient

Sometimes, real efficiency looks like inefficiency. This is complicated stuff! No wonder we get it wrong.

There’s an enormous amount of research that shows fun in the workplace is good for business. If people are happy at work they work harder, take less sick leave, and are more productive. In 2014, the first DORA report established that job satisfaction is the number one predictor of performance against organizational goals.

At an individual level, the Harvard Business Review found your brain in a positive state is 31% more productive than your brain when it’s neutral or stressed. How can such a positive state of mind be achieved? Well, cat videos are part of the solution. A study from the University of Warwick found that people performed 12% better on a test if they’d just watched a comedy video. Most managers would be delighted if you told them you knew how to achieve a 12% performance improvement … at least, until you mention it involves comedy videos.

[Image: a cat video]

Not only does having fun at work improve productivity; so does staring into space. Doing nothing creates the ideal conditions for creativity and problem-solving. For our industry (lucky us!), creativity and problem-solving are key elements of our productivity. That means doing nothing is a key part of our job.

What underpins the productivity of idleness? The default mode network is a pattern of brain activity which kicks into life when the rest of the brain goes into an idle state. For example, taking a shower or going for a run can trigger the default mode network. The default mode network is associated with mental time travelling, creativity and problem-solving, so triggering it is a good thing.

It’s not only fluffy-wuffy psychology which says idle time is important for productivity. Un-fluffy, un-wuffy mathematics confirms it. Queueing theory is the science of how and when work gets done.

[Image: a queue]

It could be a computer doing the work, or a person; either way, the mathematics are the same. A getting-stuff-done process is modelled as

  • an arrival process (new work coming into the system)
  • a queue (work requests waiting to get sorted out)
  • servers (people, or threads, or machines, or whatever is doing the work)
  • completed work (what gets spat out at the end of the process)

[Diagram: an arrival process feeding a queue, servers, and completed work]

The arrival process is usually assumed to be a Poisson distribution; that is, random, but distributed around some average arrival rate. If server capacity is too low, a queue builds up, and wait times are high. If server capacity is high, the queue will be mostly empty, and some servers will be idle. Is this an efficient situation, or is it inefficient? It depends what you’re measuring. Requests are handled quickly (efficient!), but there’s a lot of wasted capacity (inefficient!).

What makes this trade-off particularly interesting is that it’s asymmetric. Because requests come in somewhat randomly, there will be times when requests bunch up a bit, and queues build up. But what happens when requests are unusually sparse? The queue can’t build down below zero.

Because a queue length can’t be negative, busy times hurt the system more than quiet times help it. You can see this effect on a plot of lead time as a function of utilisation (how much of the time servers are busy). With a Poisson distribution of arrival times, going from 80% utilisation to 90% utilisation doubles wait times. If the servers are busy 100% of the time, lead times are infinite. There has to be some slack in the system, or it collapses.

[Figure: an exponential curve showing lead times (purple line) and resource cost (green line) as a function of utilisation. Source: http://brodzinski.com/2015/01/slack-time-value.html]
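
To put rough numbers on that claim, here is a small worked example – a sketch using the textbook M/M/1 queueing model, which the article itself doesn’t spell out. In that model, relative wait time grows in proportion to ρ/(1−ρ), where ρ is utilisation:

// Relative wait time in an M/M/1 queue grows as rho / (1 - rho),
// where rho is utilisation (the fraction of time the server is busy).
static double RelativeWait(double utilisation) =>
    utilisation / (1 - utilisation);

Console.WriteLine(RelativeWait(0.80)); // 4.0
Console.WriteLine(RelativeWait(0.90)); // 9.0  -- more than double the 80% figure
Console.WriteLine(RelativeWait(0.99)); // 99.0 -- lead times explode near saturation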

This is the reason train schedules have some slack in them. Trains usually travel a bit below their maximum speed, or pause for longer than strictly necessary at each station. Otherwise, any minor delay could perturb the system unrecoverably.

[Image: a train and a train schedule board]

On the other hand, too much slack is unaffordable. The green line in the plot above shows the cost of the idle capacity; at low utilisation, it’s big. Provisioning a dedicated pool of hundreds of build machines ensures build wait times are short, but it’s ruinously expensive to have so much idle capacity, and no business would do it. I get my best ideas in the shower, but it wouldn’t work if I spent seven hours a day in the shower, and only one hour at my desk. The management skill is to balance the competing inefficiencies, and come up with something that just about works.


Lightweight In-Memory Message Bus Using .NET Channels


Thank you to our sponsors who keep this newsletter free to the reader:

Iron Software's Suite Expansion: Unveiling 9 Developer Libraries. Iron Software amplifies its toolkit, introducing a suite encompassing PDF, OCR (see case study), barcode, spreadsheet, and more. Explore the IronSuite.

ABP Framework is a platform for building enterprise-grade ASP.NET Core applications. It comes with production-ready components, modular architecture, and Domain-Driven Design. Learn more here.

Suppose you're building a modular monolith, a type of software architecture where different components are organized into loosely coupled modules. Or you might need to process data asynchronously. You'll need a tool or service that allows you to implement this.

Messaging plays a crucial role in modern software architecture, enabling communication and coordination between loosely coupled components.

An in-memory message bus is particularly useful when high performance and low latency are critical requirements.

In today's issue, we will:

  • Create the required messaging abstractions
  • Build an in-memory message bus using channels
  • Implement an integration event processor background job
  • Demonstrate how to publish and consume messages asynchronously

Let's dive in.

When To Use an In-Memory Message Bus

I have to preface this by saying that an in-memory message bus is far from a silver bullet. There are many caveats to using it, as you will soon learn.

But first, let's start with the pros of using an in-memory message bus:

  • Because it works in memory, you have a very low-latency messaging system
  • You can implement asynchronous (non-blocking) communication between components

However, there are a few drawbacks to this approach:

  • Potential for losing messages if the application process goes down
  • It only works inside of a single process, so it's not useful in distributed systems

A practical use case for an in-memory message bus is when building a modular monolith. You can implement communication between modules using integration events. When you need to extract some modules into a separate service, you can replace the in-memory bus with a distributed one.

Defining The Messaging Abstractions

We will need a few abstractions to build our simple messaging system. From the client's perspective, we really only need two things. One abstraction is to publish messages, and another is to define a message handler.

The IEventBus interface exposes the PublishAsync method. This is what we will use to publish messages. There's also a generic constraint defined that only allows passing in an IIntegrationEvent instance.

public interface IEventBus
{
    Task PublishAsync<T>(
        T integrationEvent,
        CancellationToken cancellationToken = default)
        where T : class, IIntegrationEvent;
}

I want to be practical with the IIntegrationEvent abstraction, so I'll use MediatR for the pub-sub support. The IIntegrationEvent interface will inherit from INotification. This allows us to easily define IIntegrationEvent handlers using INotificationHandler<T>. Also, the IIntegrationEvent has an identifier, so we can track its execution.

The abstract IntegrationEvent serves as a base class for concrete implementations.

using MediatR;

public interface IIntegrationEvent : INotification
{
    Guid Id { get; init; }
}

public abstract record IntegrationEvent(Guid Id) : IIntegrationEvent;

Simple In-Memory Queue Using Channels

The System.Threading.Channels namespace provides data structures for asynchronously passing messages between producers and consumers. Channels implement the producer/consumer pattern. Producers asynchronously produce data, and consumers asynchronously consume that data. It's an essential pattern for building loosely coupled systems.

One of the primary motivations behind the adoption of .NET Channels is their exceptional performance characteristics. Unlike traditional message queues, Channels operate entirely in memory. This has the disadvantage of the potential for message loss if the application crashes.

The InMemoryMessageQueue creates an unbounded channel using the Channel.CreateUnbounded method, which means the channel can hold any number of messages. It also exposes a ChannelReader and a ChannelWriter, which allow producers to write messages and consumers to read them.

using System.Threading.Channels;

internal sealed class InMemoryMessageQueue
{
    private readonly Channel<IIntegrationEvent> _channel =
        Channel.CreateUnbounded<IIntegrationEvent>();

    public ChannelReader<IIntegrationEvent> Reader => _channel.Reader;

    public ChannelWriter<IIntegrationEvent> Writer => _channel.Writer;
}

You also need to register the InMemoryMessageQueue as a singleton with dependency injection:

builder.Services.AddSingleton<InMemoryMessageQueue>();

Implementing The Event Bus

The IEventBus implementation is now straightforward with the use of channels. The EventBus class uses the InMemoryMessageQueue to access the ChannelWriter and write an event to the channel.

internal sealed class EventBus(InMemoryMessageQueue queue) : IEventBus
{
    public async Task PublishAsync<T>(
        T integrationEvent,
        CancellationToken cancellationToken = default)
        where T : class, IIntegrationEvent
    {
        await queue.Writer.WriteAsync(integrationEvent, cancellationToken);
    }
}

We will register the EventBus as a singleton service with dependency injection because it's stateless:

builder.Services.AddSingleton<IEventBus, EventBus>();

Consuming Integration Events

With the EventBus implementing the producer, we need a way to consume the published IIntegrationEvent. We can implement a simple background service using the built-in IHostedService abstraction.

The IntegrationEventProcessorJob depends on the InMemoryMessageQueue, but this time for reading (consuming) messages. We'll use the ChannelReader.ReadAllAsync method to get back an IAsyncEnumerable. This allows us to consume all the messages in the Channel asynchronously.

The IPublisher from MediatR helps us connect the IIntegrationEvent with the respective handlers. It's important to resolve it from a custom scope if you want to inject scoped services into the event handlers.

internal sealed class IntegrationEventProcessorJob(
    InMemoryMessageQueue queue,
    IServiceScopeFactory serviceScopeFactory,
    ILogger<IntegrationEventProcessorJob> logger)
    : BackgroundService
{
    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        await foreach (IIntegrationEvent integrationEvent in
            queue.Reader.ReadAllAsync(stoppingToken))
        {
            try
            {
                using IServiceScope scope = serviceScopeFactory.CreateScope();

                IPublisher publisher = scope.ServiceProvider
                    .GetRequiredService<IPublisher>();

                await publisher.Publish(integrationEvent, stoppingToken);
            }
            catch (Exception ex)
            {
                logger.LogError(
                    ex,
                    "Something went wrong! {IntegrationEventId}",
                    integrationEvent.Id);
            }
        }
    }
}

Don't forget to register the hosted service:

builder.Services.AddHostedService<IntegrationEventProcessorJob>();

Using The In-Memory Message Bus

With all of the necessary abstractions in place, we can finally use the in-memory message bus.

The IEventBus service will write the message to the Channel and immediately return. This allows you to publish messages in a non-blocking way, which can improve performance.

internal sealed class RegisterUserCommandHandler(
    IUserRepository userRepository,
    IEventBus eventBus)
    : ICommandHandler<RegisterUserCommand, User>
{
    public async Task<User> Handle(
        RegisterUserCommand command,
        CancellationToken cancellationToken)
    {
        // First, register the user.
        User user = CreateFromCommand(command);

        userRepository.Insert(user);

        // Now we can publish the event.
        await eventBus.PublishAsync(
            new UserRegisteredIntegrationEvent(user.Id),
            cancellationToken);

        return user;
    }
}

This solves the producer side, but we also need to create a consumer for the UserRegisteredIntegrationEvent message. This part is greatly simplified because I'm using MediatR in this implementation.
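
The event record itself isn't shown here. Given the IntegrationEvent base record from earlier, and the single-argument construction new UserRegisteredIntegrationEvent(user.Id) used above, one plausible definition is the following sketch (generating a fresh Guid for the event's Id is an assumption on my part):

public sealed record UserRegisteredIntegrationEvent(Guid UserId)
    : IntegrationEvent(Guid.NewGuid());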

We need to define an INotificationHandler implementation handling the integration event UserRegisteredIntegrationEvent. This will be the UserRegisteredIntegrationEventHandler.

When the background job reads the UserRegisteredIntegrationEvent from the Channel, it will publish the message and execute the handler.

internal sealed class UserRegisteredIntegrationEventHandler
    : INotificationHandler<UserRegisteredIntegrationEvent>
{
    public async Task Handle(
        UserRegisteredIntegrationEvent integrationEvent,
        CancellationToken cancellationToken)
    {
        // Asynchronously handle the event.
    }
}

Improvement Points

While our basic in-memory message bus is functional, there are several areas we can improve:

  • Resilience - We can introduce retries when we run into exceptions, which will improve the reliability of the message bus (see the sketch after this list).
  • Idempotency - Ask yourself if you want to handle the same message twice. The idempotent consumer pattern elegantly solves this problem.
  • Dead Letter Queue - Sometimes, we won't be able to handle a message correctly. It's a good idea to introduce a persistent storage for these messages. This is called a Dead Letter Queue, and it allows for troubleshooting at a later time.
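
As a minimal illustration of the resilience point, the publish step inside the background job could be wrapped in a small retry helper that lives in the IntegrationEventProcessorJob class. This is a sketch rather than part of the original design: the attempt count and backoff values are arbitrary, and a dedicated resilience library would be a more robust choice.

// Hypothetical helper: retry the MediatR publish a few times with a short,
// growing delay before giving up. The numbers are illustrative only.
private static async Task PublishWithRetryAsync(
    IPublisher publisher,
    IIntegrationEvent integrationEvent,
    CancellationToken cancellationToken,
    int maxAttempts = 3)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            await publisher.Publish(integrationEvent, cancellationToken);
            return;
        }
        catch (Exception) when (attempt < maxAttempts)
        {
            // Back off briefly before the next attempt.
            await Task.Delay(TimeSpan.FromSeconds(attempt), cancellationToken);
        }
    }
}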

We've covered the key aspects of building an in-memory message bus using .NET Channels. You can extend this further by implementing the improvements for a more robust solution.

Remember that this implementation only works inside of one process. Consider using a real message broker if you need a more reliable solution.

That's all for today. I'll see you next week.





Azure Static Web App–Assign roles through an Azure Function


As a follow-up to the presentation I did at CloudBrew about Azure Static Web Apps, I want to write a series of blog posts.

We ended the last post about Azure Static Web Apps talking about authorization, and how you can use role-based security by assigning either a built-in role or a custom role. I showed how you could use invitations to assign a custom role.

Today I want to show a second option to assign a custom role using an Azure Function.

We start by creating an Azure Function that will be responsible for assigning roles. Every time a user successfully authenticates with an identity provider, Static Web Apps calls the specified function with a POST request. The request body contains a JSON object with the user's information from the provider, and the function responds with the list of custom roles to assign.
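
A minimal sketch of such a roles function might look like this (using the in-process Azure Functions model; the function name, the hard-coded e-mail address, and the "admin" role are placeholder assumptions, not the code from the original post):

using System.Collections.Generic;
using System.IO;
using System.Text.Json;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class GetRoles
{
    [FunctionName("GetRoles")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequest req)
    {
        // Static Web Apps posts the authenticated user's info as JSON.
        using var reader = new StreamReader(req.Body);
        string body = await reader.ReadToEndAsync();
        using JsonDocument doc = JsonDocument.Parse(body);

        string userDetails = doc.RootElement
            .GetProperty("userDetails")
            .GetString();

        // Placeholder rule: grant "admin" to a single hard-coded user.
        var roles = new List<string>();
        if (userDetails == "admin@example.com")
        {
            roles.Add("admin");
        }

        // Static Web Apps expects a JSON response with a "roles" array.
        return new OkObjectResult(new { roles });
    }
}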

Once we have our function, we need to configure the static web app to use this function. This can be done by setting the rolesSource value of the auth section in our staticwebapp.config.json file:
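
Assuming the function above is exposed at /api/GetRoles, the relevant configuration might look roughly like this (the identity provider registration values are placeholders):

{
  "auth": {
    "rolesSource": "/api/GetRoles",
    "identityProviders": {
      "azureActiveDirectory": {
        "registration": {
          "openIdIssuer": "https://login.microsoftonline.com/<TENANT_ID>/v2.0",
          "clientIdSettingName": "AAD_CLIENT_ID",
          "clientSecretSettingName": "AAD_CLIENT_SECRET"
        }
      }
    }
  }
}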

If we now authenticate inside our application and call the /.auth/me endpoint afterwards, we should see the custom roles coming from the API:
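
The response should look roughly like this, with the custom role showing up in the userRoles array (the exact values depend on your identity provider and user):

{
  "clientPrincipal": {
    "identityProvider": "aad",
    "userId": "<USER_ID>",
    "userDetails": "admin@example.com",
    "userRoles": [ "anonymous", "authenticated", "admin" ]
  }
}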

More information

Custom authentication in Azure Static Web Apps | Microsoft Learn


Data Clustering Using a Self-Organizing Map (SOM) with C#

Dr. James McCaffrey of Microsoft Research presents a full-code, step-by-step tutorial on a technique for visualizing and clustering data.