Content Developer II at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.
122397 stories
·
29 followers

PPP 413 | Meeting Goblins and How to Deal With Them, with Rich Maltzman and Jim Stewart

1 Share

Summary

In this episode, Andy interviews Rich Maltzman and Jim Stewart about their book Great Meetings Build Great Teams: A Guide for Project Leaders and Agilists.

They discuss the common reasons why people dislike meetings, such as lack of purpose and poor facilitation. They introduce the concept of 'meeting goblins,' which are negative personalities that emerge during meetings, and provide strategies for dealing with them.

The conversation also covers the challenges and best practices of virtual meetings, as well as the benefits and potential pitfalls of agile ceremonies like daily standups. Throughout, the focus is on the importance of effective meetings in building great teams. Rich and Jim share their experiences and strategies for running successful meetings, including setting ground rules, timekeeping, and using technology like AI for meeting summaries.

They also discuss the impact of cultural differences on meetings and provide tips for managing diverse teams. The conversation concludes by emphasizing the link between great meetings and great teams, highlighting the role of meetings in fostering collaboration, building relationships, and achieving project goals.

Sound Bites

  • "Meetings are a fact of life, often complained about but also often tolerated."
  • "Connection before context. Before you start right into the meeting, make sure you have a little bit of social interaction."
  • "Goblins are personalities that come out during meetings, and it's up to the meeting facilitator to recognize and address them."
  • "Great meetings aren't just about agendas and facilitation techniques; they're about showing that you care about the project and the team."
  • "Rosie the Reticent is the quiet version of Nadia the Naysayer."
  • "Decision latency is one of the biggest reasons for project failures, so it's crucial to have the right people at meetings."
  • "Understanding national, regional, and organizational cultures is important for effective meetings."

Chapters

  • 00:00 Introduction
  • 02:17 Start of Interview
  • 02:28 Why Do People Hate Meetings
  • 05:06 Meeting Goblins
  • 16:03 Virtual Meetings
  • 19:50 Connection Before Context
  • 20:53 Advantages and Warnings: Agile Standups
  • 27:29 How Culture Impacts Meetings
  • 34:42 When Too Many People Are Invited
  • 41:53 AI and Meetings
  • 47:38 The Link Between Great Meetings and Great Teams
  • 51:26 Interview Wrap Up
  • 52:00 Andy Comments After the Interview
  • 54:16 Outtakes

Learn More

You can learn more about Rich, Jim, and their book here:

If you’d like more on this subject, here are some episodes to check out:

AI for Project Managers and Leaders

With the constant stream of AI news, it's sometimes hard to grasp how these advancements can benefit us as project managers and leaders in our day-to-day work. That's why I developed our e-learning course: AI Made Simple: A Practical Guide to Using AI in Your Everyday Work.

This self-guided course is designed for project managers and leaders aiming to harness AI's potential to enhance their work, streamline their workflows, and boost their productivity.

Go to ai.i-leadonline.com to learn more and join us. The feedback from the program has been fantastic. Take this opportunity to unlock the potential of AI for your team and projects.

Thank you for joining me for this episode of The People and Projects Podcast!

Talent Triangle: Power Skills

 

The following music was used for this episode:

Music: The Fantastical Ferret by Tim Kulig
License (CC BY 4.0): https://filmmusic.io/standard-license

Music: Energetic & Drive Indie Rock
YouTube: https://www.youtube.com/watch?v=S30Oxdmi1dg
License (CC BY 4.0): https://filmmusic.io/standard-license





Download audio: https://traffic.libsyn.com/secure/peopleandprojectspodcast/413-RichAndJim.mp3?dest-id=107017
Read the whole story
alvinashcraft
2 hours ago
West Grove, PA

Making AI powered .NET apps more consistent and intelligent with Redis


Hi everyone! 

Today we’re featuring a guest author from another team at Microsoft on the Semantic Kernel blog. This post covers how to use Azure Cache for Redis, an in-memory datastore that lets you further expand the performance and scalability of applications that use Azure OpenAI. We will turn it over to Catherine Wang to dive into Making AI powered .NET apps more consistent and intelligent with Redis.

Redis is a popular in-memory datastore that can be used to solve critical challenges for building and scaling intelligent applications. In this post, you will learn how Azure Cache for Redis can be used to improve the effectiveness of applications using Azure OpenAI.

Azure Cache for Redis is unaffected by the recent Redis license updates:

“Our ongoing collaboration ensures that Azure customers can seamlessly utilize all tiers of Azure Cache for Redis. There will be no interruption to Azure Cache for Redis, Azure Cache for Redis Enterprise, and Enterprise Flash services and customers will receive timely updates and bug fixes to maintain optimal performance.” – Julia Liuson, President, Developer Division

This blog includes two sample applications:

The first is a Semantic Kernel demo chat application based on Demystifying Retrieval Augmented Generation with .NET. I added features that use Redis for saving additional knowledge and enabling memories of chat history. The full sample is at Chat App with Redis.

The second is a demo application that uses Redis Output Caching in .NET 8 with Redis OM for .NET to improve consistency and resiliency with generative AI. The full sample is at Output Cache with OpenAI.

Redis Gives OpenAI models Additional Knowledge

OpenAI models like GPT are trained on and knowledgeable about most scenarios, but there is no way for them to know your company’s internal documentation or a very recent blog post. That’s why you need Redis as a semantic memory store for that additional knowledge.

There are two basic requirements for a semantic memory store:

  1. Intelligent apps cannot directly read unstructured data like text blobs, images, videos, etc. The semantic memory store needs to support saving vector embeddings efficiently.
  2. Intelligent apps need to perform tasks like summarization, comparison, anomaly detection, etc. The semantic memory store needs to support search capabilities. This means indexing, distance algorithms, and search queries for finding relevant data.

Redis Enterprise provides the RediSearch module to meet these requirements. You can save vector embeddings in Redis with built-in FLAT and HNSW indexing algorithms, distance algorithms like COSINE, and KNN search queries.
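
As a rough intuition for the distance math behind those KNN queries, cosine similarity can be sketched in a few lines of C#. This is for illustration only and is not the RediSearch implementation:

```csharp
using System;
using System.Linq;

class CosineDemo
{
    // Cosine similarity: dot(a, b) / (|a| * |b|). RediSearch's COSINE
    // distance is 1 - similarity, so a distance near 0 means "very similar".
    static double CosineSimilarity(double[] a, double[] b)
    {
        double dot = a.Zip(b, (x, y) => x * y).Sum();
        double normA = Math.Sqrt(a.Sum(x => x * x));
        double normB = Math.Sqrt(b.Sum(x => x * x));
        return dot / (normA * normB);
    }

    static void Main()
    {
        double[] query = { 1.0, 0.0 };
        double[] close = { 0.9, 0.1 };
        double[] far   = { 0.0, 1.0 };

        // A KNN search would rank "close" above "far" for this query vector.
        Console.WriteLine(CosineSimilarity(query, close) > CosineSimilarity(query, far)); // True
    }
}
```

A real application would run this comparison over 1536-dimensional embedding vectors inside Redis rather than in application code.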

Semantic Kernel offers a connector for using Redis as a semantic memory store. The code for using Redis as a semantic memory store in Semantic Kernel might look like the following (from ChatAppRedis):

//Initialize the Redis connection
ConnectionMultiplexer connectionMultiplexer = await ConnectionMultiplexer.ConnectAsync(redisConnection);
IDatabase database = connectionMultiplexer.GetDatabase();

//Create and use Redis semantic memory store
RedisMemoryStore memoryStore = new RedisMemoryStore(database, vectorSize: 1536);
var memory = new SemanticTextMemory(
    memoryStore,
    new AzureOpenAITextEmbeddingGenerationService(aoaiEmbeddingModel, aoaiEndpoint, aoaiApiKey)
    );

//Code for saving text strings into Redis Semantic Store
await memory.SaveInformationAsync(collectionName, $"{your_text_blob}", $"{an_arbitrary_key}");

Redis Persists Chat History to Enable AI Memories

OpenAI models like GPT do not remember chat history. Semantic Kernel provides Chat History for answering questions based on previous context. For example, you can ask the chat application to tell a joke. Then ask why the joke is funny. The answer to the second question will be related to the first, which is what Chat History enables.
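
In Semantic Kernel, that follow-up ability comes from accumulating messages in a ChatHistory object. A minimal sketch might look like the following (the message text is illustrative, and the model invocation itself is omitted):

```csharp
using Microsoft.SemanticKernel.ChatCompletion;

// Accumulate messages so the model can answer follow-ups with prior context.
var chat = new ChatHistory("You are a helpful assistant.");
chat.AddUserMessage("Tell me a joke.");
chat.AddAssistantMessage("Why did the developer go broke? They used up all their cache.");
chat.AddUserMessage("Why is that joke funny?"); // only answerable with the messages above
```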

The Chat History object is stored in memory. Customers have asked to save it to an external store, for the following benefits:

  • Resource efficiency – Memory is a scarce resource in the application server.
  • Application resiliency – During a server failover, we want to avoid losing in-memory data and the glitches that result.

Redis is an ideal choice for saving Chat History, because:

  • Data expiration support – The application can set expiration time on Chat History to keep its memory fresh.
  • Data structure – Redis supports built-in data structures like Hash to easily query for related messages.
  • Resiliency – If a session is interrupted due to a server failover, the chat can continue.
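
The expiration behavior can be illustrated with a tiny self-contained sketch. This is not Redis itself, just the TTL semantics that Redis key expiration provides for keeping chat history fresh:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// Minimal illustration of TTL semantics: entries past their expiry are
// treated as absent, the way Redis expiration keeps chat history fresh.
class ExpiringStore
{
    private readonly Dictionary<string, (string Value, DateTime Expires)> _data = new();

    public void Set(string key, string value, TimeSpan ttl) =>
        _data[key] = (value, DateTime.UtcNow + ttl);

    public string Get(string key) =>
        _data.TryGetValue(key, out var e) && e.Expires > DateTime.UtcNow ? e.Value : null;
}

class Program
{
    static void Main()
    {
        var store = new ExpiringStore();
        store.Set("chat:alice", "Hello!", TimeSpan.FromMinutes(30));
        store.Set("chat:stale", "old message", TimeSpan.FromTicks(1));
        Thread.Sleep(1); // let the one-tick entry expire

        Console.WriteLine(store.Get("chat:alice"));              // Hello!
        Console.WriteLine(store.Get("chat:stale") ?? "expired"); // expired
    }
}
```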

Here is an example conversation. Without chat history persisted in Redis, I can’t ask questions based on previous context.

(Image: conversation without chat history)

With Chat History in Redis, I can continue the previous conversation as I start a new session.

(Image: conversation with chat history in Redis)

The code for fetching user messages from Redis to a ChatHistory object might look like the following:

RedisValue[] userMsgList = await _redisConnection.BasicRetryAsync(
    async(db) =>(await db.HashValuesAsync(_userName + ":" + userMessageSet)));

if (userMsgList.Any()) {
  foreach (var userMsg in userMsgList) {
    chat.AddUserMessage(userMsg.ToString());
  }
}

The code for saving user messages to Redis might look like the following:

chat.AddUserMessage(question);

await _redisConnection.BasicRetryAsync(
    async(_db) => _db.HashSetAsync($"{_userName}:{userMessageSet}", [
      new HashEntry(new RedisValue(Utility.GetTimestamp()), question)
    ]));

A Redis Hash is used to store the user messages and assistant messages for each user. Redis Insight provides a UI to view and manage the saved Chat History data.

(Image: saved user messages)

We can take this Chat History experience even further by converting it to vector embeddings, adding consistency and relevancy when answering similar questions. The benefits are:

  • Consistent answers to slightly different questions
  • Cost savings through fewer API calls to OpenAI

Using the Chat App with Redis as a reference, the code for saving previous chat history in a Redis semantic memory store might look like the following:

//Store user and assistant messages as vector embeddings in Redis. Only the previous session is saved.
if (_historyContent.Length > 0)
{
    await memory.SaveInformationAsync(_userName+"_chathistory", _historyContent, "lastsession");
}

The code for searching previous chat history might look like the following:

 await foreach (var result in memory.SearchAsync(_userName+"_chathistory", question, limit: 1))
        stbuilder.Append(result.Metadata.Text);

I receive consistent responses to similar questions, for example “Where is French capital city?” and “Where is the French capital city?”

(Image: semantic chat history in Redis)

My experimental code has limitations:

  • It only saves history for the last chat session
  • It does not divide the large history object into chunks based on logical grouping
  • The code is messy

That’s why we are adding official support for this experience in Semantic Kernel; see microsoft/semantic-kernel #5436. Please share your feedback on the issue to help us design a great experience.

Redis Improves Web Application Performance

.NET provides several caching abstractions to improve web application performance, and they remain applicable in intelligent applications. In addition, these caching abstractions complement semantic caching to provide performant and consistent web responses.

Web Page Output Caching

Repeated web requests with the same parameters introduce unnecessary server utilization and dependency calls. In .NET 8, we introduced Redis Output Caching to improve a web application in the following aspects:

  • Consistency – Output Caching ensures the same requests get consistent responses.
  • Performance – Output Caching avoids repeated dependency calls into datastores or APIs, which accelerates overall web response time.
  • Resource efficiency – Output Caching reduces CPU utilization to render webpages.
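
Wiring the Redis-backed output cache into a minimal .NET 8 app takes only a couple of calls. The following is a sketch; it assumes the Microsoft.AspNetCore.OutputCaching.StackExchangeRedis package is installed, and "Redis" is a placeholder connection-string name:

```csharp
var builder = WebApplication.CreateBuilder(args);

// Back the output cache with Redis instead of in-process memory.
builder.Services.AddStackExchangeRedisOutputCache(options =>
{
    options.Configuration = builder.Configuration.GetConnectionString("Redis");
});

var app = builder.Build();

app.UseOutputCache(); // must be added before endpoints that call CacheOutput()

app.MapGet("/", () => DateTime.UtcNow.ToString("O")).CacheOutput();

app.Run();
```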

Here is the sample application mentioned earlier, Output Caching with OpenAI Image Generation, which uses Redis Output Caching to improve the performance of calls into DALL-E that generate images from a prompt. It takes minimal code to use Output Caching.

The code snippet for using .NET 8 Redis output cache might look like the following:

app.MapGet("/cached/{prompt}", async (HttpContext context, string prompt, IConfiguration config) => 
    { await GenerateImage.GenerateImageAsync(context, prompt, config); 
    }).CacheOutput();

Adding Semantic Caching to Ensure Similar Prompts Receive Consistent Response

Redis OM for .NET just released its Semantic Caching feature, which supports using Azure OpenAI embeddings to generate vectors. The following code snippet shows example usage; a full code sample can be found at GenerateImageSC.cs in the OutputCacheOpenAI repo.

The code snippet for using Redis as semantic cache might look like the following:

_provider = new RedisConnectionProvider(_config["SemanticCacheAzureProvider"]);
var cache = _provider.AzureOpenAISemanticCache(
    _config["apiKey"], _config["AOAIResourceName"],
    _config["AOAIEmbeddingDeploymentName"], 1536);

var similarEntries = cache.GetSimilar(_prompt); // look up once instead of twice
if (similarEntries.Length > 0) {
  imageURL = similarEntries[0];
  await context.Response.WriteAsync(
      "<!DOCTYPE html><html><body> " +
      $"<img src=\"{imageURL}\" alt=\"AI Generated Picture {_prompt}\" width=\"460\" height=\"345\">" +
      " </body> </html>");
}

This way, I can ensure that similar prompts from various users return the same image, improving consistency and reducing API calls into DALL-E, which improves performance. The following screenshots demonstrate the same picture being reused for similar prompts.
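
The mechanics can be illustrated with a toy in-memory version. The hard-coded vectors below stand in for real embeddings from an Azure OpenAI embedding model, and the 0.95 threshold is an arbitrary choice for this sketch:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Toy semantic cache: returns a cached value whose embedding is within a
// similarity threshold of the query embedding. A real app would compute
// embeddings with an Azure OpenAI model and store them in Redis.
class ToySemanticCache
{
    private readonly List<(double[] Embedding, string Value)> _entries = new();
    private const double Threshold = 0.95;

    public void Set(double[] embedding, string value) => _entries.Add((embedding, value));

    public string GetSimilar(double[] query)
    {
        foreach (var (emb, value) in _entries)
            if (Cosine(emb, query) >= Threshold) return value;
        return null;
    }

    static double Cosine(double[] a, double[] b)
    {
        double dot = a.Zip(b, (x, y) => x * y).Sum();
        return dot / (Math.Sqrt(a.Sum(x => x * x)) * Math.Sqrt(b.Sum(x => x * x)));
    }
}

class Program
{
    static void Main()
    {
        var cache = new ToySemanticCache();
        cache.Set(new[] { 0.8, 0.6 }, "cached-image-url");

        // A slightly different but semantically close prompt embedding hits the cache.
        Console.WriteLine(cache.GetSimilar(new[] { 0.82, 0.57 }) ?? "miss");
    }
}
```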

This is the image returned from prompt “a french garden in monet style”.

(Image: result for “a french garden in monet style”)

This is the image returned from the prompt “a monet style french garden”. It is the same as above because the previous entry was semantically cached:

(Image: result for “a monet style french garden”)

This is the entry in Redis semantic cache:

(Image: the entry in the Redis semantic cache)

The Redis Semantic Cache is complementary to Redis Output Cache because:

  • Semantic Cache further reduces the API dependency calls to improve performance and cost.
  • Output Cache reduces the CPU utilization for rendering web pages.

In conclusion, Redis can be a key part of designing performant, consistent, and cost-efficient intelligent web applications.

Next Steps

The recently GA’d Enterprise E5 SKU is a cost-efficient way to experiment with the RediSearch module. Check out Azure Cache for Redis.

Try out Redis in your intelligent application today! Share your thoughts on these scenarios by commenting on the blog post – we would love to hear from you!

From the Semantic Kernel team, we want to thank Catherine for her time. We’re always interested in hearing from you. If you have feedback, questions, or want to discuss further, feel free to reach out to us and the community on the Semantic Kernel GitHub Discussion Channel! We would also love your support: if you’ve enjoyed using Semantic Kernel, give us a star on GitHub.

The post Making AI powered .NET apps more consistent and intelligent with Redis appeared first on Semantic Kernel.


Azure Communication Services May 2024 Feature Updates


The Azure Communication Services team is excited to share several new product and feature updates released in April 2024. (You can view previous blog articles here.)  

 

See this month’s updates:

  • Business-to-consumer extensibility with Microsoft Teams for Calling
  • Image Sharing in Teams meetings
  • Deep Noise Suppression Desktop
  • Updated Calling native SDKs for Android, iOS, and Windows
  • Updated Calling native UI Library for Android and iOS

 

Business-to-consumer extensibility with Microsoft Teams for Calling

 

Now in general availability, developers can take advantage of calling interoperability for Microsoft Teams users in Azure Communication Services Calling workflows.


Developers can use Call Automation APIs to bring Teams users into business-to-consumer (B2C) calling workflows and interactions, helping you deliver advanced customer service solutions. This interoperability is offered over VoIP to reduce telephony infrastructure overhead. Developers can add Teams users to Azure Communication Services calls using the user's Entra object ID (OID).

 

Use Cases

  1. Teams as an extension of agent desktop: Connect your CCaaS solution to Teams and enable your agents to handle customer calls on Teams. Having Teams as the single-pane-of-glass solution for both internal and B2C communication increases agent productivity and empowers them to deliver first-class service to customers.
  2. Expert Consultation: Businesses can invite subject matter experts on Teams into their customer service workflows for expedient issue resolution and to improve their first-call resolution rate.


In a world where customers need quick resolution and seamless interactions, Azure Communication Services B2C extensibility with Microsoft Teams makes it easy for customers to reach sales and support teams and for businesses to deliver effective customer experiences.

 

For more information, see Call Automation workflows interop with Microsoft Teams.

 

Image Sharing in Microsoft Teams meetings

 

Microsoft Teams users can now share images with Azure Communication Services users in the context of a Teams meeting. This feature is now generally available. Image sharing enhances real-time collaboration during meetings, and image overlay is supported so users can view a shared image in detail.

 

Image sharing is helpful in many scenarios, such as a business sharing photos showcasing their work or doctors sharing images with patients for after care instructions.

 


 

Try out this feature using either our UI Library or the Chat SDK. The SDK is available in C# (.NET), JavaScript, Python, and Java.

 

Deep Noise Suppression for Desktop

 

Deep noise suppression is currently in public preview. Noise suppression improves VoIP and video calls by eliminating background noise, making it easier to talk and listen. For example, if you're taking an Azure Communication Services WebJS call in a coffee shop with considerable noise, turning on noise suppression can significantly improve the calling experience by eliminating the background noise from the shop.

 

For more information, see Add audio quality enhancements to your audio calling experience.

 

Calling native SDKs for Android, iOS, and Windows

 

We updated the Calling native SDKs to improve the customer experience. The April release includes:

  • Custom background for video calls
  • Proxy configuration
  • Android TelecomManager
  • Unidirectional Data Channel
  • Time To Live lifespan for push notifications

 

Custom background for video calls

 

Custom backgrounds for video calls are now generally available. This feature enables customers to remove distractions behind them by uploading their own personalized images and using them as a background.

 


 

For example, business owners can now use the Calling SDK to show a custom background in place of the actual one: upload an image of a modern, spacious office and set it as your background for video calls. Anyone who joins the call sees the customized background, which looks realistic and natural. You can also use custom branding images as a background to present a fresh image to your customers.

 

For more information, see QuickStart: Add video effects to your video calls.

 

Proxy configuration

 

Proxy configuration is now generally available. Some environments such as highly regulated industries or those dealing with confidential information require proxies to secure and control network traffic. You can use the Calling SDK to configure the HTTP and media proxies for your Azure Communication Services calls. This way, you can ensure that your communications are compliant with the network policies and regulations. You can use the native SDK methods to set the proxy configuration for your app.

 

For more information, see Tutorial: Proxy your calling traffic.

 

Android TelecomManager

 

Android TelecomManager is in public preview. It is a system service that manages audio and video calls on Android devices. Use Android TelecomManager to provide a consistent user experience across different Android apps and devices, such as showing incoming and outgoing calls in the system UI, routing audio to the device, and handling call interruptions. Now you can integrate your app with the Android TelecomManager to take advantage of its features for your custom calling scenarios.

 

For more information, see Integrate with TelecomManager on Android.

 

Unidirectional Data Channel

 

The Data Channel API is now generally available. It supports unidirectional communication, enabling users to instantly send and receive messages during an ongoing audio or video call for a seamless communication experience. In group call scenarios, a participant can send messages to a single participant, a specific set of participants, or all participants within the call. This flexibility enhances communication and collaboration among users during group interactions.

 

For more information, see Data Channel.

 

Time To Live lifespan for push notifications

 

The Time To Live (TTL) for push notifications is now generally available. TTL is the duration for which a push notification token is valid. Using a longer TTL can help your app reduce the number of new token requests from your users and improve the experience.

 

For example, suppose you created an app that enables patients to book virtual medical appointments. The app uses push notifications to display incoming call UI when the app is not in the foreground. Previously, the app had to request a new push notification token from the user every 24 hours, which could be annoying and disruptive. With the extended TTL feature, you can now configure the push notification token to last for up to 6 months, depending on your business needs. This way, the app can avoid frequent token requests and provide a smoother calling experience for your customers.

 

For more information, see TTL token in Enable push notifications for calls.

 

Calling SDK native UI Library updates

 

The April updates include Troubleshooting on the native UI Library for Android and iOS, and Audio only mode in the UI Library. 

 

Using the Azure Communication Services Calling SDK native UI Library, you can now generate encrypted logs for troubleshooting and provide your customers with an optional Audio only mode for joining calls. 

 

Troubleshooting on the native UI Library for Android and iOS

 

Now in general availability, you can encrypt logs when troubleshooting with the Calling SDK native UI Library for Android and iOS. We've made it easy for you to generate encrypted logs to share with Azure support. While ideally calls just work, or developers can self-remediate issues, customers always have Azure support as a last line of defense, and we strive to make those engagements as easy and fast as possible.

 

For more information, see Troubleshoot the UI Library.

 

Audio only mode in the UI Library

 

The Audio only mode in the Calling SDK UI Library is now generally available. It enables participants to join calls using only their audio, without sharing or receiving video. Participants can use this feature to conserve bandwidth and maximize privacy. When activated, the Audio only mode automatically disables the video function for both sending and receiving streams and adjusts the UI to reflect this change by removing video-related controls. 

 

For more information, see Enable audio only mode in the UI Library.

 

You can learn more about these updates and the Azure Communication Services communications platform as a service in our overview.


How we’re building more inclusive and accessible components at GitHub


Eric sets the stage to discuss his incredible labor of work (and love) to make GitHub more inclusive and accessible:

Before we discuss the particulars of these updates, I would like to call attention to the most important aspect of the work: direct participation of, and input from daily assistive technology users.

Disabled people’s direct involvement in the inception, design, and development stages is indispensable. It’s crucial for us to go beyond compliance and weave these practices into the core of our organization. Only by doing so can we create genuinely inclusive experiences.

Eric Bailey

This isn’t the meat of Eric’s post, but it’s worth putting the spotlight on it because this is the sort of framing I rarely see in articles about accessibility. Many articles are framed as a one-way conversation from the perspective of an accessibility practitioner or expert about a particular point of WCAG conformance, or they heap scorn on those who know less or no better for making honest mistakes yet are reading to learn and improve.

Eric is purely focused on two things here: (1) humans and (2) creating usable experiences. There’s a vulnerability in his writing when he describes relying on the direct feedback of GitHub users who are impacted by his work rather than resting on the laurels of what I know he is extremely good at.

Another point I want to highlight is how Eric describes the need to “go beyond compliance” when it comes to accessibility practices. We have WCAG success criteria meticulously organized around usability principles, and while they can be tough to pass, they are technically guidelines as opposed to rules or binding principles. They’re benchmarks, but real “compliance” likely means going above and beyond to account for the real impairments that affect the real experiences of the real people using sites like GitHub. Meeting a success criterion is a guidepost, but the real scores you want to pass are the ones imposed by users. Conformance doesn’t always lead to accessible experiences, after all.

That’s all I wanted to note here. But you’ll be doing yourself a real solid by reading the full article. This is accessibility at a massive scale for a product that drives so much of the work we do to create web experiences. It’s a rare peek at the work, challenges, nuances, and tough decisions of someone who is not only investing sweat equity in a great cause, but is also extremely good at what they do. Just read it already.


Building Resilient Cloud Applications With .NET


Thank you to our sponsors who keep this newsletter free to the reader:

Build API applications visually using Postman Flows. Postman Flows is a visual tool for building API-driven applications for the API-first world. You can use Flows to chain requests, handle data, and create real-world workflows in your Postman workspace. Get started here.

9 Best Practices for Building Blazor Web Applications: In this article, you will learn nine best practices for building Blazor web applications by the .NET developer and YouTube influencer Claudio Bernasconi. Read it here.

From my experience working with microservices systems, things don't always go as planned. Network requests randomly fail, application servers become overloaded, and unexpected errors appear. That's where resilience comes in.

Resilient applications can recover from transient failures and continue to function. Resilience is achieved by designing applications that can handle failures gracefully and recover quickly.

By designing your applications with resilience in mind, you can create robust and reliable systems, even when the going gets tough.

In this newsletter, we'll explore the tools and techniques we have in .NET to build resilient systems.

Resilience: Why You Should Care

Sending HTTP requests is a common approach for remote communication between services. However, HTTP requests are susceptible to failures from network or server issues. These failures can disrupt service availability, especially as dependencies increase and the risk of cascading failures grows.

So, how can you improve the resilience of your applications and services?

Here are a few strategies you can consider to increase resilience:

  • Retries: Retry requests that fail due to transient errors.
  • Timeouts: Cancel requests that exceed a specified time limit.
  • Fallbacks: Define alternative actions or results for failed operations.
  • Circuit Breakers: Temporarily suspend communication with unavailable services.

You can use these strategies individually or in combination for optimal HTTP request resilience.
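
Of the four strategies, the circuit breaker is the least obvious. Its core idea can be sketched in plain C# (a deliberately minimal illustration, not a production implementation):

```csharp
using System;

// Minimal circuit breaker sketch: after N consecutive failures the circuit
// "opens" and calls fail fast with a fallback until a cooldown passes.
class SimpleCircuitBreaker
{
    private readonly int _failureThreshold;
    private readonly TimeSpan _cooldown;
    private int _failures;
    private DateTime _openedAt = DateTime.MinValue;

    public SimpleCircuitBreaker(int failureThreshold, TimeSpan cooldown) =>
        (_failureThreshold, _cooldown) = (failureThreshold, cooldown);

    public bool IsOpen =>
        _failures >= _failureThreshold && DateTime.UtcNow - _openedAt < _cooldown;

    public T Execute<T>(Func<T> action, T fallback)
    {
        if (IsOpen) return fallback; // fail fast while the circuit is open
        try
        {
            var result = action();
            _failures = 0; // a success closes the circuit
            return result;
        }
        catch
        {
            if (++_failures >= _failureThreshold) _openedAt = DateTime.UtcNow;
            return fallback;
        }
    }
}

class Program
{
    static void Main()
    {
        var breaker = new SimpleCircuitBreaker(failureThreshold: 2, cooldown: TimeSpan.FromSeconds(30));
        for (int i = 0; i < 3; i++)
            breaker.Execute<string>(() => throw new Exception("service down"), "fallback");

        Console.WriteLine(breaker.IsOpen); // True: two failures opened the circuit
    }
}
```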

Let's see how we can introduce resilience in a .NET application.

Resilience Pipelines

With .NET 8, integrating resilience into your applications has become much simpler. We can use Microsoft.Extensions.Resilience and Microsoft.Extensions.Http.Resilience, which are built on top of Polly. Polly is a .NET resilience and transient fault-handling library. Polly allows us to define resilience strategies such as retry, circuit breaker, timeout, rate-limiting, fallback, and hedging.

Polly received a new API surface in its latest version (V8), which was implemented in collaboration with Microsoft. You can learn more about the Polly V8 API in this video.

If you were previously using Microsoft.Extensions.Http.Polly, it is recommended that you switch to one of the previously mentioned packages.

Let's start by installing the required NuGet packages:

Install-Package Microsoft.Extensions.Resilience
Install-Package Microsoft.Extensions.Http.Resilience

To use resilience, you must first build a pipeline consisting of resilience strategies. Each strategy that we configure as part of the pipeline will execute in order of configuration. Order is important with resilience pipelines. Keep that in mind.

We start by creating an instance of ResiliencePipelineBuilder, which allows us to configure resilience strategies.

ResiliencePipeline pipeline = new ResiliencePipelineBuilder()
    .AddRetry(new RetryStrategyOptions
    {
        ShouldHandle = new PredicateBuilder().Handle<ConflictException>(),
        Delay = TimeSpan.FromSeconds(1),
        MaxRetryAttempts = 2,
        BackoffType = DelayBackoffType.Exponential,
        UseJitter = true
    })
    .AddTimeout(new TimeoutStrategyOptions
    {
        Timeout = TimeSpan.FromSeconds(10)
    })
    .Build();

await pipeline.ExecuteAsync(
    async ct => await httpClient.GetAsync("https://modularmonolith.com", ct),
    cancellationToken);

Here's what we're adding to the resilience pipeline:

  • AddRetry - Configures a retry resilience strategy, which we can further configure by passing in a RetryStrategyOptions instance. We can provide a predicate for the ShouldHandle property to define which exceptions the resilience strategy should handle. The retry strategy also comes with some sensible default values.
  • AddTimeout - Configures a timeout strategy that will throw a TimeoutRejectedException if the delegate does not complete before the timeout. We can provide a custom timeout by passing in a TimeoutStrategyOptions instance. The default timeout is 30 seconds.

Finally, we can Build the resilience pipeline and get back a configured ResiliencePipeline instance that will apply the respective resilience strategies. To use the ResiliencePipeline, we can call the ExecuteAsync method and pass in a delegate.
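To make the point about ordering concrete, here's a minimal sketch (my own example, using the same Polly V8 APIs shown above) of two pipelines that differ only in the order of their strategies. The first strategy you add is the outermost one:

```csharp
using Polly;
using Polly.Retry;

// Timeout added first (outermost): the 10-second timeout caps the
// TOTAL time across all retry attempts.
ResiliencePipeline totalTimeoutPipeline = new ResiliencePipelineBuilder()
    .AddTimeout(TimeSpan.FromSeconds(10))                  // outer: total budget
    .AddRetry(new RetryStrategyOptions { MaxRetryAttempts = 2 })
    .Build();

// Reversing the order makes the timeout apply PER ATTEMPT:
// each retry gets its own 10-second budget.
ResiliencePipeline perAttemptPipeline = new ResiliencePipelineBuilder()
    .AddRetry(new RetryStrategyOptions { MaxRetryAttempts = 2 })
    .AddTimeout(TimeSpan.FromSeconds(10))                  // inner: per-attempt budget
    .Build();
```

Neither pipeline is wrong; which one you want depends on whether your SLA is on the whole operation or on each individual attempt.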

Resilience Pipelines and Dependency Injection

Configuring a resilience pipeline every time we want to use it is cumbersome. .NET 8 introduces a new extension method for the IServiceCollection interface that allows us to register resilience pipelines with dependency injection.

Instead of manually configuring resilience every time, you ask for a pre-made pipeline by name.

We start by calling the AddResiliencePipeline method, which allows us to configure the resilience pipeline. Each resilience pipeline needs to have a unique key. We can use this key to resolve the respective resilience pipeline instance.

In this example, we're passing in a string key which allows us to configure the non-generic ResiliencePipelineBuilder.

services.AddResiliencePipeline("retry", builder =>
{
    builder.AddRetry(new RetryStrategyOptions
    {
        Delay = TimeSpan.FromSeconds(1),
        MaxRetryAttempts = 2,
        BackoffType = DelayBackoffType.Exponential,
        UseJitter = true
    });
});

However, we can also specify generic arguments when calling AddResiliencePipeline. This allows us to configure a typed resilience pipeline using ResiliencePipelineBuilder<TResult>. Using this approach, we can access the hedging and fallback strategies.

In the following example, we're configuring a fallback strategy by calling AddFallback. This allows us to provide a fallback value that we can return in case of a failure. The fallback could be a static value or come from another HTTP request or the database.

services.AddResiliencePipeline<string, GitHubUser?>("gh-fallback", builder =>
{
    builder.AddFallback(new FallbackStrategyOptions<GitHubUser?>
    {
        FallbackAction = _ =>
            Outcome.FromResultAsValueTask<GitHubUser?>(GitHubUser.Empty)
    });
});

To use resilience pipelines configured with dependency injection, we can use the ResiliencePipelineProvider. It exposes a GetPipeline method for obtaining the pipeline instance. We have to provide the key used to register the resilience pipeline.

app.MapGet("users", async (
    HttpClient httpClient,
    ResiliencePipelineProvider<string> pipelineProvider,
    CancellationToken cancellationToken) =>
{
    ResiliencePipeline<GitHubUser?> pipeline =
        pipelineProvider.GetPipeline<GitHubUser?>("gh-fallback");

    // The delegate must return the pipeline's result type (GitHubUser?),
    // so we deserialize the response instead of returning the raw HttpResponseMessage.
    GitHubUser? user = await pipeline.ExecuteAsync(
        async token => await httpClient.GetFromJsonAsync<GitHubUser?>("api/users", token),
        cancellationToken);

    return Results.Ok(user);
});

Resilience Strategies and Polly

Resilience strategies are the core component of Polly. They're designed to run custom callbacks while introducing an additional layer of resilience. We can't run these strategies directly. Instead, we execute them through a resilience pipeline.

Polly categorizes resilience strategies as reactive or proactive. Reactive strategies respond to the outcome of the callback, handling specific exceptions or results. Proactive strategies act regardless of the outcome, canceling or rejecting execution up front, for example via a rate limiter or a timeout.

Polly has the following built-in resilience strategies:

  • Retry: The classic "try again" approach. Works great for temporary network glitches. You can configure how many retries you have and even add some randomness (jitter) to avoid overloading the system if everyone retries at once.
  • Circuit-breaker: Like an electrical circuit breaker, this prevents hammering a failing system. If errors pile up, the circuit breaker "trips" temporarily to give the system time to recover.
  • Fallback: Provides a safe, default response if your primary call fails. It might be a cached result or a simple "service unavailable" message.
  • Hedging: Issues additional parallel requests after a delay and takes the first successful response. Useful when a dependency has multiple replicas or routes that can serve the same request.
  • Timeout: Prevents requests from hanging forever by terminating them if the timeout is exceeded.
  • Rate-limiter: Throttles outgoing requests to prevent overwhelming external services.
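As a sketch of how these categories combine, here's a pipeline (my own example, not from a specific project) that pairs a proactive strategy with a reactive one:

```csharp
using Polly;
using Polly.CircuitBreaker;

// Proactive + reactive in one pipeline:
// the concurrency limiter rejects excess calls regardless of outcome,
// while the circuit breaker reacts to the failure rate it observes.
ResiliencePipeline pipeline = new ResiliencePipelineBuilder()
    .AddConcurrencyLimiter(permitLimit: 10, queueLimit: 5) // proactive
    .AddCircuitBreaker(new CircuitBreakerStrategyOptions   // reactive
    {
        FailureRatio = 0.5,                     // open after 50% failures...
        MinimumThroughput = 10,                 // ...over at least 10 calls
        SamplingDuration = TimeSpan.FromSeconds(30),
        BreakDuration = TimeSpan.FromSeconds(15)
    })
    .Build();
```

The threshold values here are illustrative; tune them against the traffic patterns and failure modes of your actual dependency.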

HTTP Request Resilience

Sending HTTP calls to external services is how your application interacts with the outside world. These could be third-party services like payment gateways and identity providers or other services your team owns and operates.

The Microsoft.Extensions.Http.Resilience library comes with ready-to-use resilience pipelines for sending HTTP requests.

We can add resilience to outgoing HttpClient requests using the AddStandardResilienceHandler method.

services.AddHttpClient<GitHubService>(static (httpClient) =>
{
    httpClient.BaseAddress = new Uri("https://api.github.com/");
})
.AddStandardResilienceHandler();

This also means you can eliminate any delegating handlers you previously used for resilience.

The standard resilience handler combines five Polly strategies to create a resilience pipeline suitable for most scenarios. The standard pipeline contains the following strategies:

  • Rate limiter: Limits the maximum number of concurrent requests sent to the dependency.
  • Total request timeout: Introduces a total timeout, including any retry attempts.
  • Retry: Retries a request if it fails because of a timeout or a transient error.
  • Circuit breaker: Prevents sending further requests if too many failures are detected.
  • Attempt timeout: Introduces a timeout for an individual request.

You can customize any aspect of the standard resilience pipeline by configuring the HttpStandardResilienceOptions.
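For example, the AddStandardResilienceHandler method accepts a configuration delegate. Here's a sketch (the specific values are illustrative, not recommendations) that loosens the retry count while tightening per-attempt timeouts:

```csharp
services.AddHttpClient<GitHubService>(static httpClient =>
{
    httpClient.BaseAddress = new Uri("https://api.github.com/");
})
.AddStandardResilienceHandler(options =>
{
    // Allow more retries than the default...
    options.Retry.MaxRetryAttempts = 5;

    // ...but keep each individual attempt short.
    options.AttemptTimeout.Timeout = TimeSpan.FromSeconds(5);

    // The total timeout has to cover all attempts plus their delays.
    options.TotalRequestTimeout.Timeout = TimeSpan.FromSeconds(45);
});
```

Note that the options are validated: the total request timeout must be large enough to accommodate the attempt timeout and retries, so adjust them together.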

Takeaway

Resilience isn't just a buzzword; it's a core principle for building reliable software systems. We're fortunate to have powerful tools like Microsoft.Extensions.Resilience and Polly at our disposal. We can use them to design systems that gracefully handle any transient failures.

Good monitoring and observability are essential to understand how your resilience mechanisms work in production. Remember, the goal isn't to eliminate failures but to gracefully handle them and keep your application functioning.

Ready to dive deeper into resilient architecture? My advanced course on building modular monoliths will equip you with the skills to design and implement robust, scalable systems. Check out Modular Monolith Architecture.

Challenge: Take a look at your existing .NET projects. Are there any critical areas where a little resilience could go a long way? Pick one and try applying some of the techniques we've discussed here.

That's all for today.

See you next week.




Motivated by play (Friends)


Annie Sexton has been on quite a journey since she was last on the show back in early ‘22. On this episode, Annie takes us on that journey, shares her new-found perspective & tells us about how she’s approaching her side project this time around.


Changelog++ members get a bonus 12 minutes at the end of this episode and zero ads. Join today!

Sponsors:

  • Tailscale – Adam loves Tailscale! Tailscale is programmable networking software that’s private and secure by default. It’s the easiest way to connect devices and services to each other, wherever they are. Secure, remote access to production, databases, servers, kubernetes, and more. Try Tailscale for free for up to 100 devices and 3 users at changelog.com/tailscale, no credit card required.
  • Sentry – Code breaks, fix it faster. Don’t just observe. Take action. Sentry is the only app monitoring platform built for developers that gets to the root cause for every issue. 90,000+ growing teams use sentry to find problems fast. Use the code CHANGELOG when you sign up to get $100 OFF the team plan.
  • Coda – Your all-in-one collaborative workspace. Coda brings teams and tools together for a more organized work day.






Download audio: https://op3.dev/e/https://cdn.changelog.com/uploads/friends/43/changelog--friends-43.mp3