
AI Agent MCP Tools: QuickStart to MCP Tools Development with Azure AI Foundry SDK


As AI agents become more sophisticated, the need for seamless integration with powerful cloud-based tools grows. Together, the Azure AI Foundry SDK and MCP (Model Context Protocol) tools form a dynamic duo that empowers developers to build, deploy, and manage intelligent agents with ease.

Solution Overview

The AI-Foundry-Agent-MCP GitHub repository provides a hands-on solution for integrating MCP tools with the Azure AI Foundry SDK. This setup allows developers to:

  • Access and deploy state-of-the-art models from Azure AI Foundry.
  • Use MCP tools to manage model context, knowledge bases, and evaluation pipelines.
  • Rapidly prototype and scale AI agent solutions in a cloud-native environment.

Getting Started

The repo includes a step-by-step guide to get your environment up and running:

  1. Clone the repo and navigate to the project directory:

     git clone https://github.com/ccoellomsft/AI-Foundry-Agent-MCP.git
     cd ai-foundry-agents-mcp/ai-foundry-agents-mcp-tools/Python

  2. Set up a virtual environment:

     python -m venv labenv
     source ../labenv/Scripts/activate

  3. Install dependencies:

     pip install -r requirements.txt azure-ai-projects mcp

  4. Configure your environment – edit the .env file to include your Azure AI Foundry project endpoint and model deployment name.

  5. Authenticate with Azure and select your subscription when prompted:

     az login

  6. Run the client:

     python client.py

  7. Run with Sample Queries:
    • What exchange is MSFT listed on?
    • Give me a list of Microsoft's popular products.
    • What is Microsoft's stock price?

Creating an MCP Tool

The heart of the MCP tools lives in the server.py file, where tools (Python functions) can be added as needed. The agent uses each tool function's description – the docstring or comment in the code – to understand the tool's purpose and decide which one to use.

@mcp.tool()
def get_msft_stock_info() -> dict:
    """Returns MSFT stock information."""
    return {
        "symbol": "MSFT",
        "companyName": "Microsoft Corporation",
        "exchange": "NASDAQ",
        "currency": "USD",
        "currentPrice": 442.18,
        "open": 438.50,
        "high": 445.00,
        "low": 437.20,
        "previousClose": 439.10,
        "volume": 18234000,
        "marketCap": 3280000000000,
        "peRatio": 38.2,
        "eps": 11.57,
        "dividendYield": 0.78,
        "timestamp": "2025-07-10T13:50:00Z"
    }

When a user prompt includes something like "What's the current Microsoft stock price?" or "Give me information on Microsoft's products", the agent matches the intent of the prompt against each tool's description. Within an MCP tool function, you can call an endpoint of another service, accept parameters, and add any applicable logic, as in the sketch below.
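
For illustration, here is a minimal sketch of a parameterized tool that calls out to another service. The FastMCP server object, the get_stock_quote tool name, and the quote endpoint are assumptions for this example, not taken from the repository:

import json
from urllib.request import urlopen

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("stock-tools")  # hypothetical server name


@mcp.tool()
def get_stock_quote(symbol: str) -> dict:
    """Returns the latest quote for the given ticker symbol."""
    # Hypothetical quote service; swap in a real endpoint or SDK call.
    with urlopen(f"https://api.example.com/quotes/{symbol}") as response:
        return json.loads(response.read())

The agent fills in the symbol parameter based on the user's prompt, and the docstring is what it uses to decide when this tool is the right match.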

Conclusion

By combining the Azure AI Foundry SDK with MCP tools, you gain access to a rich ecosystem of models, data indexing, and deployment capabilities, all within a unified development workflow. Whether you're building chatbots, copilots, or intelligent search systems, this toolkit accelerates your journey from prototype to production.  Implementing MCP tools with Azure AI Foundry offers a powerful and scalable approach to building intelligent, context-aware AI solutions. This integration not only streamlines the development lifecycle of AI agents but also ensures they operate with contextual intelligence, adaptability, and enterprise-grade security. As AI continues to evolve, leveraging these tools together positions teams to deliver smarter, more responsible, and impactful AI-driven experiences.


RNR 337 - Meta Quest for React Native w/ Markus Leyendecker


Jamon sits down with Markus Leyendecker from Meta to talk about using React Native on Meta Quest. They cover what’s already working, what’s still coming together, and why mixed reality might be the next big frontier for React Native developers.

Spoiler alert: Jamon might have purchased a headset after the recording of this episode! 

Show Notes


Connect With Us!


This episode is brought to you by Infinite Red!

Infinite Red is an expert React Native consultancy located in the USA. With nearly a decade of React Native experience and deep roots in the React Native community (hosts of Chain React and the React Native Newsletter, core React Native contributors, creators of Ignite and Reactotron, and much, much more), Infinite Red is the best choice for helping you build and deploy your next React Native app.





Download audio: https://cdn.simplecast.com/audio/2de31959-5831-476e-8c89-02a2a32885ef/episodes/2a6414f0-bff7-429d-b572-d480738735d3/audio/7a4b5952-81be-44fc-b8b7-c95f0e34b7e0/default_tc.mp3?aid=rss_feed&feed=hEI_f9Dx

ESLint v9.31.0 released


Highlights

Explicit resource management support in core rules

Four core rules have been updated to better support explicit resource management, a new feature in ES2026 JavaScript, including support for using and await using syntax.

The init-declarations rule no longer reports on initializing using and await using variables when the option is "never", because these variables must be initialized. For example:

async function foobar() {
 await using quux = getSomething();
}

The no-const-assign rule now reports on modifying using and await using variables. For example:

if (foo) {
 using a = getSomething();
 a = somethingElse;
}

The no-loop-func rule no longer reports on references to using and await using variables, because these variables are constant. For example:

for (using i of foo) {
    var a = function() { return i; }; // OK, all references are referring to block scoped variables in the loop.
    a();
}

The no-undef-init rule no longer reports on using and await using variables initialized to undefined. For example:

using foo = undefined;

Improved RuleTester output for incorrect locations

The run method of the RuleTester class has been enhanced to indicate when multiple properties of a reported error location in a test case do not match. For example:

      AssertionError [ERR_ASSERTION]: Actual error location does not match expected error location.
+ actual - expected

  {
+   column: 31,
+   endColumn: 32
-   column: 32,
-   endColumn: 33
  }

Previously, the output would only show one property even if there were multiple mismatches:

      AssertionError [ERR_ASSERTION]: Error column should be 32

31 !== 32

      + expected - actual

      -31
      +32
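
For context, a test case along these lines would trigger the comparison above (the rule path and messageId here are hypothetical, shown only to illustrate the shape of the assertion):

const { RuleTester } = require("eslint");
const rule = require("../rules/my-rule"); // hypothetical rule under test

const ruleTester = new RuleTester();

ruleTester.run("my-rule", rule, {
    valid: ["foo();"],
    invalid: [
        {
            code: "bar();",
            // If the rule actually reports column 31 / endColumn 32,
            // RuleTester now prints a diff showing every mismatched property.
            errors: [{ messageId: "unexpected", column: 32, endColumn: 33 }]
        }
    ]
});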

Features

Bug Fixes

  • 07fac6c fix: retry on EMFILE when writing autofix results (#19926) (TKDev7)
  • 28cc7ab fix: Remove incorrect RuleContext types (#19910) (Nicholas C. Zakas)

Documentation

  • 664cb44 docs: Update README (GitHub Actions Bot)
  • 40dbe2a docs: fix mismatch between globalIgnores() code and text (#19914) (MaoShizhong)
  • 5a0069d docs: Update README (GitHub Actions Bot)
  • fef04b5 docs: Update working on issues info (#19902) (Nicholas C. Zakas)

Chores

  • 3ddd454 chore: upgrade to @eslint/js@9.31.0 (#19935) (Francesco Trotta)
  • d5054e5 chore: package.json update for @eslint/js release (Jenkins)
  • 0f4a378 chore: update eslint (#19933) (renovate[bot])
  • 76c2340 chore: bump mocha to v11 (#19917) (루밀LuMir)

The JavaScript Error Handling Handbook


Errors and exceptions are inevitable in application development. As programmers, it is our responsibility to handle these errors gracefully so that the user experience of the application is not compromised. Handling errors correctly also helps us debug and understand what caused them in the first place.

JavaScript has been a popular programming language for over three decades. We build web, mobile, PWA, and server-side applications using JavaScript and various popular JavaScript-based libraries (like ReactJS) and frameworks (like Next.js, Remix, and so on).

Being a loosely typed language, JavaScript imposes the challenge of handling type safety correctly. TypeScript is useful for managing types, but we still need to handle runtime errors efficiently in our code.

Errors like TypeError, RangeError, ReferenceError are probably pretty familiar to you if you’ve been building with JavaScript for a while. All these errors may cause invalid data, bad page transitions, unwanted results, or even the entire application to crash – none of which will make end users happy!

In this handbook, you’ll learn everything you need to know about error handling in JavaScript. We will start with an understanding of errors, their types, and occurrences. Then you’ll learn how to deal with these errors so that they don’t cause a bad user experience. At the end, you’ll also learn to build your own custom error types and clean-up methodologies to handle your code flow better for optimizations and performance.

I hope you enjoy reading along and practising the tasks I have provided at the end of the article.

This handbook is also available as a video session as part of the 40 Days of JavaScript initiative. Please check it out.

Table of Contents

  1. Errors in JavaScript

  2. Handling Errors With the try and catch

  3. Error Handling: Real-World Use Cases

  4. Anatomy of the Error Object

  5. Throwing Errors and Re-throwing Errors

  6. The finally with try-catch

  7. Custom Errors

  8. Task Assignments for You

  9. 40 Days of JavaScript Challenge Initiative

  10. Before We End...

Errors in JavaScript

Errors and exceptions are the events that disrupt program execution. JavaScript parses and executes code line by line. The source code gets evaluated with the grammar of the programming language to ensure it is valid and executable. If there is a mismatch, JavaScript encounters a parsing error. You’ll need to make sure you follow the right syntax and grammar of the language to stay away from parsing errors.

Take a look at the code snippet below. Here, we have made the mistake of not closing the parentheses for the console.log.

console.log("hi"

This will lead to a Syntax Error like this:

Uncaught SyntaxError

Other types of errors can happen due to wrong data input, trying to read a value or property that doesn’t exist, or acting on inaccurate data. Let’s see some examples:

console.log(x); // Uncaught ReferenceError: x is not defined

let obj = null;
console.log(obj.name); // Uncaught TypeError: Cannot read properties of null

let arr = new Array(-1) // Uncaught RangeError: Invalid array length

decodeURIComponent("%"); // Uncaught URIError: URI malformed

eval("var x = ;"); // Uncaught EvalError

Here is the list of possible runtime errors you may encounter, along with their descriptions:

  • ReferenceError – Occurs when trying to access a variable that is not defined.

  • TypeError – Occurs when an operation is performed on a value of the wrong type.

  • RangeError – Occurs when a value is outside the allowable range.

  • SyntaxError – Occurs when there is a mistake in the syntax of the JavaScript code.

  • URIError – Occurs when a global URI handling function (such as decodeURIComponent) is used incorrectly, for example on a malformed URI.

  • EvalError – Occurs when there is an issue with the eval() function.

  • InternalError – Occurs when the JavaScript engine runs into an internal limit (stack overflow).

  • AggregateError – Introduced in ES2021, used for handling multiple errors at once (a short example follows this list).

  • Custom Errors – These are user-defined errors, and we will learn how to create and use them soon.
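
As a quick illustration of AggregateError, here is a small sketch using Promise.any, which rejects with an AggregateError when every promise it receives rejects:

const tasks = [
  Promise.reject(new Error("first failure")),
  Promise.reject(new Error("second failure"))
];

Promise.any(tasks).catch((err) => {
  console.log(err instanceof AggregateError); // true
  console.log(err.errors.length);             // 2
});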

Have you noticed that all the code examples we used above result in a message explaining what the error is about? If you look at those messages closely, you will find a word called Uncaught. It means the error occurred, but it was not caught and managed. That’s exactly what we will now go for – so you know how to handle these errors.

Handling Errors With the try and catch

JavaScript applications can break for various reasons, like invalid syntax, invalid data, missing API responses, user mistakes, and so on. Most of these reasons may lead to an application crash, and your users will see a blank white page.

Rather than letting the application crash, you can gracefully handle these situations using try…catch.

try {
    // logic or code
} catch (err) {
    // handle error
}

The try Block

The try block contains the code – the business logic – which might throw an error. Developers always want their code to be error-free. But at the same time, you should be aware that the code might throw an error for several possible reasons, like:

  • Parsing JSON

  • Running API logic

  • Accessing nested object properties

  • DOM manipulations

  • … and many more

When the code inside the try block throws an error, the code execution of the remaining code in the try block will be suspended, and the control moves to the nearest catch block. If no error occurs, the catch block is skipped completely.

try {
  // Code that might throw an error
} catch (error) {
  // Handle the error here
}

The catch Block

The catch block runs only if an error was thrown in the try block. It receives the Error object as a parameter to give us more information about the error. In the example shown below, we are using something called abc without declaring it. JavaScript will throw an error like this:

try {
    console.log("execution starts here");
    abc;
    console.log("execution ends here");
} catch (err) {
    console.error("An Error has occured", err);
}

JavaScript executes code line by line. The execution sequence of the above code will be:

  • First, the string "execution starts here" will be logged to the console.

  • Then the control will move to the next line and find the abc there. What is it? JavaScript doesn’t find any definition of it anywhere. It’s time to raise the alarm and throw an error. The control doesn’t move to the next line (the next console log), but rather moves to the catch block.

  • In the catch block, we handle the error by logging it to the console. We could do many other things like show a toast message, send the user an email, or switch off a toaster (why not if your customer needs it).

Without try...catch, the error would crash the app.

Error Handling: Real-World Use Cases

Let’s now see some of the real-world use cases of error handling with try…catch.

Handling Division by Zero

Here is a function that divides one number by another, so both numbers are passed to the function as parameters. We want to make sure that the division never encounters an error from dividing a number by zero (0).

As a proactive measure, we have written a condition that if the divisor is zero, we will throw an error saying that division by zero is not allowed. In every other case, we will proceed with the division operation. In case of an error, the catch block will handle the error and do what’s needed (in this case, logging the error to the console).

function divideNumbers(a, b) {
    try {
        if (b === 0) {
            const err = new Error("Division by zero is not allowed.");
            throw err;
        }
        const result = a/b;
        console.log(`The result is ${result}`);
    } catch(error) {
        console.error("Got a Math Error:", error.message)
    }
}

Now, if we invoke the function with the following arguments, we will get a result of 5, because the second argument is a non-zero value.

divideNumbers(15, 3); // The result is 5

But if we pass 0 as the second argument, the program will throw an error, and it will be logged to the console.

divideNumbers(15, 0);

Output:

Math Error

Handling JSON

Often, you will get JSON as a response to an API call. You need to parse this JSON in your JavaScript code to extract the values. What if the API sends you some malformed JSON by mistake? You can't afford to let your user interface crash because of this. You need to handle it gracefully – and here comes the try…catch block again to the rescue:

function parseJSONSafely(str) {
  try {
    return JSON.parse(str);
  } catch (err) {
    console.error("Invalid JSON:", err.message);
    return null;
  }
}

const userData = parseJSONSafely('{"name": "tapaScript"}'); // Parsed
const badData = parseJSONSafely('name: tapaScript');         // Handled gracefully

Without try...catch, the second call will crash the app.

Anatomy of the Error Object

Getting errors in programming can be a scary feeling. But Errors in JavaScript aren’t just some scary, annoying messages – they are structured objects that carry a lot of helpful information about what went wrong, where, and why.

As developers, we need to understand the anatomy of the Error object to help us better with faster debugging and smarter recovery in production-level application issues.

Let’s deep dive into the Error object, its properties, and how to use it effectively with examples.

What is the Error Object?

The JavaScript engine throws an Error object when something goes wrong during runtime. This object contains helpful information like:

  • An error message: This is a human-readable error message.

  • The error type: TypeError, ReferenceError, SyntaxError, and so on that we discussed above.

  • The stack trace: This helps you navigate to the root of the error. It is a string containing the stack trace at the point the error was thrown.

Let’s take a look at the code snippet below. The JavaScript engine will throw an error in this code, as the variable y is not defined. The error object contains the error name (type), message, and the stack trace information.

function doSomething() {
  const x = y + 1; // y is not defined
}
try {
  doSomething();
} catch (err) {
  console.log(err.name);    // ReferenceError
  console.log(err.message); // y is not defined
  console.log(err.stack); // ReferenceError: y is not defined
                          // at doSomething (<anonymous>:2:13)
                          // at <anonymous>:5:3
}

Tip: If you need any specific properties from the error object, you can use destructuring to do that. Here is an example where we are only interested in the error name and message, not the stack.

try {
  JSON.parse("{ invalid json }");
} catch ({name, message}) {
  console.log("Name:", name);       // Name: SyntaxError
  console.log("Message:", message); // Message: Expected property name or '}' in JSON at position 2 (line 1 column 3)
}

Throwing Errors and Re-throwing Errors

JavaScript provides a throw statement to trigger an error manually. It is very helpful when you want to handle an invalid condition in your code (remember the divide by zero problem?).

To throw an error, you need to create an instance of the Error object with details and then throw it.

throw new Error("Something is bad!");

When the code execution encounters a throw statement,

  • It stops the execution of the current code block immediately.

  • The control moves to the nearest catch block (if any).

  • If no catch block is found, the error will not be caught. It bubbles up the call stack and may end up crashing the program (see the sketch below). You can learn more in-depth about events and event bubbling here.
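
For instance, here is a small sketch of that bubbling behaviour: an error thrown deep inside nested calls keeps moving up until some caller catches it.

function inner() {
  throw new Error("Something is bad!");
}

function outer() {
  inner(); // no try...catch here, so the error keeps bubbling up
}

try {
  outer();
} catch (err) {
  console.log("Caught at the top level:", err.message);
}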

Rethrowing

At times, catching the error itself in the catch block is not enough. Sometimes, you may not know how to handle the error completely, and you might want to do additional things, like:

  • Adding more context to the error.

  • Logging the error into a file-based logger.

  • Passing the error to someone more specialized to handle it.

This is where rethrow comes in. With rethrowing, you catch an error, do something else with it, and then throw it again.

function processData() {
  try {
    parseUserData();
  } catch (err) {
    console.error("Error in processData:", err.message);
    throw err; // Rethrow so the outer function can handle it too
  }
}

function main() {
  try {
    processData();
  } catch (err) {
    handleErrorBetter(err);
  }
}

In the code above, the processData() function catches an error, logs it, and then throws it again. The outer main() function can now catch it and do something more to handle it better.

In real-world application development, you would want to separate the concerns for errors, like:

  • API Layer – In this layer, you can detect HTTP failures

      async function fetchUser(id) {
        const res = await fetch(`/users/${id}`);
        if (!res.ok) throw new Error("User not found"); // throw it here
        return res.json();
      }
    
  • Service Layer – In this layer, you handle business logic. So the error will be handled for invalid conditions.

      async function getUser(id) {
        try {
          const user = await fetchUser(id);
          return user;
        } catch (err) {
          console.error("Fetching user failed:", err.message);
          throw new Error("Unable to load user profile"); // rethrowing 
        }
      }
    
  • UI Layer – Show a user-friendly error message.

      async function showUserProfile() {
        try {
          const user = await getUser(123);
          renderUser(user);
        } catch (err) {
          displayError(err.message); // A proper message shown to the user
        }
      }
    

The finally with try-catch

The try…catch block gives us a way to handle errors gracefully. But you will often want some code to run regardless of whether an error occurred – for example, closing a database connection, stopping a loader, or resetting some state. That's where finally comes in.

try {
  // Code might throw an error
} catch (error) {
  // Handle the error
} finally {
  // Always runs, whether an error occurred or not
}

Let’s take an example:

function performTask() {
  try {
    console.log("Doing something cool...");
    throw new Error("Oops!");
  } catch (err) {
    console.error("Caught error:", err.message);
  } finally {
    console.log("Cleanup: Task finished (success or fail).");
  }
}

performTask();

In the performTask() function, the error is thrown after the first console log. So, the control will move to the catch block and log the error. After that, the finally block will execute its console log.

Output:

Doing something cool...
Caught error: Oops!
Cleanup: Task finished (success or fail).

Let’s take a more real-world use case of making an API call and handling the loading spinner:

async function loadUserData() {
  showSpinner(); // Show the spinner here

  try {
    const res = await fetch('/api/user');
    const data = await res.json();
    displayUser(data);
  } catch (err) {
    showError("Failed to load user.");
  } finally {
    hideSpinner(); // Hide spinner for both success and fail cases.
  }
}

Usually, we show a loading spinner while making an API (asynchronous) call from the browser. Whether the call succeeds or fails, we must stop showing the loading spinner. Instead of writing the logic to stop the spinner twice (once inside the try block and again inside the catch block), you can do it once inside the finally block.

Caution with finally

The finally block can override return values or a thrown error. This behaviour may be confusing and can lead to bugs as well.

function test() {
  try {
    return 'from try';
  } finally {
    return 'from finally';
  }
}

console.log(test());

What do you think the above code returns?

It will return 'from finally'. The return 'from try' is completely ignored. The return from finally overrides it silently.

Let’s see one more example of the same problem:

function willThrow() {
  try {
    throw new Error("Original Error");
  } finally {
    throw new Error("Overriding Error"); // The original error is lost
  }
}

try {
  willThrow();
} catch (err) {
  console.log(err.message); // "Overriding Error"
}

Here, the original error ("Original Error") is swallowed. The finally block overrides the actual root cause.

When using finally:

  • Avoid returns and throws from finally as much as possible.

  • Avoid performing logic in the finally block that may impact the actual outcome. The try block is the best place for that.

  • Any critical decision-making must be avoided inside the finally block

  • Use finally for cleanup activities, such as closing files, connections, and stopping loading spinners, etc.

  • Ensure the finally block contains side-effect-free code.

Custom Errors

Using the generic Error and its existing types, like ReferenceError, SyntaxError, and so on, can be a bit vague in complex applications. JavaScript lets you create custom errors that are more related to your business use cases. The custom errors can provide:

  • Additional contextual information about the error.

  • Clarity about the error

  • More readable logs

  • The ability to handle multiple error cases conditionally.

A custom error in JavaScript is a user-defined error type that extends the built-in Error class. The custom error should be an ES6 class that extends JavaScript's Error class. We can call super() in the constructor to inherit the message property of the Error class. You can optionally assign a name and clean up the stack trace for the custom error.

class MyCustomError extends Error {
  constructor(message) {
    super(message);         // Inherit message property
    this.name = this.constructor.name; // Optional but recommended
    Error.captureStackTrace(this, this.constructor); // Clean stack trace
  }
}

Let’s now see a real-world use case for a custom error.

A Real-World Use Case of Custom Errors

Using a form on a web page is an extremely common use case. A form may contain one or more input fields. It is recommended to validate the user inputs before we process the form data for any server actions.

Let’s create a custom validation error we can leverage for validating multiple form input data, like the user’s email, age, phone number, and more.

First, we’ll create a class called ValidationError that extends the Error class. The constructor function sets up the ValidationError class with an error message. We can also instantiate additional properties, like name, field, and so on.

class ValidationError extends Error {
  constructor(field, message) {
    super(`${field}: ${message}`);
    this.name = "ValidationError";
    this.field = field;
  }
}

Now, let's see how to use ValidationError. We can validate a user model to check its properties and throw a ValidationError whenever the expectations mismatch.

function validateUser(user) {
  if (!user.email.includes("@")) {
    throw new ValidationError("email", "Invalid email format");
  }
  if (!user.age || user.age < 18) {
    throw new ValidationError("age", "User must be 18+");
  }
}

In the code snippet above,

  • We throw an invalid email format validation error if the user’s email doesn’t include the @ symbol.

  • We throw another validation error if the age information of the user is missing or is below 18.

A custom error gives us the power to create domain/usage-specific error types to keep the code more manageable and less error-prone.
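
That conditional handling could look something like this (a small sketch reusing the ValidationError class and validateUser function from above):

try {
  validateUser({ email: "not-an-email", age: 25 });
} catch (err) {
  if (err instanceof ValidationError) {
    console.log(`Please fix the ${err.field} field:`, err.message);
  } else {
    throw err; // not a validation problem – let it bubble up
  }
}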

Task Assignments for You

If you have read the handbook this far, I hope you now have a solid understanding of JavaScript Error Handling. Let’s try out some assignments based on what we have learned so far. It’s going to be fun.

Find the Output

What will be the output of the following code snippet and why?

try {
    let r = p + 50;
    console.log(r);
} catch (error) {
    console.log("An error occurred:", error.name);
}

Options are:

  • ReferenceError

  • SyntaxError

  • TypeError

  • No error, it prints 10

Payment Process Validation

Write a function processPayment(amount) that verifies if the amount is positive and does not exceed the balance. If any condition fails, throw appropriate errors.

Hint: You can think of creating a Custom Error here.

40 Days of JavaScript Challenge Initiative

There are 101 ways of learning something. But nothing can beat structured and progressive learning methodologies. After spending more than two decades in Software Engineering, I’ve been able to gather the best of JavaScript together to create the 40 Days of JavaScript challenge initiative.

Check it out if you want to learn JavaScript with fundamental concepts, projects, and assignments for free (forever). Focusing on the fundamentals of JavaScript will prepare you well for a future in web development.

Before We End...

That’s all! I hope you found this article insightful.

Let’s connect:

  • Subscribe to my YouTube Channel.

  • Follow on LinkedIn if you don't want to miss the daily dose of up-skilling tips.

  • Check out and follow my open-source work on GitHub.

See you soon with my next article. Until then, please take care of yourself and keep learning.




Global Error Handling in ASP.NET Core: From Middleware to Modern Handlers



Let's talk about something we all deal with but often put off until the last minute - error handling in our ASP.NET Core apps.

When something breaks in production, the last thing you want is a cryptic 500 error with zero context. Proper error handling isn't just about logging exceptions. It's about making sure your app fails gracefully and gives useful info to the caller (and you).

In this article, I'll walk through the main options for global error handling in ASP.NET Core.

We'll look at how I used to do it, what ASP.NET Core 9 offers now, and where each approach makes sense.

Middleware-Based Error Handling

The classic way to catch unhandled exceptions is with custom middleware. This is where most of us start, and honestly, it still works great for most scenarios.

internal sealed class GlobalExceptionHandlerMiddleware(
    RequestDelegate next,
    ILogger<GlobalExceptionHandlerMiddleware> logger)
{
    public async Task InvokeAsync(HttpContext context)
    {
        try
        {
            await next(context);
        }
        catch (Exception ex)
        {
            logger.LogError(ex, "Unhandled exception occurred");

            // Make sure to set the status code before writing to the response body
            context.Response.StatusCode = ex switch
            {
                ApplicationException => StatusCodes.Status400BadRequest,
                _ => StatusCodes.Status500InternalServerError
            };

            await context.Response.WriteAsJsonAsync(
                new ProblemDetails
                {
                    Type = ex.GetType().Name,
                    Title = "An error occured",
                    Detail = ex.Message
                });
        }
    }
}

Don't forget to add the middleware to the request pipeline:

app.UseMiddleware<GlobalExceptionHandlerMiddleware>();

This approach is solid and works everywhere in your pipeline. The beauty is its simplicity: wrap everything in a try-catch, log the error, and return a consistent response.

But once you start adding specific rules for different exception types (e.g. ValidationException, NotFoundException), this becomes a mess. You end up with long if / else chains or more abstractions to handle each exception type.

Plus, you're manually crafting JSON responses, which means you're probably not following RFC 9457 (Problem Details) standards.

Enter IProblemDetailsService

Microsoft recognized this pain point and gave us IProblemDetailsService to standardize error responses. Instead of manually serializing our own error objects, we can use the built-in Problem Details format.

internal sealed class GlobalExceptionHandlerMiddleware(
    RequestDelegate next,
    IProblemDetailsService problemDetailsService,
    ILogger<GlobalExceptionHandlerMiddleware> logger)
{
    public async Task InvokeAsync(HttpContext context)
    {
        try
        {
            await next(context);
        }
        catch (Exception ex)
        {
            logger.LogError(ex, "Unhandled exception occurred");

            // Make sure to set the status code before writing to the response body
            context.Response.StatusCode = ex switch
            {
                ApplicationException => StatusCodes.Status400BadRequest,
                _ => StatusCodes.Status500InternalServerError
            };

            await problemDetailsService.TryWriteAsync(new ProblemDetailsContext
            {
                HttpContext = context,
                Exception = ex,
                ProblemDetails = new ProblemDetails
                {
                    Type = ex.GetType().Name,
                    Title = "An error occurred",
                    Detail = ex.Message
                }
            });
        }
    }
}

This is much cleaner. We're now using a standard format that API consumers expect, and we're not manually fiddling with JSON serialization. But we're still stuck with that growing switch statement problem. You can learn more about using Problem Details in .NET here.
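
One thing to keep in mind with this variant: the IProblemDetailsService only resolves if Problem Details support is registered at startup. A minimal sketch of that wiring (the same AddProblemDetails call reappears in the IExceptionHandler setup below):

builder.Services.AddProblemDetails();

app.UseMiddleware<GlobalExceptionHandlerMiddleware>();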

The Modern Way: IExceptionHandler

ASP.NET Core 8 introduced IExceptionHandler, and it's a game-changer. Instead of one massive middleware handling everything, we can create focused handlers for specific exception types.

Here's how it works:

internal sealed class GlobalExceptionHandler(
    IProblemDetailsService problemDetailsService,
    ILogger<GlobalExceptionHandler> logger) : IExceptionHandler
{
    public async ValueTask<bool> TryHandleAsync(
        HttpContext httpContext,
        Exception exception,
        CancellationToken cancellationToken)
    {
        logger.LogError(exception, "Unhandled exception occurred");

        httpContext.Response.StatusCode = exception switch
        {
            ApplicationException => StatusCodes.Status400BadRequest,
            _ => StatusCodes.Status500InternalServerError
        };

        return await problemDetailsService.TryWriteAsync(new ProblemDetailsContext
        {
            HttpContext = httpContext,
            Exception = exception,
            ProblemDetails = new ProblemDetails
            {
                Type = exception.GetType().Name,
                Title = "An error occured",
                Detail = exception.Message
            }
        });
    }
}

The key here is the return value. If your handler can deal with the exception, return true. If not, return false and let the next handler try.

Don't forget to register it with DI and the request pipeline:

builder.Services.AddExceptionHandler<GlobalExceptionHandler>();
builder.Services.AddProblemDetails();

// And in your pipeline
app.UseExceptionHandler();

This approach is so much cleaner. Each handler has one job, and the code is easy to test and maintain.

Chaining Exception Handlers

You can chain multiple exception handlers together, and they'll run in the order you register them. ASP.NET Core will use the first one that returns true from TryHandleAsync.

Example: One for validation errors, one global fallback.

builder.Services.AddExceptionHandler<ValidationExceptionHandler>();
builder.Services.AddExceptionHandler<GlobalExceptionHandler>();

Let's say you're using FluentValidation (and you should be). Here's a complete setup:

internal sealed class ValidationExceptionHandler(
    IProblemDetailsService problemDetailsService,
    ILogger<ValidationExceptionHandler> logger) : IExceptionHandler
{
    public async ValueTask<bool> TryHandleAsync(
        HttpContext httpContext,
        Exception exception,
        CancellationToken cancellationToken)
    {
        if (exception is not ValidationException validationException)
        {
            return false;
        }

        logger.LogError(exception, "Unhandled exception occurred");

        httpContext.Response.StatusCode = StatusCodes.Status400BadRequest;
        var context = new ProblemDetailsContext
        {
            HttpContext = httpContext,
            Exception = exception,
            ProblemDetails = new ProblemDetails
            {
                Detail = "One or more validation errors occurred",
                Status = StatusCodes.Status400BadRequest
            }
        };

        var errors = validationException.Errors
            .GroupBy(e => e.PropertyName)
            .ToDictionary(
                g => g.Key.ToLowerInvariant(),
                g => g.Select(e => e.ErrorMessage).ToArray()
            );
        context.ProblemDetails.Extensions.Add("errors", errors);

        return await problemDetailsService.TryWriteAsync(context);
    }
}

And in your app, just throw like this:

// In your controller or service - IValidator<CreateUserRequest>
public async Task<IActionResult> CreateUser(CreateUserRequest request)
{
    await _validator.ValidateAndThrowAsync(request);

    // Your business logic here
}

The execution order is important. The framework will try each handler in the order you registered them. So put your most specific handlers first, and your catch-all handler last.

Summary

We've come a long way from the days of manually crafting error responses in middleware. The evolution: custom middleware with hand-rolled JSON, then IProblemDetailsService for standardized responses, and now focused IExceptionHandler implementations.

For new projects, I'd go straight to IExceptionHandler. It's cleaner, more maintainable, and gives you the flexibility to handle different exception types exactly how you want.

The key takeaway? Don't let error handling be an afterthought. Set it up early, make it consistent, and your users (and your future self) will thank you when things inevitably go wrong.

Thanks for reading.

And stay awesome!





So Microsoft Deleted Some of Our Packages From NuGet.org Without Notice


“Software supply chain management” is one of those terms that sounds like Venture Capital-funded vendor marketing bullshit right up until it isn’t.

In 2016 the npm left-pad incident taught many of us in the software industry the importance of:

  1. The fragility of depending directly on central package management systems, such as npm or nuget.org, hence why artifact proxying tools like JFrog Artifactory became so important; and
  2. How centralized package management systems probably need to make stronger security and availability guarantees, such as not allowing hard deletes of packages in the first place.

One of the distinguishing features of nuget.org is they make it very, very hard for authors to delete their packages - only in exceptional cases, such as malware inclusion, will they allow the permanent deletion of packages.

Imagine my surprise yesterday, when I discovered that two of our Akka.NET packages were deleted[1], by Microsoft, without any advance notice. I only discovered this was an issue when one of my own Akka.NET applications failed to build on CI/CD due to missing package versions.

Akka.Coordination.Azure deleted from NuGet.org

I’ll get into the reasons why they did this, but the bottom line is: this is a disturbing precedent that really should never be repeated.

In essence, Microsoft’s adjacent business units abused NuGet to deal with their own security vulnerabilities - getting a level of access that would never be granted to any other publisher on the platform.

Microsoft.Identity.Client Security Vulnerabilities

Yesterday we received the following email to our nuget.org addresses for the Akka.NET organization:

Microsoft.Identity.Client vulnerability disclosure email

The GitHub release they link to doesn’t actually mention the vulnerability at all, and version 4.72.1 of the Microsoft.Identity.Client NuGet package still has the vulnerability. So, we weren’t actually sure what to make of those directions.

Both of our impacted packages, Akka.Coordination.Azure and Akka.Discovery.Azure, don’t take a direct dependency on this package at all.

Rather Microsoft.Identity.Client is a transitive dependency of Microsoft’s Azure.Identity package, which we reference for authenticating these plugins’ access to Azure resources.

Immediately after receiving this email, which makes zero mention of our package versions being deleted, we investigated and found that:

  1. The “vulnerability” was just a typo in a public-facing XML-DOC comment that happens to point to a typo-squatting URL that is commonly used in phishing attacks. Sucks, but it’s not a “real” CVE in the sense of it impacting actual program execution - a user would have to manually do something with that information in order to be vulnerable.
  2. Azure.Identity’s developers had presumably been contacted by the AAD team already. Their most recent version of their plugin (at the time), 1.14.1, hadn’t been updated with a non-“vulnerable” version of Microsoft.Identity.Client.

Given both of those data points, we figured this was probably a nothing-burger and went about our business. “We’ll update our plugins once there’s a new version of Azure.Identity” was the decision.

It was only later, when I tried to build one of my own Akka.NET applications, that we discovered the package versions had been deleted outright, which we fixed via a new update that took a direct dependency on Microsoft.Identity.Client[2].

A Bad Precedent

The Microsoft Entra / Azure Active Directory people were trying to address a legitimate security concern. I totally get it. But there are new CVE disclosures on Microsoft packages virtually every month.

NuGet has a built-in system for remedying this:

  1. CVE and deprecation disclosures on the NuGet.org feed and
  2. Built-in support for logging build warnings when vulnerable packages are restored (the audit settings sketched below).
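
For reference, here is a sketch of the standard NuGet audit settings in a project file (generic NuGet functionality, not anything specific to our packages); with these set, restore surfaces NU1901–NU1904 warnings for known-vulnerable direct and transitive dependencies:

<PropertyGroup>
  <NuGetAudit>true</NuGetAudit>
  <NuGetAuditMode>all</NuGetAuditMode>
  <NuGetAuditLevel>low</NuGetAuditLevel>
</PropertyGroup>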

This is the normal process by which we and every other active package author have resolved CVEs from upstream dependencies for years. Why was this a special case that merited the extreme step of deleting other people’s packages without notice?

This precedent bothers me for three reasons:

  1. Undermines confidence in indefinite package availability. Hard-deleting packages is supposed to be a giant no-no for the NuGet ecosystem. If the criteria for deleting packages now include "every time Microsoft makes a boo-boo", that's impossible for us to predict or mitigate as OSS vendors on the NuGet platform. Our only recourse there would be to host our own NuGet feed and push our users to that, which would have the effect of killing our distribution and the ability for other authors to derive from our work.
  2. Unique access for Microsoft alone. Imagine if we had a major vulnerability in one of our Akka.NET packages that exposed all of our users to a severe CVE - would Petabridge be given permission to hard-delete any of the 200+ packages that depend on us to remediate it? Obviously not. If a bigger vendor like AWS or Google had a similar issue to Microsoft’s, do you think they’d get permission to delete any third party authors’ package versions? Probably not.
  3. The arbitrariness of it. Why did only our new package versions get deleted, and not the older ones that also transitively referenced the vulnerable version? Why did the email that was sent to us make zero mention of our packages being deleted? By what process was this decided and enforced by NuGet?

Microsoft trying to solve a vulnerability issue isn’t the problem - it’s the way they did it. If you look back at my post about the .NET Foundation back-dooring their own projects by abusing the Foundation’s administrative access, read this key paragraph:

But using that administrative access is a nuclear option - OSS foundations must have it but they must also never use it outside of these narrow cases. As soon as you make a move using this access without the maintainers’ consent the fallout is going to turn the relationship between foundation and maintainer radioactive, as the maintainer’s autonomy over the project is now compromised.

We trust NuGet as our distribution platform for our intellectual property and our customers trust it for being able to readily access it. If that trust in the perpetual availability of our IP can be disrupted any time a Microsoft organization fucks up and introduces a vulnerability, that’s a real problem for us and our users.

What’s the limiting principle here going forward? And why did this vulnerability need to be treated differently than any of the other hundreds of vulnerabilities disclosed in Microsoft packages over the past 10 years?

Update

A user on X reported that this happened to their packages too.

  1. I didn’t even realize that Deleted was a possible status for a package version on NuGet. I’ve been publishing packages there for nearly 15 years! 

  2. Later on, after I bitched about it on Twitter, the Azure.Identity team also released a new update
