As AI agents become more sophisticated, the need for seamless integration with powerful cloud-based tools grows. The Azure AI Foundry SDK and MCP (Model Context Protocol) tools create a dynamic duo that empowers developers to build, deploy, and manage intelligent agents with ease.
Solution Overview
The AI-Foundry-Agent-MCP GitHub repository provides a hands-on solution for integrating MCP tools with the Azure AI Foundry SDK. This setup allows developers to:
Access and deploy state-of-the-art models from Azure AI Foundry.
Use MCP tools to manage model context, knowledge bases, and evaluation pipelines.
Rapidly prototype and scale AI agent solutions in a cloud-native environment.
Getting Started
The repo includes a step-by-step guide to get your environment up and running:
Clone the repo and navigate to the project directory:
git clone https://github.com/ccoellomsft/AI-Foundry-Agent-MCP.git
cd ai-foundry-agents-mcp/ai-foundry-agents-mcp-tools/Python
Set up a virtual environment:
python -m venv labenv
source ../labenv/Scripts/activate
Configure your environment - Edit the .env file to include your Azure AI Foundry project endpoint and model deployment name.
Authenticate with Azure and select your subscription when prompted:
az login
Run client.py:
python client.py
Run with Sample Queries:
What exchange is MSFT listed on?
Give me a list of Microsoft's popular products.
What is Microsoft's stock price?
Creating an MCP Tool
MCP tools are defined in the server.py file, where tools (Python functions) can be added as needed. The agent uses each tool function's description (the docstring or comment in the code) to understand the tool's purpose and determine which one to use.
When a user prompt includes something like "What's the current Microsoft stock price?" or "Give me information on Microsoft's products," the agent matches the intent of the prompt with a tool's description. Within the MCP tool function, you can call an endpoint of another service, pass parameters into the function, and add any applicable logic.
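As a hypothetical sketch of what such a tool function in server.py could look like (the function name, docstring, and price data below are illustrative, not taken from the repo; in the actual project the function would also be registered with the MCP server, e.g. via a tool decorator):

```python
def get_stock_price(ticker: str) -> str:
    """Returns the current stock price for the given ticker symbol."""
    # The docstring above is what the agent reads to match user intent
    # to this tool. A real tool would call out to a market-data API here;
    # this sketch uses a hard-coded stub instead.
    prices = {"MSFT": "430.12"}
    return prices.get(ticker.upper(), "unknown")
```

A prompt like "What is Microsoft's stock price?" would be matched to this function via its description, and the agent would invoke it with the extracted ticker.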
Conclusion
By combining the Azure AI Foundry SDK with MCP tools, you gain access to a rich ecosystem of models, data indexing, and deployment capabilities, all within a unified development workflow. Whether you're building chatbots, copilots, or intelligent search systems, this toolkit accelerates your journey from prototype to production. The integration streamlines the development lifecycle of AI agents while ensuring they operate with contextual intelligence, adaptability, and enterprise-grade security. As AI continues to evolve, leveraging these tools together positions teams to deliver smarter, more responsible, and impactful AI-driven experiences.
Jamon sits down with Markus Leyendecker from Meta to talk about using React Native on Meta Quest. They cover what’s already working, what’s still coming together, and why mixed reality might be the next big frontier for React Native developers.
Spoiler alert: Jamon might have purchased a headset after the recording of this episode!
Infinite Red is an expert React Native consultancy located in the USA. With nearly a decade of React Native experience and deep roots in the React Native community (hosts of Chain React and the React Native Newsletter, core React Native contributors, creators of Ignite and Reactotron, and much, much more), Infinite Red is the best choice for helping you build and deploy your next React Native app.
Explicit resource management support in core rules
Four core rules have been updated to better support explicit resource management, a new feature in ES2026 JavaScript, including support for using and await using syntax.
The init-declarations rule no longer reports on initializing using and await using variables when the option is "never", because these variables must be initialized. For example:
async function foobar() {
  await using quux = getSomething();
}
The no-const-assign rule now reports on modifying using and await using variables. For example:
if (foo) {
  using a = getSomething();
  a = somethingElse;
}
The no-loop-func rule no longer reports on references to using and await using variables, because these variables are constant. For example:
for (using i of foo) {
  var a = function() {
    return i;
  }; // OK, all references are referring to block scoped variables in the loop.
  a();
}
The no-undef-init rule no longer reports on using and await using variables initialized to undefined. For example:
using foo = undefined;
Improved RuleTester output for incorrect locations
The run method of the RuleTester class has been enhanced to indicate when multiple properties of a reported error location in a test case do not match. For example:
AssertionError [ERR_ASSERTION]: Actual error location does not match expected error location.
+ actual - expected
{
+ column: 31,
+ endColumn: 32
- column: 32,
- endColumn: 33
}
Previously, the output would only show one property even if there were multiple mismatches:
AssertionError [ERR_ASSERTION]: Error column should be 32
31 !== 32
+ expected - actual
-31
+32
Features
35cf44c feat: output full actual location in rule tester if different (#19904) (ST-DDT)
Errors and exceptions are inevitable in application development. As programmers, it is our responsibility to handle these errors gracefully so that the user experience of the application is not compromised. Handling errors correctly also helps us debug, understand what caused them, and fix them.
JavaScript has been a popular programming language for over three decades. We build web, mobile, PWA, and server-side applications using JavaScript and various popular JavaScript-based libraries (like ReactJS) and frameworks (like Next.js, Remix, and so on).
Being a loosely typed language, JavaScript imposes the challenge of handling type safety correctly. TypeScript is useful for managing types, but we still need to handle runtime errors efficiently in our code.
Errors like TypeError, RangeError, and ReferenceError are probably pretty familiar to you if you’ve been building with JavaScript for a while. All these errors may cause invalid data, bad page transitions, unwanted results, or even the entire application to crash – none of which will make end users happy!
In this handbook, you’ll learn everything you need to know about error handling in JavaScript. We will start with an understanding of errors, their types, and occurrences. Then you’ll learn how to deal with these errors so that they don’t cause a bad user experience. At the end, you’ll also learn to build your own custom error types and clean-up methodologies to handle your code flow better for optimizations and performance.
I hope you enjoy reading along and practising the tasks I have provided at the end of the article.
This handbook is also available as a video session as part of the 40 Days of JavaScript initiative. Please check it out.
Errors and exceptions are the events that disrupt program execution. JavaScript parses and executes code line by line. The source code gets evaluated with the grammar of the programming language to ensure it is valid and executable. If there is a mismatch, JavaScript encounters a parsing error. You’ll need to make sure you follow the right syntax and grammar of the language to stay away from parsing errors.
Take a look at the code snippet below. Here, we have made the mistake of not closing the parentheses for the console.log.
console.log("hi"
This will lead to a Syntax Error like this:
Other types of errors can happen due to wrong data input, trying to read a value or property that doesn’t exist, or acting on inaccurate data. Let’s see some examples:
console.log(x); // Uncaught ReferenceError: x is not defined

let obj = null;
console.log(obj.name); // Uncaught TypeError: Cannot read properties of null

let arr = new Array(-1); // Uncaught RangeError: Invalid array length

decodeURIComponent("%"); // Uncaught URIError: URI malformed

eval("var x = ;"); // Uncaught SyntaxError (modern engines rarely throw EvalError)
Here is the list of possible runtime errors you may encounter, along with their descriptions:
ReferenceError – Occurs when trying to access a variable that is not defined.
TypeError – Occurs when an operation is performed on a value of the wrong type.
RangeError – Occurs when a value is outside the allowable range.
SyntaxError – Occurs when there is a mistake in the syntax of the JavaScript code.
URIError – Occurs when an incorrect URI function is used in encoding and decoding URIs.
EvalError – Occurs when there is an issue with the eval() function.
InternalError – Occurs when the JavaScript engine runs into an internal limit (stack overflow).
AggregateError – Introduced in ES2021, used for handling multiple errors at once.
Custom Errors – These are user-defined errors, and we will learn how to create and use them soon.
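For instance, an AggregateError bundles several errors into one object, which is how Promise.any() reports that every input promise rejected. A small sketch of constructing one directly:

```javascript
// AggregateError takes an iterable of errors plus an overall message.
const err = new AggregateError(
  [new Error("first failure"), new Error("second failure")],
  "Multiple failures"
);

console.log(err.message);       // Multiple failures
console.log(err.errors.length); // 2
```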
Have you noticed that all the code examples we used above result in a message explaining what the error is about? If you look at those messages closely, you will find a word called Uncaught. It means the error occurred, but it was not caught and managed. That’s exactly what we will now go for – so you know how to handle these errors.
Handling Errors With try and catch
JavaScript applications can break for various reasons, like invalid syntax, invalid data, missing API responses, user mistakes, and so on. Most of these reasons may lead to an application crash, and your users will see a blank white page.
Rather than letting the application crash, you can gracefully handle these situations using try…catch.
The try block contains the code – the business logic – which might throw an error. Developers always want their code to be error-free. But at the same time, you should be aware that the code might throw an error for several possible reasons, like:
Parsing JSON
Running API logic
Accessing nested object properties
DOM manipulations
… and many more
When the code inside the try block throws an error, the code execution of the remaining code in the try block will be suspended, and the control moves to the nearest catch block. If no error occurs, the catch block is skipped completely.
try {
// Code that might throw an error
} catch (error) {
// Handle the error here
}
The catch Block
The catch block runs only if an error was thrown in the try block. It receives the Error object as a parameter to give us more information about the error. In the example shown below, we are using something called abc without declaring it. JavaScript will throw an error like this:
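A minimal reconstruction of that example:

```javascript
try {
  console.log("execution starts here");
  abc; // not declared anywhere, so this throws a ReferenceError
  console.log("this line never runs");
} catch (error) {
  console.log("Caught:", error.message); // Caught: abc is not defined
}
```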
JavaScript executes code line by line. The execution sequence of the above code will be:
First, the string "execution starts here" will be logged to the console.
Then the control will move to the next line and find the abc there. What is it? JavaScript doesn’t find any definition of it anywhere. It’s time to raise the alarm and throw an error. The control doesn’t move to the next line (the next console log), but rather moves to the catch block.
In the catch block, we handle the error by logging it to the console. We could do many other things like show a toast message, send the user an email, or switch off a toaster (why not if your customer needs it).
Without try...catch, the error would crash the app.
Error Handling: Real-World Use Cases
Let’s now see some of the real-world use cases of error handling with try…catch.
Handling Division by Zero
Here is a function that divides a number by another number. So, we have parameters passed to the function for both numbers. We want to make sure that the division never encounters an error for dividing a number by zero (0).
As a proactive measure, we have written a condition that if the divisor is zero, we will throw an error saying that division by zero is not allowed. In every other case, we will proceed with the division operation. In case of an error, the catch block will handle the error and do what’s needed (in this case, logging the error to the console).
function divideNumbers(a, b) {
  try {
    if (b === 0) {
      const err = new Error("Division by zero is not allowed.");
      throw err;
    }
    const result = a / b;
    console.log(`The result is ${result}`);
  } catch (error) {
    console.error("Got a Math Error:", error.message);
  }
}
Now, if we invoke the function with the following arguments, we will get a result of 5, since the second argument is a non-zero value.
divideNumbers(15, 3); // The result is 5
But if we pass the 0 value for the second argument, the program will throw an error, and it will be logged into the console.
divideNumbers(15, 0);
Output:
Handling JSON
Often, you will get JSON as a response to an API call. You need to parse this JSON in your JavaScript code to extract the values. What if the API sends you some malformed JSON by mistake? You can’t afford to let your user interface crash because of this. You need to handle it gracefully – and here comes the try…catch block again to the rescue:
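A sketch of this pattern (the safeParse name and sample data are illustrative):

```javascript
function safeParse(jsonString) {
  try {
    return JSON.parse(jsonString);
  } catch (err) {
    // Malformed JSON throws a SyntaxError; recover with a safe default.
    console.error("Invalid JSON received:", err.message);
    return null;
  }
}

safeParse('{"name": "Alex"}'); // parses fine
safeParse('{ invalid json }'); // logged, returns null instead of crashing
```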
Without try...catch, the second call will crash the app.
Anatomy of the Error Object
Getting errors in programming can be a scary feeling. But errors in JavaScript aren’t just scary, annoying messages – they are structured objects that carry a lot of helpful information about what went wrong, where, and why.
As developers, we need to understand the anatomy of the Error object to help us better with faster debugging and smarter recovery in production-level application issues.
Let’s deep dive into the Error object, its properties, and how to use it effectively with examples.
What is the Error Object?
The JavaScript engine throws an Error object when something goes wrong during runtime. This object contains helpful information like:
An error message: This is a human-readable error message.
The error type: TypeError, ReferenceError, SyntaxError, and so on that we discussed above.
The stack trace: This helps you navigate to the root of the error. It is a string containing the stack trace at the point the error was thrown.
Let’s take a look at the code snippet below. The JavaScript engine will throw an error in this code, as the variable y is not defined. The error object contains the error name (type), message, and the stack trace information.
function doSomething() {
  const x = y + 1; // y is not defined
}

try {
  doSomething();
} catch (err) {
  console.log(err.name);    // ReferenceError
  console.log(err.message); // y is not defined
  console.log(err.stack);   // ReferenceError: y is not defined
                            //   at doSomething (<anonymous>:2:13)
                            //   at <anonymous>:5:3
}
Tip: If you need any specific properties from the error object, you can use destructuring to do that. Here is an example where we are only interested in the error name and message, not the stack.
try {
  JSON.parse("{ invalid json }");
} catch ({ name, message }) {
  console.log("Name:", name);       // Name: SyntaxError
  console.log("Message:", message); // Message: Expected property name or '}' in JSON at position 2 (line 1 column 3)
}
Throwing Errors and Re-throwing Errors
JavaScript provides a throw statement to trigger an error manually. It is very helpful when you want to handle an invalid condition in your code (remember the divide by zero problem?).
To throw an error, you need to create an instance of the Error object with details and then throw it.
throw new Error("Something is bad!");
When the code execution encounters a throw statement,
It stops the execution of the current code block immediately.
The control moves to the nearest catch block (if any).
If no catch block is found, the error will not be caught. It bubbles up the call stack and may end up crashing the program.
Rethrowing
At times, catching the error itself in the catch block is not enough. Sometimes, you may not know how to handle the error completely, and you might want to do additional things, like:
Adding more context to the error.
Logging the error into a file-based logger.
Passing the error to someone more specialized to handle it.
This is where rethrow comes in. With rethrowing, you catch an error, do something else with it, and then throw it again.
function processData() {
try {
parseUserData();
} catch (err) {
console.error("Error in processData:", err.message);
throw err; // Rethrow so the outer function can handle it too
}
}
function main() {
try {
processData();
} catch (err) {
handleErrorBetter(err);
}
}
In the code above, the processData() function catches an error, logs it, and then throws it again. The outer main() function can now catch it and do something more to handle it better.
In real-world application development, you would want to separate the concerns for errors, like:
API Layer – In this layer, you can detect HTTP failures
async function fetchUser(id) {
  const res = await fetch(`/users/${id}`);
  if (!res.ok) throw new Error("User not found"); // throw it here
  return res.json();
}
Service Layer – In this layer, you handle business logic. So the error will be handled for invalid conditions.
async function getUser(id) {
  try {
    const user = await fetchUser(id);
    return user;
  } catch (err) {
    console.error("Fetching user failed:", err.message);
    throw new Error("Unable to load user profile"); // rethrowing
  }
}
UI Layer – Show a user-friendly error message.
async function showUserProfile() {
  try {
    const user = await getUser(123);
    renderUser(user);
  } catch (err) {
    displayError(err.message); // A proper message shown to the user
  }
}
Using finally with try…catch
The try…catch block gives us a way to handle errors gracefully. But you may want some code to execute regardless of whether an error occurred – for example, closing the database connection, stopping a loader, or resetting some state. That’s where finally comes in.
try {
// Code might throw an error
} catch (error) {
// Handle the error
} finally {
// Always runs, whether an error occurred or not
}
In the performTask() function, the error is thrown after the first console log. So, the control will move to the catch block and log the error. After that, the finally block will execute its console log.
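A minimal sketch of the performTask() function described here (the log messages are illustrative):

```javascript
function performTask() {
  try {
    console.log("Task started");               // first console log
    throw new Error("Something went wrong!");  // execution stops here
    console.log("This line never runs");
  } catch (err) {
    console.log("Caught:", err.message);       // Caught: Something went wrong!
  } finally {
    console.log("Cleanup runs no matter what");
  }
}

performTask();
```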
Let’s take a more real-world use case of making an API call and handling the loading spinner:
async function loadUserData() {
  showSpinner(); // Show the spinner here
  try {
    const res = await fetch('/api/user');
    const data = await res.json();
    displayUser(data);
  } catch (err) {
    showError("Failed to load user.");
  } finally {
    hideSpinner(); // Hide spinner for both success and fail cases.
  }
}
Usually, we show a loading spinner while making an API (asynchronous) call from the browser. Irrespective of the successful response or an error from the API call, we must stop showing the loading spinner. Instead of executing the code logic twice to stop the spinner (once inside the try block and then again inside the catch block), you can do it inside the finally block.
Caution with finally
The finally block can override return values or a thrown error. This behaviour may be confusing and can lead to bugs as well.
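For example, consider this sketch, where a return inside finally wins:

```javascript
function confusingReturn() {
  try {
    return "from try";
  } finally {
    return "from finally"; // silently overrides the try's return value
  }
}

console.log(confusingReturn()); // "from finally"
```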
It will return 'from finally'. The return 'from try' is completely ignored. The return from finally overrides it silently.
Let’s see one more example of the same problem:
function willThrow() {
  try {
    throw new Error("Original Error");
  } finally {
    throw new Error("Overriding Error"); // The original error is lost
  }
}

try {
  willThrow();
} catch (err) {
  console.log(err.message); // "Overriding Error"
}
Here, the original error ("Original Error") is swallowed. The finally block overrides the actual root cause.
When using finally:
Avoid returns and throws from finally as much as possible.
Avoid performing logic in the finally block that may impact the actual outcome. The try block is the best place for that.
Any critical decision-making must be avoided inside the finally block.
Use finally for cleanup activities, such as closing files, connections, and stopping loading spinners, etc.
Ensure the finally block contains side-effect-free code.
Custom Errors
Using the generic Error and its existing types, like ReferenceError, SyntaxError, and so on, can be a bit vague in complex applications. JavaScript lets you create custom errors that are more related to your business use cases. The custom errors can provide:
Additional contextual information about the error.
Clarity about the error
More readable logs
The ability to handle multiple error cases conditionally.
A custom error in JavaScript is a user-defined error type – typically an ES6 class that extends the built-in Error class. We can call super() in the constructor to inherit the message property from Error. You can optionally assign a name and clean up the stack trace for the custom error.
Let’s now see a real-world use case for a custom error.
A Real-World Use Case of Custom Errors
Using a form on a web page is an extremely common use case. A form may contain one or more input fields. It is recommended to validate the user inputs before we process the form data for any server actions.
Let’s create a custom validation error we can leverage for validating multiple form input data, like the user’s email, age, phone number, and more.
First, we’ll create a class called ValidationError that extends the Error class. The constructor function sets up the ValidationError class with an error message. We can also instantiate additional properties, like name, field, and so on.
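A minimal sketch of such a class (the field property is an assumption based on how ValidationError is used below):

```javascript
class ValidationError extends Error {
  constructor(field, message) {
    super(message); // inherit the message property from Error
    this.name = "ValidationError";
    this.field = field; // extra context: which input failed validation
  }
}
```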
Now, let's see how to use ValidationError. We can validate a user model to check its properties and throw a ValidationError whenever the expectations mismatch.
function validateUser(user) {
  if (!user.email.includes("@")) {
    throw new ValidationError("email", "Invalid email format");
  }
  if (!user.age || user.age < 18) {
    throw new ValidationError("age", "User must be 18+");
  }
}
In the code snippet above,
We throw an invalid email format validation error if the user’s email doesn’t include the @ symbol.
We throw another validation error if the age information of the user is missing or is below 18.
A custom error gives us the power to create domain/usage-specific error types to keep the code more manageable and less error-prone.
Task Assignments for You
If you have read the handbook this far, I hope you now have a solid understanding of JavaScript Error Handling. Let’s try out some assignments based on what we have learned so far. It’s going to be fun.
Find the Output
What will be the output of the following code snippet and why?
try {
let r = p + 50;
console.log(r);
} catch (error) {
console.log("An error occurred:", error.name);
}
Options are:
ReferenceError
SyntaxError
TypeError
No error, it prints 10
Payment Process Validation
Write a function processPayment(amount) that verifies if the amount is positive and does not exceed the balance. If any condition fails, throw appropriate errors.
Hint: You can think of creating a Custom Error here.
40 Days of JavaScript Challenge Initiative
There are 101 ways of learning something. But nothing can beat structured and progressive learning methodologies. After spending more than two decades in Software Engineering, I’ve been able to gather the best of JavaScript together to create the 40 Days of JavaScript challenge initiative.
Check it out if you want to learn JavaScript with fundamental concepts, projects, and assignments for free (forever). Focusing on the fundamentals of JavaScript will prepare you well for a future in web development.
Before We End...
That’s all! I hope you found this article insightful.
Let's talk about something we all deal with but often put off until the last minute - error handling in our ASP.NET Core apps.
When something breaks in production, the last thing you want is a cryptic 500 error with zero context.
Proper error handling isn't just about logging exceptions.
It's about making sure your app fails gracefully and gives useful info to the caller (and you).
In this article, I'll walk through the main options for global error handling in ASP.NET Core.
We'll look at how I used to do it, what ASP.NET Core 9 offers now, and where each approach makes sense.
The classic way to catch unhandled exceptions is with custom middleware.
This is where most of us start, and honestly, it still works great for most scenarios.
internal sealed class GlobalExceptionHandlerMiddleware(
    RequestDelegate next,
    ILogger<GlobalExceptionHandlerMiddleware> logger)
{
    public async Task InvokeAsync(HttpContext context)
    {
        try
        {
            await next(context);
        }
        catch (Exception ex)
        {
            logger.LogError(ex, "Unhandled exception occurred");

            // Make sure to set the status code before writing to the response body
            context.Response.StatusCode = ex switch
            {
                ApplicationException => StatusCodes.Status400BadRequest,
                _ => StatusCodes.Status500InternalServerError
            };

            await context.Response.WriteAsJsonAsync(new ProblemDetails
            {
                Type = ex.GetType().Name,
                Title = "An error occurred",
                Detail = ex.Message
            });
        }
    }
}
Don't forget to add the middleware to the request pipeline:
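In Program.cs, that might look like:

```csharp
// Register the middleware early so it wraps everything after it in the pipeline.
app.UseMiddleware<GlobalExceptionHandlerMiddleware>();
```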
This approach is solid and works everywhere in your pipeline.
The beauty is its simplicity: wrap everything in a try-catch, log the error, and return a consistent response.
But once you start adding specific rules for different exception types (e.g. ValidationException, NotFoundException), this becomes a mess.
You end up with long if / else chains or more abstractions to handle each exception type.
Plus, you're manually crafting JSON responses, which means you're probably not following
RFC 9457 (Problem Details) standards.
Microsoft recognized this pain point and gave us IProblemDetailsService to standardize error responses.
Instead of manually serializing our own error objects, we can use the built-in Problem Details format.
internal sealed class GlobalExceptionHandlerMiddleware(
    RequestDelegate next,
    IProblemDetailsService problemDetailsService,
    ILogger<GlobalExceptionHandlerMiddleware> logger)
{
    public async Task InvokeAsync(HttpContext context)
    {
        try
        {
            await next(context);
        }
        catch (Exception ex)
        {
            logger.LogError(ex, "Unhandled exception occurred");

            // Make sure to set the status code before writing to the response body
            context.Response.StatusCode = ex switch
            {
                ApplicationException => StatusCodes.Status400BadRequest,
                _ => StatusCodes.Status500InternalServerError
            };

            await problemDetailsService.TryWriteAsync(new ProblemDetailsContext
            {
                HttpContext = context,
                Exception = ex,
                ProblemDetails = new ProblemDetails
                {
                    Type = ex.GetType().Name,
                    Title = "An error occurred",
                    Detail = ex.Message
                }
            });
        }
    }
}
This is much cleaner.
We're now using a standard format that API consumers expect, and we're not manually fiddling with JSON serialization.
But we're still stuck with that growing switch statement problem.
You can learn more about using Problem Details in .NET here.
ASP.NET Core 8 introduced IExceptionHandler, and it's a game-changer.
Instead of one massive middleware handling everything, we can create focused handlers for specific exception types.
You can chain multiple exception handlers together, and they'll run in the order you register them.
ASP.NET Core will use the first one that returns true from TryHandleAsync.
Example: One for validation errors, one global fallback.
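A hedged sketch of that setup (ValidationException, ValidationExceptionHandler, and GlobalExceptionHandler are illustrative names, not from a specific library):

```csharp
// A focused handler for validation failures. It returns false for anything
// it doesn't recognize, so the next registered handler gets a chance to run.
internal sealed class ValidationExceptionHandler : IExceptionHandler
{
    public async ValueTask<bool> TryHandleAsync(
        HttpContext httpContext,
        Exception exception,
        CancellationToken cancellationToken)
    {
        if (exception is not ValidationException)
        {
            return false; // not ours; fall through to the next handler
        }

        httpContext.Response.StatusCode = StatusCodes.Status400BadRequest;
        await httpContext.Response.WriteAsJsonAsync(new ProblemDetails
        {
            Title = "Validation error",
            Detail = exception.Message
        }, cancellationToken);

        return true; // handled; stop the chain
    }
}

// Registration order matters: most specific first, catch-all last.
builder.Services.AddExceptionHandler<ValidationExceptionHandler>();
builder.Services.AddExceptionHandler<GlobalExceptionHandler>();
builder.Services.AddProblemDetails();

var app = builder.Build();
app.UseExceptionHandler();
```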
// In your controller or service - IValidator<CreateUserRequest>
public async Task<IActionResult> CreateUser(CreateUserRequest request)
{
    await _validator.ValidateAndThrowAsync(request);

    // Your business logic here
}
The execution order is important.
The framework will try each handler in the order you registered them.
So put your most specific handlers first, and your catch-all handler last.
For new projects, I'd go straight to IExceptionHandler.
It's cleaner, more maintainable, and gives you the flexibility to handle different exception types exactly how you want.
The key takeaway?
Don't let error handling be an afterthought.
Set it up early, make it consistent, and your users (and your future self) will thank you when things inevitably go wrong.
“Software supply chain management” is one of those terms that sounds like Venture Capital-funded vendor marketing bullshit right up until it isn’t.
In 2016, the npm left-pad incident taught many of us in the software industry two things:
The fragility of depending directly on central package management systems, such as npm or nuget.org (which is why artifact proxying tools like JFrog Artifactory became so important); and
That centralized package management systems probably need to make stronger security and availability guarantees, such as not allowing hard deletes of packages in the first place.
Imagine my surprise yesterday, when I discovered that two of our Akka.NET packages were deleted[1], by Microsoft, without any advance notice. I only discovered that this was an issue when one of my own Akka.NET applications failed to build on CI/CD due to missing package versions.
I’ll get into the reasons why they did this, but the bottom line is: this is a disturbing precedent that really should never be repeated.
In essence, Microsoft’s adjacent business units abused NuGet to deal with their own security vulnerabilities - getting a level of access that would never be granted to any other publisher on the platform.
Rather, Microsoft.Identity.Client is a transitive dependency of Microsoft’s Azure.Identity package, which we reference for authenticating these plugins’ access to Azure resources.
Immediately after receiving this email, which makes zero mention of our package versions being deleted, we investigated and found that:
The “vulnerability” was just a typo in a public-facing XML-DOC comment that happens to point to a typo-squatting URL that is commonly used in phishing attacks. Sucks, but it’s not a “real” CVE in the sense of it impacting actual program execution - a user would have to manually do something with that information in order to be vulnerable.
Azure.Identity’s developers had presumably been contacted by the AAD team already. The most recent version of their package at the time, 1.14.1, hadn’t been updated with a non-“vulnerable” version of Microsoft.Identity.Client.
Given both of those data points, we figured this was probably a nothing-burger and went about our business. “We’ll update our plugins once there’s a new version of Azure.Identity” was the decision.
It was only later when I tried to build one of my own Akka.NET applications that we discovered that the package versions had been deleted outright, which we fixed via a new update that took a direct dependency on Microsoft.Identity.Client[2].
A Bad Precedent
The Microsoft Entra / Azure Active Directory people were trying to address a legitimate security concern. I totally get it. But there are new CVE disclosures on Microsoft packages virtually every month.
NuGet has a built-in system for remedying this:
CVE and deprecation disclosures on the NuGet.org feed and
Built in support for logging build warnings when vulnerable packages are restored.
This is the normal process by which we and every other active package author have resolved CVEs from upstream dependencies for years. Why was this a special case that merited the extreme step of deleting other people’s packages without notice?
This precedent bothers me for three reasons:
Undermines confidence in indefinite package availability. Hard-deleting packages is supposed to be a giant no-no for the NuGet ecosystem. If the criteria for deleting packages now include “every time Microsoft makes a boo-boo,” that’s impossible for us to predict or mitigate as OSS vendors on the NuGet platform. Our only recourse would be to host our own NuGet feed and push our users to that, which would have the effect of killing our distribution and the ability for other authors to derive from our work.
Unique access for Microsoft alone. Imagine if we had a major vulnerability in one of our Akka.NET packages that exposed all of our users to a severe CVE - would Petabridge be given permission to hard-delete any of the 200+ packages that depend on us to remediate it? Obviously not. If a bigger vendor like AWS or Google had a similar issue to Microsoft’s, do you think they’d get permission to delete any third party authors’ package versions? Probably not.
The arbitrariness of it. Why did only our new package versions get deleted, and not the older ones that also transitively referenced the vulnerable version? Why did the email that was sent to us make zero mention of our packages being deleted? By what process was this decided and enforced by NuGet?
But using that administrative access is a nuclear option - OSS foundations must have it but they must also never use it outside of these narrow cases. As soon as you make a move using this access without the maintainers’ consent the fallout is going to turn the relationship between foundation and maintainer radioactive, as the maintainer’s autonomy over the project is now compromised.
We trust NuGet as our distribution platform for our intellectual property and our customers trust it for being able to readily access it. If that trust in the perpetual availability of our IP can be disrupted any time a Microsoft organization fucks up and introduces a vulnerability, that’s a real problem for us and our users.
What’s the limiting principle here going forward? And why did this vulnerability need to be treated differently than any of the other hundreds of vulnerabilities disclosed in Microsoft packages over the past 10 years?
Update
A user on X reported that this happened to their packages too:
[1] I didn’t even realize that Deleted was a possible status for a package version on NuGet. I’ve been publishing packages there for nearly 15 years!