Today, we’re announcing MAI-Image-1, our first image generation model developed entirely in-house, debuting in the top 10 text-to-image models on LMArena.
At Microsoft AI, we’re creating AI for everyone – a supportive, helpful presence always in the service of humanity. We’ve shared how purpose-built models are essential for this mission, and we announced our first two in-house models in August. MAI-Image-1 marks the next step on our journey and paves the way for more immersive, creative and dynamic experiences inside our products.
We trained this model with the goal of delivering genuine value for creators, and we put a lot of care into avoiding repetitive or generically stylized outputs. For example, we prioritized rigorous data selection and nuanced evaluation focused on tasks that closely mirror real-world creative use cases – taking into account feedback from professionals in the creative industries. This model is designed to deliver real flexibility, visual diversity and practical value.
MAI-Image-1 excels at generating photorealistic imagery, like lighting (e.g., bounce light, reflections), landscapes, and much more. This is particularly so when compared to many larger, slower models. Its combination of speed and quality means users can get their ideas on screen faster, iterate through them quickly, and then transfer their work to other tools to continue refining.
We are committed to ensuring safe and responsible outcomes. That has driven us to begin testing this model in LMArena so that we can gather insights and feedback. We’re excited to be making MAI-Image-1 available in Copilot and Bing Image Creator very soon. For now, give it a try in LMArena and let us know what you think!
Build the future with us!
We’re a lean, fast-moving lab made up of some of the world’s most talented minds. We have an ambitious mission we truly believe in. We’re also fortunate to partner with incredible product teams, giving our models the chance to reach billions of users and create immense positive impact. If you’re a brilliant, highly ambitious and low-ego individual, you’ll fit right in – come and join us as we work on our next generation of models!
1124. This week, we look at blue idioms, including the political history of "blue states," the medical reason for being "blue in the face," and the astronomical reason for a "blue moon." Then, we look at the difference between 'plumb' (with a B), and 'plum' (without a B).
LLMs have, at this point, inserted themselves into almost every walk of life.
In this post, I’m going to cover Copilot Studio, and how you can use that to customise your own LLM agent.
Disclaimer
Microsoft are changing the interface almost weekly, so whilst this might be useful if you read this post on or around the time it was published, it may simply serve as a version of the Wayback Machine! Hopefully, the basic principles will remain the same, however.
What is Copilot?
Copilot is Microsoft’s answer to the LLM revolution. If you want to try it out, it’s now part of O365 - there’s an app and a free trial (links to follow).
How can I change Copilot (and why would I want to)?
Let’s start with the why. LLMs are great for general queries - for example:
“How do I change a car tyre?”
“What date was Napoleon born?”
“Write my homework / letter for me…”
However, what LLMs generally are fairly bad at is understanding your specific context. You can give it that context:
“You are a travel agent, advise me on the best trips to the Galapagos…”
But you may want it to draw from your data as a preference.
On to the how…
Confusingly, there are multiple sites and applications that currently refer to themselves as Copilot Studio. The O365 Version is a lightweight view of the application. It lets you do most things, but there are certain features restricted to the Full Copilot Studio.
O365 Version
The lightweight one allows you to customise the agent to an extent - you can give it a personality, and even give it a certain scope of knowledge; however, that scope must be in the form of a URL.
Looking at that screenshot, you can see that I’ve done my best to warp the mind of the agent. What I actually wanted to do was to feed it some incorrect historical facts, too - however, it only accepts a URL and, to make it worse, only accepts a simple URL - so you can’t (for example) upload something to OneDrive and use that.
Full Copilot Studio
The full studio version offers other options, including the ability to upload a file as a knowledge source.
We can set up the same agent here (sadly, you can’t use the same one):
That works well (in that it gives you annoying and irrelevant historical facts):
Messing with its mind
That’s great, but what if we now want to change it so that it uses a set of incorrect historical facts…
Then we can instruct it to only use data from that source:
We can test that inside the studio window:
Based on this reply, you can push the bot to respond and to trust the knowledge source - however, in my example, I’m essentially asking it to lie:
There are ways around this. For example, you can simply tell it to cite the source, at which point it will say that “according to the defined source, x is the case”.
Conclusion
Hopefully this post has introduced some of the key concepts of Copilot Studio. It may seem like the development of “Askin” is a little frivolous (even if it is, it’s my blog post, so I’m allowed) - but there is a wider purpose here: let’s imagine that you have a knowledge base, and that knowledge base must be preferred to knowledge found elsewhere on the internet. For example, your bot is providing advice about medicine, or social services, or even a timetable for a conference - you always want your knowledge source to be the only source used - even where that conflicts with something found elsewhere on the web.
Ryan welcomes Dhruv Batra, co-founder and chief scientist at Yutori, to explore the future of AI agents, how AI usage is changing the way people interact with advertisements and the web as a whole, and the challenges that proactive AI agents may face when being integrated into workflows and personal internet use.
Azure Functions on Azure Container Apps lets you run serverless functions in a flexible, scalable container environment. As the platform evolves, there are two mechanisms for deploying images built with the Functions programming model to Azure Container Apps:
Functions V1 (Legacy Microsoft.Web RP Model)
Functions V2 (New Microsoft.App RP Model)
V2 is the latest and recommended approach for hosting Functions on Azure Container Apps. In this article, we will look at the differences between these approaches and how you can transition to the V2 model.
V1 Limitations (Legacy approach)
V1 Function App deployments have limited functionality and a limited experience, so transitioning to V2 is encouraged. Below are the limitations of V1.
Troubleshooting Limitations
Direct container access and real-time log viewing are not supported.
Console access and live log output are restricted due to system-generated container configurations.
Low-level diagnostics are available via Log Analytics, while application-level logs can be accessed through Application Insights.
Portal UI experience limitations
Lacks support for multi-revision management, Easy Auth, health probes, and custom domains.
Dapr Integration Challenges
Compatibility issues between Dapr and .NET isolated functions, particularly during build processes, due to dependency conflicts.
Functions V2 (Improved and Recommended)
Deployment with --kind=functionapp using the Microsoft.App RP reflects the newer deployment approach (Functions on Container Apps V2).
Simplified Resource Management internally: Instead of relying on a proxy Function App (as in the V1 model), V2 provisions a native Azure Container App resource directly. This shift eliminates the dual-resource management that previously involved both the proxy and the container app, thereby simplifying operations by consolidating everything into a single, standalone resource.
Feature rich and fully native: As a result, V2 brings the native features of Azure Container Apps to images deployed with the Azure Functions programming model, including multi-revision management, Easy Auth, health probes, and custom domains.
Since V2 represents a significant upgrade in experience and functionality, it’s recommended that you transition existing V1 deployments to V2.
Legacy Direct Function image deployment approach
Some customers continue to deploy Function images as standard container apps (without kind=functionapp) using the Microsoft.App resource provider. While this method enables access to native Container Apps features, it comes with key limitations:
Not officially supported.
No auto-scale rules — manual configuration required
No access to new V2 capabilities in roadmap (e.g., List Functions, Function Keys, Invocation Count)
Recommendation: Transition to Functions on Container Apps V2, which offers a significantly improved experience and enhanced functionality.
Checklist for transitioning to Functions V2 on Azure Container Apps
Below is the transition guide.
1. Preparation
Identify your current deployment: Confirm you are running Functions V1 (Web RP) in Azure Container Apps
Locate your container image: Ensure you have access to the container image used in your V1 deployment.
Document configuration: Record all environment variables, secrets, storage account connections, and networking settings from your existing app.
Check Azure Container Apps environment quotas: Review memory, CPU, and instance limits for your Azure Container Apps environment. Request quota increases if needed.
2. Create the New V2 App
Create a new Container App with kind=functionapp:
Use the Azure Portal (“Optimize for Functions app” option)
Or use the CLI (az functionapp create) and specify your existing container image (a sketch follows after this list).
No code changes required: You can use the same container image you used for V1; there is no need to modify your Functions code or rebuild your image.
Replicate configuration: Apply all environment variables, secrets, and settings from your previous deployment.
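As a rough illustration of the CLI route mentioned above, the command below is a minimal sketch with placeholder names only; exact flag names can vary by Azure CLI version (for example, older versions use --deployment-container-image-name instead of --image), so check az functionapp create --help before running it.

# Create the new V2 app in an existing Container Apps environment,
# reusing the container image from the V1 deployment (all names are placeholders)
az functionapp create \
  --name my-func-v2 \
  --resource-group my-rg \
  --environment my-container-apps-env \
  --storage-account mystorageacct \
  --image myregistry.azurecr.io/myfunctionimage:v1 \
  --functions-version 4 \
  --runtime dotnet-isolated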
3. Validation
Test function triggers: Confirm all triggers (HTTP, Event Hub, Service Bus, etc.) work as expected.
Test all integrations: Validate connections to databases, storage, and other Azure services.
4. DNS and Custom Domain Updates (optional)
Review DNS names: The new V2 app will have a different default DNS name than your V1 app.
Update custom domains:
If you use a custom domain (e.g., api.yourcompany.com), update your DNS records (CNAME or A record) to point to the new V2 app’s endpoint after validation.
Re-bind or update SSL/TLS certificates as needed.
Notify Users and Stakeholders: Inform anyone who accesses the app directly about the DNS or endpoint change.
Test endpoint: Ensure the new DNS or custom domain correctly routes traffic to the V2 app.
5. Cutover
Switch production traffic: Once validated, update DNS, endpoints, or routing to direct traffic to the new V2 app.
Monitor for issues: Closely monitor the new deployment for errors, latency, or scaling anomalies.
Communicate with stakeholders: Notify your team and users about the transition and any expected changes.
6. Cleanup
Remove the old V1 app: Delete the previous V1 deployment to avoid duplication and unnecessary costs.
Update documentation: Record the new deployment details, configuration, and any lessons learned
Feedback and Support
We’re continuously improving Functions on Container Apps V2 and welcome your input.
Share Feedback: Let us know what’s working well and what could be better.
In the world of .NET, memory management is an important aspect of any application. Fortunately, you don't have to shoulder this immense task yourself: .NET handles it with the superpower of the Garbage Collector (GC). The GC is an engine that keeps your app fast, responsive, and resource-efficient. Although, on the surface, you don't need to know much about what's going on beneath your brackets, it is better to understand how memory management works in your application. In this blog, we will discuss the GC and explore ways to better harness its capabilities.
What is a Garbage Collector in .NET?
The GC is a component of the Common Language Runtime (CLR) that performs automatic memory management. The GC allocates memory to a managed program and releases it when it is no longer needed. The automatic management relieves developers from writing code for memory management tasks and ensures unused objects do not consume memory indefinitely.
Any application uses memory for storing data and objects. Operations such as variable declaration, data fetching, file streaming, and buffer initialization all consume memory, and these operations can be frequent in any application. Once these objects are no longer in use, we need to reclaim their memory, because any client or server has limited resources. If memory is not freed, the unused objects accumulate, leading to a memory leak. Developers would otherwise have to handle this manually by writing memory-clearing code, which is a tiresome process because applications use variables and other memory objects constantly; a big chunk of the program would end up dealing with deallocation. Besides being time-consuming, manual memory release often requires pointer tweaking, and any mishap with the pointers can crash the application. You may also face a double-free situation, where releasing the same object more than once causes undefined behaviour.
This is where the GC comes to the rescue and handles the entire process of allocation and deallocation automatically.
What is the managed heap?
The managed heap is a segment of memory used to store and manage objects allocated by the CLR for a process. It's a key part of .NET's automatic memory management system, which is handled by the GC. The managed heap is the working ground of the GC.
Phases in Garbage Collection
1. Marking Phase
In the marking phase, the GC scans memory to identify live objects that are still reachable (referenced) from your program's active code. The GC then lists all live objects and marks unreferenced objects as garbage.
2. Relocating Phase
Next, the GC updates the references to live objects so that those pointers remain valid after the objects are moved in memory during compaction.
3. Compacting Phase
Here, the GC reclaims the heap memory occupied by dead objects and compacts the live objects together to eliminate gaps in the heap. Compacting live objects reduces fragmentation and makes new memory allocations faster.
Heap generations in Garbage Collection
The GC in .NET follows a generational approach that organizes objects in the managed heap based on their lifespan. The GC uses this division because compacting a portion of the managed heap is faster than compacting the entire heap. Also, most of the garbage consists of short-lived objects such as local variables and lists, so a generational approach is practical too.
Generation 0
Generation 0 contains short-lived objects such as local variables, strings created inside loops, etc. Since most objects in an application are short-lived, they are reclaimed by garbage collection in Generation 0 and don't survive to the next generation. If the application needs to allocate a new object while Generation 0 is full, the GC performs a collection to free address space for it. Generation 0 collections are frequent, reclaiming memory from objects that fell out of use when their methods finished executing.
Generation 1
Generation 1 is the buffer zone between Generations 0 and 2. Objects that survive a Generation 0 collection are promoted to this generation. Typical examples are temporary but reused data structures shared across multiple methods: longer-lived than Generation 0 locals, yet much shorter-lived than the application itself, usually surviving for seconds to minutes. Because these objects don't become unnecessary as quickly as Generation 0 objects, the GC collects Generation 1 less often than Generation 0, maintaining a balance between performance and memory use.
Generation 2
Generation 2 contains long-lived data whose lifespan can be as long as the application's lifetime. Survivors of multiple collections end up in Generation 2, such as singleton instances, static collections, application caches, large objects, or services held by a dependency injection container.
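As a rough illustration (a minimal sketch; forcing collections is for demonstration only, and exact promotion behaviour can vary with the runtime and GC configuration), you can watch an object move through the generations with GC.GetGeneration:

using System;

class GenerationDemo
{
    static void Main()
    {
        var obj = new object();
        Console.WriteLine(GC.GetGeneration(obj)); // freshly allocated: generation 0

        GC.Collect(); // force a collection; obj survives because it is still referenced
        Console.WriteLine(GC.GetGeneration(obj)); // typically promoted to generation 1

        GC.Collect();
        Console.WriteLine(GC.GetGeneration(obj)); // typically generation 2 (GC.MaxGeneration)
    }
}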
Unmanaged resources
Most of your application relies on the GC for memory deallocation. However, unmanaged resources require explicit cleanup. The most prominent examples are objects that wrap an operating system resource, such as a file handle, window handle, or network connection. Objects like FileStream, StreamReader, StreamWriter, SqlConnection, SqlCommand, NetworkStream, and SmtpClient encapsulate an unmanaged resource, and the GC doesn't have specific knowledge about how to clean that resource up. We have to wrap them in a using block, which calls Dispose() to release their unmanaged handles properly. You can also call Dispose() explicitly where needed.
An example of a using block is below:
using (var resource = new FileResource("mytext.txt"))
{
resource.Write("Hello!");
} // Dispose() automatically called here
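FileResource above is not a built-in .NET type; it stands in for any wrapper you write around a disposable resource. A minimal sketch of how such a hypothetical class might implement IDisposable (here simply delegating to a StreamWriter) could look like this:

using System;
using System.IO;

// Hypothetical wrapper used in the example above; it owns a StreamWriter
// and implements IDisposable so callers can put it in a using block.
class FileResource : IDisposable
{
    private readonly StreamWriter _writer; // managed wrapper around a file handle

    public FileResource(string path) => _writer = new StreamWriter(path);

    public void Write(string text) => _writer.Write(text);

    public void Dispose() => _writer.Dispose(); // releases the underlying file handle
}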
.NET Garbage Collector best practices to boost app performance
Limit Large Object Heap
In .NET, any object larger than 85,000 bytes is allocated on the Large Object Heap (LOH) instead of the normal managed heap. By default the GC does not compact the LOH, because copying large objects imposes a performance penalty; this can lead to fragmentation and wasted space. Cleanup of large objects is also expensive, since the LOH is collected together with Generation 2. Large JSON serialisation results, large collections, data buffers, and image byte arrays are common sources of LOH allocations. Try to limit the usage of such objects, and if it is not practical to avoid them entirely, reuse them rather than creating them repeatedly.
For example:
// ❌ BAD: Creates new 1MB buffer each call
void Process()
{
byte[] buffer = new byte[1024 * 1024];
// use buffer
}
// ✅ GOOD: Reuse buffer
static byte[] sharedBuffer = new byte[1024 * 1024];
void Process()
{
// reuse sharedBuffer safely
}
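Where a single shared static buffer is not a good fit (for example, with concurrent callers), one common way to reuse large buffers is ArrayPool<T> from System.Buffers. A minimal sketch, assuming a method that needs a 1 MB scratch buffer:

using System.Buffers;

void ProcessPooled()
{
    // Rent returns a pooled buffer of at least the requested size
    byte[] buffer = ArrayPool<byte>.Shared.Rent(1024 * 1024);
    try
    {
        // use buffer
    }
    finally
    {
        ArrayPool<byte>.Shared.Return(buffer); // hand it back so later calls can reuse it
    }
}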
Minimize unnecessary object allocations
Be careful with short-lived objects as well. Although they are collected in Generation 0, they still add to the collector's workload. Avoid creating objects repeatedly inside frequently called methods; instead, reuse them where possible.
// ❌ Avoid
for (int i = 0; i < 10000; i++)
{
    var sb = new StringBuilder(); // a new allocation on every iteration
}
// ✅ Better
var sb = new StringBuilder();
for (int i = 0; i < 10000; i++)
{
    sb.Clear(); // reuse the same instance
}
Use value types (structs) wisely
For small, short-lived data, consider value types (structs): used as locals or parameters, they typically live on the stack and are cleaned up automatically when the method returns, which saves GC cycles and improves the application's speed. To know more about value types, check out Exploring C# Records and Their Use Cases.
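A minimal sketch of a small, immutable value type (a hypothetical Point used for short-lived calculations):

using System;

// A readonly struct: instances used as locals stay on the stack and add no GC work
readonly struct Point
{
    public readonly double X;
    public readonly double Y;

    public Point(double x, double y)
    {
        X = x;
        Y = y;
    }

    public double DistanceFromOrigin() => Math.Sqrt(X * X + Y * Y);
}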
Avoid long-lived object references
Long-lived references are promoted to the Generation 2 heap. That means they occupy memory longer, slowing down GC and increasing overall memory usage. Remove references (set to null, clear collections) once objects are no longer needed.
As in the code:
// ❌ Bad: Keeping large object references alive unintentionally
static List<byte[]> cache = new List<byte[]>();
void LoadData()
{
cache.Add(new byte[1024 * 1024]); // never cleared
}
// ✅ Better: Clear or dereference when not needed
cache.Clear();
Cache Intelligently
Caching can take load off the application and improve performance, but overusing caches can fill the heap with long-lived Generation 2 objects. Only cache data where necessary, and if you use MemoryCache, fine-tune its expiration and size limits.
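A minimal sketch, assuming the Microsoft.Extensions.Caching.Memory package, of a MemoryCache configured with a size limit and per-entry expiration (LoadReport is a placeholder for real work):

using System;
using Microsoft.Extensions.Caching.Memory;

var cache = new MemoryCache(new MemoryCacheOptions
{
    SizeLimit = 1024 // entries must declare a Size; the total is capped
});

cache.Set("report:today", LoadReport(), new MemoryCacheEntryOptions
{
    Size = 1,                                                 // this entry's share of SizeLimit
    AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5) // evicted after five minutes
});

static object LoadReport() => new object(); // placeholder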
Avoid memory leaks (event and static references)
Unsubscribed event handlers or long-lived static lists can keep objects alive in the Generation 2 heap for a long time.
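A minimal sketch (hypothetical Publisher and Subscriber types) of the pattern to follow: unsubscribe when you are done, so a long-lived publisher does not keep subscribers reachable:

using System;

class Publisher
{
    public event EventHandler SomethingHappened;
    public void Raise() => SomethingHappened?.Invoke(this, EventArgs.Empty);
}

class Subscriber
{
    public void Attach(Publisher p) => p.SomethingHappened += OnSomethingHappened;

    // Without this, a long-lived Publisher keeps every attached Subscriber alive in Generation 2
    public void Detach(Publisher p) => p.SomethingHappened -= OnSomethingHappened;

    private void OnSomethingHappened(object sender, EventArgs e) { /* handle the event */ }
}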
The .NET GC is not just a memory sweeper but an unsung hero working round-the-clock to reclaim heap space and keep your applications responsive and efficient. It identifies dead objects and releases their memory, keeping resources from being overburdened and reducing fragmentation. In this post, we walked through the different phases of the GC and learned about unmanaged resources. Finally, we went through some tips to make the GC work better. Garbage collection is a huge area, and we have only scratched the surface in this post.