
Bugs that survive the heat of continuous fuzzing


Even when a project has been intensively fuzzed for years, bugs can still survive.

OSS-Fuzz is one of the most impactful security initiatives in open source. In collaboration with the OpenSSF, it has helped find thousands of bugs in open-source software.

Today, OSS-Fuzz fuzzes more than 1,300 open source projects at no cost to maintainers. However, continuous fuzzing is not a silver bullet. Even mature projects that have been enrolled for years can still contain serious vulnerabilities that go undetected. In the last year, as part of my role at GitHub Security Lab, I have audited popular projects and have discovered some interesting vulnerabilities.

Below, I’ll show three open source projects that were enrolled in OSS-Fuzz for a long time and yet critical bugs survived for years. Together, they illustrate why fuzzing still requires active human oversight, and why improving coverage alone is often not enough.

GStreamer

GStreamer is the default multimedia framework for the GNOME desktop environment. On Ubuntu, it’s used every time you open a multimedia file with Totem, access the metadata of a multimedia file, or even when generating thumbnails for multimedia files each time you open a folder.
In December 2024, I discovered 29 new vulnerabilities, including several high-risk issues.

To understand how 29 new vulnerabilities could be found in software that has been continuously fuzzed for seven years, let’s have a look at the public OSS-Fuzz statistics available here. If we look at the GStreamer stats, we can see that it has only two active fuzzers and a code coverage of around 19%. By comparison, a heavily researched project like OpenSSL has 139 fuzzers (yes, 139 different fuzzers, that is not a typo).

Comparing OSS-Fuzz statistics for OpenSSL and GStreamer.

And the popular compression library bzip2 reports a code coverage of 93.03%, a number that is almost five times higher than GStreamer’s coverage.

OSS-Fuzz project statistics for the bzip2 compression library.

Even without being a fuzzing expert, we can guess that GStreamer’s numbers are not good at all.

And this brings us to our first reason: OSS-Fuzz still requires human supervision to monitor project coverage and to write new fuzzers for uncovered code. There is good reason to hope that AI agents will soon help fill this gap, but until that happens, a human needs to keep doing it by hand.

The other problem with OSS-Fuzz isn’t technical. It’s due to its users and the false sense of confidence they get once they enroll their projects. Many developers are not security experts, so for them, fuzzing is just another checkbox on their security to-do list. Once their project is “being fuzzed,” they might feel it is “protected by Google” and forget about it, even if the project actually fails during the build stage and isn’t being fuzzed at all (which happens to more than one project in OSS-Fuzz).

This shows that human security expertise is still required to maintain and support fuzzing for each enrolled project, and that doesn’t scale well with OSS-Fuzz’s success!

Poppler

Poppler is the default PDF parser library in Ubuntu. It’s the library used to render PDFs when you open them with Evince (the default document viewer in Ubuntu versions prior to 25.04) or Papers (the default document viewer for GNOME desktop and the default document viewer from newer Ubuntu releases).

If we check Poppler’s stats in OSS-Fuzz, we can see it includes a total of 16 fuzzers and that its code coverage is around 60%. Those are quite solid numbers: maybe not at an excellent level, but certainly above average.

That said, a few months ago, my colleague Kevin Backhouse published a 1-click RCE affecting Evince in Ubuntu. The victim only needs to open a malicious file for their machine to be compromised. The reason a vulnerability like this wasn’t found by OSS-Fuzz is a different one: external dependencies.

Poppler relies on a good number of external dependencies: freetype, cairo, libpng… And based on the low coverage reported for these dependencies in the Fuzz Introspector database, we can safely say that they have not been instrumented by libFuzzer. As a result, the fuzzer receives no feedback from these libraries, meaning that many execution paths are never tested.

Coverage report table showing line coverage percentages for various Poppler dependencies.

But it gets even worse: Some of Evince’s default dependencies aren’t included in the OSS-Fuzz build at all. That’s the case with DjVuLibre, the library where I found the critical vulnerability that Kevin later exploited.

DjVuLibre is a library that implements support for the DjVu document format, an open source alternative to PDF that was popular in the late 1990s and early 2000s for compressing scanned documents. It has become much less widely used since the standardization of the PDF format in 2008.

The surprising thing is that while this dependency isn’t included among the libraries covered by OSS-Fuzz, it is shipped by default with Evince and Papers. So these programs were relying on a dependency that was “unfuzzed” and at the same time, installed on millions of systems by default.

This is a clear example of how software is only as secure as the weakest dependency in its dependency graph.

Exiv2

Exiv2 is a C++ library used to read, write, delete, and modify Exif, IPTC, XMP, and ICC metadata in images. It’s used by many mainstream projects such as GIMP and LibreOffice among others.

Back in 2021, my teammate Kevin Backhouse helped improve the security of the Exiv2 project. Part of that work included enrolling Exiv2 in OSS-Fuzz for continuous fuzzing, which uncovered multiple vulnerabilities, like CVE-2024-39695, CVE-2024-24826, and CVE-2023-44398.

Despite the fact that Exiv2 has been enrolled in OSS-Fuzz for more than three years, new vulnerabilities have still been reported by other vulnerability researchers, including CVE-2025-26623 and CVE-2025-54080.

In this case, the reason is a very common scenario when fuzzing media formats: researchers tend to focus on the decoding side, since it is the most obviously exploitable attack surface, while the encoding side receives less attention. As a result, vulnerabilities in the encoding logic can remain unnoticed for years.

From a regular user perspective, a vulnerability in an encoding function may not seem particularly dangerous. However, these libraries are often used in many background workflows (such as thumbnail generation, file conversions, cloud processing pipelines, or automated media handling) where an encoding vulnerability can be more critical.

The five-step fuzzing workflow

At this point it’s clear that fuzzing is not a magic solution that will protect you from everything. To ensure a minimum level of quality, we need to follow some criteria.

In this section, you’ll find the fuzzing workflow I’ve been using with very positive results in the last year: the five-step fuzzing workflow (preparation – coverage – context – value – triaging).

Five-step fuzzing workflow diagram. (preparation - coverage - context - value - triaging)

Step 1: Code preparation

This step involves applying all the necessary changes to the target code to optimize fuzzing results. These changes include, among others:

  • Removing checksums (see the sketch below)
  • Reducing randomness
  • Dropping unnecessary delays
  • Signal handling

If you want to learn more about this step, check out this blog post.
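
As a minimal sketch of the checksum-removal pattern mentioned above: OSS-Fuzz builds define the FUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION macro for exactly this purpose, so integrity checks can be compiled out during fuzzing. Here, crc32_of is a hypothetical helper standing in for a project’s real check:

#include <stdint.h>
#include <stddef.h>

extern uint32_t crc32_of(const uint8_t *data, size_t len); // hypothetical real check

int verify_checksum(const uint8_t *data, size_t len, uint32_t expected) {
#ifdef FUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION
    // Fuzzing build: accept everything so mutated inputs reach the parser
    (void)data; (void)len; (void)expected;
    return 1;
#else
    // Production build: keep the real integrity check
    return crc32_of(data, len) == expected;
#endif
}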

Step 2: Improving code coverage

From the previous examples, it is clear that if we want to improve our fuzzing results, the first thing we need to do is to improve the code coverage as much as possible.

In my case, the workflow is usually an iterative process that looks like this:

Run the fuzzers > Check the coverage > Improve the coverage > Run the fuzzers > Check the coverage > Improve the coverage > …

The “check the coverage” stage is a manual step where I look over the LCOV report for uncovered code areas, and the “improve the coverage” stage is usually one of the following:

  • Writing new fuzzing harnesses to reach new code that would otherwise be impossible to hit (see the sketch after this list)
  • Creating new input cases to trigger corner cases
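
To illustrate what writing a new harness usually involves, here is a minimal libFuzzer-style harness. The target function parse_header is hypothetical; in practice it would be whichever uncovered API the LCOV report points at:

#include <stdint.h>
#include <stddef.h>

// Hypothetical uncovered function in the target library
extern int parse_header(const uint8_t *buf, size_t len);

// libFuzzer calls this entry point once per mutated input
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    if (size < 4)
        return 0;   // too small to be a plausible header
    parse_header(data, size);
    return 0;       // always return 0 for non-crashing inputs
}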

For an automated, AI-powered way of improving code coverage, I invite you to check out the Plunger module in my FRFuzz framework. FRFuzz is an ongoing project I’m working on to address some of the caveats in the fuzzing workflow. I will provide more details about FRFuzz in a future blog post.

Another question we can ask ourselves is: When can we stop increasing code coverage? In other words, when can we say the coverage is good enough to move on to the next steps?

Based on my experience fuzzing many different projects, I can say that this number should be >90%. In fact, I always try to reach that level of coverage before trying other strategies, or even before enabling detection tools like ASAN or UBSAN.

To reach this level of coverage, you will need to fuzz not only the most obvious attack vectors such as decoding/demuxing functions, socket-receivers, or file-reading routines, but also the less obvious ones like encoders/muxers, socket-senders, and file-writing functions.

You will also need to use advanced fuzzing techniques like:

  • Fault injection: A technique where we intentionally introduce unexpected conditions (corrupted data, missing resources, or failed system calls) to see how the program behaves. So instead of waiting for real failures, we simulate these failures during fuzzing (see the sketch after this list). This helps us uncover bugs in execution paths that are rarely executed, such as:
    • Failed memory allocations (malloc returning NULL)
    • Interrupted or partial reads/writes
    • Missing files or unavailable devices
    • Timeouts or aborted network connections

A good example of fault injection is the Linux kernel’s fault injection framework.

  • Snapshot fuzzing: Snapshot fuzzing takes a snapshot of the program at any interesting state, so the fuzzer can then restore this snapshot before each test case. This is especially useful for stateful programs (operating systems, network services, or virtual machines). Examples include the QEMU mode of AFL++ and the AFL++ Nyx mode.
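
To make the fault-injection idea concrete, here is a rough sketch of an allocation wrapper a harness could install. The names (fault_injection_init, fi_malloc) and the idea of taking the failure index from the fuzzer input are illustrative assumptions, not an existing API:

#include <stdint.h>
#include <stdlib.h>

static uint64_t alloc_count, fail_at;

// Called from the harness, e.g. with a value read from the fuzzer input,
// so the fuzzer itself decides which allocation should fail
void fault_injection_init(uint64_t n) { alloc_count = 0; fail_at = n; }

// Drop-in replacement for malloc that fails on the chosen allocation
void *fi_malloc(size_t size) {
    if (fail_at != 0 && ++alloc_count == fail_at)
        return NULL;   // simulated out-of-memory
    return malloc(size);
}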

Step 3: Improving context-sensitive coverage

By default, the most common fuzzers (AFL++, libFuzzer, and honggfuzz) track code coverage at the edge level. We can define an “edge” as a transition between two basic blocks in the control-flow graph. So if execution goes from block A to block B, the fuzzer records the edge A → B as “covered.” For each input the fuzzer runs, it updates a bitmap structure marking which edges were executed, conceptually as a 0 or 1 value (currently implemented as a byte in most fuzzers).

In the following example, you can see a code snippet on the left and its corresponding control-flow graph on the right:

Edge coverage explanation.
Edge coverage = { (0,1), (0,2), (1,2), (2,3), (2,4), (3,6), (4,5), (4,6), (5,4) }

Each numbered circle corresponds to a basic block, and the graph shows how those blocks connect and which branches may be taken depending on the input. This approach to code coverage has proven to be very powerful given its simplicity and efficiency.
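
In AFL-style fuzzers, the instrumentation behind this is tiny. Conceptually, it looks something like the following simplified sketch (adapted from the scheme described in the AFL documentation; the names are mine):

#define MAP_SIZE 65536

static unsigned char bitmap[MAP_SIZE];   // shared with the fuzzer process
static unsigned int  prev_location;

// Compiled into every basic block; cur_location is the block's random ID
void on_basic_block(unsigned int cur_location) {
    bitmap[(cur_location ^ prev_location) % MAP_SIZE]++;
    prev_location = cur_location >> 1;   // shift so A->B differs from B->A
}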

However, edge coverage has a big limitation: It doesn’t track the order in which blocks are executed. 

So imagine you’re fuzzing a program built around a plugin pipeline, where each plugin reads and modifies some global variables. Different execution orders can lead to very different program states, while the edge coverage can still look identical. Since the fuzzer thinks it has already explored all the paths, the coverage-guided feedback won’t keep guiding it, and the chances of finding new bugs will drop.

To address this, we can make use of context-sensitive coverage. Context-sensitive coverage not only tracks which edges were executed, but it also tracks what code was executed right before the current edge.

For example, AFL++ implements two different options for context-sensitive coverage:

  • Context-sensitive branch coverage: In this approach, every function gets its own unique ID. When an edge is executed, the fuzzer takes the IDs from the current call stack, hashes them together with the edge’s identifier, and records the combined value.

You can find more information on the AFL++ implementation here.

  • N-Gram Branch Coverage: In this technique, the fuzzer combines the current location with the previous N locations to create a context-augmented coverage entry. For example:
    • 1-Gram coverage: looks at only the previous location
    • 2-Gram coverage: considers the previous two locations
    • 4-Gram coverage: considers the previous four

You can see how to configure it in AFL++ here.
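
As a conceptual sketch (this is not AFL++’s actual implementation), 2-gram coverage can be thought of as mixing the previous two locations into the coverage key, so the same edge reached through different histories lands in different map entries:

#define MAP_SIZE 65536

static unsigned char bitmap[MAP_SIZE];
static unsigned int  hist[2];   // the previous two locations

void on_basic_block_2gram(unsigned int cur) {
    unsigned int key = cur ^ (hist[0] >> 1) ^ (hist[1] >> 2);
    bitmap[key % MAP_SIZE]++;
    hist[1] = hist[0];          // slide the history window
    hist[0] = cur;
}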

In contrast to edge coverage, it’s not realistic to aim for a coverage >90% when using context-sensitive coverage. The final number will depend on the project’s architecture and on how deep into the call stack we decide to track. But based on my experience, anything above 60% can be considered a very good result for context-sensitive coverage.

Step 4: Improving value coverage

To explain this section, I’m going to start with an example. Take a look at the following web server code snippet:

Example of a simple webserver code snippet.

Here we can see that the function unicode_frame_size has been executed 1910 times. After all those executions, the fuzzer didn’t find any bugs. It looks pretty secure, right?

However, there is an obvious div-by-zero bug when r.padding == FRAME_SIZE * 2:

Simple div-by-zero vulnerability.

Since the padding is a client-controlled field, an attacker could trigger a DoS in the webserver by sending a request with a padding size of exactly 2156 * 2 = 4312 bytes. Pretty annoying that after 1910 iterations the fuzzer didn’t find this vulnerability, don’t you think?

Now we can conclude that even 100% code coverage is not enough to guarantee that a code snippet is free of bugs. So how do we find these types of bugs? My answer: value coverage.

We can define value coverage as the coverage of values a variable can take. Or in other words, the fuzzer will now be guided by variable value ranges, not just by control-flow paths. 

If, in our earlier example, the fuzzer had value-covered the variable r.padding, it could have reached the value 4312 and in turn, detected the divide-by-zero bug.

So, how can we make the fuzzer transform variable values into different execution paths? The first naive implementation that came to my mind was the following:

#include <stdint.h>
#include <limits.h>

inline uint32_t value_coverage(uint32_t num) {

    uint32_t no_optimize = 0;

    if (num < UINT_MAX / 2) {
        no_optimize += 1;
        if (num < UINT_MAX / 4) {
            no_optimize += 2;
            /* ... */
        } else {
            no_optimize += 3;
            /* ... */
        }
    } else {
        no_optimize += 4;
        if (num < (UINT_MAX / 4) * 3) {
            no_optimize += 5;
            /* ... */
        } else {
            no_optimize += 6;
            /* ... */
        }
    }

    return no_optimize;
}

In this code, I implemented a function that maps different values of the variable num to different execution paths. Notice the no_optimize variable, which prevents the compiler from optimizing away some of the function’s execution paths.

After that, we just need to call the function for the variable we want to value-cover like this:

static volatile uint32_t vc_noopt;

uint32_t webserver::unicode_frame_size(const HttpRequest& r) {

   //A Unicode character requires two bytes
   vc_noopt = value_coverage(r.padding); //VALUE_COVERAGE
   uint32_t size = r.content_length / (FRAME_SIZE * 2 - r.padding);

   return size;
}

Given the huge number of execution paths this can generate, you should only apply it to certain variables you consider “strategic.” By strategic, I mean those variables that can be directly controlled by the input and that are involved in critical operations. As you can imagine, selecting the right variables is not easy, and it mostly comes down to the developer’s and researcher’s experience.

The other option we have to reduce the total number of execution paths is to use the concept of “buckets”: instead of testing all 2^32 possible values of a 32-bit integer, we can group those values into buckets, where each bucket maps to a single execution path. With this strategy, we don’t need to test every single value and can still achieve good results.

These buckets also don’t need to be symmetrically distributed across the full range. We can emphasize certain subranges by creating smaller buckets, or create bigger buckets for ranges we are less interested in.
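
A bucketing function could look like the following sketch, where the split points are arbitrary choices of mine: exact paths for small values, progressively coarser buckets for larger ones:

#include <stdint.h>

// Maps a 32-bit value to one of ~64 buckets instead of 2^32 paths
static inline uint32_t value_bucket(uint32_t v) {
    if (v < 16)     return v;                // 0-15: one bucket per value
    if (v < 256)    return 16 + (v >> 4);    // width-16 buckets
    if (v < 65536)  return 32 + (v >> 12);   // width-4096 buckets
    return 48 + (v >> 28);                   // coarse buckets for the rest
}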

Now that I’ve explained the strategy, let’s take a look at what real-world options we have to get value coverage in our fuzzers:

  • AFL++ CmpLog / Clang trace-cmp: These focus on tracing comparison values (values used in comparisons such as == or calls to memcmp). They wouldn’t help us find our divide-by-zero bug, since they only track values used in comparison instructions.
  • Clang trace-div + libFuzzer -use_value_profile=1: This one would work in our example, since it traces values involved in divisions. But it doesn’t give us variable-level granularity, so we can only limit its scope by source file or function, not by specific variable. That limits our ability to target only the “strategic” variables.

To overcome these problems with value coverage, I wrote my own custom implementation using the LLVM FunctionPass functionality. You can find more details about my implementation by checking the FRFuzz code here.

The last mile: almost undetectable bugs

Even when you make use of all up-to-date fuzzing resources, some bugs can still survive the fuzzing stage. Below are two scenarios that are especially hard to tackle with fuzzing.

Big input cases

These are vulnerabilities that require very large inputs to be triggered (on the order of megabytes or even gigabytes). There are two main reasons they are difficult to find through fuzzing:

  • Most fuzzers cap the maximum input size (for example 1 MB in the case of AFL), because larger inputs lead to longer execution times and lower overall efficiency.
  • The total possible input space is exponential: O(256ⁿ), where n is the size in bytes of the input data. Even when coverage-guided fuzzers use heuristic approaches to tackle this problem, fuzzing is still considered a sub-exponential problem with respect to input size, so the probability of finding a bug decreases rapidly as the input size grows.

For example, CVE-2022-40303 is an integer overflow bug affecting libxml2 that requires an input larger than 2GB to be triggered.

Bugs that require “extra time” to be triggered

These are vulnerabilities that can’t be triggered within the typical per-execution time limit used by fuzzers. Keep in mind that fuzzers aim to be as fast as possible, often executing hundreds or thousands of test cases per second. In practice, this means per-execution time limits on the order of 1–10 milliseconds, which is far too short for some classes of bugs.

As an example, my colleague Kevin Backhouse found a vulnerability in the Poppler code that fits well into this category: a reference-count overflow that can lead to a use-after-free.

Reference counting is a way to track how many times a pointer is referenced, helping prevent vulnerabilities such as use-after-free or double-free. You can think of it as a semi-manual form of garbage collection.

In this case, the problem was that these counters were implemented as 32-bit integers. If an attacker can increment a counter 2^32 times, the value wraps back to 0, which can then trigger a use-after-free in the code.
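
The arithmetic of the wraparound is easy to demonstrate in isolation (a toy snippet, not Poppler’s actual code):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t refcount = UINT32_MAX;  // state after ~4.3 billion increments
    refcount++;                      // unsigned overflow: wraps to 0
    if (refcount == 0)
        printf("refcount is 0: object freed while still referenced\n");
    return 0;
}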

Kevin wrote a proof of concept that demonstrated how to trigger this vulnerability. The only problem is that it turned out to be quite slow, making exploitation unrealistic: The PoC took 12 hours to finish.

That’s an extreme example of a bug that needs “extra time” to manifest, but many vulnerabilities require at least seconds of execution to trigger. Even that is already beyond the typical limits of existing fuzzers, which usually set per-execution timeouts well under one second.

That’s why finding vulnerabilities that require seconds to trigger is almost a chimera for fuzzers. And this effectively discards a lot of real-world exploitation scenarios from what fuzzers can find.

It’s important to note that although fuzzer timeouts frequently turn out to be false alarms, it’s still a good idea to inspect them. Occasionally they expose real performance-related DoS bugs, such as quadratic loops.

How to proceed in these cases?

I would like to be able to give you a how-to guide on how to proceed in these scenarios. But the reality is we don’t have effective fuzzing strategies for these corner cases yet.

At the moment, mainstream fuzzers are not able to catch these kinds of vulnerabilities. To find them, we usually have to turn to other approaches: static analysis, concolic (symbolic + concrete) testing, or even the old-fashioned (but still very profitable) method of manual code review.

Conclusion

Despite the fact that fuzzing is one of the most powerful options we have for finding bugs in complex software, it’s not a fire-and-forget solution. Continuous fuzzing can identify vulnerabilities, but it can also fail to detect entire attack vectors. Without human-driven work, whole classes of bugs have survived years of continuous fuzzing in popular and crucial projects, as the three OSS-Fuzz examples above show.

I proposed a five-step fuzzing workflow that goes further than just code coverage, also covering context-sensitive coverage and value coverage. This workflow aims to be a practical roadmap to ensure your fuzzing efforts go beyond the basics, so you’ll be able to find more elusive vulnerabilities.

If you’re starting with open source fuzzing, I hope this blog post helped you better understand current fuzzing gaps and how to improve your fuzzing workflows. And if you’re already familiar with fuzzing, I hope it gives you new ideas to push your research further and uncover bugs that traditional approaches tend to miss.

Want to learn how to start fuzzing? Check out our Fuzzing 101 course at gh.io/fuzzing101 >

The post Bugs that survive the heat of continuous fuzzing appeared first on The GitHub Blog.


AI IDEs – What Do You Choose?

From: VisualStudio
Duration: 1:06:23
Views: 177

In this recorded Live! 360 session, Brian Randell breaks down the fast-moving world of AI-powered IDEs and developer tools—helping you decide what to use, when to use it, and why. From GitHub Copilot and Visual Studio 2026 to Claude Code, Cursor, and fully local models, this talk cuts through the hype with practical, experience-based guidance.

Through live demos and real workflows, Brian explores IDE-integrated AI, CLI-based agents, and local-first AI setups using tools like Claude, Ollama, and Copilot. You’ll see how agent modes, planning modes, MCP servers, and model selection impact productivity, cost, security, and reliability—and why AI works best as a developer companion, not a replacement.

🔑 What You’ll Learn
• How AI IDEs and agents are changing developer workflows
• The differences between IDE plugins, AI-first IDEs, and CLI agents
• When GitHub Copilot, Claude Code, Cursor, or local models make sense
• How agent mode, planning mode, and MCP servers work in practice
• Cost, context-window, and model-selection tradeoffs
• How to run AI tools locally for privacy, air-gapped, or offline environments
• Why documentation, constraints, and iteration matter when working with AI

⏱️ Chapters
00:00 Session intro + framing
02:30 Setting up the demo
03:25 Opening the project in VS Code (solution structure overview)
05:31 Start Demo: Claude Code in the terminal
06:02 Initializing Claude in the repo (creating its project context file)
10:38 Plan Mode: requesting new service methods + Dapper data layer
14:05 Current AI IDE landscape & market trends
19:55 Major architectures of AI IDEs: Plugin/Extension, Standalone, CLI/Terminal
22:19 Claude plan review + proceeding with changes
27:13 Market leaders and their distinguishing features
34:15 Visual Studio + GitHub Copilot overview (modes/models/MCP)
43:30 Local AI workflow: Ollama + Continue in VS Code
56:25 Tool roundup: Cursor, Windsurf, JetBrains, CLIs
1:03:00 Enterprise Needs: Security, compliance, governance, and audit trails
1:04:00 Final guidance: how to choose what’s right for you

👤 Speaker: Brian Randell

🔗 Links
Download Visual Studio 2026: http://visualstudio.com/download
Explore more Live! 360 sessions: https://aka.ms/L360Orlando25
Join upcoming VS Live! events: https://aka.ms/VSLiveEvents

#GitHubCopilot #VisualStudio2026 #DeveloperTools #AgenticAI


3 Easy PowerShell Scripting Tips for Coding Masochists


Tip: Read the ‘Important Notes’ section, because these are notes that are important.

1. Optimize Authentication Requests

  • What: Check for Existing Authentication Contexts before Repeating
  • How: Use Connection Context output
  • Why: Avoid repetitive, unnecessary authentication
  • Where: Your script code, Azure Automation Runbooks
  • When: Right now

Most APIs provide return data or establish a “context” once you complete the authentication process. When using cmdlets like Connect-AzAccount, Connect-Entra, Connect-ExchangeOnline, Connect-MicrosoftTeams, Connect-MgGraph, Connect-PnPOnline, and so on, you can either redirect the output of these to a variable, or use a context function to fetch them.

Why? If you run the same script or code block repeatedly, and it prompts for authentication every time, it not only becomes a hassle, but it can also waste time. How much this factors into time savings will depend on your environment(s) and usage patterns. Consider the following code example:

# Comment heading, because you always have one, right?

Connect-AzAccount # will force authentication every time

# ...more code...

Each time you run that, it will prompt for the Azure authentication. A small change can make it so you only get prompted the first time…

if (-not (Get-AzContext)) {
    # will only get invoked when there is no existing context
    Connect-AzAccount
}

If you happen to work with multiple tenants, you may want to add a check for the specific tenant ID as well…

$tenantId = "your_kickass_tenant_id"

if ((Get-AzContext).Tenant.Id -ne $tenantId) {
    # only invoked if context doesn't match or there is no context
    Connect-AzAccount -Tenant $tenantId 
}

More examples…

$tenantId = "your_kickass_tenant_id"
if ((Get-EntraContext).TenantId -ne $tenantId) {
    # only invoked when you haven't had coffee
    Connect-Entra -Tenant $tenantId
}

if ((Get-MgContext).TenantId -ne $tenantId) {
    # only invoked when you're paying attention, same kick-ass Tenant Id most likely
    Connect-MgGraph -TenantId $tenantId -NoWelcome
}

$spoSiteUrl = "your_kickass_sharepoint_online_site_url"
if ((Get-PnPContext).Url -ne $spoSiteUrl) {
    # only invoked when you first connect to your kick-ass sharepoint site
    Connect-PnPOnline -Url $spoSiteUrl -Interactive
}

You can also use Get-PnPConnection as an alternative. The MicrosoftTeams module doesn’t have a context-related cmdlet that I know of, which kind of sucks, like a broken vacuum cleaner. But life isn’t all bad.

2. Avoid Re-typing Credentials

  • What: Avoid Re-entering Passwords, Tenant and Subscription IDs
  • How: Store Credentials, Tenant IDs, and Subscription IDs in Secret Vaults
  • Why: To reduce mistakes, limit security exposure
  • Where: On your computer, in Azure Key Vault, or in Azure Automation Credentials and Variables
  • When: As soon as possible

You may have noticed that some of the examples above define $tenantId or $spoSiteUrl as plain-text values. You may be doing this with other things like subscription IDs, resource groups, usernames, and more. This is VERY BAD – Do NOT do that!

Any sensitive values should be stored securely so that if your scripts land in the wrong hands, they don’t hand the keys to your stolen car.

If you’re using any of the PowerShell Connect- functions that support a -Credential parameter, you can save a little time by feeding that from a credential vault. One simple way to do this is with the SecretManagement module. This works with various credential vaults like Windows Credential Manager, LastPass, 1Password, BitWarden and more.

Note: This does not circumvent safety controls like Entra Privileged Identity Management (PIM).

$myCredential = Get-Secret -Name AzureLogin123 -Vault PersonalVault
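
For context, the one-time setup with the SecretManagement module and its companion SecretStore vault looks roughly like this (the vault and secret names are just examples, and keep in mind that Connect-AzAccount -Credential only works for accounts that don’t require interactive MFA):

# One-time setup: install the modules and register a local vault
Install-Module Microsoft.PowerShell.SecretManagement, Microsoft.PowerShell.SecretStore
Register-SecretVault -Name PersonalVault -ModuleName Microsoft.PowerShell.SecretStore -DefaultVault

# Store the credential once (prompts interactively)
Set-Secret -Name AzureLogin123 -Secret (Get-Credential)

# Later, any script can pull it without re-typing anything
Connect-AzAccount -Credential (Get-Secret -Name AzureLogin123 -Vault PersonalVault)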

3. Suppress Unwanted Noise

  • What: Disable or reduce unneeded output
  • How: Use parameters like -NoWelcome, -WarningAction SilentlyContinue, Out-Null (or $null = ... )
  • Why: Clean output reduces processing overhead and avoids pipeline noise
  • Where: Every Connect- cmdlet or function that returns noisy output that you aren’t putting to use.
  • When: Always

Each time you connect to Microsoft Graph, it displays a welcome message that looks like the top half of a CVS receipt, only without coupons. There’s a marketing tip for Microsoft: Inject coupon codes in your API connection responses. You’re welcome.

You will also see: “NOTE: You can use the -NoWelcome parameter to suppress this message.” So, guess what: You can add -NoWelcome to quiet it down. They don’t have a -STFU parameter, but you could always wrap that yourself.

In addition to benign output, there are situations where even Warning output can make things messy. For example, within Azure Automation Runbooks, if you have output sent to a Log Analytics Workspace, the Warning output stream doesn’t need idiotic boy-who-cried-wolf warnings filling up your logs.
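
Putting those options together, a quieter connection block might look something like this (the parameters shown are real; whether you want each one is your call):

# Graph: suppress the welcome banner
Connect-MgGraph -TenantId $tenantId -NoWelcome

# Exchange Online: hide the banner and mute warnings for this call only
Connect-ExchangeOnline -ShowBanner:$false -WarningAction SilentlyContinue

# Discard unwanted pipeline output; $null = avoids the Out-Null overhead
New-Item -Path .\Logs -ItemType Directory | Out-Null
$null = New-Item -Path .\Temp -ItemType Directory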

Important Notes

These notes are important.

  • As with any PowerShell content, things will change over time and some parameters may be added, replaced or removed. The examples provided herein, forthwith, notwithstanding and ipso facto, lorem ipsum are semi-valid as of 2025, December 29, anno domini, 12:25:32 PM Eastern Standard Time, planet Earth, Solar System 423499934.
  • Never run any script code, provided by humans, reptiles or AI services, in any production environment without thoroughly testing in non-production environments. Unless of course, you just don’t care about being fired or sued, or sent to a torture facility in El Salvador.
  • References to trademark names like CVS are coincidental and imply no sponsorship, condonement, favor, agreements, contracts, eye winks, strange head nods, or thumbs-up gestures from either party. And who has time for parties. Anyhow, I have a prescription to pick up at CVS.
  • If you don’t care for humor, that’s okay. Neither do I.



Microsoft AI in 2025: My Top 60 Announcements (Chronological) 📅


⚠ This post was curated manually based on public announcements, blog posts, and official Microsoft communications from 2025. While I carefully reviewed the content, dates, and URLs, this article was assisted by AI tooling — and, well, AI is still AI 🤖. There may be occasional inaccuracies, outdated links, or missing context. If you spot something that looks off, please feel free to reach out or leave a comment — corrections and improvements are always welcome.

I liked Google’s approach of recapping the year in AI, so here is my personal compilation of Microsoft’s top AI announcements of 2025 (sorted by date, January to December, and of course with tons of help of my personal notes and Copilot!)

January 2025 – A Wave of AI Investments and Partnerships

  1. $3B Investment in India for AI – Microsoft kicked off 2025 by announcing a US$3 billion investment in India’s cloud and AI infrastructure, including new data centers to advance the country’s AI capabilities aka.ms. This builds on an earlier $3B commitment and marks Microsoft’s largest data center investment in Asia.
  2. Skilling 10 Million People in AI – As part of the India initiative, Microsoft will train 10 million Indians in AI skills by 2030 to ensure the workforce can leverage AI aka.ms. This massive skilling program aligns with India’s vision of becoming an “AI-first” nation.
  3. Strategic AI Partnerships in India – During CEO Satya Nadella’s January visit to India, Microsoft inked AI partnerships with major organizations across core sectors news.microsoft.com. For example, Microsoft is working with RailTel (public sector) to build an AI Center of Excellence for Indian Railways and with Apollo Hospitals (healthcare) on AI solutions in clinical workflows news.microsoft.com news.microsoft.com. Similar alliances with Bajaj Finserv (finance), the Mahindra Group (manufacturing) and upGrad (education) aim to infuse AI innovations into India’s key industries.
  4. Microsoft–Pearson AI Learning Partnership – Microsoft and education giant Pearson announced a multiyear partnership to integrate AI into learning and upskilling services news.microsoft.com. Pearson will use Azure AI and Microsoft 365 Copilot across its offerings and workforce, while the companies jointly develop AI-powered courses and credentials to prepare students and workers for an AI-driven economy news.microsoft.com.
  5. Deepening the OpenAI Alliance – Microsoft and OpenAI broadened their partnership to “drive the next phase of AI” blogs.microsoft.com. Microsoft reaffirmed that Azure remains the exclusive cloud provider for OpenAI (powering all OpenAI APIs and ChatGPT), and in return Microsoft gains first access to OpenAI’s new innovations. The companies also detailed that Microsoft’s investment includes rights to OpenAI’s intellectual property and a shared revenue model, ensuring both benefit as AI usage grows blogs.microsoft.com.
  6. New CoreAI Engineering Division – CEO Satya Nadella announced the formation of a CoreAI – Platform and Tools engineering group to accelerate AI development across Microsoft blogs.microsoft.com. Led by EVP Jay Parikh, CoreAI unites teams from Azure AI, Developer Division (GitHub, VS Code), and the Office of the CTO. Its mission: build the unified AI stack (cloud infrastructure, AI developer tools, frameworks) for Microsoft’s own products and for external developers to create “agentic” applications blogs.microsoft.com.
  7. “AI for Good” Grants Program – Marking its 50th anniversary in its home state, Microsoft launched a $5 million AI for Good Open Call in Washington State news.microsoft.com. The program will award grants and Azure compute resources to nonprofits, academics, and startups using AI to address societal challenges in areas like sustainability, healthcare, education, and accessibility news.microsoft.com. It reflects Microsoft’s commitment to ensuring AI benefits communities, not just businesses.

February 2025 – AI Roadshow and New AI Infrastructure

  1. Global AI Tour in UAE – Microsoft kicked off a global AI roadshow, with an event in Dubai highlighting how organizations in the UAE are embracing AI news.microsoft.com. Microsoft showcased customer success stories (from AI-powered government services to AI in energy and finance) and reiterated its support for the UAE’s National AI Strategy. The UAE’s swift AI adoption was on full display – as Microsoft’s Charles Lamanna noted, many UAE firms are “not only embracing the latest AI advancements but also rapidly developing their own” AI applications news.microsoft.com news.microsoft.com.
  2. Telco AI Model in Fabric – At a regional event, Microsoft unveiled a Telecom Industry Data Model integrated with Microsoft Fabric to accelerate AI in telecommunications microsoft.com. This industry-specific data schema and pipeline, available in Fabric’s analytics platform, helps telecom operators unify network data and customer data for AI analytics. It enables use cases like predictive network maintenance and AI-driven customer service, and is designed to handle the massive scale of telco data microsoft.com.
  3. Imagine Cup Winners – AI for Accessibility – The 2025 Imagine Cup champion was an AI innovation: a student team from Kenya built an AI-powered device to assist people with low vision news.microsoft.com. The device, which uses computer vision to describe surroundings to visually impaired users, earned top honors at Microsoft’s global student tech competition. This win exemplifies how young developers are using Microsoft’s AI tools to create life-changing solutions.
  4. AI Security: Zero Trust for AI Agents – Microsoft extended its Zero Trust cybersecurity approach to the AI era news.microsoft.com. In February, the company announced new security features to defend AI systems and Copilot AI agents from attacks. This includes monitoring and verification of AI-generated content, controls to prevent prompt tampering, and identity management for bots. The effort underscores that as organizations adopt AI “co-workers,” those AI agents must be protected with the same rigor as human users news.microsoft.com.

March 2025 – Breakthrough in Quantum AI

  1. Majorana 1 Quantum Chip – Microsoft introduced Majorana 1, the world’s first topological quantum computing chip, heralding a new approach to building scalable quantum computers news.microsoft.com. Majorana 1 uses exotic topological qubits (based on Majorana particles) which are more stable against noise news.microsoft.com. This breakthrough – developed by Microsoft Quantum labs – promises a path to quantum machines with over one million qubits, a threshold needed for solving real-world AI and chemistry problems that today’s computers cannot handle news.microsoft.com news.microsoft.com. Microsoft stated that Majorana 1 could enable quantum computers capable of tackling “the most complex industrial and societal problems in years, not decades” news.microsoft.com.

April 2025 – AI for Everyone, Everywhere

  1. Nadella’s “AI Companion” Vision – At Microsoft’s 50th anniversary event in early April, Satya Nadella outlined Microsoft’s ambition to deliver a true “AI companion” for every user news.microsoft.com. Nadella’s keynote described integrating AI deeply into everyday tools – an assistant that is “always on” across work and life. This hinted at unifying Copilot experiences across Windows, Office, Bing, and beyond, so AI can understand a user’s context and help them continuously. This personal AI will be “helpful, useful for everyone, and centered on each person’s unique needs,” Nadella said, setting the stage for upcoming consumer AI features.
  2. Copilot Search in Bing – Microsoft unveiled Copilot Search for Bing, an AI-enhanced search mode that seamlessly blends web search with generative AI blogs.bing.com. When enabled, Bing’s Copilot Search can summarize the web’s information into a concise answer (with citations) or present key insights and options instead of just links blogs.bing.com. For example, searching a complex question yields an AI-generated overview plus references, saving users time. Copilot Search also suggests related follow-up queries to help users explore a topic in depth blogs.bing.com. This feature – rolling out worldwide in Bing on desktop and mobile – aims to “simplify the search process” by providing the convenience of an answer engine with the transparency of a search engine blogs.bing.com blogs.bing.com.
  3. Open-Sourcing Copilot Extensions – Embracing openness, Microsoft announced it will open source the GitHub Copilot Chat extension for Visual Studio Code directionsonmicrosoft.com. By releasing the source code, Microsoft invites developers to inspect how Copilot’s chat interface works and even contribute to its development. This move, coming on the heels of open-sourcing parts of WSL (Windows Subsystem for Linux), underscored Microsoft’s commitment to an open AI ecosystem. (It also helps address user demands for transparency in how AI coding assistants operate directionsonmicrosoft.com.)

May 2025 – Microsoft Build: All-In on AI and Agents

Microsoft’s annual Build 2025 developer conference was dominated by AI announcements – from new products and previews to infrastructure and partnerships. Below are the key Build highlights (Microsoft even published a list of 25 big AI announcements azure.microsoft.com):

  1. Azure AI Foundry Enhancements – Microsoft announced a wave of updates to Azure AI Foundry, its unified platform for building AI applications and agents. Azure AI Foundry has already grown to serve 70,000+ customers, processing 100 trillion tokens last quarter and powering 2 billion+ daily Azure AI searches azure.microsoft.com. At Build, Microsoft unveiled 10 new Foundry innovations to help developers “build the open agentic web,” including new model hosting options, tool integrations, and cost optimizations. This reflects Microsoft’s vision of Azure as the “home for AI” at planetary scale.
  2. Entra Agent ID (Preview) – A crucial enterprise feature announced at Build was Microsoft Entra Agent ID directionsonmicrosoft.com. In preview, Entra Agent ID can give every AI agent (whether built with Copilot or Azure AI) a unique, first-class identity in an organization’s directory directionsonmicrosoft.com. Much like employee user IDs, these agent IDs allow IT admins to define an AI agent’s access permissions, apply conditional access policies, and monitor agent activities. This capability is groundbreaking for AI governance – it treats AI “colleagues” as full digital identities that can be managed and secured via Entra (Azure AD), just like human accounts directionsonmicrosoft.com.
  3. Copilot Tuning – Microsoft introduced Microsoft 365 Copilot Tuning, a new low-code way for organizations to customize Copilot’s underlying models using their own data microsoft.com. Via the Copilot Studio tool, developers or power users can feed company documents or knowledge into Copilot and fine-tune it to use internal terminology and follow company policies microsoft.com. Copilot Tuning can also create domain-specific Copilot plugins/agents that perform specialized tasks (for example, a “Finance Copilot” tuned on finance data). This advancement lets companies get more accurate and relevant results from Copilot, effectively training the AI on a “corporate brain” in a governed way microsoft.com.
  4. Windows AI Foundry & Hybrid Loop – Build saw Windows fully embrace AI: Microsoft announced Windows AI Foundry, an evolution of the Windows platform to be AI-ready blogs.windows.com. This includes a new runtime (nicknamed “Hybrid Loop”) enabling developers to run transformer models locally on Windows 11 PCs with inference via DirectML and NPUs. Windows AI Foundry allows AI models to be selected, optimized, and fine-tuned on Windows and seamlessly deployed to Azure – giving developers a consistent experience across client and cloud blogs.windows.com. Essentially, Windows is becoming an extension of Azure AI for devs. Microsoft also noted that Windows ML (the built-in ONNX-based inference engine) now integrates with Azure AI Foundry, so developers can run Azure AI models offline on PCs using any hardware (CPU, GPU, NPU) blogs.windows.com.
  5. Natural Language Web (NLWeb) – Microsoft, along with industry partners, unveiled NLWeb (Natural Language Web) directionsonmicrosoft.com – an open initiative to make websites more like conversational agents. Conceived by renowned technologist R.V. Guha (a recent Microsoft hire known for creating RSS and Schema.org), NLWeb provides a framework for websites to expose a natural language interface directionsonmicrosoft.com. For example, an e-commerce site could let users “ask” for products in plain English or a library website could answer questions about its catalog. At Build, Microsoft showed how NLWeb can turn a standard website into an “agentic app” that understands and responds in natural language, powered by a combination of schema, metadata, and AI APIs directionsonmicrosoft.com directionsonmicrosoft.com.
  6. SQL Server 2025 Preview (Vector DB) – Microsoft announced SQL Server 2025 in public preview, positioning it as the first AI-ready enterprise database microsoft.com. Beyond traditional SQL improvements, SQL Server 2025 can function as a vector database for AI applications microsoft.com. It can store embeddings and perform similarity searches, which are crucial for semantic search and GPT-based apps. Microsoft integrated secure machine learning into the DB engine, so customers can bring AI models (like classification or forecasting models) directly into SQL procedures with privacy controls. With SQL 2025, Microsoft is effectively transforming its flagship database into a hybrid analytical engine for both relational and AI workloads microsoft.com.
  7. GitHub Copilot “Coding Agent” – GitHub, at Build, announced the evolution of Copilot from an autocomplete tool into a more autonomous “coding agent.” The new GitHub Copilot X Coding Assistant (formerly codenamed Project Padawan) became generally available to Copilot for Business customers directionsonmicrosoft.com. This AI agent can handle higher-level development tasks: it can generate code for a full feature from a natural language spec, commit the changes, and even open a pull request for review directionsonmicrosoft.com. It can refactor code across a repo, find and fix bugs, and write tests – essentially acting like a junior developer. GitHub showed examples of asking Copilot to “Add a credit card expiration date validation to our checkout” and watching it modify multiple files to implement the feature. This moves Copilot closer to a true AI pair programmer that not only suggests code but orchestrates development work directionsonmicrosoft.com.
  8. Model Context Protocol (MCP) – Microsoft threw its weight behind the emerging Model Context Protocol (MCP) standard for agent interoperability directionsonmicrosoft.com. Satya Nadella and CTO Kevin Scott discussed MCP in keynotes, calling it the “USB-C for AI” that will connect different AI agents and tools directionsonmicrosoft.com. Microsoft joined the MCP steering committee and announced it is integrating MCP support across its products – Azure AI agents, Copilot Studio, GitHub Copilot, Semantic Kernel, Teams, Windows, etc. directionsonmicrosoft.com. Concretely, this means a Copilot agent could hand off a task to a third-party AI agent or app if both speak MCP. By championing MCP, Microsoft signaled a future where heterogenous AI agents can cooperate, regardless of who built them, much like devices interoperating via a common plug.
  9. Azure AI Supercomputer & NVIDIA Partnership – At Build, Microsoft and NVIDIA announced that Azure is the first cloud to deploy an interconnected NVIDIA GB200 “Grace Blackwell” Superchip system, creating one of the world’s most powerful AI supercomputers in Azure azure.microsoft.com. This AI-optimized data center cluster, coming online in 2025, combines Arm-based Grace CPU cores with Blackwell GPUs and ultra-fast connectivity. It will enable training of extremely large AI models (multi-trillion parameter scale) with better energy efficiency. Microsoft noted this is part of expanding AI infrastructure to every Azure region – an effort to bring “AI supercomputing everywhere” for low-latency AI. (By year’s end, Microsoft had AI supercomputer clusters in over 70 Azure regions worldwide azure.microsoft.com.)
  10. More Open AI Models in Azure – Microsoft expanded the catalog of models available in Azure OpenAI Service and Azure AI Foundry Models azure.microsoft.com. New additions announced at Build included Grok 3 (a model from Elon Musk’s new AI startup xAI) and FLUX 1.1 from Black Forest Labs azure.microsoft.com. Microsoft also highlighted its close Hugging Face partnership, bringing popular open-source models to Azure with full support. By integrating these, Azure offers developers a one-stop shop to choose the right model (OpenAI, Microsoft, or third-party) for each task. Satya Nadella quipped that “the best platform for AI is the one with the most diverse models.”
  11. Microsoft Discovery Research Agent – Microsoft Discovery was unveiled as a new experimental platform that uses AI agents to accelerate scientific R&D azure.microsoft.com. A collaboration between Microsoft Research and Azure, Discovery employs specialized agents (for literature review, hypothesis generation, experiment design, etc.) orchestrated together. In a demo, Discovery’s agents ingested thousands of chemistry papers and autonomously suggested promising materials for CO₂ capture – a task that would take human researchers months. Microsoft Discovery showcases how agentic AI can tackle complex, research-heavy problems by breaking them into subtasks and speeding up insights azure.microsoft.com.
  12. Cosmos DB + Fabric = AI Data Hub – Microsoft announced deeper integration between Azure Cosmos DB (its NoSQL database) and its new analytics platform Microsoft Fabric azure.microsoft.com. Cosmos DB data (e.g. JSON documents) can now be directly analyzed in Fabric and even indexed as vectors for AI without complex ETL. Additionally, Azure AI Foundry can now connect to Cosmos DB as a native data source azure.microsoft.com – enabling AI agents to seamlessly query unstructured and semi-structured data stored in Cosmos. This effectively turns Cosmos DB into a ready-made “vector database” for applications like Copilot, where it can store embeddings and serve as the knowledge store for enterprise data azure.microsoft.com.
  13. Azure–SAP Partnership – Microsoft and SAP used Build to deepen their cloud partnership azure.microsoft.com. They introduced SAP Business Data Cloud (BDC) on Azure and announced native integration of SAP’s data into Azure AI services. The idea is that data from SAP applications (like SAP ERP or SuccessFactors) can flow into Azure’s Data Lake and be used by Azure OpenAI and Power Platform. For instance, a Copilot in Power BI could now easily query SAP business data. By launching SAP BDC on Azure and supporting SAP Datasphere integration, Microsoft enables customers to apply Azure’s AI and analytics on top of core business data residing in SAP systems azure.microsoft.com. (This also counters rival plans by Google Cloud and others to integrate with SAP – Microsoft underscored that it is the “preferred cloud” for SAP’s own AI ambitions.)
  14. Azure AI Foundry Agent Service (GA) – The Azure AI Foundry Agent Service – essentially Microsoft’s managed platform for deploying multi-agent AI applications – reached general availability azure.microsoft.com. This service (in preview since late 2024) allows companies to run large-scale, resilient AI agent solutions on Azure without worrying about the underlying infrastructure. Microsoft noted that enterprises like JM Family and Fujitsu have used Foundry Agent Service to automate complex business processes with multiple collaborating agents azure.microsoft.com. With GA, Microsoft added reliability features, cost controls, and compliance certifications to make it production-ready. This is a key step in Microsoft’s vision of “AI agents as a platform” for businesses.
  15. Multi-Agent Orchestration – To complement agent services, Microsoft introduced Multi-Agent Orchestration in Azure AI Foundry azure.microsoft.com. This new workflow system allows AI agents to work together on tasks by dynamically delegating subtasks to specialized agents. It includes a Connected Agents API so one agent can invoke another as a tool (e.g. a planning agent calling a vision agent). It also provides a shared state and Multi-agent Workflows with step-by-step coordination logic azure.microsoft.com. Notably, multi-agent orchestration has built-in support for the Model Context Protocol (MCP), which Microsoft is championing for inter-agent communication. This orchestration framework is crucial for building more sophisticated AI solutions – e.g. an “AI workflow” for loan processing that involves a document-reading agent, an analysis agent, and an approval agent working in concert.
  16. Digital Twin Builder (Preview) – Microsoft introduced a Digital Twin Builder in preview as part of Microsoft Fabric azure.microsoft.com. This no-code/low-code tool lets organizations create and visualize digital twins of real-world entities at scale. For example, a company could model a factory’s machines and sensors as digital twins in Fabric, then use Azure AI to run simulations or predictive maintenance on those twin models. The Fabric Digital Twin Builder provides a simple interface to define entities and relationships (like a virtual graph of assets) without heavy coding azure.microsoft.com. It leverages Azure’s IoT and data capabilities under the hood. By bringing digital twin modeling into Fabric, Microsoft is blurring the lines between IoT analytics and AI – making it easier to create “AI-driven simulations” of physical systems.
  17. AI-Assisted App Modernization – GitHub announced new Copilot capabilities for application modernization azure.microsoft.com. In preview, Copilot can now analyze an enterprise’s legacy Java or .NET applications and automatically suggest code updates to modernize them (for example, upgrading from .NET Framework to .NET 8). It can handle dependency updates, identify deprecated APIs, and even assist in re-architecting monolithic apps into cloud-native patterns azure.microsoft.com. Mainframe modernization help is also on the roadmap. This “Copilot for app modernization” aims to save developers weeks of effort when bringing old software into the cloud era. It demonstrates how AI can preserve institutional knowledge embedded in decades-old code by efficiently translating it to newer frameworks.
  18. “Agentic” Retrieval Augmentation – Azure Cognitive Search introduced an Agentic Retrieval update that significantly improves how AI agents retrieve knowledge from enterprise data azure.microsoft.com. Using advanced vector search and meta-optimizations, the new system improved answer relevance by ~40% on complex multi-part queries in early tests azure.microsoft.com. In practice, this means Copilots and bots integrated with Azure Search will get more accurate and contextually relevant results when looking up information (especially for long or detailed user questions). This is important for ChatGPT-style Q&A over company data – reducing instances where the AI might respond with irrelevant info. Essentially, Microsoft is baking smarter search into its AI assistants’ retrieval step, so they have higher quality data to ground their answers.
  19. PostgreSQL + VS Code + Copilot – Microsoft previewed a new PostgreSQL extension for Visual Studio Code with GitHub Copilot built-in azure.microsoft.com. This marries an open-source database tool with AI assistance: developers working with Postgres in VS Code can now get Copilot’s help to write and optimize SQL queries, generate schema definitions, and even explain query results in natural language azure.microsoft.com. The Copilot-enhanced Postgres extension simplifies workflows (for example, a dev can ask in plain English, “Show me total sales per category” and Copilot will produce the SQL). By integrating Copilot into database development, Microsoft is bringing AI to data engineers and database admins – not just application code developers.
  20. Foundry Observability (Preview) – Microsoft knows that debugging AI agents can be hard, so it announced Foundry Observability features now in preview azure.microsoft.com. This provides developers with end-to-end monitoring, telemetry, and trace logs for AI agents built or running on Azure AI Foundry azure.microsoft.com. In practice, devs get a dashboard showing each reasoning step an agent took, what tools or APIs it invoked (with parameters), how long each step took, and any errors or exceptions encountered. It’s like an Application Insights for AI logic. Observability is crucial for enterprise adoption of AI – it helps developers trust and verify what the AI is doing, troubleshoot unexpected outputs, and identify performance bottlenecks. By adding these tools, Microsoft is helping make AI systems less of a “black box.”
  21. “Chat with Your Data” in Power BI – The Build conference showcased Power BI’s new Copilot that lets users “chat” with their business data in natural language azure.microsoft.com. In preview, users can ask questions within Power BI like “What were our top 5 products by revenue last quarter?” and the Copilot will generate the answer – possibly with a chart – by querying the data model azure.microsoft.com. This leverages Fabric’s data ecosystem and semantic models to interpret the query intent. The Copilot can also refine questions, provide observations (“Revenue is up 8% QoQ, driven by Product X azure.microsoft.com.”), and even create new reports on the fly. Microsoft emphasized that this will democratize data analysis – employees won’t need deep BI or SQL skills to glean insights, they can just ask AI. (All behind the scenes, data security is respected – Copilot will only answer based on data the user has access to.)

June 2025 – New AI Tools and Previews

  1. Bing Image & Video Creator – Following the success of Bing’s AI Image Creator, Microsoft in June launched the Bing Video Creator, which uses generative AI to turn text prompts into short video clips blogs.bing.com. Users can type a description (e.g. “a serene sunset over a futuristic city, animated”) and the Video Creator will generate a brief video with sound. While the videos are simple and a bit experimental, the tool – available free on Bing – showcases progress in multimodal AI. Microsoft noted that AI video generation is still nascent (and has put ethical safeguards in place to prevent misuse), but this addition gives Bing an edge as the first major search engine with integrated text-to-video creation blogs.bing.com.
  2. Fabric Digital Twin Builder (Public Preview) – In June, Microsoft made the Digital Twin Builder (announced at Build) available for public preview azure.microsoft.com. Early adopters – including manufacturers and smart-city planners – began creating digital replicas of real-world assets in Microsoft Fabric. Using a graphical interface, they can model entities (like buildings, factories, vehicles) and their telemetry. These twins can then be linked to Azure IoT and analyzed with Azure AI. For instance, an HVAC company can model its units as twins and have an AI Copilot predict maintenance needs. Analysts praised Microsoft for integrating digital twins into its mainstream data platform (Fabric), making the tech more accessible beyond specialized IoT platforms azure.microsoft.com.
  3. Autonomous SRE Agent – Microsoft previewed a new Site Reliability Engineering (SRE) AI Agent for Azure, which uses AI to monitor and heal cloud services automatically azure.microsoft.com. This SRE agent watches metrics and logs for anomalies 24/7, and if it detects an incident (say, a spike in error rate), it can perform initial diagnostics and even attempt safe mitigations on its own. For example, it might scale out an overloaded microservice or roll back a recent deployment if it correlates with the outage. Microsoft claims some customers saw the AI resolve half of incidents without paging human engineers azure.microsoft.com. This kind of self-healing cloud infrastructure, powered by Azure’s AI, is part of Microsoft’s pitch that its cloud is uniquely intelligent.
  4. PostgreSQL Goes OpenAI – June also saw the release of an updated PostgreSQL extension for VS Code that has GitHub Copilot “built in” azure.microsoft.com. When developers connect to a Postgres database in VS Code, Copilot can now assist with SQL tasks. For example, if a dev highlights a SQL query that’s running slowly, Copilot can suggest an index creation or a query rewrite to improve performance. It can also generate complex SQL joins or write entire CRUD stored procedures from a comment. This tight integration of Copilot into database workflows boosts developer productivity – and also represents Microsoft weaving AI into the full software stack (not just code editors but data tools too) azure.microsoft.com.
  5. Foundry Local for Offline AI – Microsoft released Foundry Local (in preview) for Windows and Mac – a lightweight runtime that allows Azure AI Foundry models and agents to run fully offline on local devices azure.microsoft.com. With Foundry Local, developers can build apps that use AI without any cloud calls – keeping data local for privacy and working even with no internet. For instance, a field service app on a rugged laptop could use an on-device language model via Foundry Local to analyze equipment data on-site, with no cloud dependency. This addresses customer demands for edge AI and data sovereignty. Foundry Local supports Windows 11 devices (leveraging NPUs where available) and even macOS, and it can later sync with Azure when back online azure.microsoft.com. It demonstrates Microsoft’s hybrid approach: AI workloads can seamlessly move between cloud and edge.

July 2025 – AI Partnerships in Sports and Services

  1. Premier League + Microsoft – On July 1, the English Premier League (EPL) and Microsoft announced a five-year partnership to transform the fan experience with AI news.microsoft.com. Microsoft becomes the Official Cloud and AI partner for the world’s most-watched football league. Plans include using Azure AI and Dynamics 365 to personalize content for 1.8 billion fans – for example, automated match highlights tailored to each fan’s favorite team or predictive stats during live broadcasts. They will also create a “smart” digital platform for the EPL’s global fan community with AI-driven engagement. The partnership underscores how major sports organizations are embracing AI to scale and customize their product for a massive audience news.microsoft.com. (Fun fact: Microsoft’s AI will even help the Premier League archive decades of footage by automatically tagging plays and players in historical videos.)
  2. Accenture–Microsoft AI Lab – Microsoft continued to deepen its alliance with Accenture by expanding their joint efforts on generative AI for cybersecurity newsroom.accenture.com. In mid-July, the companies announced a new collaboration to develop AI solutions that help enterprises defend themselves – for example, AI copilots that assist security analysts in triaging incidents, or GPT-based bots that can simulate cyber-attacks to test defenses. Microsoft is providing Azure OpenAI and security products (Defender, Sentinel), while Accenture brings industry expertise to tailor these AI tools for sectors like finance and healthcare newsroom.accenture.com. The partnership also extends to Accenture training 250,000 of its consultants on Microsoft’s AI tech. This news exemplifies Microsoft’s strategy of teaming up with global systems integrators to accelerate AI adoption in the enterprise.

August 2025 – Next-Gen AI Models Arrive

  1. GPT-5 on Azure – Perhaps the biggest AI model news of 2025 was the debut of OpenAI’s GPT-5, which Microsoft made generally available on Azure in August azure.microsoft.com. GPT-5 is a breakthrough large language model – topping many benchmarks as the first to clearly surpass GPT-4. It offers a 272k-token context window (allowing much longer documents and conversations) and integrates advanced reasoning abilities from OpenAI’s research azure.microsoft.com. Microsoft’s Azure is the exclusive cloud to host GPT-5, and through Azure OpenAI Service, businesses can now access GPT-5 with the scalability, security, and compliance guardrails needed for production azure.microsoft.com. Microsoft noted that GPT-5 in Azure pairs “frontier reasoning with high-performance generation” on Azure’s new AI supercomputers, enabling customers to move from pilot projects to powerful AI deployments confidently azure.microsoft.com. In short, the most powerful AI model is now at the fingertips of Azure customers – a marquee moment for Microsoft’s AI platform.
  2. Microsoft’s Own AI Models (MAI-1) – Microsoft’s new internal AI R&D team (now part of Microsoft AI, led by Mustafa Suleyman) gave a first peek at its in-house foundation models microsoft.ai. They introduced MAI-1-preview, Microsoft’s first end-to-end trained large language model, and MAI-Voice-1, a highly natural text-to-speech model microsoft.ai. MAI-1 is a mixture-of-experts language model trained on roughly 15,000 NVIDIA H100 GPUs, and it was made available for testing on an open evaluation platform (LMArena) microsoft.ai. Meanwhile, MAI-Voice-1 can generate a minute of expressive speech in under one second – it’s already powering new “Copilot voice” features like reading emails aloud in Outlook microsoft.ai. These are the first fruits of Microsoft’s effort to build its own custom AI models (complementing OpenAI’s). Microsoft indicated that in the future, Copilot will likely use a mix of OpenAI models and Microsoft’s MAI models to deliver the best results microsoft.ai.

September 2025 – (No major Microsoft AI announcements – the company focused on rolling out the many features announced at Build in prior months.)

October 2025 – Trust and Infrastructure

  1. Local Copilot for UAE – In a significant move for AI data residency, Microsoft announced that Microsoft 365 Copilot will support in-country data processing in the UAE by early 2026 news.microsoft.com. This means Copilot’s AI services for qualified UAE customers will run entirely within Microsoft’s UAE datacenters in Dubai and Abu Dhabi, rather than the global cloud news.microsoft.com. The announcement – made with UAE government officials – aligns with the UAE’s stringent national AI guidelines. Keeping Copilot’s data processing local reduces latency and addresses sovereignty concerns: organizations like UAE ministries and banks can use AI assistants while ensuring sensitive data stays within country borders news.microsoft.com. This UAE move follows a similar one in Europe (announced at Ignite) and reflects Microsoft’s strategy to meet regional compliance needs so AI adoption isn’t hampered by privacy regulations.

November 2025 – Ignite: The Era of AI Agents

Microsoft Ignite 2025 in late November was all about scaling AI adoption – especially AI agents and copilots for business. Key announcements included:

  1. Agent 365 Control Plane – Microsoft unveiled Microsoft Agent 365 (A365), a new control plane to deploy, monitor, and manage AI agents across the enterprise crn.com. Think of A365 as an “agent management platform” akin to an identity or device management system, but for AI workers. Agent 365 provides a unified registry of all an organization’s AI agents (whether built with Microsoft Copilot tools or third-party) and a dashboard to track their status and performance crn.com. Crucially, it integrates with Entra ID and Microsoft Defender – allowing security teams to detect suspicious agent activities, apply Zero Trust policies (an agent can be disabled or its access revoked instantly via A365), and ensure compliance rules are enforced for AI outputs. As companies deploy dozens of copilots and autonomous agents, Agent 365 will be the tool to keep them governed and secure at scale.
  2. “Hey Copilot” Voice Activation – Microsoft 365 Copilot is getting more conversational: at Ignite, Microsoft announced voice activation and new voice commands for Copilot crn.com. Users can now say “Hey Copilot” on their Windows 11 PC or in the Teams mobile app to invoke the AI assistant hands-free (much like “Hey Cortana” of the past) crn.com. Copilot can also read out answers in natural speech. In addition, a suite of voice commands was introduced – for example, on mobile you can now ask “What are my emails about today?” and Copilot will summarize your unread Outlook emails aloud crn.com. Or say “Draft a reply saying I’ll send the report by EOD” and Copilot will compose the email draft. These voice features (rolling out to Frontier program testers first) make Copilot more accessible and useful in on-the-go scenarios.
  3. Windows 365 for AI Agents – Microsoft launched a private preview of Windows 365 for Agents crn.com, extending its Cloud PC virtualization to AI workloads. With this, enterprises can spin up Cloud PCs that are specially optimized for running AI agents (with GPU and memory resources for large models) and governed via Intune like normal Cloud PCs crn.com. The idea is that autonomous or semi-autonomous agents can be given their own isolated Cloud PC “sandbox” to execute in – with full compliance controls, network policies, and scaling on demand. Several AI startups (Manus AI, GenSpark, etc.) are already using W365 for Agents to deploy their AI SaaS offerings securely crn.com. For customers, this provides a way to host AI applications “in the cloud but under IT’s control,” rather than running everywhere on unpredictable endpoints.
  4. Azure Copilot (Preview) – Taking the Copilot branding into cloud management, Microsoft announced Azure Copilot in limited preview crn.com. Azure Copilot is like an SRE/devops assistant that lives in the Azure Portal and CLI. You can ask it in natural language to perform Azure tasks – e.g. “Create 3 virtual networks with these subnets” or “Diagnose why my web app is slow” – and it will generate the Azure CLI or PowerShell commands, or surface insights (like “the database CPU is 95%”). Essentially, it’s ChatGPT but fine-tuned for Azure administration. Over time, Azure Copilot is expected to automate routine cloud ops, suggest best practices, and accelerate troubleshooting for developers and IT pros managing Azure resources crn.com.
  5. New Copilots & Agents in Teams – Microsoft announced multiple AI enhancements for Microsoft Teams. First, the Teams AI library (preview) will let organizations create custom “enterprise copilots” that can be deployed in Teams channels to integrate with third-party systems crn.com. For example, a sales team could have a CRM Copilot in their Teams chat that, via the new Model Context Protocol, can pull data from Salesforce or Dynamics when asked (“@CRM Copilot, what deals are stuck in the proposal stage?”) crn.com. Second, Microsoft made the Teams Meeting Recap AI (facilitator agent) generally available crn.com. This agent automatically generates meeting notes, to-do action items, and key points in real time during a Teams meeting – freeing participants from note-taking crn.com. It can even remind the group if the discussion veers off the agenda. Together, these updates show Teams evolving into a platform for multi-agent collaboration (where human teams work alongside AI assistants specific to their domain).
  6. End-to-End AI Security & Compliance – Addressing enterprise concerns, Microsoft announced it is embedding more security and compliance tooling directly into its AI platforms directionsonmicrosoft.com. For instance, Microsoft Defender for Cloud and Purview compliance scans are now integrated into Azure AI Foundry and Copilot Studio pipelines directionsonmicrosoft.com. This means when a developer builds or deploys an AI app, security checks (for secrets, vulnerabilities, etc.) and compliance checks (for sensitive data use, bias, etc.) can run automatically. Additionally, Microsoft highlighted the new Entra Agent ID (from Build) as key to a strong Zero Trust approach for AI – by giving every agent an identity, organizations can apply Conditional Access and monitor sign-in logs for AI activities directionsonmicrosoft.com. These measures ensure that as companies embrace AI, they can extend their existing security frameworks to cover AI systems (so an AI breach is handled with the same rigor as a user account breach).
  7. Windows 11 AI Features Update – Microsoft announced that the next update to Windows 11 will bring a slew of AI-powered features for all users crn.com. One highlight is offline support for Windows Copilot on devices with built-in NPUs – certain AI tasks (like summarizing text on screen) can now run locally, improving speed and privacy crn.com. The update also adds AI to everyday Windows experiences: for example, any text box in any app can now offer rewrite and compose suggestions via Copilot (useful for emails or social posts) crn.com. In Microsoft 365, an AI-powered Speaker Coach can listen to your spoken words (in, say, PowerPoint rehearsals or Teams calls) and provide real-time grammar and clarity improvements – effectively acting as a personal speaking tutor using Azure AI speech models crn.com. Even accessibility is improved: Windows’ AI can automatically generate descriptive alt text for images in documents and on web pages for screen readers crn.com. These features, collectively called the Windows AI Boost, underscore that AI is becoming a native part of the PC experience.

December 2025 – Closing the Year with AI at Scale

  1. $17.5 Billion AI Investment in India – Microsoft ended the year by deepening its commitment to India: On Dec 9, the company announced a new US$17.5 billion investment over 4 years to expand India’s AI and cloud infrastructure news.microsoft.com. This is the largest investment Microsoft has ever made in a single country. It will fund a huge expansion of Microsoft’s India data center regions (making India one of the largest Azure hubs globally) and support development of India-specific AI solutions. Satya Nadella noted this aims to “advance India from digital public infrastructure to AI public infrastructure” at national scale news.microsoft.com. It follows Nadella’s meeting with India’s Prime Minister, where they discussed India’s ambitions to lead in AI innovation and the need for sovereign AI infrastructure news.microsoft.com.
  2. $19 Billion for Canada + Sovereign Cloud – Also in December, Microsoft announced a landmark CAD $19 billion investment in Canada to bolster its AI footprint there blogs.microsoft.com. The plan includes adding new Azure Availability Zones in Canada, deploying next-gen AI supercomputing clusters in Canadian data centers, and launching programs to skill 300,000 Canadians in AI by 2026 blogs.microsoft.com. A centerpiece is the creation of a Canada Digital Sovereignty Cloud for AI – essentially an isolated Canadian Azure cloud that keeps all data and processing in-country (addressing Canadian public sector requirements). Microsoft is also collaborating with Canadian partners on AI research in climate and healthcare. This huge bet underscores Microsoft’s strategy of investing in allied countries to ensure they have local AI infrastructure and talent to be competitive.
  3. $80B in AI CapEx (FY 2025) – In Microsoft’s year-end report, President Brad Smith revealed that by the end of FY 2025 Microsoft will have invested approximately $80 billion in AI-related capital expenditures blogs.microsoft.com. Over half of that (~$40+ billion) is being spent in the United States alone blogs.microsoft.com. This staggering number (which is multiple times Microsoft’s spend just a few years ago) has gone into building out Azure’s fleet of GPU and AI accelerator machines, constructing new data centers, and research into advanced cooling and power for AI hardware blogs.microsoft.com. For context, $80B is more than the combined CapEx of some competitors. Microsoft highlighted this to demonstrate its commitment to meeting surging AI demand – effectively saying “no one is investing more in AI infrastructure than we are.” This investment is enabling everything from OpenAI’s model training to new AI features across Microsoft’s products.
  4. AI Safety & Cyber Defense Hub – To complement its AI growth, Microsoft doubled down on AI safety and security. It opened a dedicated Threat Intelligence Hub in Ottawa, Canada, focused on AI-related cyber threats blogs.microsoft.com. Staffed with threat researchers and AI experts, this center works closely with Canada’s government to track nation-state actors and cybercriminal groups leveraging AI for attacks blogs.microsoft.com. Microsoft shared that in 2025 it observed that over 80% of nation-state cyberattacks involved AI or automation in some form blogs.microsoft.com. The Ottawa hub will develop new defenses (like AI systems to detect AI-generated phishing). Microsoft also published a five-point “Safe AI” blueprint for governments, advocating steps like AI provenance (watermarking AI content) and stronger AI ethics guidelines blogs.microsoft.com. This emphasis shows Microsoft’s recognition that AI’s advancement must be matched by equal progress in AI governance and security.
  5. “Project Amelie” – AI Builds AI – In late December, Microsoft’s Azure AI Labs and Microsoft Research unveiled Project Amelie, an experimental autonomous AI agent that can build complete machine learning pipelines from a simple prompt azure.microsoft.com. In the tech-preview demo, one could tell Amelie (via Azure AI Studio) something like “Help me predict retail sales from these datasets,” and the agent would choose appropriate algorithms, preprocess the data, train models, and evaluate results – essentially acting as a data scientist. Amelie was developed as a collaboration between Azure’s AI Foundry team and MSR, leveraging advanced meta-learning. While not a product yet, Microsoft showed off Amelie to illustrate the future of AI-assisted development: AI agents that can design other AI systems. It’s an exciting (and to some, unnerving) glimpse of AI self-improvement. Microsoft emphasized that such agents will be carefully controlled and audited (fitting with its responsible AI stance), but the productivity potential is huge – imagine vastly reducing the time and expertise needed to develop custom AI solutions azure.microsoft.com.

Each of these 60 announcements contributed to making 2025 a landmark year for AI at Microsoft. From massive investments and infrastructure breakthroughs to product innovations and partnerships, it’s clear Microsoft went “all in” on AI in 2025 – setting the stage for even more to come in 2026. Microsoft’s CEO might have summed it up best, echoing his April keynote: “We are aiming to make AI useful for every person and every organization – as a copilot, a security guard, a creative partner, and much more.” With that vision, Microsoft is racing to weave AI into the fabric of everything it builds.






Weaving design, part 2

This is the second post in a series on designing software for weavers. These posts contain random thoughts and musings based on design challenges I come across as I build a real-world project. In the first post I gave the context for this project and presented the code for an initial domain-driven design. In this post and the next one, we’ll move to the next stage: converting between a text representation and the domain model.

Introducing Oh My Posh Visual Configurator: Finally, a Drag-and-Drop Terminal Theme Builder! ✨

If you've ever spent way too much time digging through JSON files trying to get your terminal prompt just right, this post is for you. I'm incredibly excited to share something I've been working on: the Oh My Posh Visual Configurator—a web-based drag-and-drop builder that makes creating beautiful terminal prompts actually fun (yes, really!).

[Screenshot: the main interface showing the segment picker, configuration canvas, and live preview]

The Problem We All Know Too Well

Let me be honest with you—I love Oh My Posh. It's hands down one of the best tools for customizing your terminal prompt across PowerShell, Bash, Zsh, Fish, and basically any shell you can think of. But here's the thing: configuring it has always meant diving into JSON, YAML, or TOML files and doing a lot of trial-and-error to get segments positioned correctly.

You know the drill:

  1. Edit the config file
  2. Save it
  3. Open a new terminal to see if it worked
  4. Realize the colors clash
  5. Go back to step 1
  6. Repeat for the next hour

There had to be a better way.

Enter the Visual Configurator

The Oh My Posh Visual Configurator solves this problem with a modern, intuitive web interface. No installation required—just open your browser and start building!

👉 Launch the Configurator: https://jamesmontemagno.github.io/ohmyposh-configurator/

What Can You Do With It?

🎨 Browse 103+ Segments

The left sidebar gives you access to every segment Oh My Posh supports, organized into logical categories:

  • System: Path, OS, Shell, Session, Battery, Time, Execution Time
  • Version Control: Git, Mercurial, SVN, Fossil, and more
  • Languages: Node.js, Python, Go, Rust, Java, .NET, PHP, Ruby... basically everything
  • Cloud & Infrastructure: AWS, Azure, GCP, Kubernetes, Docker, Terraform
  • CLI Tools: NPM, Yarn, Angular, React, Flutter
  • And more: Music players, health trackers, weather widgets
[Screenshot: the Languages category expanded, showing 26+ programming language segments]

🖱️ Drag-and-Drop Interface

Just click a segment to add it to your prompt, or drag it exactly where you want it. Reordering is as simple as dragging segments around. Want to move that Git status before your path? Just drag it there!

[Screenshot: click any segment to customize its colors, style, and template]

⚡ Live Preview

This is where it gets really cool. Every change you make shows up instantly in the preview panel at the bottom. Toggle between dark and light terminal backgrounds to make sure your theme looks great everywhere. The preview even renders powerline arrows and diamond shapes accurately!

🎛️ Full Customization

Click any segment and the Properties Panel opens up with everything you need:

  • Style Selection: Powerline, Plain, Diamond, or Accordion
  • Color Pickers: Full hex color support for foreground and background
  • Template Editor: Customize exactly what each segment displays using Go templates
  • Block Settings: Configure prompt alignment, add new blocks, or create right-aligned prompts

📦 Import & Export

Already have an Oh My Posh config? Import it! The configurator supports JSON, YAML, and TOML formats. When you're done, export in whichever format you prefer and drop it right into your shell configuration.
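To give you a feel for the output, here's a minimal sketch of the kind of JSON the export step produces, showing the style, color, and template properties you set in the Properties Panel – the specific segments, colors, and templates below are illustrative rather than a real exported theme (YAML and TOML exports carry the same structure):

{
  "$schema": "https://raw.githubusercontent.com/JanDeDobbeleer/oh-my-posh/main/themes/schema.json",
  "blocks": [
    {
      "type": "prompt",
      "alignment": "left",
      "segments": [
        {
          "type": "path",
          "style": "powerline",
          "powerline_symbol": "\ue0b0",
          "foreground": "#ffffff",
          "background": "#61afef",
          "template": " {{ .Path }} "
        },
        {
          "type": "git",
          "style": "powerline",
          "powerline_symbol": "\ue0b0",
          "foreground": "#193549",
          "background": "#95ffa4",
          "template": " {{ .HEAD }} "
        }
      ]
    }
  ]
}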

Community Sharing 🤝

One of my favorite features is community sharing. Found the perfect configuration? Share it with everyone!

[Screenshot: browsing official samples and community-contributed themes]
  1. Click the Share button in the header
  2. Fill in your theme details—name, description, tags
  3. Submit your configuration
  4. Your theme appears in the Community collection for others to use

[Screenshot: the easy-to-use sharing dialog with step-by-step instructions]

This means you can browse themes created by other developers, try them instantly in the configurator, and tweak them to match your style. The community grows together!

Under the Hood: Some Fun Architecture Decisions 🏗️

For my fellow developers who enjoy the technical details, here's what's powering this thing:

React 19 + TypeScript + Vite

I went with the latest React 19 for this project, paired with TypeScript for type safety (trust me, when you're dealing with complex configuration objects, types are your friend). Vite provides blazing-fast builds and hot module replacement that makes development a joy.

@dnd-kit for Drag-and-Drop

After evaluating several drag-and-drop libraries, @dnd-kit won out. It's modern, accessible, performant, and plays nicely with React 18+. The sortable preset made implementing segment reordering straightforward, and it handles edge cases like keyboard navigation out of the box.
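For the curious, here's a rough sketch of how sortable segment chips come together with @dnd-kit's sortable preset – the component, prop, and type names are illustrative, not the configurator's actual source:

// Minimal sortable-segments sketch using @dnd-kit (illustrative names).
import { DndContext, closestCenter, type DragEndEvent } from '@dnd-kit/core'
import {
  SortableContext,
  useSortable,
  arrayMove,
  horizontalListSortingStrategy,
} from '@dnd-kit/sortable'
import { CSS } from '@dnd-kit/utilities'

type Segment = { id: string; label: string }

// One draggable "chip" per segment in the prompt canvas.
function SegmentChip({ id, label }: Segment) {
  const { attributes, listeners, setNodeRef, transform, transition } =
    useSortable({ id })
  return (
    <div
      ref={setNodeRef}
      style={{ transform: CSS.Transform.toString(transform), transition }}
      {...attributes}
      {...listeners}
    >
      {label}
    </div>
  )
}

function SegmentRow({ segments, onReorder }: {
  segments: Segment[]
  onReorder: (next: Segment[]) => void
}) {
  // When a drag ends, compute the new order and hand it back to the store.
  const handleDragEnd = ({ active, over }: DragEndEvent) => {
    if (!over || active.id === over.id) return
    const from = segments.findIndex((s) => s.id === active.id)
    const to = segments.findIndex((s) => s.id === over.id)
    onReorder(arrayMove(segments, from, to))
  }
  return (
    <DndContext collisionDetection={closestCenter} onDragEnd={handleDragEnd}>
      <SortableContext items={segments} strategy={horizontalListSortingStrategy}>
        {segments.map((s) => (
          <SegmentChip key={s.id} id={s.id} label={s.label} />
        ))}
      </SortableContext>
    </DndContext>
  )
}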

Zustand for State Management

I'm a big fan of Zustand for state management. It's lightweight, has zero boilerplate, and the persistence middleware lets me save your configuration to localStorage automatically. Your work is never lost!

import { create } from 'zustand'
import { persist } from 'zustand/middleware'

// Simplified example of our store structure
const useConfigStore = create(
  persist(
    (set) => ({
      config: defaultConfig,
      addSegment: (blockId, segment) => set((state) => /* ... */),
      updateSegment: (blockId, segmentId, updates) => set((state) => /* ... */),
      // ...
    }),
    { name: 'ohmyposh-config' } // key used for localStorage persistence
  )
)

Dynamic Segment Loading

With 103+ segments, loading everything upfront would be wasteful. Instead, segment metadata lives in separate JSON files by category (public/segments/*.json), loaded on demand when you expand a category. This keeps the initial bundle small and the UI snappy.
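Here's a sketch of what that on-demand loading can look like – the fetch path mirrors the public/segments/*.json layout described above, but the function and cache names are mine, not the app's actual code:

// Lazily fetch segment metadata for a category the first time it's expanded.
type SegmentMeta = { type: string; name: string; description?: string }

const categoryCache = new Map<string, SegmentMeta[]>()

async function loadCategory(category: string): Promise<SegmentMeta[]> {
  const cached = categoryCache.get(category)
  if (cached) return cached // already expanded once – no network round-trip

  const res = await fetch(`segments/${category}.json`)
  if (!res.ok) throw new Error(`Failed to load segment category: ${category}`)

  const segments = (await res.json()) as SegmentMeta[]
  categoryCache.set(category, segments)
  return segments
}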

Tailwind CSS 4.1

Styling is handled by Tailwind CSS 4.1, which gives us:

  • Consistent design tokens
  • Dark theme by default (it's a terminal tool after all!)
  • Rapid iteration on UI polish
  • Small production bundles through automatic purging

100% Client-Side

Your configurations never leave your browser. Everything runs locally—no backend servers, no data collection, no privacy concerns. Export your config and it's yours forever.

Built with GitHub Copilot & VS Code

Here's what blows my mind about this app: I built it in under an hour of hands-on time. I started by jotting down a product requirements doc in the GitHub mobile app using Copilot, based on a simple idea. Then I created an issue, assigned it to Copilot, and went to bed. By the next morning, I had a working prototype! I spent about 50 minutes in VS Code adding features like community feeds and cleaning up the code for future updates. Copilot even set up all the GitHub Actions workflows and deployed everything to GitHub Pages in seconds. Turning an idea into a live app in just hours – something that used to take me weeks – is absolutely incredible.

Getting Started in 30 Seconds

  1. Open the app: https://jamesmontemagno.github.io/ohmyposh-configurator/
  2. Add segments: Click segments from the left sidebar or drag them to the canvas
  3. Customize: Click any segment to adjust colors, templates, and styles
  4. Preview: Watch your prompt update in real-time at the bottom
  5. Export: Choose JSON, YAML, or TOML and download your configuration
  6. Apply it:

# PowerShell
oh-my-posh init pwsh --config ~/your-theme.json | Invoke-Expression

# Bash
eval "$(oh-my-posh init bash --config ~/your-theme.json)"

# Zsh
eval "$(oh-my-posh init zsh --config ~/your-theme.json)"

That's it! You've got a beautiful, custom terminal prompt.

Sample Configurations to Get You Started

Not sure where to begin? We've included 6 professional sample configurations:

  • Minimal: Clean and simple
  • Developer: Language-aware with Git integration
  • DevOps: Cloud and Kubernetes focused
  • Full Featured: All the bells and whistles
  • Powerline Classic: Traditional powerline style
  • Diamond Style: Rounded segment separators
[Screenshot: community-contributed themes available in the Theme Library]

Load any of these from the Sample Picker and customize from there!

What's Next?

I'm really excited about where this project can go. Some ideas on the roadmap:

  • More starter templates based on community feedback
  • Theme galleries curated by the community
  • Advanced template editor with autocomplete
  • PWA support for offline use
  • Keyboard shortcuts for power users

Try It Out and Let Me Know!

I'd love to hear what you think. Give it a spin at https://jamesmontemagno.github.io/ohmyposh-configurator/ and let me know how it goes!

Found a bug? Have a feature request? Open an issue on GitHub.

Created an awesome theme? Share it with the community using the Share button!

Happy coding, and may your terminal always look beautiful! 🚀


This blog was drafted with GitHub Copilot and Claude Sonnet 4.5 based off previous blogs. Screenshots were taken with the Playwright MCP server using the cloud agent completely autonomously. I reviewed this blog post and tweaked it as necessary.
