For the past few months we’ve been architecting, refactoring, and refining. 146 commits. Dozens of PRs. One PR touched 493 files.
The long-awaited Beta isn’t just about revolutionary features. It’s about getting the architecture right and ensuring the API is stable for the future.
Terminal.Gui v2 is the ultimate framework for building cross-platform Terminal UI (TUI) apps with code that belongs in 2026, not 1999.
Here’s the stuff I’m most proud of. To learn more and get started, head to https://gui-cs.github.io/Terminal.Gui.
Terminal.Gui v1 was built on a static singleton. Application.Init(), Application.Run(), global state everywhere. 2007-era architecture that survived into the 2020s through inertia.
You can’t test static singletons properly. Can’t mock them. Can’t run multiple instances. Modern C# developers expect better.
So we fixed it. Backward compatibility matters, so Tom built an instance-based IApplication interface and made the static API a thin (obsolete) wrapper.
How it was:
// Global. Singleton. The 2000s called, they want their pattern back.
Application.Init();
Window top = new Window();
top.Add(myView);
Application.Run(top);
Application.Shutdown();
How it is now:
// Proper resource management. Testable. Feels right.
using (IApplication app = Application.Create().Init())
{
    Window top = new();
    top.Add(myView);
    app.Run(top);
}
The difference matters for tests, dependency injection, and running multiple app instances (useful for testing different drivers). The old way made all of that painful or impossible.
The new way: views get their IApplication through context. Tests create mock applications without global state pollution. Disposal actually cleans up.
Type-safe dialog results:
using (IApplication app = Application.Create().Init())
{
    app.Run<ColorPickerDialog>();
    Color? selectedColor = app.GetResult<Color>();
}
No casting. No object?. Generics doing their job.
Renamed Application.TopLevels to Application.SessionStack. “TopLevels” made sense in 2007, confused everyone in 2025. “SessionStack” describes what it is: a stack of session tokens.
// Clear stack-based session management
SessionToken? main = app.Begin(mainWindow); // Push
SessionToken? dlg = app.Begin(dialog); // Push (dialog becomes modal)
app.End(dlg); // Pop (restore main)
app.End(main); // Pop (empty, app stops)
Creating custom dialog classes for every input type is tedious. Need a color? Create ColorPickerDialog. Need a date? Create DatePickerDialog. Each with its own result property, cancel handling, buttons.
Now any view can become a type-safe dialog.
Before:
ColorPickerDialog dialog = new();
Application.Run(dialog);
if (dialog.Canceled) return null;
Color? color = dialog.SelectedColor;
After:
Color? color = mainWindow.Prompt<ColorPicker, Color?>(
    resultExtractor: cp => cp.SelectedColor,
    beginInitHandler: prompt => prompt.Title = "Pick Color"
);
One line. Type-safe. Returns null on cancel. Compiler checks result types at build time.
For views that hold a value, there’s IValue<T>:
public class ColorPicker : View, IValue<Color?>
{
    public Color? Value { get; set; }
    // Events follow the Cancellable Work Pattern
}
Now Prompt knows how to extract the result automatically:
Color? result = mainWindow.Prompt<ColorPicker, Color?>();
No extractor needed. The system recognizes IValue<T> and extracts automatically.
PR #4472. Four hundred ninety-three files changed. Complete rewrite of the mouse system, ANSI driver, and input injection.
Input injection means programmatic mouse events—clicks, double-clicks, drags—injected directly into the application pipeline. With virtual time control. Millisecond precision. No manual testing, no UI automation tools, no hacks.
Here’s a test for double-click detection. An actual, working test:
[Fact]
public void Button_DoubleClick_RaisesEvent()
{
    VirtualTimeProvider time = new();
    using IApplication app = Application.Create(time);
    app.Init(DriverRegistry.Names.ANSI);

    Button button = new() { Text = "Double-Click Me" };
    var doubleClicked = false;
    button.Accepting += (s, e) =>
    {
        if (e.Context?.Binding is MouseBinding { Mouse.Flags: var f }
            && f.HasFlag(MouseFlags.LeftButtonDoubleClicked))
        {
            doubleClicked = true;
        }
    };

    // Inject first click
    app.InjectMouse(new()
    {
        Flags = MouseFlags.LeftButtonPressed,
        ScreenPosition = new Point(0, 0)
    });
    time.Advance(TimeSpan.FromMilliseconds(50));
    app.InjectMouse(new() { Flags = MouseFlags.LeftButtonReleased });

    // Inject second click within threshold
    time.Advance(TimeSpan.FromMilliseconds(200));
    app.InjectMouse(new() { Flags = MouseFlags.LeftButtonPressed });
    app.InjectMouse(new() { Flags = MouseFlags.LeftButtonReleased });

    Assert.True(doubleClicked);
}
The test injects mouse events at specific timestamps, advances virtual time, asserts on double-click flags. Runs in milliseconds, deterministically.
VirtualTimeProvider controls time in tests. Two clicks 200ms apart? Advance time 200ms. No waiting. Tests run instantly whether on a fast desktop or slow CI runner.
I have a thing for terminals. Spent time customizing shells, studying ANSI escape sequences, caring about CSI vs SS3 sequences. My original post mentioned fonts—it extends to the whole terminal stack.
When we rebuilt the mouse system, I wanted a proper ANSI driver. Pure escape sequences. Cross-platform. VT100-era standards that still work in 2026.
The ANSI driver doesn’t use platform-specific APIs for anything but reading & writing terminal IO. No Win32 Console calls. No Unix syscalls. Just ANSI escape sequences—the same ones terminals have understood since the 1970s.
Every terminal speaks ANSI. Windows Terminal, iTerm2, gnome-terminal, Alacritty, WezTerm—all ANSI.
One driver. Every platform. Consistent behavior. No “works on Linux, weird on macOS” bugs.
The ANSI parser is actually three parsers working together:
Console Input → Raw Bytes → Parser Triage
├─→ AnsiKeyboardParser → Key events
├─→ AnsiMouseParser → Mouse events
└─→ AnsiResponseParser → DSR responses
Each parser handles its own sequence format. Keyboard parser: CSI and SS3. Mouse parser: SGR. Response parser: terminal replies.
When you inject a mouse event in tests, it goes through AnsiMouseEncoder (converts to raw SGR format like ESC [ < 0 ; 10 ; 5 M), then AnsiMouseParser (back to mouse event structure). Round-trip encoding.
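The SGR wire format is easy to see in isolation: ESC [ < button ; column ; row, terminated by M for press or m for release, with 1-based coordinates on the wire that become 0-based internally. Here is a rough, self-contained sketch of the decode step. The names SgrMouse and DecodeSgr are illustrative only, not Terminal.Gui APIs:

```csharp
using System;
using System.Text.RegularExpressions;

// Illustrative sketch, not Terminal.Gui's actual parser.
// SGR mouse report: ESC [ < Cb ; Cx ; Cy (M|m), coordinates 1-based.
public record SgrMouse(int Button, int X, int Y, bool Pressed);

public static class SgrDemo
{
    private static readonly Regex SgrPattern =
        new(@"^\x1b\[<(\d+);(\d+);(\d+)([Mm])$");

    public static SgrMouse? DecodeSgr(string sequence)
    {
        Match m = SgrPattern.Match(sequence);
        if (!m.Success) return null;

        return new SgrMouse(
            Button: int.Parse(m.Groups[1].Value),
            X: int.Parse(m.Groups[2].Value) - 1, // 1-based wire → 0-based internal
            Y: int.Parse(m.Groups[3].Value) - 1,
            Pressed: m.Groups[4].Value == "M");  // 'M' press, 'm' release
    }
}
```

Decoding the example sequence above, "\x1b[<0;10;5M", yields button 0 pressed at internal coordinates (9, 4), which is exactly the 1-based-to-0-based edge case mentioned below.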
Another hat tip to contributor Tom; his work on this was brilliant.
Edge cases: terminals that send SS3 for F1 but CSI for F2. Mouse coordinates 1-based in ANSI but 0-based internally. Escape sequences meaning different things depending on modifiers.
Took months. Now we have a driver that works everywhere. When someone reports “doesn’t work in my terminal,” switching to ANSI driver usually fixes it.
This is where input injection pays off. Want to test how Terminal.Gui handles a Shift+F5 keypress? Inject the ANSI sequence:
app.InjectKey(AnsiKeyboardEncoder.Encode(Key.F5.WithShift));
Want to test mouse wheel scrolling?
app.InjectMouse(AnsiMouseEncoder.Encode(new Mouse
{
    Flags = MouseFlags.WheeledDown,
    ScreenPosition = new Point(10, 5)
}));
The encoders produce actual ANSI sequences—the same bytes a real terminal would send. The parsers consume them. We’re testing the real path, not a mock.
Terminal emulators support seven cursor styles via ANSI DECSCUSR sequences. Blinking block, steady underline, blinking bar.
Terminal.Gui’s cursor handling was scattered—state everywhere, no explicit model. Fixed:
public record Cursor
{
    public Point? Position { get; init; } // Null = hidden
    public CursorStyle Style { get; init; } = CursorStyle.Hidden;
    public bool IsVisible => Position.HasValue && Style != CursorStyle.Hidden;
}

public enum CursorStyle
{
    BlinkingBlock = 1,     // █ Traditional terminal cursor
    SteadyBlock = 2,       // █ Steady (no blink)
    BlinkingUnderline = 3, // _ Classic
    SteadyUnderline = 4,   // _ Steady
    BlinkingBar = 5,       // | Text editor style
    SteadyBar = 6,         // | Steady editor cursor
    Hidden = -1            // No cursor
}
Maps directly to ANSI DECSCUSR (CSI Ps SP q). Windows driver translates to Win32 API automatically. Cross-platform cursor styles.
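The escape sequence itself is tiny: DECSCUSR is CSI Ps SP q (note the literal space before the q), where Ps is the style number from the enum. A hypothetical encoder, not Terminal.Gui's actual code, is nearly a one-liner:

```csharp
using System;

// Hypothetical helper, not a Terminal.Gui API.
// DECSCUSR: "ESC [ Ps SP q" with style codes 1-6.
public enum CursorStyle
{
    BlinkingBlock = 1,
    SteadyBlock = 2,
    BlinkingUnderline = 3,
    SteadyUnderline = 4,
    BlinkingBar = 5,
    SteadyBar = 6,
    Hidden = -1
}

public static class Decscusr
{
    public static string Encode(CursorStyle style) =>
        style == CursorStyle.Hidden
            ? "\x1b[?25l"             // DECTCEM: hide the cursor entirely
            : $"\x1b[{(int)style} q"; // DECSCUSR: set cursor shape
}
```

Encode(CursorStyle.BlinkingBar) produces ESC [ 5 SP q; any terminal that supports DECSCUSR switches to a blinking bar as soon as the sequence is flushed. Hidden has no DECSCUSR code, so a real driver hides the cursor with DECTCEM instead of changing its shape.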
Using it in a view:
protected override void OnDrawContent(Rectangle viewport)
{
    int cursorCol = _cursorPosition - _scrollOffset;

    if (cursorCol >= 0 && cursorCol < Viewport.Width && HasFocus)
    {
        Point screenPos = ViewportToScreen(new Point(cursorCol, 0));
        Cursor = new Cursor
        {
            Position = screenPos,
            Style = CursorStyle.BlinkingBar
        };
    }
    else
    {
        Cursor = new Cursor { Position = null }; // Hidden
    }
}
Immutable record, value-type semantics, thread-safe. Caches intelligently—99% of PositionCursor() calls were redundant.
The command system had CommandContext<TBinding>. Generic, type-constrained. You had to know the binding type at compile time. Extending meant fighting the type system. Pattern matching was awkward.
Simplified:
Before:
public record struct CommandContext<TBinding> : ICommandContext
{
    public TBinding? Binding { get; set; }
}
After:
public record struct CommandContext : ICommandContext
{
    public IInputBinding? Binding { get; set; }
}
No generics. Just a common IInputBinding interface that KeyBinding, MouseBinding, and InputBinding all implement. Now you can pattern match:
var source = ctx.Binding switch
{
    KeyBinding kb   => $"Keyboard: {kb.Key}",
    MouseBinding mb => $"Mouse: {mb.MouseEvent?.Flags}",
    InputBinding    => "Programmatic",
    _               => "Unknown"
};
Pattern matching works when you let it. The best design gets out of the language’s way.
Every API now has awesome (until you find bugs and report them) API reference documentation that includes sample code and cross-references. I take pride in this, so if you find problems, please submit issues!
Start here: https://gui-cs.github.io/Terminal.Gui/api/Terminal.Gui.App.html
The Deep Dives go deep and broad. See https://gui-cs.github.io/Terminal.Gui/docs/index.html
Upgraded to .NET 10 and C# 14. Modern language features, collection expressions.
Modernized TextView to use the new Viewport/ScrollBar system. This touched more code than expected and made everything slightly better.
Fixed macOS test runners that randomly hung. Fixed keyboard test timing issues. Stabilized stress tests.
If you’re building apps with Terminal.Gui: You get type-safe dialogs, testable mouse interactions, proper resource management, and an architecture that feels like 2026, not 2006.
If you’re contributing to Terminal.Gui: The codebase is now testable in ways it never was. Input injection works. The architecture has proper separation. The docs explain the “why” not just the “what.”
If you have a Terminal.Gui v1 app: You will need to port. Tons has changed. But we’ve provided a migration guide that will help. In the end, you’ll find the updated API is far simpler, easier to understand, and most of your porting work will be removing code that’s no longer needed. Pro-tip: Just point your favorite AI coding agent at https://gui-cs.github.io/Terminal.Gui/docs/migratingfromv1.html and let it do it for you.
This is a Beta. The architecture is solid and we are now holding a very high bar for further API changes.
Use it. Test it. Tell me what’s broken. GitHub issues are open.
https://github.com/gui-cs/Terminal.Gui/issues
To @BDisp, who tests thoroughly and catches bugs I miss. He’s always eager to dive into the trickiest bugs and work until they are resolved.
To @tznind, whose architectural insights made IApplication and the new driver model possible. Plus Terminal.Gui.Designer is da’bomb.
To @dodexahedron, who regularly challenges us to do things right.
To all the dozens more who’ve submitted Issues and PRs!
To the AI agents (Claude, Copilot); you know your role.
And, of course, to Miguel de Icaza who started this 17 years ago.
-Tig
The post Terminal.Gui: Still Absurd, Now Beta! first appeared on tig.log.

Since our initial launch of Copilot code review (CCR) last April, usage has grown 10X, now accounting for more than one in five code reviews on GitHub.
Behind the scenes, we’ve been running continuous experiments to enhance comment quality. We also moved to an agentic architecture that retrieves repository context and reasons across changes. At every step of the way, we’ve listened to your feedback: your survey answers and even your simple thumbs-up and thumbs-down reactions on comments have helped us identify key issues and iterate on our UX to provide a comprehensive review experience.
Copilot code review handles pull request reviews and summaries, allowing teams to focus on more complex tasks.
Suvarna Rane, Software Development Manager, General Motors
As Copilot code review evolved over time, so has our definition of a “good code review.” When we started building it in 2024, our goal was simple thoroughness. Since then, we’ve learned that what developers actually value is high-signal feedback that helps them move a pull request forward quickly. Today, Copilot code review leverages the best models, memory, and agentic tool-calling to conduct comprehensive reviews. To get here, we’ve used a continuous evaluation loop to tune the agent’s judgment, focusing on three qualities that shape that experience: accuracy, signal, and speed.
Our aim has been for Copilot code review to deliver sound judgment, prioritizing consequential logic and maintainability issues. We evaluate performance in two ways: through internal testing against known code issues, and through production signals from real pull requests. In production, we track two key indicators:
Together, these signals help ensure that Copilot code review surfaces issues that matter, and that faster merges come from confident fixes, not less scrutiny.

In code review, more comments don’t necessarily mean a better review. Our goal isn’t to maximize comment volume, but to surface issues that actually matter.
A high-signal comment helps a developer understand both the problem and the fix:

Silence is better than noise. In 71% of the reviews, Copilot code review surfaces actionable feedback. In the remaining 29%, the agent says nothing at all.
As our ability to identify high-signal findings improves, we’re also able to comment more confidently, now averaging about 5.1 comments per review without increasing review churn or lowering our quality threshold.
In code review, speed matters, but signal matters more. Copilot code review is designed to provide a reliable first pass shortly after a pull request is opened. That being said, meaningful reviews still require analysis. As reasoning capabilities improve, so does the computation required to surface deeper issues.
We treat this as a deliberate trade-off. In one recent change, adopting a more advanced reasoning model improved positive feedback rates by 6%, even though review latency increased by 16%.
For us, that’s the right exchange. A slightly slower review that surfaces real issues is far more valuable than instant feedback that adds noise. We continue to reduce latency wherever possible, but never at the expense of high-signal findings developers can trust.
Given our new definition of “good,” we redeveloped our code review system. Today’s agentic design can retrieve context intelligently and explore the repository to understand logic, architecture, and specific invariants.
This shift alone has driven an initial 8.1% increase in positive feedback.
Here’s why:
By iterating on how the agent interacts with pull requests, we’ve reduced noise and made feedback more actionable. Here’s what that means for you.

As AI continues to accelerate software development, it’s more important than ever to help teams review and trust code at scale. Copilot code review helps teams keep pace by surfacing high-signal feedback directly in pull requests, enabling developers to catch issues earlier and merge with greater confidence.
More than 12,000 organizations now run Copilot code review automatically on every pull request. At WEX, this shift toward default AI-assisted reviews has helped scale Copilot adoption across the engineering organization:
Today, two-thirds of developers are using Copilot — including the organization’s most active contributors. WEX has since expanded adoption by making Copilot code review a default across every repository. Developers are also heavily utilizing agent mode and the coding agent to drive autonomy, helping WEX see a huge lift in deployments, with ~30% more code shipped. — WEX customer story
Going forward, we’re focused on deeper personalization and high-fidelity interactivity, refining the agent to learn your team’s unwritten preferences while enabling two-way conversations that let you refine fixes and explore alternatives before merging.
As Copilot capabilities continue to evolve, from coding and planning to review and automation, the goal is simple: help developers move faster while maintaining the trust and quality that great software demands.
Copilot code review is a premium feature available with Copilot Pro, Copilot Pro+, Copilot Business, and Copilot Enterprise. See the following resources to:
Already enabled Copilot code review? See these docs to set up automatic Copilot code reviews on every pull request within your repository or organization.
Have thoughts or feedback? Please let us know in our community discussion post.
The post 60 million Copilot code reviews and counting appeared first on The GitHub Blog.
Women’s History Month—and International Women’s Day on March 8, 2026—always gives me pause for reflection. It’s a moment to think about how far we’ve come and think about who we choose to uplift as we look ahead.
Throughout my career, I’ve been inspired by extraordinary women leaders—trailblazers who broke barriers, opened doors, and reshaped what leadership in technology looks like. But today, I want to shine a light on another group that inspires me just as deeply: women early in their careers—the builders, learners, and question-askers who are defining the future of cybersecurity and developing their skills in the era of AI.
These women are entering the field at a moment of unprecedented complexity. Cyberthreats are accelerating. AI is reshaping how we defend, detect, and respond. And the stakes—for trust, safety, and resilience—have never been higher.
That’s exactly why it has never been more critical to have a wide range of experiences and perspectives in our defender community.
Help educate everyone in your organization with cybersecurity awareness resources and training curated by the security experts at Microsoft.
Cybersecurity is fundamentally about understanding people—how they behave, how they make decisions, how systems can be misused, and where harm can occur. That’s why diversity of perspectives, backgrounds, experiences, and people is a security imperative.
The ISACA paper titled “The Value of Diversity and Inclusion in Cybersecurity” concludes that cybersecurity teams lacking diversity are at greater risk of engaging in limited threat modeling, exhibiting reduced innovation, and making less robust decisions in complex security environments. At Microsoft Security, we recognize that the cyberthreats we encounter are as varied and multifaceted as humanity itself.
To stay ahead, our teams must reflect that diversity across gender, background, culture, discipline, and lived experience.
When teams bring different perspectives to the table,
Women early in their career bring something incredibly powerful to cybersecurity and AI: fresh perspective paired with fearless curiosity. Women bring empathy, clarity, systems thinking, and collaborative leadership that directly strengthen our ability to detect cyberthreats, understand human behavior, and build secure products that work for everyone.
This makes me think of my valued friend and colleague, Lauren Buitta, who is the founder and chief executive officer (CEO) of Girl Security. Lauren has been a tireless advocate for providing women early in their careers, especially those from underrepresented backgrounds, with the skills and confidence needed to enter security careers. She often says, “Security isn’t just a discipline—it’s empowerment through knowledge.” That philosophy extends to Girl Security’s work preparing the next generation to navigate and lead in an AI-powered world. Her efforts show us that nurturing curiosity early on can have lasting effects throughout life.
They challenge assumptions that may no longer hold. They ask “why” before accepting “how.” They’re often the first to notice gaps—in data, in design, in who is represented and who is missing. Supporting women at this stage isn’t just about equity. It’s about strengthening the future of security itself. These actions build a stronger, more resilient security ecosystem.
Investing in women early in their cybersecurity and AI security careers is essential. Early access to education, opportunity, and confidence building experiences helps more women see themselves in this field—and choose to stay.
But if we stop there, we shouldn’t be surprised when the numbers don’t move. In fact, independent global analyses from the Global Cybersecurity Forum and Boston Consulting Group show that women represent just 24% of the cybersecurity workforce worldwide—a figure reinforced by LinkedIn’s real-time labor market data. What I’ve realized is this: To change outcomes, we have to cultivate women throughout their careers—from first exposure to technical mastery, from early roles to leadership, and from individual contributor to decisionmaker. Otherwise, we’ll continue to bring women into the field without creating the conditions that allow them to grow, advance, and remain.
That means pairing early career investment with sustained support, inclusive cultures, and everyday actions that reinforce belonging and opportunity over time.
Here are meaningful steps we can all take—not just to widen the pipeline, but to strengthen it end to end:
1. Share stories from a diverse set of role models at every career stage.
Representation fuels imagination. When women early in career see themselves reflected in cybersecurity, they’re more likely to enter the field. When women midcareer and in senior roles see paths forward, they’re more likely to stay and lead.
2. Reevaluate job descriptions at entry and beyond.
Rigid expectations or narrow definitions of technical expertise discourage qualified candidates from applying, and can also limit progression into advanced or leadership roles.
3. Invest in inclusive training and early career programs and sustain learning over time.
Accessible, hands-on learning builds confidence early. Continued upskilling, reskilling, and leadership development ensure women can evolve alongside rapidly changing security and AI technologies.
4. Volunteer with organizations driving cybersecurity and AI education.
Groups like Girl Security and Women in CyberSecurity (WiCyS) are changing outcomes for thousands of girls and women. Your time, mentorship, or sponsorship helps build momentum early—and reinforces pathways later. I welcome you to join Nicole Ford, Vice President Customer Security Officer at Microsoft, who will be hosting a leadership lunch at the WiCyS conference to discuss cultivating leaders for the future through advocacy and sponsorship.
5. Partner with community groups offering mentorship and sponsorship opportunities.
Mentorship is one of the strongest predictors of early career success. Sponsorship—advocacy that opens doors to stretch roles, visibility, and advancement—is critical for long term progression.
6. Be an ally every day across the full career journey.
Introduce emerging talent to your networks. Encourage them to speak up. Create space for them to lead. Advocate for their ideas in rooms they aren’t in yet—especially as stakes and visibility increase.
At Microsoft, our mission is to empower every person and every organization on the planet to achieve more. That starts by ensuring the next generation of cybersecurity and AI security professionals has equitable access to opportunity, education, and belonging.
This Women’s History Month, let’s celebrate not only the women who have led the way, but also the women who are just getting started.
They’re actively shaping security today, not just influencing its future. Security is a team sport, and we need everyone on the team because together, we can build a safer, more inclusive digital future for all.
To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.
The post Women’s History Month: Encouraging women in cybersecurity at every career stage appeared first on Microsoft Security Blog.
OpenAI is building an internal GitHub alternative after recurring GitHub outages exposed fragility in code hosting for agentic development. Meta created a flat applied-AI engineering organization to connect hardware, tooling and data pipelines for faster model improvement. Amazon explored ads inside chatbots, US officials considered caps on NVIDIA and AMD chip exports to China to limit training-cluster scale, Apple unveiled M5 Macs with neural accelerators, and Stripe announced automatic token-based billing to enable usage-based pricing for AI apps.
The AI Daily Brief helps you understand the most important news and discussions in AI.
Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Get it ad free at http://patreon.com/aidailybrief
Learn more about the show https://aidailybrief.ai/