Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

GitButler: A Modern Git Client That Redesigns How You Work with Branches


Originally published at recca0120.github.io

git rebase -i is the Git command I find most awkward to use. Every time I need to reorder commits, split, or squash them, I end up editing that list in vim and hoping the rebase doesn't stop halfway through on a conflict. When it does, the repo state gets hard to reason about.

GitButler starts from the premise that Git's concepts are sound, but the interface can be much better. It's a full Git client with both a GUI and a CLI (named but), built on Tauri + Svelte + Rust. The underlying storage is still standard Git — but the hardest parts of the daily workflow have been redesigned.

The Core Difference: Parallel Branches

The standard Git workflow is: switch to a branch, do your work, switch to another. For two simultaneous tasks you context-switch constantly, or open multiple worktrees and manage them manually.
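Concretely, the manual version of that juggling is a stash-and-switch dance. A minimal sketch with standard Git commands (a throwaway repository is created so it runs as-is):

```shell
set -e
# Throwaway repo so the sketch is runnable end to end
tmp=$(mktemp -d); cd "$tmp"
git init -q -b main
git config user.email demo@example.com
git config user.name demo
echo base > file.txt; git add .; git commit -qm base
git branch feature-a; git branch feature-b
git switch -q feature-a
echo wip >> file.txt            # half-finished work on feature-a
git stash -q                    # shelve it before the context switch
git switch -q feature-b         # hop over for the second task
echo fix > fix.txt; git add fix.txt; git commit -qm fix
git switch -q feature-a
git stash pop -q                # restore the in-progress work
grep wip file.txt               # the change survived the round trip
```

Every second task costs a stash, two switches, and a pop — the overhead GitButler's drag-to-branch model removes.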

GitButler's Parallel Branches let you work on multiple branches at once without switching. Drag a file's changes to whichever branch it belongs to — that's it.

This is particularly useful for AI agent workflows, where an agent touches multiple areas simultaneously. Different tasks can be split into different branches without waiting for one to finish before starting the next.

Stacked Branches

Building on top of another in-progress branch is common — open feat/api, then start feat/ui on top of it. The traditional approach is to rebase feat/ui onto feat/api, then manually rebase again every time the base branch changes.
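That manual maintenance can be sketched end to end (throwaway repository, standard Git commands only):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q -b main
git config user.email demo@example.com
git config user.name demo
echo base > base.txt; git add .; git commit -qm base
git switch -q -c feat/api                 # first branch
echo v1 > api.txt; git add .; git commit -qm "api v1"
git switch -q -c feat/ui                  # stacked on top of feat/api
echo ui > ui.txt; git add .; git commit -qm ui
git switch -q feat/api                    # the base branch moves...
echo v2 >> api.txt; git add .; git commit -qm "api v2"
git switch -q feat/ui
git rebase -q feat/api                    # ...so the stack must be rebased by hand
git rev-list --count HEAD                 # 4 commits: base, api v1, api v2, ui
```

Every change to feat/api means repeating that last rebase; GitButler's restacking does it automatically.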

GitButler's Stacked Branches automate this. Edit any commit in the stack and everything above it automatically restacks.

Commit Management Without rebase -i

This is the most immediately noticeable improvement. Every commit operation in GitButler is drag-and-drop:

  • Uncommit: send a commit back to the working directory
  • Reword: edit a commit message inline
  • Amend: fold working-directory changes into any commit
  • Move: reorder commits by dragging
  • Split: break one commit into multiple
  • Squash: merge commits together

Everything that used to require git rebase -i is now a drag-and-drop operation.

Unlimited Undo

Every operation is recorded in the Undo Timeline — commits, rebases, all mutations. You can go back to any point, so there's no fear of unrecoverable states.

The but CLI has matching commands:

but operations-log     # view operation history
but undo               # undo the last operation

Conflicts Don't Block Your Flow

Standard git rebase stops on the first conflict and waits. Multiple conflicts mean multiple interruptions before the rebase can complete.

GitButler's First Class Conflicts make rebase always succeed. Conflicted commits are marked and can be resolved later, in any order — they don't block the rest of the work.

GitHub / GitLab Integration

Without leaving GitButler:

  • Open and update PRs
  • Check CI status
  • Browse branch lists

CLI:

but forge pr create    # open a PR
but forge pr list      # list PRs

AI Integration

Built-in AI generates:

  • Commit messages
  • Branch names
  • PR descriptions

You can also install hooks that let Claude Code or other AI agents manage Git through GitButler directly.

Installation

# macOS
brew install gitbutler

# or download the GUI directly
# https://gitbutler.com/downloads

The but CLI installs alongside the GUI app.

but --help

How It Compares to git worktree

I've written before about using git worktree to work on multiple branches in parallel. Both solve the "parallel work" problem, but from different angles:

                    git worktree          GitButler
Nature              Native Git feature    Full Git client
Interface           CLI                   GUI + CLI
Parallel branches   Multiple directories  Single directory
Commit management   Requires rebase -i    Drag and drop
Learning curve      Low (works like Git)  New UI to learn

Worktree fits people comfortable with the CLI who want minimal tooling. GitButler fits workflows involving complex commit manipulation or teams who want a polished GUI.
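For reference, the worktree side of that comparison looks like this (runnable as-is against a throwaway repository):

```shell
set -e
base=$(mktemp -d)
git init -q -b main "$base/repo"
cd "$base/repo"
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m base
# One extra working directory per parallel branch, same underlying repo:
git worktree add -q -b feat/api ../wt-api
git worktree add -q -b feat/ui  ../wt-ui
git worktree list                 # main checkout plus the two worktrees
```

Each branch gets its own directory on disk, which is exactly the bookkeeping GitButler's single-directory model avoids.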

License

GitButler uses a Fair Source license — use it, read the source, contribute, but don't build a competing product with it. It converts to MIT after 2 years — open source with an expiring non-compete clause.

Read the whole story
alvinashcraft
40 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Microsoft Agent Framework–Building a multi-agent workflow with DevUI in .NET


Yesterday, I created a minimal .NET project with DevUI and registered a couple of standalone agents. That gets you surprisingly far for interactive testing. But real business scenarios quickly outgrow a single agent: you need data flowing through multiple specialized steps, decisions being made along the way, and a clear picture of the whole pipeline.

That's what workflows are for. In this post, we'll build a content review pipeline as a concrete example — a Writer agent drafts a response, a Reviewer agent critiques it, and a deterministic formatting step finalizes the output. All of it visualized in DevUI.

Agents vs Workflows — the key distinction

The Agent Framework docs put it cleanly: an agent is LLM-driven and dynamic — it decides which tools to call and in what order, based on the conversation. A workflow is a predefined graph of operations, some of which may be AI agents, but the topology is explicit and deterministic. You decide exactly what runs after what.

When should you choose a workflow over a single agent with tools? A few signals:

  • You need multiple agents with different system prompts to collaborate in a fixed order
  • You have deterministic steps (validation, formatting, API calls) interleaved with LLM steps
  • You need checkpointing, fault tolerance, or human-in-the-loop approval between steps
  • You want type-safe message contracts between pipeline stages

Remark: The key message is to remain pragmatic: if you can write a regular function to handle the task, do that. Use an agent when you need LLM reasoning. Use a workflow when you need explicit control over a multi-step process.

Core Workflow concepts

Three concepts make up the mental model:

Executors are the processing nodes. Each executor receives typed messages, does work (LLM call, API call, data transform), and emits typed output messages. The recommended way to define one is a partial class that inherits from Executor, with handler methods marked [MessageHandler]. The partial keyword enables compile-time source generation for handler registration.
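Following that recipe, a minimal deterministic executor might look like the sketch below. This is hedged: the Executor base class and [MessageHandler] attribute come from the prerelease Microsoft.Agents.AI.Workflows package, so the exact constructor and handler signatures may differ between versions, and the class and method names here are illustrative.

```csharp
using Microsoft.Agents.AI.Workflows;

// Illustrative executor: receives a string message, emits a transformed string.
// 'partial' is required so the source generator can register the handler.
public partial class ShoutExecutor : Executor
{
    public ShoutExecutor() : base("Shout") { }

    [MessageHandler]
    public ValueTask<string> HandleAsync(string message, IWorkflowContext context)
    {
        // Deterministic work; the returned string flows along the outgoing edge.
        return ValueTask.FromResult(message.ToUpperInvariant());
    }
}
```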

Edges define the connections between executors — which executor's output feeds which other executor's input. They can be simple direct connections, or conditional branches that route based on message content.

WorkflowBuilder ties executors and edges into a directed graph. Once built, you execute it with InProcessExecution.RunStreamingAsync() or RunAsync().

What we're building

A content review pipeline with three stages:

1. Writer executor (AI agent) — takes a topic prompt and drafts a short paragraph

2. Reviewer executor (AI agent) — receives the draft and returns concise actionable feedback

3. Formatter executor (deterministic) — wraps the final output in a structured response envelope

The data flow looks like this: topic → Writer → Reviewer → Formatter → final output.

Step 1: Install extra packages

On top of the packages from the previous post, you need the workflows package:


dotnet add package Microsoft.Agents.AI.Hosting --prerelease
dotnet add package Microsoft.Agents.AI.DevUI --prerelease
dotnet add package Microsoft.Agents.AI.Workflows --prerelease
dotnet add package OllamaSharp

Step 2: Define the Executors

WriterExecutor.cs

The Writer executor is a thin wrapper around a ChatClientAgent. It receives a string (the user's topic), runs the agent, and returns the draft as a string that gets forwarded to the next executor automatically.

ReviewerExecutor.cs

The Reviewer receives the draft string and returns a string containing its feedback. Same pattern, different agent instructions.

FormatterExecutor.cs

The Formatter is a purely deterministic executor — no AI, just C# code. It calls context.YieldOutputAsync() to emit the final result back to the workflow caller.
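A sketch of what that formatter could look like (same caveats as before: prerelease API, illustrative names; only context.YieldOutputAsync() is taken from the post itself):

```csharp
using Microsoft.Agents.AI.Workflows;

public partial class FormatterExecutor : Executor
{
    public FormatterExecutor() : base("Formatter") { }

    [MessageHandler]
    public async ValueTask HandleAsync(string reviewedDraft, IWorkflowContext context)
    {
        // No AI here: wrap the reviewed text in a simple envelope and
        // yield it as the workflow's final output.
        var envelope = $"--- FINAL OUTPUT ---\n{reviewedDraft}\n--------------------";
        await context.YieldOutputAsync(envelope);
    }
}
```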

Step 3: Wire it up in Program.cs

Now the workflow is assembled in Program.cs alongside the DevUI registration. The key difference from the single-agent setup is that instead of calling builder.AddAIAgent(), we construct agents manually so we can pass them into executor instances, then hand the workflow to the hosting layer.
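A sketch of that assembly (WriterExecutor, ReviewerExecutor, and the agent variables are the names this post uses; WorkflowBuilder and AddEdge come from the prerelease Workflows package, so treat the exact signatures as approximate):

```csharp
using Microsoft.Agents.AI.Workflows;

// Agents are constructed manually (not via builder.AddAIAgent) so they can
// be handed to the executor instances.
var writer    = new WriterExecutor(writerAgent);
var reviewer  = new ReviewerExecutor(reviewerAgent);
var formatter = new FormatterExecutor();

// Explicit, deterministic topology: Writer -> Reviewer -> Formatter.
var workflow = new WorkflowBuilder(writer)
    .AddEdge(writer, reviewer)
    .AddEdge(reviewer, formatter)
    .Build();
```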

Step 4: Run and Explore in DevUI

dotnet run

In the DevUI sidebar you'll now see ContentReviewPipeline listed as a workflow entry rather than an agent. Click on Configure and run, type a topic — "the benefits of electric vehicles" works well — and hit Run workflow.

Unfortunately, that didn't work and resulted in an error. It turns out that using mixed workflows in C# doesn't work yet in DevUI. 

However, an agent-only workflow does work. So let's update our example and change it into an agent-only workflow:

Restart the application, click on Configure and run again, type a topic — "the benefits of electric vehicles" — and hit Run workflow.

You'll see each executor fire in sequence. Because the agents are wrapped as executors in the workflow, the reasoning trail shows the Writer's output, then the Reviewer's feedback: the full chain, step by step.

Tip: DevUI's workflow visualization is one of its strongest features. Unlike the single-agent view, you can see the executor graph on the left and watch messages propagate through it in real time, which makes it easy to spot where in the pipeline things are going wrong.

Calling the workflow programmatically

If you want to call the workflow directly from code (outside of DevUI), you can use the RunAsync method:
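A sketch using the two entry points named earlier (prerelease API, so the return and event types here are assumptions):

```csharp
using Microsoft.Agents.AI.Workflows;

// One-shot execution: feed the topic in, await the completed run.
var run = await InProcessExecution.RunAsync(workflow, "the benefits of electric vehicles");

// Or stream events as each executor fires, e.g. to log progress
// (the streaming-run shape is an assumption here):
var streaming = await InProcessExecution.RunStreamingAsync(workflow, "the benefits of electric vehicles");
await foreach (var evt in streaming.WatchStreamAsync())
{
    Console.WriteLine(evt);
}
```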


Conclusion

Switching from a single agent to a workflow in the Agent Framework is mostly additive. You keep the same setup from part one and add executor classes plus a WorkflowBuilder call. 

Unfortunately, the DevUI backend for ASP.NET Core can’t handle all types of workflows yet, so we’ll have to wait for an update. When that happens, I’ll write a follow-up post. Pinky promise!

Meanwhile, you can explore my working end-to-end example here: wullemsb/DevUIExample at workflows

More information

Microsoft Agent Framework Workflows | Microsoft Learn

Microsoft Agent Framework Workflows - Executors | Microsoft Learn

wullemsb/DevUIExample at workflows



Sperm Whales' Communication Closely Parallels Human Language, Study Finds

An anonymous reader quotes a report from the Guardian: We may appear to have little in common with sperm whales – enormous, ocean-dwelling animals that last shared a common ancestor with humans more than 90 million years ago. But the whales' vocalized communications are remarkably similar to our own, researchers have discovered. Not only do sperm whales have a form of "alphabet" and form vowels within their vocalizations, but the structure of these vowels behaves in the same way as human speech, the new study has found. Sperm whales communicate in a series of short clicks called codas. Analysis of these clicks shows that the whales can differentiate vowels through the short or elongated clicks or through rising or falling tones, using patterns similar to languages such as Mandarin, Latin and Slovenian. The structure of the whales' communication has "close parallels in the phonetics and phonology of human languages, suggesting independent evolution," the paper, published in the Proceedings B journal, states. Sperm whale coda vocalizations are "highly complex and represent one of the closest parallels to human phonology of any analyzed animal communication system," it added. [...] The new study shows that "sperm whale communication isn't just about patterns of clicks -- it involves multiple interacting layers of structure," said Mauricio Cantor, a behavioral ecologist at the Marine Mammal Institute who was not involved in the research. "With this study, we're starting to see that these signals are organized in ways we didn't fully appreciate before." The latest discovery around sperm whale speech has inched forward the possibility of someday fully understanding the creatures and even communicating with them. Project CETI has set a goal of being able to comprehend 20 different vocalized expressions, relating to actions such as diving and sleeping, within the next five years.
A future where we're able to fully understand what the whales are saying and be able to have a conversation with them is "totally within our grasp," said David Gruber, founder and president of Project CETI. "We've already got a lot further than I thought we could. But it will take time, and funding. At the moment we are like a two-year-old, just saying a few words. In a few years' time, maybe we will be more like a five-year-old."

Read more of this story at Slashdot.


Intel's New Core Series 3 Is Its Answer To the MacBook Neo

Intel has launched a new budget-focused Core Series 3 processor line for lower-cost laptops -- "Intel's response to budget CPUs that are appearing in laptops like the Apple MacBook Neo," writes PCWorld's Mark Hachman. From the report: Intel unexpectedly launched the Core Series 3, based on its excellent "Panther Lake" (Core Ultra Series 3) architecture and 18A manufacturing, for devices for home consumers and small business on Thursday. Intel announced that a number of partners will launch laptops based upon the chip, including Acer, Asus, HP, Lenovo, and others. Although those laptops will be available beginning today, a number of them will begin shipping later this year, the partners said. All of it -- from the specifications down to the messaging -- feels extremely aimed at trimming the fat and delivering to users just what they'll want. Intel's new Core Series 3 family just includes two "Cougar Cove" performance cores and four low-power efficiency "Darkmont" cores, with two Xe graphics cores on top of it. Intel isn't really worrying about AI, with an NPU capable of just 17 TOPS, though the company claims the CPU, NPU, and GPU combined reach 40 TOPS of performance. Yes, laptops will use pricey DDR5 memory, but at the lower end: just DDR5-6400 speeds. Support for three external displays will be included, though, maximizing multiple screens for maximum productivity. Intel used the term "all day battery life" without elaboration. [...] Intel Core Series 3 delivers up to 47 percent better single-thread performance, up to 41 percent better multi-thread performance, and up to 2.8x better GPU AI performance, Intel said. Compared against Intel's older Core 7 150U, Intel is saying that the new chip will outperform it by 2.1 times in content creation and deliver 2.7 times the AI performance. [...] We still don't know what Intel will charge for the chip, nor do we know what you'll be able to buy a Core Series 3 laptop for.

Read more of this story at Slashdot.


No country left behind with sovereign AI

Ryan welcomes Stephen Watt, distinguished engineer and VP of Red Hat’s Office of the CTO, to chat about digital sovereignty and sovereign AI.

🏆 Agents League Winner Spotlight – Reasoning Agents Track


Agents League was designed to showcase what agentic AI can look like when developers move beyond single‑prompt interactions and start building systems that plan, reason, verify, and collaborate.

Across three competitive tracks—Creative Apps, Reasoning Agents, and Enterprise Agents—participants had two weeks to design and ship real AI agents using production‑ready Microsoft and GitHub tools, supported by live coding battles, community AMAs, and async builds on GitHub.

Today, we’re excited to spotlight the winning project for the Reasoning Agents track, built on Microsoft Foundry: CertPrep Multi‑Agent System — Personalised Microsoft Exam Preparation by Athiq Ahmed.

 

The Reasoning Agents Challenge Scenario

The goal of the Reasoning Agents track challenge was to design a multi‑agent system capable of effectively assisting students in preparing for Microsoft certification exams. Participants were asked to build an agentic workflow that could understand certification syllabi, generate personalized study plans, assess learner readiness, and continuously adapt based on performance and feedback. The suggested reference architecture modeled a realistic learning journey: starting from free‑form student input, a sequence of specialized reasoning agents collaboratively curated Microsoft Learn resources, produced structured study plans with timelines and milestones, and maintained learner engagement through reminders. Once preparation was complete, the system shifted into an assessment phase to evaluate readiness and either recommend the appropriate Microsoft certification exam or loop back into targeted remediation—emphasizing reasoning, decision‑making, and human‑in‑the‑loop validation at every step.

All details are available here: agentsleague/starter-kits/2-reasoning-agents at main · microsoft/agentsleague.

 

The Winning Project: CertPrep Multi‑Agent System

The CertPrep Multi‑Agent System is an AI solution for personalized Microsoft certification exam preparation, supporting nine certification exam families.

At a high level, the system turns free‑form learner input into a structured certification plan, measurable progress signals, and actionable recommendations—demonstrating exactly the kind of reasoned orchestration this track was designed to surface.

 

 

Inside the Multi‑Agent Architecture

At its core, the system is designed as a multi‑agent pipeline that combines sequential reasoning, parallel execution, and human‑in‑the‑loop gates, with traceability and responsible AI guardrails.

The solution is composed of eight specialized reasoning agents, each focused on a specific stage of the learning journey:

  • LearnerProfilingAgent – Converts free‑text background information into a structured learner profile using Microsoft Foundry SDK (with deterministic fallbacks).
  • StudyPlanAgent – Generates a week‑by‑week study plan using a constrained allocation algorithm to respect the learner’s available time.
  • LearningPathCuratorAgent – Maps exam domains to curated Microsoft Learn resources with trusted URLs and estimated effort.
  • ProgressAgent – Computes a weighted readiness score based on domain coverage, time utilization, and practice performance.
  • AssessmentAgent – Generates and evaluates domain‑proportional mock exams.
  • CertificationRecommendationAgent – Issues a clear “GO / CONDITIONAL GO / NOT YET” decision with remediation steps and next‑cert suggestions.
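As a back-of-the-envelope illustration of the ProgressAgent's idea (not the project's actual code — the 0.4/0.2/0.4 weights and function name are invented here), a weighted readiness score over the three signals might look like:

```python
def readiness_score(domain_coverage: float,
                    time_utilization: float,
                    practice_performance: float,
                    weights: tuple[float, float, float] = (0.4, 0.2, 0.4)) -> float:
    """Weighted readiness in [0, 1]. The 0.4/0.2/0.4 weights are illustrative."""
    signals = (domain_coverage, time_utilization, practice_performance)
    if not all(0.0 <= s <= 1.0 for s in signals):
        raise ValueError("each signal must be a fraction in [0, 1]")
    # Weighted sum of the three normalized signals
    return sum(w * s for w, s in zip(weights, signals))

# 80% domain coverage, 50% of planned time used, 90% practice accuracy:
print(round(readiness_score(0.8, 0.5, 0.9), 2))  # 0.78
```

A downstream agent could then threshold such a score into the "GO / CONDITIONAL GO / NOT YET" decision described above.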

Throughout the pipeline, a 17‑rule Guardrails Pipeline enforces validation checks at every agent boundary, and two explicit human‑in‑the‑loop gates ensure that decisions are made only when sufficient learner confirmation or data is present.

CertPrep leverages Microsoft Foundry Agent Service and related tooling to run this reasoning pipeline reliably and observably:

  • Managed agents via Foundry SDK
  • Structured JSON outputs using GPT‑4o (JSON mode) with conservative temperature settings
  • Guardrails enforced through Azure Content Safety
  • Parallel agent fan‑out using concurrent execution
  • Typed contracts with Pydantic for every agent boundary
  • AI-assisted development with GitHub Copilot, used throughout for code generation, refactoring, and test scaffolding

Notably, the full pipeline is designed to run in under one second in mock mode, enabling reliable demos without live credentials.

 

User Experience: From Onboarding to Exam Readiness

Beyond its backend architecture, CertPrep places strong emphasis on clarity, transparency, and user trust through a well‑structured front‑end experience. The application is built with Streamlit and organized as a 7‑tab interactive interface, guiding learners step‑by‑step through their preparation journey.

From a user’s perspective, the flow looks like this:

  1. Profile & Goals Input
    Learners start by describing their background, experience level, and certification goals in natural language. The system immediately reflects how this input is interpreted by displaying the structured learner profile produced by the profiling agent.
  2. Learning Path & Study Plan Visualization
    Once generated, the study plan is presented using visual aids such as Gantt‑style timelines and domain breakdowns, making it easy to understand weekly milestones, expected effort, and progress over time.
  3. Progress Tracking & Readiness Scoring
    As learners move forward, the UI surfaces an exam‑weighted readiness score, combining domain coverage, study plan adherence, and assessment performance—helping users understand why the system considers them ready (or not yet).
  4. Assessments and Feedback
    Practice assessments are generated dynamically, and results are reported alongside actionable feedback rather than just raw scores.
  5. Transparent Recommendations
    Final recommendations are presented clearly, supported by reasoning traces and visual summaries, reinforcing trust and explainability in the agent’s decision‑making.

The UI also includes an Admin Dashboard and demo‑friendly modes, enabling judges, reviewers, or instructors to inspect reasoning traces, switch between live and mock execution, and demonstrate the system reliably without external dependencies.

 

Why This Project Stood Out

This project embodies the spirit of the Reasoning Agents track in several ways:

  • Clear separation of reasoning roles, instead of prompt‑heavy monoliths
  • Deterministic fallbacks and guardrails, critical for educational and decision‑support systems
  • Observable, debuggable workflows, aligned with Foundry’s production goals
  • Explainable outputs, surfaced directly in the UX

It demonstrates how agentic patterns translate cleanly into maintainable architectures when supported by the right platform abstractions.

 

Try It Yourself

Explore the project, architecture, and demo here:
