
GeekWire Podcast in Fremont: Seahawks, AI, and Seattle’s future

The crowd at Fremont Brewing for a live recording of the GeekWire Podcast. (GeekWire Photo / Curt Milton)

We took the GeekWire Podcast on the road this week, but not very far — recording the show in Seattle’s Fremont neighborhood, the “Center of the Universe,” just a few blocks from our own offices, with a lively crowd, great beer, and plenty to talk about in Seattle tech and beyond.

The special event at Fremont Brewing was presented by the Fremont Chamber of Commerce.

Fresh off the Seahawks’ Super Bowl victory, we debate which tech and business moguls might make candidates to own the Seahawks or Sonics — including unlikely but interesting-to-consider possibilities ranging from Jeff Bezos and Lauren Sanchez to Costco’s Jim Sinegal. (Who wouldn’t want $1.50 hot dogs and sodas at Lumen Field?)

John Cook and Todd Bishop record the GeekWire Podcast at Fremont Brewing on Thursday. (GeekWire Photo / Curt Milton)

Then we dig into the debate over Seattle’s tech future, sparked by angel investor Charles Fitzgerald’s GeekWire column, “A warning to Seattle: Don’t become the next Cleveland,” which led to a response and ultimately a great conversation with Cleveland Mayor Justin Bibb.

Fremont Chamber Executive Director Pete Hanning joins us to talk about the neighborhood’s tech corridor, why Fremont is seeing some of the highest return-to-office rates on the West Coast, and how Fremont balances its quirky identity with serious business.

The Fremont Chamber’s Pete Hanning, left, talks with John Cook and Todd Bishop on the show. (GeekWire Photo / Curt Milton)

In the final segment, test your Seattle tech knowledge with our Fremont-themed tech trivia, plus audience Q&A, in which Todd comes clean about his relationship with Claude.

Subscribe to GeekWire in Apple Podcasts, Spotify, or wherever you listen.

Audio edited by Curt Milton.


Autonomous agents - Technique 3: Provide tools for steps your agent can't easily handle (like Agent Flows)


Continuing the theme of this series on how to build effective AI agents which have some autonomy and operate on the instructions you give, we need to address another aspect of keeping an agent on rails - specifically, when agents take action. That might be updating an external system or database, sending an e-mail or other message, asking for human approval in a step, cross-referencing some organisational data, or any number of other things we may want an agent to do. The challenge is that steps like these often need to be very defined and exact - simply describing what should happen in the agent's instructions is rarely going to work. "Update our CRM with the new lead details" or "Raise an invoice for the order" are vague guidance with nowhere near enough context - even the most capable AI agent backed by the latest LLM will fail on those without help. If the agent could talk, it would conceivably say: "What CRM, where? How do I authenticate? How are leads stored, and how do I ensure the lead is associated with a client and I'm not creating a duplicate?"

In the last post we focused on writing good instructions for agents - but most agents need more than that. They need to call out to tools which are pre-defined and wrap up the complex details of taking actions on specific systems, integrating data, or performing precise steps in a process. Every agent framework has a 'tools' concept, and for Microsoft agents built with Copilot Studio, this is agent flows - ultimately, Power Automate flows triggered from Copilot Studio agents. This post covers how to make your agent more reliable in the actions it performs by calling out to agent flows, including the specific help Microsoft gives you to simplify this.
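To make the idea concrete outside of any particular product, here's a minimal code-first sketch of what a pre-defined tool gives the agent that a vague instruction can't - a precise contract. This is purely illustrative Python: the function, the Lead fields, and the CRM behaviour described in the comments are assumptions for the example, not anything from Copilot Studio.

```python
from dataclasses import dataclass

# A hypothetical, well-defined tool. Every question the agent would otherwise
# have to guess at (which CRM, how to authenticate, how to avoid duplicates)
# is owned by the tool's implementation, not by the agent's instructions.
@dataclass
class Lead:
    client_id: str       # the existing client this lead must be linked to
    contact_name: str
    contact_email: str

def create_crm_lead(lead: Lead) -> str:
    """Create a lead in the CRM and return its ID.

    The implementation would own authentication, the exact endpoint, and
    duplicate detection - the agent only supplies well-named data.
    """
    # e.g. check for an existing lead by contact_email before inserting,
    # then POST to the CRM's API with an authenticated session.
    return "LEAD-0001"  # placeholder result for the sketch
```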

But first, here's a recap of the full series:

Articles in this series

  1. Techniques for autonomous agents in Copilot Studio - intro 
  2. Scenario video - Microsoft architect with proposal generation
  3. Technique 1 - Getting AI-suitable descriptions right - data, tools, agents themselves 
  4. Technique 2 - Define explicit steps in agent instructions when "reasoning the process" isn't appropriate
  5. Technique 3 - Provide tools for steps your agent can’t easily handle [like agent flows] (this article)
  6. Technique 4 - Leveraging Power Platform and Microsoft 365 capabilities in your agents
  7. Technique 5 - Understand cost, capability, and governance implications of agents you create

Agent flows - what are they?

As a concept, agent flows aren't anything too new - this is Power Automate within the Microsoft ecosystem, adapted for the AI and agent world. An agent flow is essentially a Power Automate cloud flow which can only be called from an agent. Here's a quick primer on some of the differences and commonalities:

Agent flows - a primer
  • Agent flows share the same workflow designer as Power Automate, the same set of connectors, and the same approach to key concepts like triggers, actions, child flows, and variables
  • An agent flow must be created in Copilot Studio (not Power Automate) and start with the Run a flow from Copilot trigger and finish with the Respond to Copilot action.
  • Licensing is different - agent flows run under Copilot Studio licensing rather than Power Automate licensing (i.e. they consume Copilot credits)
  • Agent flows can use Premium connectors without extra charge (since usage is covered by the Copilot Studio licensing)
  • Agent flows DO bring special support for calling from agents - in particular, if your flow has a series of input parameters (let's say pieces of an address), the agent can automatically determine which pieces of data it should pass to each. This works surprisingly well if you name your inputs properly - more on this later
  • Agent flows are designed to be shared across agents - they essentially become a toolkit of well-defined actions and sub-processes used by your agents, some of which may be specific to an agent and some shared across many
  • Agent flows give better tracking, analytics, and overall control across the actions they take compared to steps an agent would run just from its instructions - helpful for anything you need full control and monitoring over

This is essentially 'tool calling' in the Copilot Studio agent world.
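If you're used to code-first agent frameworks, an agent flow maps closely onto a plain tool definition. Here's a rough Python analogue (illustrative only - the names and schema below are mine, not a Copilot Studio artifact): the 'Run a flow from Copilot' trigger corresponds to the input parameters, and 'Respond to Copilot' to the returned payload.

```python
# Rough code-first analogue of an agent flow's contract (illustrative only).
# Trigger ("Run a flow from Copilot") -> the named input parameters.
# Action ("Respond to Copilot")       -> the structured value handed back.
log_recommendation_tool = {
    "name": "log_recommendation",
    "description": "Log a technology recommendation to the audit list.",
    "parameters": {                       # well-named inputs matter - the
        "client_name": "string",          # agent matches conversation data
        "requirement_summary": "string",  # to these names (more on this
        "proposed_approach": "string",    # 'slot filling' later)
    },
}

def log_recommendation(client_name: str, requirement_summary: str,
                       proposed_approach: str) -> dict:
    # ...the deterministic steps (connectors, approvals, etc.) run here...
    return {"status": "logged"}  # what 'Respond to Copilot' returns
```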

How agent flows are used in my scenario

Agent flows give us consistent execution across processes and actions - and since we all know about the non-deterministic nature of LLMs by now, it's clear that many agents need this. For the 'technology architect agent' discussed in this series, if you read the last article you might remember we were hitting issues trying to get the agent to do certain things:
  • Issue 1 - the agent was failing to create the Word proposal document as requested - which we said would include the technology recommendation, rationale, licensing uplifts etc. required for this change
  • Issue 2 - the agent was failing to log its output to a SharePoint list as requested - this is to give a simple audit trail of requests and corresponding recommendations
Agent flows are needed to fix this.

I created two flows, one for each sub-process:

Let's look at these one by one. 
Agent flow to create proposal document
The objective here is to:
  • Create a draft customer proposal containing all the details of the technology upgrade that the agent has determined - essentially, accelerate our consultants who would normally create such documents from scratch
  • Ensure the document is on the Advania branded Word template 
So this is essentially "create a templated document and drop information into placeholders". There are a few ways to do this in Microsoft 365, and this agent flow effectively automates my chosen approach - I'm using a capability in SharePoint Premium/Syntex called Content Assembly, and this provides a handy Power Automate action. Here's the start of the flow in designer:

We'll go into this in more detail in the next post - Technique 4 - Leveraging Power Platform and Microsoft 365 capabilities in your agents. The key message is that to build effective low-code agents in the Microsoft world, you need a solid understanding of wider M365 building blocks and how these can be plugged into your agents - otherwise you'll hit limits of agent building and automation. 

To summarise here, the approach I'm using for templated document creation is a Microsoft Syntex/SharePoint Premium capability called Content Assembly. I've already done the setup work for this which involves:
  • Creating a SharePoint list with columns for all the info pieces your document needs
  • Creating a 'modern template' in SharePoint, where you upload your branded document and insert placeholders in the right locations for each piece of data to be dropped in
For full details on this approach, see my article Automate creation of new documents with Syntex Content Assembly.

For my agent, the piece which does the magic is this item - the 'Generate document using Microsoft Syntex' action available in Power Automate and therefore agent flows. Individual blocks of text like the client name, requirement summary, proposed approach etc. are passed into this action for them to be dropped into the document:
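Conceptually, the action is doing classic placeholder substitution - values in, templated document out. As a minimal Python illustration of the idea (the field names are from my scenario, but the template and fill step are stand-ins, not the Syntex API):

```python
# Minimal illustration of templated document assembly (not the Syntex API):
# field values come from a SharePoint list item, placeholders from the template.
template = (
    "Proposal for {ClientName}\n"
    "Requirement: {RequirementSummary}\n"
    "Proposed approach: {ProposedApproach}\n"
)

list_item = {  # the values the agent wrote to the SharePoint list
    "ClientName": "Contoso",
    "RequirementSummary": "Consolidate three intranets into SharePoint",
    "ProposedApproach": "SharePoint Premium with a phased migration",
}

draft = template.format(**list_item)  # placeholder -> value, like the action
print(draft)
```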
 
 
What's happening here is that these pieces of data are retrieved from a SharePoint list item and then passed into this action, and therefore into the document. But that needs something to create the list item in the first place, and that's down to the agent itself - specifically, it's my other agent flow which does that step. Let's look at that now.
Agent flow to create SharePoint list item with agent's output
To show what's happening here, let's start with the actual list that stores this data - here's a partial view of it:

The full list of columns is:

All of those items are dropped in by the agent. This is where we come to the important support that agent flows give in simplifying all this - "proactive slot filling". Because my agent flow has three clear input parameters, I can simply ask the agent to work out what to pass in from their names - and since this is a place where descriptive naming and rich descriptions are absolutely critical for AI and agents, I named them carefully. To do this, in the Tools > Inputs area of my agent I use the "dynamically fill with AI" option for each parameter:

With the 'dynamically fill with AI' approach, the agent itself works out what to pass into each parameter based on its name and the pieces of information already determined from the conversation. There's quite a bit to what's possible here, and Microsoft document it at Implement slot-filling best practices - it's essentially NLU working with some pre-determined entities and the ability to define your own. What this means is you don't need to do the work of parsing out individual pieces of information from either the agent's earlier output or the queries and prompts supplied by the end-user - this is AI running over the user/agent conversation so far and extracting what it thinks the right answers are likely to be. The alternative would be doing this hard work yourself and then passing 'hard-coded' values to your flow parameters. Of course, the dynamic AI approach won't always work perfectly, and it's an area of agent development that needs careful scenario testing with different types of data - and to say it one more time, good naming is critical or the AI has no chance.
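As a mental model (a sketch of the idea only - not how Copilot Studio implements it), slot filling amounts to asking a model to map the conversation so far onto named parameters. The helper below fakes the model call with a canned answer so the sketch runs:

```python
import json

def call_llm(prompt: str) -> str:
    """Stand-in for a real model call; returns a canned answer for the demo."""
    return json.dumps({
        "client_name": "Contoso",
        "requirement_summary": "Consolidate three intranets",
        "proposed_approach": "SharePoint Premium, phased migration",
    })

def fill_slots(conversation: str, parameters: dict[str, str]) -> dict[str, str]:
    # Ask the model to propose a value for each named parameter - which is
    # exactly why descriptive names and rich descriptions matter so much.
    prompt = (
        "From the conversation below, extract a value for each parameter.\n"
        f"Parameters: {json.dumps(parameters)}\n"
        f"Conversation: {conversation}\n"
        "Reply as JSON mapping parameter names to values."
    )
    return json.loads(call_llm(prompt))

params = {
    "client_name": "Name of the client the proposal is for",
    "requirement_summary": "Summary of the client's stated requirement",
    "proposed_approach": "The approach the agent has recommended",
}
print(fill_slots("...the user/agent conversation so far...", params))
```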

So that covers how data gets passed in, and from there it's down to whatever steps you implement in your flow using all the standard Power Automate capabilities. As you can imagine, to create our SharePoint list item which then drives the proposal document creation I simply use the SharePoint connector's Create Item action:
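In flow terms that's a single connector action. For anyone who thinks code-first, the rough Microsoft Graph equivalent of creating that list item looks like the sketch below - the site ID, list ID, and token are placeholders, and this illustrates the shape of the call rather than what the connector literally executes:

```python
import requests

# Code-first sketch of the SharePoint "Create Item" step via Microsoft Graph.
# site_id, list_id, and token are placeholders you'd supply for real.
def create_list_item(token: str, site_id: str, list_id: str,
                     fields: dict) -> dict:
    resp = requests.post(
        f"https://graph.microsoft.com/v1.0/sites/{site_id}/lists/{list_id}/items",
        headers={"Authorization": f"Bearer {token}"},
        json={"fields": fields},  # column internal names -> values
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```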

Thus, we now have our automation chain of:
  1. Architect asks the agent for a proposal on how the client's use case should be addressed
  2. Agent uses its data sources and reasoning to derive an approach that makes sense for this client (based on stated needs, technologies in play or suitable to adopt, licensing etc.)
  3. Agent provides its response
  4. Agent calls tool (agent flow 1) to add a SharePoint list item containing key elements of the response
  5. Agent calls tool (agent flow 2) to create the Word proposal document on the Advania branded template using the SharePoint list item
  6. Agent notifies the user that it's done
We now have a fully working agent doing its advanced reasoning and creating the draft proposal document for our architect to enhance and take forward to the client.
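Expressed as a code-first sketch, the chain is short - the function names below are hypothetical stand-ins for the agent's reasoning and my two agent flows, not real API calls:

```python
# End-to-end sketch of the automation chain. The three helpers are stand-ins
# for the agent's LLM reasoning and the two agent flows - not real API calls.
def reason_about(request: str) -> dict:
    return {"client": "Contoso", "approach": "SharePoint Premium rollout"}

def create_list_item_flow(recommendation: dict) -> dict:   # agent flow 1
    return {"id": 42, **recommendation}                    # audit trail entry

def create_proposal_doc_flow(item_id: int) -> str:         # agent flow 2
    return f"https://contoso.sharepoint.com/proposals/{item_id}.docx"

def handle_request(request: str) -> str:
    recommendation = reason_about(request)
    item = create_list_item_flow(recommendation)
    doc_url = create_proposal_doc_flow(item["id"])
    return f"Draft proposal ready: {doc_url}"

print(handle_request("How should Contoso consolidate its intranets?"))
```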

The result

Now that we've codified exactly where and how to create the proposal document (via the agent flows), we have a document successfully dropped into my chosen SharePoint library:

The draft proposal has all the details of the agent's output and was created on our organisational template:


The next step is to start analysing and enhancing the agent's output - checking the reasoning, architectural and licensing guidance, and turning this into a polished client-ready proposal. But the heavy lifting of contemplating the requirement, embarking upon the research, considering different options, ensuring each granular requirement specified by the client is met, deriving any licensing considerations and uplifts, then structuring a draft proposal - all this is done.

A word on Express Mode for agent flows

A final thing to understand about agent flows is express mode. In Microsoft's framework, agent flows fail if they take longer than two minutes to execute - express mode is a way of opting in to a model which gives faster execution times with some limitations, and it's for agent flows only rather than extending to Power Automate flows too. There are no additional costs or licensing implications, but the limitations need to be understood - more on this at https://learn.microsoft.com/en-us/microsoft-copilot-studio/agent-flow-express-mode
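If you want a feel for what a hard execution budget means for tool design, here's a tiny Python analogy - purely illustrative, and not how Copilot Studio enforces its limit:

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError

# Enforce a hard time budget on a step - an analogy for the agent flow
# execution limit, not Copilot Studio's actual mechanism.
def run_with_budget(fn, seconds: float):
    with ThreadPoolExecutor(max_workers=1) as pool:
        try:
            return pool.submit(fn).result(timeout=seconds)
        except TimeoutError:
            return {"status": "failed", "reason": f"exceeded {seconds}s budget"}

print(run_with_budget(lambda: time.sleep(2) or "done", seconds=0.5))
```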

Summary

In this article, we focused on one of the most important ingredients for building dependable agents - giving them the right tools to perform precise, repeatable actions. In the Copilot Studio world, this is agent flows. While agents excel at reasoning and orchestrating conversations, they simply can’t execute structured operations (like updating systems, creating documents, or logging data) reliably without clearly defined, deterministic steps. I don't see this changing too much even as models and agent frameworks evolve over the next few years. 

We explored how agent flows act as the “hands” of your agent, wrapping complex processes into reliable Power Automate cloud flows. You get predictable execution, premium connector access, consistent handling of structured data, and better monitoring and governance. Using the architect proposal scenario, we walked through how two agent flows - one to log outputs into SharePoint and another to generate a branded proposal document - take the agent from being a conversational assistant to providing real automation of the process. We also looked at how dynamic slot filling removes the need for brittle manual parsing, allowing the agent to intelligently map conversation data into flow inputs.

The result is an agent that not only reasons about a problem but also creates the tangible output - in this case, a ready‑to‑review customer proposal created from our branded, SharePoint‑based Advania template.

One aspect we didn’t dive into here is billing and capacity consumption, which becomes increasingly important as your agent ecosystem grows. That topic deserves its own space, and we’ll cover it in detail in the final article in this series.

Next article (coming soon)

Technique 4 - Leveraging Power Platform and Microsoft 365 capabilities in your agents

2 Weeks of Claude Code for Me


I’m busy all the time with the Critter Stack tools, answering questions on Slack or Discord, and trying like hell to make JasperFx Software go. I’ve admittedly had my head in the sand a bit about the AI tools for coding, thinking that what I do is relatively novel for the most part and that I wasn’t missing out on anything yet because the AI stuff was probably mostly trained on and useful for repetitive feature work.

The unfortunate analogy I have to make for myself is harking back to my first job as a piping engineer helping design big petrochemical plants. I got to work straight out of college with a fantastic team of senior engineers who were happy to teach me and to bring me along instead of just being dead weight for them. This just happened to be right at the time the larger company was transitioning from old fashioned paper blueprint drafting to 3D CAD models for the piping systems. Our team got a single high powered computer with a then revolutionary Riva 128 (with a gigantic 8 whole megabytes of memory!) video card that was powerful enough to let you zoom around the 3D models of the piping systems we were designing. Within a couple weeks I was much faster doing some kinds of common work than my older peers just because I knew how to use the new workstation tools to zip around the model of our piping systems. It occurred to me a couple weeks ago that in regards to AI I was probably on the wrong side of that earlier experience with 3D CAD models and knew it was time to take the plunge and get up to speed.

Anyway, enough of that. I spent a week thinking about what I’d try to do first with AI coding agents and spent some time watching some YouTube videos on writing prompts. I signed up for a Claude Max subscription at the beginning of last week to just go jump into the deep end. My tally so far in two weeks for progress is:

  • Added MySql and Oracle database engine support to Weasel and Wolverine up to and including the ability for the Critter Stack to manage database migrations on the fly like we already did for PostgreSQL and SQL Server. Granted it took a couple attempts at the Oracle support, but it just doesn’t hurt to throw away code that didn’t cost you much to write. Babu added Sqlite support as well.
  • Filled in a gap in our SQL Server support for a queue per tenant database that had been outstanding for quite a while
  • I had Claude fix some holes in our compliance test suite for our RavenDb support that I’d been neglecting for a while
  • Still in progress, but I have the beginning of a “Marten style migrations for EF Core” subsystem going that’s going to make the Wolverine testing for our EF Core integration go a lot smoother when the kinks are worked out as well as potentially making EF Core less aggravating to use for just about anyone
  • I’m almost done with a potentially big performance optimization for Marten projections that I’d wanted to do for 6 months, but never had anywhere near enough time to research fully. In the end that took 30 minutes of my time and a couple hours of chugging. Just to make this point really hard here: it helps tremendously to have a large base of tests
  • I improved quite a few “blinking” tests in the Wolverine codebase. Not perfect, but way better than before
  • I pushed quite a few improvements to the Wolverine CI infrastructure. That’s a work in progress, but hey, it is progress
  • I got a previously problematic test suite in Marten running in CI for the first time
  • Marten’s open issue count (bugs and enhancements) is at 16 as I write this, and that’s the least that number has been since I filled out the initial story list in GitHub in late 2015.
  • Wolverine’s open issue count is coincidentally down to 16. That number has hovered between 50-70 for the past several years. I was able to address a handful of LINQ related bugs that have been hanging around for years because the effort to reward ratios seemed all wrong
  • I filled in some significant gaps in documentation in Wolverine that I’d been putting off for ages. I certainly went in after the fact and made edits, but we’re in better shape now. But of course, I’ve already got a tiny bit of feedback about something in that being wrong that I should have caught.
  • I had Claude look for savings in object allocations in both Marten and Wolverine, and got plenty of little micro-optimizations – mostly around convenient usages of LINQ instead of slightly uglier C# usage. I’m not the very best guy in the world around low level things, so that’s been nice.
  • I converted a couple of our solutions to centralized package management. That’s something I’ve kind of wanted to do for a while, but who has time to mess with something like that in a big solution?

And really to make this sound a bit more impressive, this was with me doing 8 hours of workshops for a client and probably about 10-12 other meetings with clients during these two weeks so it’s not as if I had unbroken blocks of time in which to crank away. I also don’t have a terribly good handle on “Vibe Programming” and I’m not sure at all what a “Ralph Loop” is, so all of that very real progress was without me being completely up to speed on how to best incorporate AI tools.

Moreover, it’s already changed my perspective on the Critter Stack roadmap for this year because some things I’ve long wanted to do that sounded like too much work and too much risk now seem actually quite feasible based on the past couple weeks.

With all of that said, here are my general takeaways:

  • I think Steve Yegge’s AI Vampire post is worth some thought — and I also just thought it was cool that Steve Yegge is still around because he has to be older than me. I think the usage of AI is a little exhausting sometimes just because it encourages you to do a lot of context shifting as you get long running AI agent work going on different codebases and different features.
  • I already resent the feeling that I’m wasting time if I don’t have an agent loaded and churning
  • It’s been great when you have very detailed compliance test frameworks that the AI tools can use to verify the completion of the work
  • It’s also been great for tasks that have relatively straightforward acceptance criteria, but will involve a great deal of repetitive keystrokes to complete
  • I’ve been completely shocked at how well Claude Opus has been able to pick up on some of the internal patterns within Marten and Wolverine and utilize them correctly in new features
  • The Critter Stack community by and large does a great job of writing up reproduction steps and even reproduction GitHub repositories in bug reports. In many cases I’ve been able to say “suggest an approach to fix [link to github issue]” and been able to approve Claude’s suggestion.
  • I’m still behind the learning curve, but a few times now I’ve gotten Claude to work interactively to explore approaches to new features and get to a point where I could just turn it loose
  • Yeah, there’s no turning back unless the economic model falls apart
  • I’m absolutely conflicted about tools like Claude clearly using *my* work and *my* writings in the public to build solutions that rival Marten and Wolverine and there’s already some cases of that happening
  • The Tailwind thing upset me pretty badly, truth be told

Anyway, I’m horrified, elated, excited, and worried all at once about the AI coding agents after just two weeks, and I’m absolutely concerned about how that plays out in our industry, my own career, and our society.




How to Implement Abstract Factory Pattern in C#: Step-by-Step Guide


Learn how to implement the Abstract Factory pattern in C# with a complete step-by-step guide. Includes code examples, best practices, and common pitfalls to avoid.


Shipping Features Faster with Copilot CLI



As developers, we’re always looking for ways to ship faster without sacrificing quality. The constant pressure to deliver new features while maintaining clean, maintainable code can feel like an impossible balance. But what if I told you that the latest wave of AI tooling is actually living up to the hype?

Enter GitHub Copilot CLI, a game-changer for how I approach building applications on Azure. This isn’t just autocomplete on steroids. It’s a genuine productivity multiplier that’s helping teams ship features at a pace we couldn’t have imagined just a year ago, maybe even a few months ago.

What Makes Copilot CLI Different

You’re probably familiar with GitHub Copilot in your IDE, suggesting code as you type. Copilot CLI takes a different approach. It lives in your terminal, where you spend a huge chunk of your day running commands, deploying code, and managing infrastructure.

The beauty of Copilot CLI is its understanding of context. It knows you’re working in Azure. It understands your project structure. It can suggest complete command sequences that would normally require you to dig through documentation.

Real-World Impact on Development Speed

Let me give you a concrete example. Last week, I needed to set up a new Azure Function with blob storage triggers, configure monitoring, and deploy it to a staging environment. Traditionally, this would involve:

  • Looking up the Azure CLI syntax for creating function apps
  • Remembering the right flags for runtime and region
  • Setting up storage account connections
  • Configuring Application Insights
  • Writing deployment scripts
  • Testing everything locally first

With Copilot CLI, I described what I needed in plain English. It generated the exact command sequence, including error handling and best practices I might have overlooked. What would have taken an hour or two took maybe 15 minutes.

Natural Language to Complex Commands

The real power comes from turning intent into action. Instead of memorizing complex Azure CLI syntax, you can ask questions like:

“Create a container app with autoscaling and connect it to my existing PostgreSQL database”

“Deploy this app to three regions with traffic manager for load balancing”

“Set up monitoring alerts for when my function execution time exceeds 5 seconds”

Copilot CLI translates these requests into proper Azure CLI commands, complete with the right parameters and flags. It’s like having a senior Azure architect sitting next to you.

Faster Iteration Cycles

Where this really shines is during rapid prototyping and feature development. When you’re exploring a new Azure service or trying to implement a feature quickly, the feedback loop is everything.

Instead of context-switching between your editor, browser, and documentation, you stay in the flow. Need to check your current resource groups? Ask. Want to deploy a quick test? Describe it. Need to roll back a change? Just say so.

This compressed feedback loop means you can iterate on features multiple times in the span it would have previously taken to do it once.

Beyond Simple Commands

Copilot CLI isn’t just about generating single commands. It helps with entire workflows. Setting up CI/CD pipelines, configuring networking rules, managing secrets and certificates – these multi-step processes become conversations rather than chores.

I’ve seen team members who were less familiar with Azure infrastructure become significantly more productive. The learning curve flattens because they’re learning by doing, with AI assistance that explains what each command does and why.

Integration with Your Development Workflow

What I appreciate most is how Copilot CLI fits into existing workflows without forcing you to change how you work. It enhances your terminal experience rather than replacing it. You can review suggested commands before running them, modify them as needed, and build up your own understanding over time.
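As a trivial illustration of that review step - a hypothetical helper, not part of Copilot CLI itself - the pattern is simply: show the generated command, let a human approve it, then run it:

```python
import subprocess

# Review-before-run: never execute a generated command without a human check.
# (Hypothetical helper for illustration; not part of Copilot CLI.)
def review_and_run(suggested_command: str) -> None:
    print(f"Suggested: {suggested_command}")
    if input("Run this? [y/N] ").strip().lower() == "y":
        subprocess.run(suggested_command, shell=True, check=True)
    else:
        print("Skipped.")

review_and_run("az group list --output table")  # an example Azure CLI command
```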

For Azure-specific development, this means you can focus on solving business problems rather than fighting with infrastructure syntax. Your cognitive load drops dramatically when you’re not constantly switching between writing application code and remembering the exact flags for Azure CLI commands.

The Compound Effect

Here’s what I’ve noticed after a few days of using Copilot CLI daily: the time savings compound. Every command you don’t have to look up, every deployment script you don’t have to debug, every configuration you get right the first time – it all adds up.

Features that used to take a sprint now take days. Proof of concepts that took days now take hours. The velocity increase isn’t linear, it’s exponential.

Getting Started

If you’re building on Azure and haven’t tried Copilot CLI yet, I’d strongly encourage giving it a shot. The setup is straightforward, and the productivity gains start immediately. You don’t need to be an AI expert or change your entire workflow.

Start with simple commands and gradually build up to more complex operations. Let it suggest, review what it generates, and learn from the patterns. Before long, you’ll wonder how you ever managed without it.

The Future of Development

This feels like a glimpse into where software development is heading. We’re moving from memorizing syntax to expressing intent. From fighting tools to collaborating with them. From spending time on mechanical tasks to focusing on creative problem-solving.

For those of us building on Azure, Copilot CLI represents a significant step forward in how quickly we can move from idea to deployed feature. And in today’s competitive landscape, that speed matters more than ever.

The tools are here. The technology works. The only question is: how much faster could your team ship if you started using them today?

Ready to get started? Head to https://github.com/features/copilot/cli


The post Shipping Features Faster with Copilot CLI appeared first on Azure Greg.


Former Bing boss says Windows 11 killed the vertical taskbar for symmetric UX, says it was the best productivity feature


Shopify CTO Mikhail Parakhin, who previously served as Bing search boss and also took on broader responsibilities as the head of a new Windows team, says he fought hard against Microsoft’s decision to remove the movable taskbar in Windows 11, and that they dropped it to focus on a new “symmetric panes” UX.

Windows 11 is not exactly a bad operating system. There are things I like about Windows 11, and then there are things I straight-up hate, and that is largely true for all products Microsoft makes lately. One of the most upvoted feedback items in the Feedback Hub is the ability to move the taskbar, and the second is a toggle to resize it, similar to Windows 10.

Microsoft’s former Windows boss, who also advocated for Bing pop-ups in Windows 11 and Edge, considers the movable taskbar - particularly the vertical taskbar with the disappearing feature turned on - to be the best UX for productivity.

You can automatically hide the taskbar from Settings > Personalization > Taskbar

Mikhail also says macOS copied the idea of a “disappearing” taskbar from Windows, as Microsoft’s operating system had it since 1995.

“Yes, obviously, vertical and disappearing: Windows had it since 95, that’s how I use it my whole life. Mac copied it from Windows when it acquired Dock in macOS,” Microsoft’s former Windows boss responded when a user told him that they prefer the “macOS option of having it disappear.”

But why was the movable taskbar removed from Windows 11?


Microsoft dropped the vertical taskbar because it wanted to focus on the “centered-Start menu” and create a symmetric pane UX where Windows is meant to feel balanced and predictable, almost like two “side panels.”

That means Microsoft not only wanted to create a centered Start menu UX, but also give each side of the screen a clear “job.”


On the right side of the screen, we have the “controls” area, such as your quick settings to turn on or off features like Wi-Fi and Bluetooth, and also manage your notifications.


There are also plans to add back Outlook Agendas to the Notification Center in Windows 11. We are not getting Agendas on the left side because the right side is meant to be the “control” and “notifications” region, while the left side is all about information.


On the left side, we have widgets like weather and MSN, which “pushed” the Start menu to the center, according to the former Windows/Bing boss.


Microsoft’s designers envisioned a clear UI and UX hierarchy where Windows 11 puts all your system controls on the right instead of flyouts or pop-ups coming out of every area, and shows information on the left side.

Feature rich Start menu setup in Windows 11 after the new Start menu update

This is also why the vertical or movable taskbar did not make the cut, as a taskbar on the left would compete with the widgets panel, and a taskbar on the right would run into the notifications area.

“The vision was to create symmetric panes: you have notification/system controls/etc. pane on the right, Weather/Widgets/News pane on the left. That pushed Start menu into the center position. If you have the taskbar vertically, it starts conflicting with the panes…,” the former Windows and Bing boss argues in a post on X.

Mikhail’s statement aligns with what we heard from Microsoft’s designers in 2021. As Windows Latest previously exclusively reported, Windows 11 designers were against the idea of a movable taskbar because it breaks the “flow” and causes a “sudden reflow…”

“When you think about having the taskbar on the right or the left, all of a sudden the reflow and the work that all of the apps have to do to be able to have a wonderful experience in those environments is just huge,” a Microsoft designer who worked on the new UI/UX argued back then.

The good news is that Microsoft is internally planning to bring back the “movable” taskbar to Windows 11, and you will be able to resize it as well, similar to how you could in Windows 10 and all older versions of Windows for decades.

Microsoft also plans to reduce Copilot integration in Windows and focus on performance optimization, as it hopes to win back users in 2026.

The post Former Bing boss says Windows 11 killed the vertical taskbar for symmetric UX, says it was the best productivity feature appeared first on Windows Latest
