I confess, I’m a bit obsessed with bottlenecks. Lately, I’ve been thinking a lot about the kind that slow down your flow of change. I’ve gathered a few links below to solutions for common ones. The theme today is decisions. Maybe one of them will help with whatever bottleneck you’re facing right now?
Decision speed
Suzi Edwards-Alexander pointed me to this interesting article by Dave Girouard: “Speed as a Habit”. It focuses on speed for decisions, which is often what slows everything down. He also addresses the common belief that speed is the enemy of quality. I know you can have both. In fact, high quality can help speed, a lot!
Decentralising decisions
Andrew Harmel-Law provides an appealing solution for speeding up architecture decisions whilst keeping them sound: decentralise them “correctly”. (Read it here.) We’ve used architecture decision records for a long time, but this approach makes them even more powerful.
Answering certain hard questions quickly and well
Simon Wardley and Tudor Girba are working on a book called Moldable Development. I’ve only read the first four chapters, but they’re very inspiring. A theme is that by creating tiny, context-aware tools for answering questions about a system, speed and quality of decisions might increase dramatically. Again, they can go hand in hand...
From decision to action
An old favourite book of mine, “The Art of Action” by Stephen Bungay, always feels important here. Maybe the overall decision about direction was taken quickly, but how and when is it translated into action? And what action? Taking the decision is often the easy part. “Yes, there’s a strategy, but I just don’t know what I’m supposed to do” might sound awkwardly familiar. It’s about how to correctly decentralise decisions – and action – to where the information is.
Which are your best suggestions?
This article was originally published on LinkedIn. Join the discussion and read more here.
Gabriel Chua (Developer Experience Engineer for APAC at OpenAI) provides his take on the confusing terminology behind the term "Codex", which can refer to a bunch of different things within the OpenAI ecosystem:
In plain terms, Codex is OpenAI’s software engineering agent, available through multiple interfaces, and an agent is a model plus instructions and tools, wrapped in a runtime that can execute tasks on your behalf. [...]
At a high level, I see Codex as three parts working together:
Codex = Model + Harness + Surfaces [...]
Model + Harness = the Agent
Surfaces = how you interact with the Agent
He defines the harness as "the collection of instructions and tools", which is notably open source and lives in the openai/codex repository.
Gabriel also provides the first acknowledgment I've seen from an OpenAI insider that the Codex model family are directly trained for the Codex harness:
Codex models are trained in the presence of the harness. Tool use, execution loops, compaction, and iterative verification aren’t bolted on behaviors — they’re part of how the model learns to operate. The harness, in turn, is shaped around how the model plans, invokes tools, and recovers from failure.
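The "model + harness = agent" idea above can be sketched as a toy loop. Everything here is illustrative stand-in code (the `call_model` and `run_shell` functions are hypothetical placeholders), not the actual Codex harness:

```python
# Toy agent loop: model + harness (instructions + tools) = agent.
# Everything here is illustrative stand-in code, not the real Codex harness.

INSTRUCTIONS = "You are a software engineering agent. Use tools to finish tasks."

def run_shell(cmd: str) -> str:
    """Stand-in 'tool': pretend to execute a shell command."""
    return f"(pretend output of: {cmd})"

TOOLS = {"run_shell": run_shell}

def call_model(instructions: str, history: list) -> dict:
    """Stand-in for the model; a real harness would call an LLM here."""
    if not any(role == "tool" for role, _ in history):
        return {"tool": "run_shell", "args": "pytest"}  # decide to act
    return {"done": True, "answer": "Tests pass."}      # then wrap up

def agent(task: str) -> str:
    history = [("user", task)]
    while True:
        step = call_model(INSTRUCTIONS, history)
        if step.get("done"):
            return step["answer"]
        result = TOOLS[step["tool"]](step["args"])  # execute the tool call
        history.append(("tool", result))            # feed the result back

print(agent("Run the test suite"))  # → Tests pass.
```

The point of the sketch is the shape, not the contents: the harness owns the loop, the tool registry, and the conversation history, while the model only decides what to do next, which is why training the model "in the presence of the harness" matters.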
Did you like my last newsletter issue, about what makes a small game? Honestly, it's hard to get feedback on this newsletter, but I received three positive comments, so I'm hoping people enjoyed that format. Look out for another non-news issue in the middle of this coming week!
Oh, and yesterday I was at the Spotlight Game Awards! I got to give out the award for Reveal of the Year. Really nice event, congrats to all the winners!
Here is a really cool website that I found recently (thanks to BiteMe Games): it's called Niklas Notes, and it's a great tool for game research and general sentiment analysis.
This is an actual positive use case for AI: it goes through Steam pages and analyses players' reviews to get a sentiment overview. Quickly see how players love the Combat Mechanics and Soundtrack in Mewgenics, how players love the Artistic style of Cairn but didn't quite like the Character Writing and Story, or how the game Half Sword is a mega hit but with very rough reviews, mainly about Performance Issues and the Combat Physics.
You can use this tool to analyze games similar to the one you're currently working on to see what players like and what they don't. Or use it to research new game ideas: find something that is successful but has pain points that you think you could solve, like Half Sword but with good Performance and good Combat Physics.
It also includes an automatic weekly email covering the latest hits and flops with an analysis on WHY they are hits or flops.
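To give a feel for what this kind of aspect-level sentiment analysis does under the hood, here is a deliberately tiny sketch in Python. The word lists and matching are made up for illustration; this is not how Niklas Notes actually works:

```python
# Toy aspect-sentiment tally over player reviews: an illustrative sketch of
# the kind of analysis such a tool performs, not Niklas Notes itself.
import re
from collections import defaultdict

ASPECTS = {"combat", "soundtrack", "performance", "story"}
POSITIVE = {"love", "great", "amazing", "fun"}
NEGATIVE = {"hate", "rough", "buggy", "boring"}

def aspect_sentiment(reviews):
    """For each aspect mentioned in a review, count that review's
    positive and negative words toward the aspect's tally."""
    tally = defaultdict(lambda: {"pos": 0, "neg": 0})
    for review in reviews:
        words = re.findall(r"[a-z']+", review.lower())
        pos = sum(w in POSITIVE for w in words)
        neg = sum(w in NEGATIVE for w in words)
        for aspect in ASPECTS & set(words):
            tally[aspect]["pos"] += pos
            tally[aspect]["neg"] += neg
    return dict(tally)

reviews = [
    "I love the combat, great fun",
    "Performance is rough and buggy",
    "Amazing soundtrack, boring story",
]
print(aspect_sentiment(reviews))
```

A real tool would use a proper language model rather than keyword lists, but the output shape is the same: per-aspect positive/negative scores that you can compare across games.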
I have previously mentioned how Idea Selection and Idea Validation are super crucial nowadays, and this is an excellent tool for doing exactly that. Use it to pick the right idea so you work on the right game to find success!
Affiliate
FREE VFX, Unity Tools 97% OFF!
Finally we have a new Unity Tools HumbleBundle! It’s been a while since we had one of these, and this one is great!
It includes Megafiers which is a great tool for manipulating meshes in weird unique ways. It also includes a 3D Book tool, a Sails tool, Wire tool, Plasma and more!
Get it HERE and use coupon ASSETMAGEW at checkout to get it for FREE!
Yet another awesome HumbleBundle is currently ongoing, this one all about Low Poly visuals. It’s by Animpic, who makes a lot of awesome stuff: environments, props, and characters on all sorts of themes. The bundle contains a ton of stuff and you can get it all for just 15 bucks.
Apple is one of the biggest companies in the world, the iPhone is everywhere, and it's rare for them to put out a completely new device. Two years ago we had the Apple Vision Pro (which didn't do very well), and now some leaks suggest three new wearable devices (with AI, obviously) are incoming.
These will supposedly be Smart Glasses, a Pendant worn as a necklace or as a pin, and AirPods with a camera, all built around the Siri Digital Assistant, which will use visual context to carry out actions. Importantly, all these devices will be linked to an iPhone, which means they should be very light and small, since processing will not happen directly on-device.
I am definitely curious to see these, will the glasses be better than Meta's? Will the pin be a flop like the Humane AI Pin? Will AirPods with cameras be useful in any way?
Supposedly they were also working on a cheaper version of the Apple Vision Pro headset, but that was apparently cancelled in favor of these devices.
Will these help push AR/VR forward, or be another flop? We shall see. But remember, this is a leak, not an official announcement, so Apple hasn't officially said any of these devices exist or will ever be released.
I love new tech and new devices, even when they flop. I'd still like to try out an Apple Vision Pro someday, but I'm definitely not paying $3500 for it. Hopefully these new devices will have a much more digestible price point.
One of the biggest indie developers in the world is Edmund McMillen. He started making Flash games, moved on to making Steam games in 2010, and is one of the devs behind Super Meat Boy, The Binding of Isaac, and The End is Nigh.
And a long time ago, in 2012, he announced his next game, Mewgenics (alongside Tyler Glaiel), but then went radio silent. More than a decade passed since the announcement, so people assumed the game was cancelled. Then it suddenly came back from the dead and launched on February 10, 2026, to massive critical acclaim and financial success!
In less than one month it has already sold over one million copies(!) and currently sits at 92% Very Positive reviews!
The game is quite strange, in a good way. It's a tactical roguelike game where you breed cats to make unique combinations, and then send them out into battles that demand careful positioning and wild combos to succeed.
The breeding system is very complex: cats return from battle with their scars and experience, which they then pass on to their offspring. The longer the game goes on, the stranger the cats become.
I am always happy when a game finally comes out after being stuck in development hell. I imagine this game went through a ton of redesigns and reimaginings to reach the final result, which players love! This is a reminder NOT to skip the prototyping stage: you need a lot of trial and error to find out what works.
Well, it's in the can! #FPGA end-to-end compilation with bit width inference confirmed with #cleflang, including AMD/Xilinx bootstrapping the flash process in the back end #fsharp #ocaml #haskell #rust #dotnet
I need to double (and triple) check this, but it seems like Giraffe on Beam is faster than .NET. Is it too good to be true? At least the Python perf looks to be true #fsharp #fablecompiler
There are quite a few interesting announcements and updates this week. Here are the highlights:

Custom Agents in Visual Studio: Built in and Build-Your-Own agents – Visual Studio now ships preset AI agents for debugging, profiling, testing, and modernisation, alongside a preview framework for building custom agents via .agent.md files with MCP support for connecting to external knowledge sources.

Shipping Features Faster with Copilot CLI – An overview of how GitHub Copilot CLI accelerates Azure development by translating natural language into Azure CLI commands, compressing feedback loops, and dramatically reducing context-switching between documentation and terminal.

How I Used GitHub Copilot CLI to Build an Azure Governance Web App: From Zero to Maturity Score in One Weekend – A detailed walkthrough of building a comprehensive Azure governance and FinOps web application in a single weekend using GitHub Copilot CLI with the Azure MCP Server, covering budgets, cost management, tagging compliance, security posture, and improving the overall maturity score.
In Data & Analytics:

Microsoft ODBC Driver for Microsoft Fabric Data Engineering (Preview) – A new enterprise-grade ODBC 3.x driver that enables .NET, Python, and other ODBC-compatible applications to connect to and query Spark workloads in Microsoft Fabric via Livy APIs, with deep lakehouse integration and Entra ID authentication.

Official support for Microsoft fabric-cicd tool – The open-source fabric-cicd Python library for CI/CD automation across Microsoft Fabric workspaces is now officially supported and Microsoft-backed, addressing cross-workspace deployment challenges including dependency management and parameterisation.

Share bundled Fabric Items in Fabric Org Apps – A walkthrough of using the new Fabric Org Apps feature (public preview) to bundle and share multiple Fabric items such as Real-Time Dashboards, Maps, and Notebooks as customisable, read-only packages for consumers within an organisation.
Finally:

Unified tooling in the AKS MCP server (Public Preview) – The AKS MCP server now defaults to unified tools (call_az and call_kubectl) replacing legacy specialised tools, with a new --enabled-components flag for granular control over which components are active.

Azure Quick Review (azqr) 3.0: Breaking Changes and Migration Guide – Version 3.0 of the Azure Quick Review CLI tool introduces breaking changes, replacing individual boolean flags with a unified --stages flag and changing cost analysis, Defender recommendations, and Arc scans to disabled by default.

Azure Bicep Snapshots – Test and Validate Your Code and Deployments – A new preview feature in Bicep v0.36.1 that lets you locally generate and validate ARM resource definitions from parameter files without deploying to Azure, producing what-if style diff output to catch issues before deployment.
If you're interested in all things Microsoft Fabric - don't forget to sign up for our new newsletter - Fabric Weekly - which we'll start publishing in the next month or so. We'll be moving all Fabric content over from Azure Weekly to Fabric Weekly, just as we did with Power BI Weekly 7 years ago.
Microsoft appears to be experimenting with yet another way to weave AI deeper into everyday Windows workflows. A new option called “Share any window from my taskbar with virtual assistant” has started showing up in Taskbar settings, allowing supported apps such as Copilot and Microsoft 365 Copilot to access whatever window you choose directly from the taskbar interface.
The idea is to skip the process of manually screen sharing to an AI assistant and just let Windows hand off a live app window straight to Copilot or Microsoft 365 Copilot, with your permission, of course. It all aligns with Microsoft’s plan to make the taskbar into a dynamic hub for AI.
Invoking agent from Ask Copilot in Taskbar. Credit: Microsoft
The company is already toying with Ask Copilot, an AI-powered potential replacement for the traditional Windows Search on the taskbar, featuring single-button access to Copilot Voice and Copilot Vision; the latter lets you share your screen with Copilot and have it do tasks for you.
The “Share any window from taskbar with virtual assistant” toggle is also part of Microsoft’s broader plan to make Windows into an Agentic OS, because from the looks of it, the space isn’t reserved just for Copilot and may include more AI agents in the future.
Windows 11 is preparing to let you share app windows directly with AI assistants
Source: Phantomofearth via X
The screenshot by Windows enthusiast @phantomofearth shows a new Taskbar setting called “Share any window from my taskbar with virtual assistant.” The toggle appears to be the settings-side control for a feature Microsoft has already been testing under the name “Share with Copilot.”
That implementation added the option to share a specific app window directly to Copilot when hovering over that app’s thumbnail preview on the taskbar. It allowed Copilot to analyze what’s on screen and provide contextual help. Windows Latest tested Share with Copilot on apps like Outlook, Cloudflare WARP, and more.
Once a window is shared, Copilot can read visible content, summarize information, suggest replies, or guide you through actions by highlighting UI elements with its own cursor. It is designed as a read-only, assistive layer. The AI sees what you see, but it does not take control of the app or interact with protected content.
The newly spotted toggle suggests Microsoft is formalizing that capability into a system-level permission model. Users may now be able to choose which “virtual assistant” apps are allowed to request access to open windows. The list already includes Copilot and Microsoft 365 Copilot, and we believe that third-party AI agents may make their way here.
This Windows shell-level sharing infrastructure was originally meant for communication apps. Microsoft appears to be extending that plumbing so AI agents can register as sharing targets, letting Windows pass along a selected WindowId from the taskbar itself.
Until now, Microsoft had only approved the Copilot app, but Microsoft 365 Copilot is now also whitelisted. For any third-party AI agents to make the cut, the company has to approve those developers, and once it does, you’ll start seeing other AI agents in the list.
The screenshot shows that you’ll be able to turn on or off individual AI Agents from this list.
Sharing windows with virtual assistants is entirely optional: a simple toggle can turn the feature off completely, and it is off by default.
To enable the feature, go to Settings > Personalization > Taskbar, and check under Taskbar behaviors:
Source: Phantomofearth via X
You can prioritize AI Agents while sharing an app window
Interestingly, you can decide which AI agents get higher priority when sharing an app window. The screenshot shows six dots to the left of each AI agent, which can be long-pressed and dragged to move it up or down. However, we are not sure how to choose a particular AI agent when selecting the option to share a window with a virtual assistant.
Source: Phantomofearth on X
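Putting the pieces together (master toggle off by default, per-agent toggles, and a drag-to-reorder priority list), the behavior described above can be modeled in a few lines. This is a purely hypothetical sketch in Python, with all names invented; it is not Windows code or any real API:

```python
# Toy model of a window-sharing permission list with per-agent toggles
# and drag-to-reorder priority. Purely illustrative; not Windows code.

class AssistantRegistry:
    def __init__(self):
        self.master_enabled = False   # the feature is off by default
        self.agents = []              # ordered list: index 0 = top priority

    def register(self, name: str, enabled: bool = True):
        self.agents.append({"name": name, "enabled": enabled})

    def move(self, name: str, new_index: int):
        """Simulate dragging an agent up or down the priority list."""
        agent = next(a for a in self.agents if a["name"] == name)
        self.agents.remove(agent)
        self.agents.insert(new_index, agent)

    def can_share_with(self, name: str) -> bool:
        """Sharing requires both the master toggle and the agent's toggle."""
        if not self.master_enabled:
            return False
        return any(a["name"] == name and a["enabled"] for a in self.agents)

reg = AssistantRegistry()
reg.register("Copilot")
reg.register("Microsoft 365 Copilot")
print(reg.can_share_with("Copilot"))  # False: master toggle is still off
reg.master_enabled = True
print(reg.can_share_with("Copilot"))  # True
reg.move("Microsoft 365 Copilot", 0)  # drag it to the top of the list
```

The interesting design point is the two-level opt-in: even an approved, individually enabled agent gets nothing unless the user has also flipped the system-wide toggle.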
Windows 11 is redefining the taskbar for the AI era
For decades, the taskbar was little more than a launcher. It held the Start button, showed running apps, and stayed mostly unchanged from Windows 7 through Windows 10.
Windows 11 initially drew criticism for removing long-standing behaviors, including the ability to move the taskbar to different edges of the screen. That decision frustrated power users who had built workflows around a more flexible layout.
Now Microsoft appears to be reversing course while simultaneously expanding what the taskbar can actually do. The company is already working on bringing back the ability to move the taskbar and even resize it, features that are reportedly under active development for upcoming Windows 11 updates.
Visual representation of taskbar at the top
Microsoft has also been layering in new functionality, such as updated battery indicators. Recent preview features include built-in network speed monitoring from the taskbar, which could be a very nifty feature. Microsoft increasingly wants everyday information and actions to surface directly from this strip of UI.
The addition of AI entry points follows that same course. Windows 11 is embedding them into places users already interact with constantly. We have recently seen how AI agents run on the taskbar via Ask Copilot.
Note that Microsoft is not replacing the taskbar with something unrecognizable. It is layering new capabilities on top of a familiar foundation, testing them in limited rollouts, and adjusting based on feedback. The newly spotted toggle can be turned off, but the UX may change before it reaches a wider audience.