
Launching Interop 2026


Jake Archibald reports on Interop 2026, the joint initiative in which Apple, Google, Igalia, Microsoft, and Mozilla collaborate to ensure that a targeted set of web platform features reaches cross-browser parity over the course of the year.

I hadn't realized how influential and successful the Interop series has been. It started back in 2021 as Compat 2021 before being rebranded to Interop in 2022.

The dashboards for each year can be seen here, and they demonstrate how wildly effective the program has been: 2021, 2022, 2023, 2024, 2025, 2026.

Here's the progress chart for 2025, which shows every browser vendor racing towards a 95%+ score by the end of the year:

Line chart showing Interop 2025 browser compatibility scores over the year (Jan–Dec) for Chrome, Edge, Firefox, Safari, and Interop. Y-axis ranges from 0% to 100%. Chrome (yellow) and Edge (green) lead, starting around 80% and reaching near 100% by Dec. Firefox (orange) starts around 48% and climbs to ~98%. Safari (blue) starts around 45% and reaches ~96%. The Interop line (dark green/black) starts lowest around 29% and rises to ~95% by Dec. All browsers converge near 95–100% by year's end.

The feature I'm most excited about in 2026 is Cross-document View Transitions, building on the successful 2025 target of Same-Document View Transitions. This will provide fancy SPA-style transitions between pages on websites with no JavaScript at all.

As a keen WebAssembly tinkerer I'm also intrigued by this one:

JavaScript Promise Integration for Wasm allows WebAssembly to asynchronously 'suspend', waiting on the result of an external promise. This simplifies the compilation of languages like C/C++ which expect APIs to run synchronously.
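
For a sense of what that looks like from the JavaScript side, here is a minimal sketch based on the JSPI proposal's JavaScript API (WebAssembly.Suspending to wrap an async import, WebAssembly.promising to wrap a suspending export). The module path and the fetch_data/run names are made up for illustration.

// JSPI sketch: the Wasm module imports fetch_data and calls it as if it were
// synchronous; the engine suspends the Wasm stack until the promise settles.
// "app.wasm", "fetch_data", and "run" are hypothetical names.
async function fetchData(id: number): Promise<number> {
  const response = await fetch(`/data/${id}`);
  return (await response.arrayBuffer()).byteLength;
}

async function main() {
  const imports = {
    env: {
      // Suspending marks this import as one the Wasm caller may suspend on.
      fetch_data: new (WebAssembly as any).Suspending(fetchData),
    },
  };
  const { instance } = await WebAssembly.instantiateStreaming(
    fetch("app.wasm"),
    imports,
  );
  // promising wraps a suspending export so that calling it returns a Promise.
  const run = (WebAssembly as any).promising(instance.exports.run);
  console.log(await run());
}

main();

In other words, C/C++ code compiled to Wasm can keep calling what looks like a blocking API while the browser's event loop keeps running underneath.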

Tags: browsers, css, javascript, web-standards, webassembly, jake-archibald


How Generative and Agentic AI Shift Concern from Technical Debt to Cognitive Debt


This piece by Margaret-Anne Storey is the best explanation of the term cognitive debt I've seen so far.

Cognitive debt, a term gaining traction recently, instead communicates the notion that the debt compounded from going fast lives in the brains of the developers and affects their lived experiences and abilities to “go fast” or to make changes. Even if AI agents produce code that could be easy to understand, the humans involved may have simply lost the plot and may not understand what the program is supposed to do, how their intentions were implemented, or how to possibly change it.

Margaret-Anne expands on this further with an anecdote about a student team she coached:

But by weeks 7 or 8, one team hit a wall. They could no longer make even simple changes without breaking something unexpected. When I met with them, the team initially blamed technical debt: messy code, poor architecture, hurried implementations. But as we dug deeper, the real problem emerged: no one on the team could explain why certain design decisions had been made or how different parts of the system were supposed to work together. The code might have been messy, but the bigger issue was that the theory of the system, their shared understanding, had fragmented or disappeared entirely. They had accumulated cognitive debt faster than technical debt, and it paralyzed them.

I've experienced this myself on some of my more ambitious vibe-code-adjacent projects. I've been experimenting with prompting entire new features into existence without reviewing their implementations and, while it works surprisingly well, I've found myself getting lost in my own projects.

I no longer have a firm mental model of what they can do and how they work, which means each additional feature becomes harder to reason about, eventually leading me to lose the ability to make confident decisions about where to go next.

Via Martin Fowler

Tags: definitions, ai, generative-ai, llms, ai-assisted-programming, vibe-coding


#551 - 15th February 2026


Highlights this week include:

OneLake catalog: The trusted catalog for organizations worldwide - Microsoft's OneLake catalog provides a central hub for discovering, managing, governing, and securing data across the Fabric platform, now adopted by over 230,000 organizations worldwide.

Supercharge AI, BI, and data engineering with Semantic Link (GA) - Ruixin Xu - Semantic Link reaches general availability in Microsoft Fabric, unifying AI, BI, and data engineering through a shared semantic layer that streamlines workflows across data science, reporting, and automation.

Enrich Power BI reports with machine learning in Microsoft Fabric - Ruixin Xu - A walkthrough of an end-to-end pattern for adding ML predictions such as churn scoring to Power BI reports, using Fabric's unified semantic models, notebooks, and scoring endpoints.

VS Code becomes multi-agent command center for developers - VS Code v1.109 lets developers orchestrate GitHub Copilot, Anthropic Claude, and OpenAI Codex agents side by side, transforming the editor into a unified multi-agent development hub.

Choosing the Right Model in GitHub Copilot: A Practical Guide for Developers - A practical guide to selecting the right AI model in GitHub Copilot for different tasks, from fast lightweight models for quick edits to deep reasoning models for complex debugging and agentic workflows.

Azure Databricks Supervisor Agent (GA) - Databricks Agent Bricks Supervisor Agent is now generally available, offering a managed orchestration layer that coordinates multiple AI agents and tools from a single entry point, governed by Unity Catalog.

Finally, What is Retrieval-Augmented Generation (RAG)? - An explainer on RAG that uses a retail example - analysing customer reviews about delivery - to introduce the technique of retrieving relevant information and injecting it into an LLM's context, grounding responses in domain-specific data and reducing hallucinations.
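
As a rough sketch of the retrieve-then-generate loop that explainer describes (my own illustration, not from the article), here the keyword-overlap retriever and the callLlm stub stand in for a real embedding index and LLM client.

// Minimal RAG sketch: retrieve the documents most relevant to a question,
// paste them into the prompt, and let the model answer grounded in them.
// The retriever is a toy keyword-overlap score; callLlm is a hypothetical
// stand-in for whatever LLM client you actually use.
interface Doc { id: string; text: string; }

const docs: Doc[] = [
  { id: "r1", text: "Customer review: delivery arrived two days late." },
  { id: "r2", text: "Customer review: packaging was damaged on arrival." },
  { id: "r3", text: "Customer review: checkout process was smooth." },
];

// Toy retriever: score documents by how many query words they contain.
function retrieve(question: string, k: number): Doc[] {
  const words = question.toLowerCase().split(/\W+/).filter(Boolean);
  return docs
    .map((d) => ({
      doc: d,
      score: words.filter((w) => d.text.toLowerCase().includes(w)).length,
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((s) => s.doc);
}

// Hypothetical LLM call; swap in your real client here.
async function callLlm(prompt: string): Promise<string> {
  return `(model answer for: ${prompt.slice(0, 40)}...)`;
}

async function answerWithRag(question: string): Promise<string> {
  const context = retrieve(question, 2).map((d) => `- ${d.text}`).join("\n");
  const prompt =
    "Answer using only the context below.\n\n" +
    `Context:\n${context}\n\nQuestion: ${question}`;
  return callLlm(prompt);
}

answerWithRag("What do customers say about delivery?").then(console.log);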

If you're interested in all things Microsoft Fabric - don't forget to sign up for our new newsletter - Fabric Weekly - which we'll start publishing in the next month or so. We'll be moving all Fabric content over from Azure Weekly to Fabric Weekly, just as we did with Power BI Weekly 7 years ago.


How to Learn AI with AI

From: AIDailyBrief
Duration: 16:35
Views: 705

Overview of the shift from instructor-led courses to agent-first, context-driven learning with AI as a collaborative build partner. Key mindsets: start with vision, think out loud, insist on mutual pushback, and use AI as a mirror for refining ideas. Practical tactics: create handoff documents, paste exact errors or code into prompts, use AI to craft prompts for other models, preserve session context, and prefer voice over typing for faster iteration.

The AI Daily Brief helps you understand the most important news and discussions in AI.
Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Get it ad free at http://patreon.com/aidailybrief
Learn more about the show https://aidailybrief.ai/


F# Weekly #7, 2026 – .NET 11 Preview 1 & Rider 2026.1 EAP 3


Welcome to F# Weekly,

A roundup of F# content from this past week:

News

Microsoft News

Next week @dsyme.bsky.social shows how agentic workflows can continuously improve #fsharp libraries. amplifyingfsharp.io/sessions/202…

(@amplifyingfsharp.io) 2026-02-10T11:59:34.636Z

Videos

Blogs

Highlighted projects

Perhaps not too impressive for now but this shows Giraffe (F#) running on the BEAM (Erlang Runtime) using Fable (WIP) #fablecompiler #fsharp

Dag Brattli (@dbrattli.bsky.social) 2026-02-11T20:24:17.335Z

New Releases

🚀 EasyBuild.ShipIt 1.0.0 is out! 🎉 Automate your release chores. ShipIt parses your Conventional Commits to calculate versions, generate changelogs, and auto-open Release PRs! 🛠 ✅ Auto Release PRs ✅ Monorepo ready. Start shipping: 🚢 https://github.com/easybuild-org/EasyBuild.ShipIt #dotnet #fsharp

Maxime (@mangelmaxime.bsky.social) 2026-02-11T18:37:08.087Z

That’s all for now. Have a great week.

If you want to help keep F# Weekly going, click here to jazz me with Coffee!


Justifying text-wrap: pretty


Something truly monumental happened in the world of software development in 2025. Safari shipped a reasonable implementation of text-wrap: pretty (https://webkit.org/blog/16547/better-typography-with-text-wrap-pretty/). We are getting closer and closer to cutting-edge XV-century technology. Beautiful paragraphs!

Gutenberg Bible, Old Testament, Epistle of St Jerome

We are not quite there yet, hence the present bug report.


A naive way to break text into lines to form a paragraph of a given width is greediness: add the next word to the current line if it fits, otherwise start a new line. The result is unlikely to be pretty — sometimes it makes sense to squeeze one more word onto a line to make the lines more balanced overall. Johannes Gutenberg did this sort of thing manually to produce the beautiful page above. In 1981, Knuth and Plass figured out a way to teach a computer to do this, using dynamic programming, for line breaking in TeX.
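
To make the contrast concrete, here is a tiny sketch of the greedy strategy (my own illustration, measuring width in characters rather than real glyph metrics):

// Naive greedy line breaking: put the next word on the current line if it
// still fits, otherwise start a new line.
function greedyBreak(words: string[], maxWidth: number): string[] {
  const lines: string[] = [];
  let current = "";
  for (const word of words) {
    const candidate = current === "" ? word : current + " " + word;
    if (candidate.length <= maxWidth) {
      current = candidate; // keep filling the current line
    } else {
      if (current !== "") lines.push(current);
      current = word; // start a new line with the word that did not fit
    }
  }
  if (current !== "") lines.push(current);
  return lines;
}

Each decision is made locally, word by word, which is exactly why a slightly tighter break earlier in the paragraph (the kind Gutenberg made by hand) is never considered.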

Inexplicably, until 2025, browsers stuck with the naive greedy algorithm, subjecting generations of web users to ugly typography. To be fair, the problem in a browser is a harder version of the one solved by Gutenberg, Plass, and Knuth. In print, the size of the page is fixed, so you can compute optimal line breaking once, offline. In the web context, the window width is arbitrary and even changes dynamically, so the line breaking has to be “online”. On the other hand, XXI-century browsers have a bit more compute resources than we had in 1980 or even 1450!


Making lines approximately equal in terms of the number of characters is only halfway towards a beautiful paragraph. No matter how you try, the lengths won’t be exactly the same, so, if you want both the left and the right edges of the page to be aligned, you also need to fudge the spaces between the words a bit. In CSS, text-wrap: pretty asks the browser to select line breaks intelligently so that lines are roughly equal, and text-align: justify adjusts whitespace to make them exactly equal.

Although Safari is the first browser to ship a non-joke implementation of text-wrap: pretty, its combination with text-align: justify looks ugly, as you can see in this very blog post. To pin the ugliness down: the whitespace between the words is blown out of proportion. Here’s the same justified paragraph with and without text-wrap: pretty:

The paragraph happens to look ok with greedy line-breaking. But the “smart” algorithm decides to add an entire line to it, which requires inflating all the white space proportionally. By itself, either of

p {
    text-wrap: pretty;
    text-align: justify;
}

looks alright. It’s just the combination of the two that is broken.


This behavior is a natural consequence of the implementation. My understanding is that the dynamic programming scoring function aims to get each line close to a target width, penalizing deviations. Crucially, the actual max width of a paragraph is fixed: while a line can be arbitrarily shorter, it can’t be any longer, otherwise it’ll overflow. For this reason, the dynamic programming sets the target width to be a touch narrower than the paragraph. That way, it’s possible to both undershoot and overshoot, leading to better balance overall. As per the original article:

The browser aims to wrap each line sooner than the maximum limit of the text box. It wraps within the range, definitely after the magenta line, and definitely before the red line.

But if you subsequently justify all the way to the red line, that systematic shortfall manifests itself as too-wide inter-word spaces!
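
A toy version of that scoring makes the mismatch easy to see. This is my own sketch of the idea, not WebKit's actual cost function: each candidate line is charged the squared deviation from a target deliberately set a little short of the hard maximum.

// Toy "pretty" line breaking: dynamic programming over break points,
// minimising squared deviation from a target width that is narrower than
// the real maximum (so lines may fall on either side of the target, but
// can never overflow the maximum).
function prettyBreak(words: string[], maxWidth: number): string[] {
  const target = maxWidth * 0.9; // aim short of the hard limit
  const n = words.length;
  const best = new Array<number>(n + 1).fill(Infinity);
  const breakAt = new Array<number>(n + 1).fill(0);
  best[0] = 0;
  for (let end = 1; end <= n; end++) {
    for (let start = end - 1; start >= 0; start--) {
      const line = words.slice(start, end).join(" ");
      if (line.length > maxWidth) break; // would overflow: stop widening
      const cost = best[start] + (line.length - target) ** 2;
      if (cost < best[end]) {
        best[end] = cost;
        breakAt[end] = start;
      }
    }
  }
  // Walk the recorded break points back into lines.
  const lines: string[] = [];
  for (let end = n; end > 0; end = breakAt[end]) {
    lines.unshift(words.slice(breakAt[end], end).join(" "));
  }
  return lines;
}

Because every line is scored against the target rather than the maximum, the chosen breaks systematically leave slack on the right edge; text-align: justify then stretches the inter-word spaces to cover that slack, which is exactly the over-wide whitespace this post complains about.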

WebKit devs, you are awesome for shipping this feature ahead of everyone else, please fix this small wrinkle such that I can make my blog look the way I had intended all along ;-)
