Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

JavaScript creator warns against “rushed web UX over native” as Windows 11 leans harder on WebView2 and Electron

1 Share

From Discord and Teams to WhatsApp, Windows Search, the Start menu, and even the new Agenda view in the Notification Center, Windows 11 keeps doubling down on web junk, and it’s getting so out of control that JavaScript and Brave browser creator Brendan Eich is now upset with the approach too.

Windows 11 has been in the news for all the wrong reasons lately. Recently, I wrote about “Microsoft denies rewriting Windows 11 using AI.” The main story was Microsoft pushing back on the claim that Windows 11 is being rewritten in Rust using AI. However, I also used it to highlight a bigger issue:

Windows 11 is increasingly relying on web frameworks, especially WebView2 and Electron.

This has been part of my effort to turn Windows 11’s web-enshittification into a larger story that more people notice.

To my surprise, it caught the attention of Brendan Eich, the creator of JavaScript and the CEO of Brave. Eich also co-founded B2G (Boot to Gecko), the project that became Mozilla’s Firefox OS, and he was connected to the webOS folks back in the day as well.

Eich says he is against the bloat (likely referring to apps on Windows) caused by the rushed use of web UX over native. He added that web apps can be “done right,” but that takes time, something most companies aren’t willing to invest.

Look at Discord: on Windows 11, it has resorted to restarting the app when RAM usage hits 4GB rather than switching to native code, while it figures out how to optimize Electron.

“The buried lede is ‘Windows 11 has a bigger problem, and it’s WebView2 or Electron,’” Brendan Eich wrote in an X post sharing Windows Latest’s story. “As a b2g (FirefoxOS) cofounder, also connected to webOS folks in the day, I’m against bloat due to rushed use of Web UX over native. It can be done right; it takes time.”

JavaScript creator responds to the Windows web apps problem

In the same thread, one user argues that WebView is about control and getting people used to subscription software. But Brendan Eich pushes back on the logic and asks: “How does web vs. native help that agenda?”

Eich also adds that “Native is easier to use for lock-in.”

In other words, if the fear is lock-in, web apps aren’t automatically the safer choice, because native apps can lock users in just as easily.

Then Eich zooms out from “web vs native” into what he thinks is the real driver: business incentives. He describes it as “subscription model not buy to own” and links it to broader “enshittification” dynamics, including debt-driven tactics and DRM, even bringing up the “DRMed tractor” example.

Eich went as far as calling NPM “a mistake.”

Web apps need to be done “right” if they’re going to be forced upon us

Web apps aren’t necessarily bad, especially if done right and used in the right place. You don’t need web tech for everything, including something as basic as the Notification Center.

On Windows 11, Microsoft is adding a WebView2-based Agenda view to the Notification Center, and if you monitor Task Manager, you’ll notice the related Edge processes’ RAM usage shoot up from about 1MB to 100MB.

If an indie dev building a cross-platform app prefers a web framework, that makes sense. But we’re talking about companies like Microsoft ($3.5+ trillion in valuation) being unable to build a native UI for something as basic as a Calendar Agenda view in Windows 11. This really needs to stop.

What do you think? Let me know in the comments below.

The post JavaScript creator warns against “rushed web UX over native” as Windows 11 leans harder on WebView2 and Electron appeared first on Windows Latest

Read the whole story
alvinashcraft
8 hours ago
reply
Pennsylvania, USA
Share this story
Delete

How to Create an HTTP Server in C++ (and a Brief History of the Internet)


Imagine trying to jump-start a car with no battery. No matter how sleek it is, without that vital spark, it’s just a shiny brick. That’s a bit how the internet would be without HTTP, the lifeblood that propels data across the web. HTTP, or Hypertext Transfer Protocol, is not just a tool—it’s the foundation upon which the intricate network of online communications is built. Intrigued by its complexity and critical role, I recently ventured into the nuts and bolts of HTTP by attempting to build my own HTTP server using C++ and the Boost library.

This video is from CodeTeapot🫖.

But first, let’s take a quick detour into history to appreciate how revolutionary HTTP truly is. Back in the 1960s, when the bulk of communication rested on the unreliable shoulders of telephone lines, transmitting data was not only slow but also inefficient. Picture this: scientists had to physically transport magnetic tapes or punched cards to share computational data. The growing need for a robust, digital communication system gave birth to ARPANET, spearheaded by ARPA (the Advanced Research Projects Agency).

This early network laid down the framework of the decentralized, packet-switching method that forms the essence of today’s internet. Packet switching allowed data to be broken down into packets, each tagged with destination and origin addresses, which could then travel independently across networks to reassemble at the destination. This made communications not only faster but also resilient to line failures—a crucial innovation during the Cold War era.

The real game-changer came with the birth of TCP/IP on January 1, 1983, marking what is often considered the official birthday of the internet. But it was the advent of HTTP that truly democratized the internet, spearheaded by Tim Berners-Lee at CERN in 1989. HTTP leveraged the structure of TCP/IP to send data over the web, essentially creating the World Wide Web—an interconnected system of information accessible through hyperlinks.

The brilliance of HTTP lies in its simplicity and universality, which made the internet accessible to the general public, ushering in an era of web browsers and a boom in web services. But how exactly does HTTP work in the background? Well, that’s what I set out to explore.

An HTTP server essentially sits on a network, waiting for client requests like a waiter ready to take your order. Once a request is made—say, for a webpage like index.html—the server processes this request and serves the desired pages back to the user. To bring this concept to life, I used C++, armed with the Boost library known for its robust network capabilities.
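In sketch form, the request/response core of such a server looks like the following. Boost.Asio would handle the actual networking; the parsing and response-building shown here are a simplified illustration, not the video’s actual code.

```cpp
#include <sstream>
#include <string>

// The first line of an HTTP request, e.g. "GET /index.html HTTP/1.1",
// carries the method, the target resource, and the protocol version.
struct Request {
    std::string method, target, version;
};

// Split the request line on whitespace into its three parts.
Request parse_request_line(const std::string& line) {
    std::istringstream in(line);
    Request r;
    in >> r.method >> r.target >> r.version;
    return r;
}

// Build a minimal 200 OK response around a body, the "serving" half
// of the waiter analogy above.
std::string make_response(const std::string& body) {
    return "HTTP/1.1 200 OK\r\nContent-Length: " + std::to_string(body.size()) +
           "\r\nContent-Type: text/html\r\n\r\n" + body;
}
```

A real server wraps this parse/respond loop in an accept loop on a TCP socket, which is exactly what the Boost acceptor described below provides.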

The beauty of the Boost library lies in its ability to handle complex network programming with relative ease. It supports asynchronous and multi-threaded programming, crucial for managing numerous user requests simultaneously. This is vital because, in earlier days, servers struggled under heavy loads—a problem known in tech circles as the C10K problem (the challenge of handling ten thousand connections at once).

Building the server involved setting up a TCP acceptor to listen for incoming connections at a specified IP and port. Here’s an interesting tidbit: while an IP address identifies the computer within the network, a port number identifies the specific process running the server on that computer. The server I built was designed to handle these connections efficiently, employing a multi-threaded approach to ensure that each request was processed without delays.

To test the server’s mettle, I simulated a high-load environment using a popular benchmarking tool called wrk. The test involved thousands of simultaneous connections trying to access the server, and though there were a few hiccups with read errors, it held up impressively well. It was a proud moment to see something created from scratch work seamlessly (well, almost).

This little adventure into the world of HTTP not only fed my curiosity but also deepened my respect for the complex web of technologies we often take for granted. As we continue to innovate and build on the digital landscape, it’s crucial to look back at these foundational technologies with appreciation and a drive for continuous learning.


A hodgepodge of ideas spewing in my head

As I sit down to write, I have a hodgepodge of ideas spewing in my head, but none that has taken hold in any immersive way. Usually a blog post has a single topic of focus, and I try to go somewhat deep into it. But this approach can be problematic: If I don't have an idea that catches my attention, I feel I have nothing to write about. Hence, I'll skip my writing time “until the muse strikes” or something. But then days pass without the muse striking, and I start to wonder if I've gone about the creative process all wrong.


Context Engineering Lessons from Building Azure SRE Agent


We spent a long time chasing model upgrades, polishing prompts, and debating orchestration strategies. The gains were visible in offline evals, but they didn’t translate into the reliability and outcomes we wanted in production. The real breakthrough came when we started caring much more about what we were adding to the context, when, and in what form. In other words: context engineering.

Every context decision involves tradeoffs: latency, autonomy (how far the agent goes without asking), user oversight, pre-work (retrieve/verify/compute before answering), how the agent decides it has sufficient evidence, and the cost of being wrong. Push on one dimension, and you usually pay for it elsewhere.

This blog is our journey building Azure SRE Agent – a cloud AI agent that takes care of your Azure resources and handles your production incidents autonomously. We'll talk about how we got here, what broke along the way, which context patterns survived contact with production, and what we are doing next to treat context engineering as the primary lever for reliable AI-driven SRE.

Tool Explosion, Under-Reasoned

We started where everyone starts: scoped tools and prescriptive prompts. We didn't trust the model in prod, so we constrained it. Every action got its own tool. Every tool got its own guardrails.

Azure is a sprawling ecosystem - hundreds of services, each with its own APIs, failure modes, and operational quirks. Within 2 weeks, we had 100+ tools and a prompt that read like a policy manual.

The cracks showed fast. User hits an edge case? Add a tool. Tool gets misused? Add guardrails. Guardrails too restrictive? Add exceptions. The backlog grew faster than we could close it.

Worse, the agent couldn’t generalize. It was competent at the scenarios we’d already encoded and brittle everywhere else. We hadn't built an agent - we'd built a workflow with an LLM stapled on.

Insight #1: If you don’t trust the model to reason, you’ll build brittle workflows instead of an agent.

Wide tools beat many tools

Our first real breakthrough came from asking a different question: what if, instead of 100 narrow tools, we gave the model two wide ones?

We introduced `az` and `kubectl` CLI commands as first-class tools. These aren’t “tools” in the traditional sense - they’re entire command-line ecosystems. But from the model’s perspective, they’re just two entries: “execute this Azure CLI command” and “execute this Kubernetes command.”

The impact was immediate:

  • Context compression: Three tools instead of hundreds. Massive headroom recovered.
  • Capability expansion: The model now had access to the entire az/kubectl surface area, not just the subset we had wrapped.
  • Better reasoning: LLMs already “know” these CLIs from training data. By hiding them behind custom abstractions, we were fighting their priors.
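On the platform side, a wide tool can be as small as a gate in front of a shell. This is a minimal sketch: the function name and return shape are assumptions, and a real implementation would add authentication, output truncation, and audit logging.

```python
import shlex
import subprocess

# The wide tools: whole CLI ecosystems exposed as single entries.
ALLOWED_BINARIES = {"az", "kubectl"}

def run_cli_tool(command: str, timeout: int = 60) -> dict:
    """Execute a model-proposed CLI command if it starts with an allowed binary."""
    parts = shlex.split(command)
    if not parts or parts[0] not in ALLOWED_BINARIES:
        return {"ok": False, "error": "binary not allowed"}
    try:
        proc = subprocess.run(parts, capture_output=True, text=True, timeout=timeout)
        return {"ok": proc.returncode == 0, "stdout": proc.stdout, "stderr": proc.stderr}
    except subprocess.TimeoutExpired:
        return {"ok": False, "error": "timeout"}
```

The model supplies the command string; the platform only decides whether the binary is in bounds and runs it, leaning on the CLI knowledge already in the model’s training data.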

This was our first hint of a deeper principle:

Insight #2: Don’t fight the model’s existing knowledge - lean on it. 

Multi-Agent Architectures: Promise, Pain, and the Pivot

Looking at the success of generic tools, we went further and built a full multi-agent system with handoffs.  A “handoff” meant one agent explicitly transferring control - along with the running context and intermediate results - to another agent.

Human teams are organized by specialty, so we mirrored that structure: specialized sub-agents with focused personas, each owning one Azure service and handing off when investigations crossed boundaries.

The theory was elegant: lazy tool loading.

  • The orchestrator knows about sub-agents, not individual tools.
  • User asks about Kubernetes? Hand off to the K8s agent.
    Networking question? Route to the networking agent.
  • Each agent loads only its own tools. Context stays lean.

It worked beautifully at small scale. Then we grew to 50+ sub-agents and it fell apart.

The results showed a bimodal distribution: when handoffs worked, everything worked; when they didn't, the agent got lost. We saw a clear cliff – problems requiring more than four handoffs almost always failed.

The following patterns emerged:

  1. Discovery problems.
    Each sub-agent only knew sub-agents it could directly call. Users would ask reasonable questions and get “I don’t know how to help with that” - not because the capability didn’t exist, but because the orchestrator didn’t know that the right sub-agent was buried three hops away.
  2. System prompt fragility.
    Each sub-agent has its own system prompt. A poorly tuned sub-agent doesn’t just fail locally - it affects the entire reasoning chain with its conflicting instructions. The orchestrator’s context gets polluted with confused intermediate outputs, and suddenly nothing works. One bad agent drags down the whole interaction, and we had over 50 sub-agents at this point.
  3. Infinite Loops.
    In the worst cases, agents started bouncing work around without making progress. The orchestrator would call a sub-agent, which would defer back to the orchestrator or another sub-agent, and so on. From the user’s perspective, nothing moved forward; under the hood, we were burning tokens and latency on a “you handle it / no, you handle it” loop. Hop limits and loop detection helped, but they also undercut the original clean architecture of the design.
  4. Tunnel Vision.
    Human experts have overlapping domains - a Kubernetes engineer knows enough networking to suspect a route issue, enough about storage to rule it out. This overlap makes human handoffs intelligent. Our agents had hard boundaries. They either surrendered prematurely or developed tunnel vision, chasing symptoms in their domain while the root cause sat elsewhere.
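The hop limits and loop detection mentioned above can be sketched as a small guard object. This is a sketch; the class name and default thresholds are assumptions, with the hop budget set at the four-handoff cliff we observed.

```python
from collections import Counter

class HandoffGuard:
    """Track agent-to-agent handoffs; refuse runaway chains and ping-pong loops."""

    def __init__(self, max_hops: int = 4, max_repeats: int = 2):
        self.max_hops = max_hops        # problems needing >4 hops almost always failed
        self.max_repeats = max_repeats  # the same edge more than twice looks like a loop
        self.hops = []

    def allow(self, src: str, dst: str) -> bool:
        edge = (src, dst)
        self.hops.append(edge)
        if len(self.hops) > self.max_hops:
            return False                # hop budget exhausted
        if Counter(self.hops)[edge] > self.max_repeats:
            return False                # "you handle it / no, you handle it"
        return True
```

Useful as a circuit breaker, but it undercuts the clean architecture: the guard exists only because the routing itself can’t be trusted.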

Insight #3: Multi-agent systems are hard to scale - coordination is the real work.

The failures revealed a familiar pattern. With narrow tools, we'd constrained what the model could do – and paid in coverage gaps. With domain-scoped agents, we'd constrained what it could explore – and paid in coordination overhead. Same overcorrection, different layer.

The fix was to collapse dozens of specialists into a small set of generalists. This was only possible because we already had generic tools. We also moved the domain knowledge from system prompts into files the agents could read on demand (this later morphed into an agent skills capability, inspired by Anthropic).

Our system evolved: fewer agents, broader tools, and on-demand knowledge replaced brittle routing and rigid boundaries. Reliability improved as we stopped depending on the handoff roulette.

Insight #4: Invest context budget in capabilities, not constraints.

A Real Example: The Agent Debugging Itself

Case in point: Our own Azure OpenAI infrastructure deployment started failing. We asked the SRE agent to debug it. 

Without any predefined workflow, it checked deployment logs, spotted a quota error, queried our subscription limits, found the correct support request category, and filed a ticket with the support team. The next morning, we had an email confirming our quota increase.

Our old architecture couldn't have done this - we had no Cognitive Services sub-agent, no support request tool. But with az as a wide tool and cross-domain knowledge, the model could navigate Azure's surface area the same way a human would.

This is what we mean by capability expansion. We never anticipated this scenario. With generalist agents and wide tools, we didn't need to.

Context Management Techniques for Deep Agents

After consolidating tools and agents, we focused on context management for long-running conversations.

1. The Code Interpreter Revelation

Consider metrics analysis. We started with the naive approach: dump all metrics into the context window and ask the model to find anomalies.

This was backwards. We were taking deterministic, structured data and pushing it through a probabilistic system. We were asking an LLM to do what a single Pandas one-liner could do. We ended up paying in tokens, latency, and accuracy (models don’t like zero-valued metrics).

Worse, it kind of worked. For short windows. For simple queries. Just enough success to hide how fundamentally wrong the approach was. Classic “works in demo, fails in prod.”

The fix was obvious in hindsight: let the model write code.

  • Don’t send 50K tokens of metrics into the context.
  • Send the metrics to a code interpreter.
  • Let the model write the pandas/numpy analysis.
  • Execute it. Return only the results and analysis of the results.

Metrics analysis had been our biggest source of tool failures. After this change: zero failures. And because we weren’t paying the token tax anymore, we could extend time ranges by an order of magnitude.

Insight #5: LLMs are orchestrators, not calculators.
Use them to decide what computation to run, then let actual code perform the computation.
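Under that principle, the metrics example above reduces to a few lines the model could write for the interpreter. The data frame here is made up for illustration; only the computed results, not the raw series, would go back into context.

```python
import pandas as pd

# Toy metrics standing in for what the code interpreter would receive
# (values are invented; in production this comes from the metrics store).
metrics = pd.DataFrame({
    "ts": pd.date_range("2024-01-01", periods=8, freq="min"),
    "latency_ms": [20, 21, 19, 22, 250, 20, 21, 23],
})

# The "pandas one-liner": flag points more than 2 sample standard
# deviations from the mean, instead of streaming raw numbers to the LLM.
z = (metrics["latency_ms"] - metrics["latency_ms"].mean()) / metrics["latency_ms"].std()
anomalies = metrics[z.abs() > 2]
```

The computation is deterministic and cheap; the LLM’s job is only to decide that a z-score scan is the right analysis to run.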

2. Planning and Compaction

We also added two other patterns: a todo-style planner and more aggressive compaction.

  • Todo planner: Represent the plan as an explicit checklist outside the model’s context, and let the model update it instead of re-deriving the workflow on every turn.
  • Compaction: Continuously shrink history into summaries and structured state (e.g., key incident facts), so the context stays a small working set rather than an ever-growing log.

Insight #6: Externalizing plans and compacting history effectively “stretch” the usable context window.
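The compaction half can be as simple as folding older turns into one summary message. This is a sketch under assumptions: the message shape and limits are invented, and a real system would summarize older turns with a model rather than truncate them.

```python
def compact(history: list[dict], keep_last: int = 4, max_chars: int = 2000) -> list[dict]:
    """Fold everything but the last few turns into a single summary message,
    so context stays a small working set instead of an ever-growing log."""
    if len(history) <= keep_last:
        return history
    older, recent = history[:-keep_last], history[-keep_last:]
    digest = " | ".join(turn["content"][:80] for turn in older)[:max_chars]
    summary = {"role": "system", "content": f"Summary of earlier turns: {digest}"}
    return [summary] + recent
```

Run after every turn, this keeps the visible history bounded while key facts survive inside the summary message.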

3. Progressive disclosure with Files

With code interpretation working, we hit the next wall: tool calls returning absurd amounts of data.

Real example: an internal App Service control-plane log table against which a user fires off a SELECT *-style query. The table has ~3,000 columns, so even a single-digit number of log entries expands to 200K+ tokens. The context window is gone. The model chokes. The user gets an error.

Our solution was session-based interception.

Tool calls that can return large payloads never go straight into context. Instead, they write as a “file” into a sandboxed environment where the data can be:

  • Inspected ("what columns exist?")
  • Filtered ("show only the error-related columns")
  • Analyzed via code ("find rows where latency > p99")
  • Summarized before anything enters the model’s context

The model never sees the raw 200K tokens. It sees a reference to a session and a set of tools to interact with that session. We turned an unbounded context explosion into a bounded, interactive exploration. You may have seen this with coding agents; the idea is similar: let the model find its own way through a large amount of data.
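A sketch of the interception step, with assumptions labeled: the function name, the inline budget, and the chars-per-token heuristic are all invented for illustration.

```python
import json
import os
import tempfile

MAX_INLINE_TOKENS = 2000  # rough per-call context budget (assumed)

def intercept(tool_output: str, session_dir: str) -> str:
    """Small outputs pass straight into context; large ones are spilled to a
    session file and replaced by a reference the model can explore with tools."""
    approx_tokens = len(tool_output) // 4  # crude chars-to-tokens heuristic
    if approx_tokens <= MAX_INLINE_TOKENS:
        return tool_output
    fd, path = tempfile.mkstemp(suffix=".out", dir=session_dir)
    with os.fdopen(fd, "w") as f:
        f.write(tool_output)
    return json.dumps({
        "ref": path,
        "approx_tokens": approx_tokens,
        "hint": "inspect/filter/analyze this file instead of reading it whole",
    })
```

The inspect/filter/analyze tools from the bullet list above then operate on the spilled file inside the sandbox.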


Insight #7: Treat large tool outputs as data sources, not context.

4. What's Next: Tool Call Chaining

The next update we’re working on is tool call chaining. This idea started with solving Troubleshooting Guides (TSGs) as Code.

A lot of agent workflows are predictable: “run this query, fetch these logs, slice this data, summarize the result.” Today, we often force the model to walk that path one tool at a time:

Model → Tool A → Model → Tool B → Model → Tool C → Model → … → Response

The alternative:

Model → [Script: Tool A → Tool B → Tool C → … → Final Output] → Model → Response

The model writes a small script that chains the tools together. The platform executes the script and returns consolidated results. Three roundtrips become one. Context overhead drops by 60–70%.
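In sketch form, with toy lambdas standing in for real query/log/summarize tools (all names here are hypothetical):

```python
def run_chain(script, tools):
    """Execute a model-written chain of tool calls in one roundtrip;
    each step receives the previous step's result plus its own arguments."""
    result = None
    for name, args in script:
        result = tools[name](result, **args)
    return result

# Toy stand-ins for real tools.
tools = {
    "query": lambda prev, q: [3, 1, 2],  # pretend metric query
    "sort": lambda prev, reverse=False: sorted(prev, reverse=reverse),
    "head": lambda prev, n: prev[:n],
}

# "Model → [Script: Tool A → Tool B → Tool C] → Model", expressed as data.
script = [("query", {"q": "latency p99"}),
          ("sort", {"reverse": True}),
          ("head", {"n": 1})]
```

The model emits only the `script`; the platform runs the loop and returns the consolidated final result, collapsing three roundtrips into one.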

This also unlocks something subtle: deterministic workflows inside probabilistic systems. Long-running operations that must happen in a specific order can be encoded as scripts. The model decides what should happen; the script guarantees how it happens. Anthropic recently announced a similar capability, programmatic tool calling.

The Meta Lesson

Six months ago, we thought we were building an SRE agent. In reality, we were building a context engineering system that happens to do Site Reliability Engineering.

Better models are table stakes, but what moved the needle was what we controlled: generalist capabilities and disciplined context management.

Karpathy’s analogy holds: if context windows are the agent’s “RAM,” then context engineering is memory management: what to load, what to compress, what to page out, and what to compute externally. As you fill the window up, model quality often drops non-linearly - “lost in the middle,” instruction non-adherence, and plain old long-context degradation show up well before you hit the advertised limits. More tokens don’t just cost latency; they quietly erode accuracy.

We’re not done. Most of what we have done is “try it, observe, watch it break, tighten the loop.” But the patterns that keep working - wide tools, code execution, context compaction, tool chaining - are the same ones we see rediscovered across other agent stacks. In the end, the throughline is simple: give the model fewer, cleaner choices and spend your effort making the context it sees small, structured, and easy to operate on.


PPP 487 | Why Humor Is a Serious Leadership Skill, with comedian Adam Christing

1 Share

Summary

In this episode, Andy talks with comedian and corporate emcee Adam Christing, author of The Laughter Factor: The 5 Humor Tactics to Link, Lift, and Lead. If you have ever hesitated to use humor at work because you were unsure it would land, or worried it might backfire, this conversation offers both encouragement and a practical path forward.

Adam shares how his early influences shaped his approach to humor and why he believes every human is also a "humor being." You will hear why humor is more than chasing chuckles, including how it can build trust, improve learning, and strengthen relationships on teams. Adam introduces the concept of "laugh languages" and walks through examples such as Surprise and Poke, along with guidance on how to tease without crossing the line. They also discuss tailoring humor across cultures and how leaders can bring the laughter factor home with their families.

If you are looking for practical insights on leading with humor, building trust, and bringing more humanity into your projects and teams, this episode is for you!

Sound Bites

  • "If you're a human being, you are also a humor being, and I would say not only do you have a sense of humor, but a sense of humor has you."
  • "The audience is actually, whether it's three people or 300, they're actually rooting for you."
  • "They don't want to be bored. They want to be entertained."
  • "When we think back on the things that have made us laugh the most, it's often the flops that are the funniest."
  • "They won't trust your humor until you do."
  • "There's a saying in show business, 'funny is money'."
  • "I really believe that humor is a bridge that helps you connect heart to heart with other people."
  • "You're a leader. You need to be the one building trust."
  • "Humor is a shortcut to trust."
  • "Leaders help their people learn with laughter."
  • "Increase your LPMs: laughs per meeting."
  • "If in doubt, leave it out."
  • "Every meeting really should be a party with a purpose."

Chapters

  • 00:00 Introduction
  • 01:43 Start of Interview
  • 03:38 Adam's Backstory and Early Influences
  • 05:23 "I'm Not Funny" and the Confidence Barrier
  • 10:36 Why Humor Is More Than Just Chuckles
  • 16:00 The Laughter Factor Explained
  • 18:10 Laugh Languages and the Power of Surprise
  • 21:09 Poke: Teasing Without Crossing the Line
  • 24:42 Using Humor Across Cultures
  • 30:14 How You Know the Laughter Factor Is Working
  • 32:17 Developing a Laughter Factor at Home
  • 34:25 End of Interview
  • 34:55 Andy Comments After the Interview
  • 38:02 Outtakes

Learn More

Get a copy of Adam's book The Laughter Factor: The 5 Humor Tactics to Link, Lift, and Lead.

You can learn more about Adam and his work at TheLaughterFactor.com. While you are there, check out the short questionnaire to discover your laugh language.

For more learning on this topic, check out:

  • Episode 316 with Jennifer Aaker and Naomi Bagdonas. They are completely on this theme of humor being a strategic ability for leaders and teams.
  • Episode 109 with Peter McGraw. Peter breaks down what makes something funny based on his book The Humor Code, an episode Andy still calls back to today.
  • Episode 485 with John Krewson, a conversation about lessons from sketch comedy that nicely reinforce ideas from today's episode.

Level Up Your AI Skills

Join other listeners from around the world who are taking our AI Made Simple course to prepare for an AI-infused future.

Just go to ai.PeopleAndProjectsPodcast.com. Thanks!

Pass the PMP Exam

If you or someone you know is thinking about getting PMP certified, we've put together a helpful guide called The 5 Best Resources to Help You Pass the PMP Exam on Your First Try. We've helped thousands of people earn their certification, and we'd love to help you too. It's totally free, and it's a great way to get a head start.

Just go to 5BestResources.PeopleAndProjectsPodcast.com to grab your copy. I'd love to help you get your PMP this year!

Join Us for LEAD52

I know you want to be a more confident leader, that's why you listen to this podcast. LEAD52 is a global community of people like you who are committed to transforming their ability to lead and deliver. It's 52 weeks of leadership learning, delivered right to your inbox, taking less than 5 minutes a week. And it's all for free. Learn more and sign up at GetLEAD52.com. Thanks!

Thank you for joining me for this episode of The People and Projects Podcast!

Talent Triangle: Power Skills

Topics: Leadership, Humor At Work, Trust Building, Communication, Team Culture, Psychological Safety, Cross-Cultural Leadership, Meeting Facilitation, Emotional Intelligence, Influence, Learning And Development, People Management, Project Management

The following music was used for this episode:

Music: The Fantastical Ferret by Tim Kulig
License (CC BY 4.0): https://filmmusic.io/standard-license

Music: Synthiemania by Frank Schroeter
License (CC BY 4.0): https://filmmusic.io/standard-license





Download audio: https://traffic.libsyn.com/secure/peopleandprojectspodcast/487-AdamChristing.mp3?dest-id=107017

IoTCT Webcast Episode 293 - "Non-Alcoholic Predictions" (Hot IoT+ guess for 2026)

From: IoT Coffee Talk
Duration: 1:00:46
Views: 8

Welcome to IoT Coffee Talk, where hype comes to die a terrible death. We have a fireside chat about all things #IoT over a cup of coffee or two with some of the industry's leading business minds, thought leaders and technologists in a totally unscripted, organic format.

This week Rob, Pete, and Leonard jump on Web3 to host a discussion about:

🎶 🎙️ BAD KARAOKE! 🎸 🥁 "Light Up The Sky" by Van Halen
🐣 AI music will rule the day because most people don't appreciate jazz!
🐣 Rob thinks that rock n' roll may be on an extinction path! How an analog board and Dave Grohl will save rock n' roll humanity!
🐣 Will our over-dependence and over-reliance on tech and networks be the downfall of humanity?
🐣 Did we shoot ourselves in the branding foot naming our podcast after "IoT"? The Metaverse of Everything (MOE) Coffee Talk?
🐣 What is an industry analyst, and how are they different from influencers and Wall Street analysts?
🐣 Why Arthur Andersen should have stayed independent in the Enron era.
🐣 Rob predicts that everyone will go back to Windows 10 to escape MS AI bloatware!
🐣 Let's get physical! We will be getting physical in 2026 with physical AI!
🐣 The gang drop their predictions for 2026, and it is sooooo good!

It's a great episode. Grab an extraordinarily expensive latte at your local coffee shop and check out the whole thing. You will get all you need to survive another week in the world of IoT and greater tech!

Tune in! Like! Share! Comment and share your thoughts on IoT Coffee Talk, the greatest weekly assembly of Onalytica and CBT tech and IoT influencers on the planet!!

If you are interested in sponsoring an episode, please contact Stephanie Atkinson at Elevate Communities. Just make a minimally required donation to www.elevatecommunities.org and you can jump on and hang with the gang and amplify your brand on one of the top IoT/Tech podcasts in the known metaverse!!!

Take IoT Coffee Talk on the road with you on your favorite podcast platform. Go to IoT Coffee Talk on Buzzsprout, like, subscribe, and share: https://lnkd.in/gyuhNZ62
