Read more of this story at Slashdot.
There’s a new version of Copilot rolling out on Windows 11, and it dumps native code (WinUI) in favor of web components. This was expected based on our previous findings, but to our surprise, it actually ships with a full-blown version of Microsoft Edge.
I can’t tell if Microsoft is really losing the AI race, but at this point, it’s quite obvious that the company hasn’t managed to build a solid Copilot experience for Windows or stick with one approach for more than a quarter.
This latest version replaces the native app, which itself replaced the WebView version, which replaced the PWA, which replaced the Copilot that once lived in a sidebar.

If you don’t have the new Copilot yet, go to the Microsoft Store and search for Copilot. You’ll find a new listing called “Microsoft Copilot,” and it shows a download button even when Copilot is already installed on your PC.
If you hit the Download button, you’ll notice it completes almost instantly. That’s because it isn’t downloading the Copilot app itself. Instead, it’s downloading a Copilot installer, similar to how the Microsoft Edge installer works.

The Store even warns that you need to take action in another window, which makes it clear that the Copilot download is no longer handled directly by the Microsoft Store. You might have noticed a similar pattern for Microsoft Teams.
After the update is installed, the old native Copilot app, built on the WinUI framework, automatically disappears from the Start menu and other places, as the new Copilot takes over.

I opened the new Copilot, and it looks exactly like the web version (web.copilot.com). It's actually a lot smoother and almost feels native. There are caveats, though, such as high RAM usage, which is disappointing because it undermines Microsoft's recent efforts to make Windows leaner.
In our tests, Windows Latest observed that Copilot uses up to 500MB of RAM in the background and up to 1GB once you begin interacting with it. The old native Copilot, by comparison, used less than 100MB.

This made me curious, so I looked into how the new "web-based" Copilot app differs, and it turns out to be a hybrid web app: a rebranded/forked Edge instance running as a dedicated app in a WebView2 container.

As you can see in the screenshot above, Copilot's installation folder literally contains a 146.0.3856.97 folder, which is a complete Microsoft Edge installation. The Edge folder is approximately 850MB.
It contains all Edge binaries, including msedge.exe, msedge.dll, msedge_elf.dll, ffmpeg.dll, libGLESv2.dll, Vulkan/SwiftShader, WidevineCDM, etc. Also, Windows Latest observed that msedge.dll inside the new Copilot app package is 315 MB, which confirms it’s a full Chromium browser engine.

If it were a standard WebView2 or Progressive Web App, it would have relied on the existing Edge integration in Windows 11 instead of shipping with its own Edge fork.
I also found Edge subsystems in Copilot's package, including Browser Helper Objects, Trust Protection Lists, PdfPreview, Extensions, edge_feedback, edge_game_assist, and DRM components.

Interestingly, Windows 11's new Copilot app has both WebView2 and full browser capabilities. The evidence: the package also contains msedgewebview2.exe along with multiple DLLs, including EmbeddedBrowserWebView.dll, which means a WebView2 runtime is bundled alongside the full Microsoft Edge.
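The distinction the article keeps drawing, a bundled full browser versus a plain WebView2 or system-hosted web app, can be sketched as a quick file-inventory check. This is a minimal illustration using the binary names reported above; the classification logic is mine, not how Windows or Microsoft actually decides anything:

```python
# Sketch: guess an app package's web runtime from the binaries it ships.
# The file names come from the article's inspection of the Copilot package;
# the decision rules are illustrative, not an official heuristic.

def classify_web_runtime(files: set[str]) -> str:
    """Classify how a package renders web content, based on shipped binaries."""
    if "msedge.exe" in files and "msedge.dll" in files:
        return "bundled full browser"       # ships its own complete Edge/Chromium
    if "msedgewebview2.exe" in files:
        return "bundled WebView2 runtime"   # ships a WebView2, but no full browser
    return "system WebView2 / PWA"          # relies on the OS-provided Edge

# Binaries Windows Latest found inside the new Copilot package:
copilot_files = {
    "mscopilot.exe", "msedge.exe", "msedge.dll", "msedge_elf.dll",
    "ffmpeg.dll", "libGLESv2.dll", "msedgewebview2.exe",
    "EmbeddedBrowserWebView.dll",
}

print(classify_web_runtime(copilot_files))  # bundled full browser
print(classify_web_runtime({"MyApp.exe"}))  # system WebView2 / PWA
```

By this check, a standard WebView2 app would never carry msedge.exe at all, which is exactly why its presence in the Copilot package stands out.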

This new Copilot is an interesting app, and its architecture might also explain why it feels faster than typical web apps or PWAs: Microsoft ships a private copy of Edge inside the Copilot app, includes a custom launcher (mscopilot.exe), and renders the Copilot UI itself as a web app via WebView2.
Regardless, even if it passes as a good web app, we don’t need any of those on Windows 11 at this point. Windows 11 is already bloated with web apps, PWAs, and Electron. What do you think? Let me know in the comments below.
The post New Copilot for Windows 11 includes a full Microsoft Edge package, uses more RAM appeared first on Windows Latest
Microsoft announced three new first-party MAI models this week, covering transcription, voice generation, and image creation, all available through Microsoft Foundry and the MAI Playground.
The transcription model (MAI‑Transcribe‑1) focuses on accuracy across a broad set of languages while running faster and cheaper than the usual options. The voice model (MAI‑Voice‑1) generates natural speech from very small samples. The model can produce a full minute of audio in about a second, and it does so with unusually efficient GPU use. If you want to check it out, try it in Copilot Audio Expressions.
MAI‑Image‑2 also improves image generation speed across Copilot and Foundry, delivering roughly twice the performance while keeping quality in line with previous models. Just ask Copilot (web or Windows) to generate an image and it will use MAI‑Image‑2 where available.
Microsoft is also pricing these models well below the usual market rates. Transcription at thirty‑six cents per hour is roughly a 40 to 60 percent savings compared to the typical dollar‑per‑hour services. Voice generation at twenty‑two dollars per million characters comes in at about half the cost of most high‑quality TTS models. Image output at thirty‑three dollars per million tokens is often 70 percent cheaper than comparable offerings from the major providers. The MAI lineup is clearly positioned as the lower‑cost option.
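As a rough sanity check on those percentages, the savings math works out if the comparison prices sit in the ranges below. The competitor prices here are illustrative values I chose to match the quoted savings, not published figures:

```python
# Sketch of the savings arithmetic. MAI prices come from Microsoft's
# announcement; the competitor prices are assumed midpoints picked to
# reproduce the quoted savings ranges, not real published rates.

def savings(ours: float, theirs: float) -> int:
    """Percent saved relative to a competitor's price, rounded."""
    return round((1 - ours / theirs) * 100)

print(savings(0.36, 0.60))  # transcription vs. a $0.60/hr service -> 40
print(savings(0.36, 0.90))  # transcription vs. a $0.90/hr service -> 60
print(savings(22, 44))      # voice vs. a $44/M-character TTS model -> 50
print(savings(33, 110))     # image vs. a $110/M-token offering -> 70
```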
What stands out is not any single capability but the shift in direction. Microsoft is building more of its own stack rather than betting everything on OpenAI. That shift, I assume, has deeper implications for cost, direction, and long-term strategy. Even more significantly, each model was built by a small team of about 10 people and tuned for efficiency, which seems to be the through-line of this entire effort and suggests that high-quality models no longer require massive research groups.
As a note, I do work at Microsoft, but I am not part of the team that develops these models.
This episode covers:
- Why context is the core bottleneck for agentic AI adoption in enterprises, with data readiness, access, and portability as decisive factors.
- A Personal Context Portfolio: modular markdown files (identity, roles, projects, tools, communication style, domain knowledge, decision log) as a machine-readable, portable context package.
- Practical tooling and deployment patterns, including Context Hub, CLI-based context sharing, MCP server setup, and common troubleshooting lessons.
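A Personal Context Portfolio along those lines might look like the layout below. The file names are my illustration of the categories listed in the episode description, not the speaker's actual structure:

```
personal-context/
├── identity.md             # who I am, background
├── roles.md                # current roles and responsibilities
├── projects.md             # active projects and their status
├── tools.md                # tooling and stack preferences
├── communication-style.md  # how I give and receive information
├── domain-knowledge.md     # areas of expertise
└── decision-log.md         # past decisions and their rationale
```

Keeping each file small and plain-text is what makes the package machine-readable and portable: an agent (or an MCP server) can load only the slices of context a given task needs.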
The AI Daily Brief helps you understand the most important news and discussions in AI.
Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Get it ad free at http://patreon.com/aidailybrief
Learn more about the show https://aidailybrief.ai/
This episode is a cross-post from The EBFC Show, Felipe Engineer-Manriquez's podcast exploring Lean and Agile in construction. In this conversation, Felipe interviews Vasco about the #NoEstimates movement, throughput-based planning, and why traditional project management is still stuck in the middle ages of managing creative work.
"When you go into a daily meeting and you start looking at the people in that room, maybe they are the exact same people that were there yesterday, but the team is totally different. Somebody might have had a bad night's sleep, somebody might have had an argument with their spouse. These are human beings. These are not machines that you can just distribute work to."
Vasco's path to agile coaching started with a realization that most practitioners eventually reach: the problems in software development aren't technological. They're about people — getting agreements, sharing information at the right time, making the collective brain of a team actually function. The Scrum Guide gives you organizing principles — how many meetings, who's in them — but it says almost nothing about the real-time feedback cycle between humans that makes or breaks a team. That's why the Scrum Master role exists: to be the lubricant for human interactions, to break down complex ideas into items the collective mind can process. It's the piece that makes Scrum work, and it's the piece that's hardest to teach.
"The PM wanted 15 items per sprint, and the team said 'yeah, we can do 15.' I said, this is not gonna happen. The team had been delivering between five and eight items per sprint. I said, I'm gonna be positive — I'm gonna say seven. And no surprise, by the end of the sprint, they delivered seven."
Vasco started as a project manager — and not the easy certification kind. He went through IPMA, which means six months of training, a four-hour written exam, and an expert interview, just for the entry level. Planning and estimating was the job. Then he ran his first Scrum project, specifically to prove it couldn't work. By the second month, he couldn't understand how anything else could work. The team delivered something to show every single sprint — something that never happened with traditional project management. The turning point came when he made a bet with a product manager: the PM needed 15 items per sprint, the team committed to 15, but historical throughput was 5-8 items. Reality delivered seven. That moment crystallized the #NoEstimates insight: we can't fight reality, but we can choose which seven items to deliver.
"Never believe the plan. Or as in Scarface — never get high on your own supply. It's so unbelievable how project managers still today believe their freaking plans."
At Nokia, Vasco managed a program of 500 people across 100 teams on four continents. No way to get everyone in a room. So he tracked system-level throughput — features delivered to integration per week. Six months into a twelve-month project, the data said they'd be at least six months late. He told the program manager: cut scope now. The program manager did what every PMI-trained program manager does — sent an email asking all 100 teams if they'd deliver on time. Every single team said yes. Nobody wants to be first to admit they're late. Twelve months in, they discovered they were six months late. The project got canceled. 500 people, millions of euros, all because somebody believed the plan. Linear predictive planning is useful for exploring what might be possible if nothing goes wrong. It is not reality. The only tool that reflects reality is throughput — the number of items completed per unit of time.
"It's not earned, it's spent. It's not value, it's cost. It's not management, it's just observation. Monty Python could not have come up with a better name."
Felipe shares a story that mirrors the absurdity: an industrial project with a dedicated 35-person earned value management department. Before the meeting even started, the department head announced, "Let's all acknowledge that earned value management is more an art than a science." Their charts were made up, the contractor's charts were made up, and the goal of the meeting was to agree that the project would finish on time — regardless of what any data said. This is where traditional project management ends up when it disconnects from throughput: a $30 million scope addition with zero additional time, defended by charts that a mediocre attorney can invalidate in the first week of litigation. Felipe knows — he spent a year being cross-examined by forensic schedulers whose full-time job is proving that construction schedules are fiction.
"Never convince anyone. Convince yourself. Once you're convinced, whatever other people say, it doesn't really matter because you're not gonna take them seriously anyway."
Here's how to validate throughput-based planning with your own data: take the last 10 sprints (or periods). Calculate the average throughput and control limits from the first five. Then check whether the next five sprints fall within that range. They will. If you're in software and using Jira, you already have this data. You don't need anyone's permission. You don't need to change anything. Just look at what your team actually delivers versus what they planned to deliver. The gap between those two numbers is the gap between superstition and reality.
Felipe Engineer-Manriquez is a best-selling author, international keynote speaker, Project Delivery Services Director at The Boldt Company, host of The EBFC Show podcast, and a proven construction change-maker implementing Lean and Agile practices on projects from millions to billions of dollars worldwide. He is a Registered Scrum Trainer™ (RST), Registered Scrum Master™ (RSM), and recipient of the Lean Construction Institute Chairman's Award. His book Construction Scrum is the first practical guide for applying Scrum in construction.
You can link with Felipe Engineer-Manriquez on LinkedIn.
In this episode, I explore the strange, beautiful phenomenon of perspective—through science, space, and the quiet awe of being alive on Earth.
I talk about the cultural moment surrounding Project Hail Mary and the Artemis II mission—the first human return to the Moon in over 50 years—and how both point to something deeper than innovation: a reorientation of how we see ourselves in the universe.
At the center of this episode is the observer effect—not just as a scientific concept, but as a lived experience. The idea that the act of observing changes what is observed. That our attention, our awareness, our witnessing…matters.
I reflect on the posts I’ve shared this week on Threads, how they reached thousands of people, and the responses that came back. The shared sense of wonder. The quiet recognition that something about being human, here, now, is far more miraculous than we tend to let ourselves feel.
We move through the words of astronauts—those who have seen Earth from space—and the overwhelming shift in perspective that follows. Including the iconic reflection from Carl Sagan on the “pale blue dot”—a reminder of how small we are, and how precious.
I talk about how our understanding of our place in the cosmos has evolved—from ancient handprints pressed into cave walls, to images of Earth suspended in darkness. From wondering where we are…to finally seeing it.
This episode is about awe. About beauty. About remembering that this planet—this life—is not mundane, but extraordinary.
We explore:
✔ The cultural resonance of Project Hail Mary and why it’s striking a chord right now
✔ Artemis II and the significance of returning to the Moon after decades
✔ The observer effect—scientifically and philosophically
✔ The “overview effect” astronauts experience when seeing Earth from space
✔ Carl Sagan’s pale blue dot, and what it asks us to remember
✔ How perspective shapes meaning—and how we participate in that shaping
✔ The quiet, radical act of letting yourself feel wonder again
This episode is for anyone who has felt, even briefly, the pull of something larger. For those moments when the sky looks different. When the world feels alive. When you remember that you are here—not by accident, but as part of something vast and unfolding.
If you’ve been craving a sense of perspective…of beauty…of meaning that doesn’t need to be forced, this one’s for you.
Thank you for being here. And thank you for listening. 🕯️
The best way to support the podcast is to become a patron of The Folklore Library Substack.
And if you have topics or questions you’d like me to cover, email me at insertwisdom@gmail.com.
The Art of After Workbook: How to Turn Grief into Art (https://itskatehill.gumroad.com/l/theartofafter)
Under the Same Sky by Kate Hill (https://www.amazon.com/dp/B0DJY2DWRD/ref=tmm_pap_swatch_0?_encoding=UTF8&qid=&sr=) 📖
Find me here 👇🏼
Email: insertwisdom@gmail.com
Become a patron of the Folklore Library Substack ✍🏼 (https://insertwisdom.substack.com)
Threads (https://www.threads.net/@itskatehill) ✨
Ambiance Channel ✨ (https://www.youtube.com/@etherandink)
Tiktok (https://www.tiktok.com/@itskatehill?lang=en) ✨
Instagram (https://www.instagram.com/itskatehill/) ✨
Goodreads (https://www.goodreads.com/author/show/52471695.Kate_Hill) ✨
Get full access to The Folklore Library at insertwisdom.substack.com/subscribe (https://insertwisdom.substack.com/subscribe?utm_medium=podcast&utm_campaign=CTA_4)