Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

I tested Microsoft Edge’s AI tab organizer, and it’s shockingly good


At any point, I’ll have around 60 open tabs in 3 to 4 instances of Edge, and there’ll be several completely different topics that I’m switching between as my brain jumps from one thought to another every 5 seconds.

For someone like me, the Organize tabs feature in Microsoft Edge could be just what the doctor ordered. But the feature is AI-powered, and well, AI and Microsoft are two words that attract a ton of negative press, both individually and when combined.

So, I decided to test whether an AI-powered feature can actually help me organize my tabs better. Of course, tab grouping is already possible in most browsers, but it is manual. Edge often introduces unique features first, vertical tabs being one example, and that feature already makes finding tabs easier for me. Let’s see if Organize tabs does the same.

What is the AI-powered Organize tabs feature in Microsoft Edge?

Microsoft says that the Organize tabs feature creates Tab groups automatically based on tab similarity. The feature is AI-powered, so a model presumably checks all your tabs, works out what information each site holds, and collects the similar ones into Tab groups with distinct names and colors.
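Microsoft hasn’t published how the similarity detection works, but the general idea of similarity-based grouping can be sketched in a few lines. The snippet below is a purely illustrative JavaScript sketch, not Edge’s implementation: it groups tab titles by keyword overlap (Jaccard similarity), with the threshold picked arbitrarily.

```javascript
// Illustrative only: group tab titles by shared keywords.
// This is NOT Edge's actual algorithm; Microsoft hasn't documented it.
function tokenize(title) {
  return new Set(title.toLowerCase().split(/\W+/).filter(w => w.length > 2));
}

// Jaccard similarity: |intersection| / |union| of two keyword sets.
function jaccard(a, b) {
  const inter = [...a].filter(w => b.has(w)).length;
  return inter / (a.size + b.size - inter);
}

function groupTabs(titles, threshold = 0.3) {
  const groups = [];
  for (const title of titles) {
    const tokens = tokenize(title);
    // Join the first group similar enough to this tab, else start a new one.
    const match = groups.find(g => jaccard(g.tokens, tokens) >= threshold);
    if (match) {
      match.titles.push(title);
      tokens.forEach(t => match.tokens.add(t));
    } else {
      groups.push({ tokens, titles: [title] });
    }
  }
  return groups.map(g => g.titles);
}
```

A real implementation would look at page content rather than bare title keywords, which is presumably how Edge can tell two YouTube tabs about different laptops apart.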

Organize tabs feature in Microsoft Edge
Source: Microsoft

However, there is no Copilot logo here, and we’re not sure if Edge is using the same AI engine as that of Microsoft’s infamous AI assistant.

To use Organize tabs, select the Search tabs menu next to your tabs, and then click the Organize tabs icon. Edge will then organize your tabs into groups. There is also an edit feature (pencil icon) that lets you customize the name and color, sort tabs, and move them between groups. If you’re happy with everything, click the Group tabs button to confirm.

How to automatically group tabs in Microsoft Edge using AI

Fortunately, the feature works with vertical tabs too. And if you use the Collections feature in Edge, you can add a tab group to a new collection.

Microsoft doesn’t say whether the feature runs locally or uses cloud processing, but it mentions that functionality may vary by device type. The feature could be a good fit for NPUs, though.

My Windows 11 PC doesn’t have an NPU. Here is how the Organize tabs feature in Edge performs under real-world browsing scenarios:

Testing Microsoft Edge’s AI-based Organize tabs feature

Multiple tabs open in Edge browser

I have 40 tabs open in one instance of Edge, covering topics including:

  1. Our exclusive leak about Lenovo ThinkPad X13 Detachable (will be announced at MWC)
  2. The upcoming budget MacBook
  3. Dell XPS 14
  4. Samsung Unpacked event for 2026
  5. Our exclusive report about image support coming to Notepad
  6. WhatsApp Resume from Android feature for PCs
  7. Windows 11 26H1
  8. Samsung Galaxy Book 6 series

I sneakily added two YouTube tabs as well, one for the Dell XPS 14 and the other for the budget MacBook. The same sites are also repeated several times, so we’ll find out whether the AI is smart enough to separate tab content from the site name. Note that the Samsung Galaxy Book 6 and the Samsung Unpacked event should end up in separate groups. I have also added one tab about the Galaxy S26 Ultra, which I want added to the Unpacked event group. So, let’s see how the Organize tabs feature performs:

In all honesty, I’m pretty impressed with whatever AI Microsoft used here, as the tabs were organized just the way I wanted, including the YouTube ones. All 8 topics are now categorized, named, and given pastel colors for each Tab group. Job well done, Microsoft!

The process took less than a second to group all 40 tabs. Imagine the time it would take me to group them manually. 40 tabs and 8 topics is a regular Tuesday afternoon for me, and until now, I had never tried Tab grouping simply because I couldn’t be bothered to spend time organizing tabs.

Organize tab feature after organizing 8 topics automatically

Being an AI sceptic, I ungrouped all the tabs and rearranged them so that tabs on the same topic were no longer next to each other, making it a teeny tiny bit more difficult for the AI in Edge. But it seemed to make no difference to its capabilities.

This is more than useful to me. Every day, I spend a great deal of time finding tabs that I have opened previously. Of course, Edge also has a Search tabs feature, but Organize tabs makes the Grouped Tabs look neat and inviting.

Before using Organize tabs vs After using Organize tabs

Next, I added WhatsApp, Instagram, Threads, and Telegram to the mix, along with Amazon and Best Buy tabs for two different ThinkPad models. Surprisingly, the Organize tabs feature added all the shopping links to one group named “Lenovo ThinkPad Shopping”. I didn’t expect that! Of course, all the social media apps were categorized in the “Social Media” group.

Organize tab feature puts social media apps into one group and shopping sites into one group

Microsoft’s feature page for Organize tabs shows generic Tab groups like “Cooking” and “Shopping”. Naturally, I was expecting similar Tab groups here as well, but in practice, Edge’s AI shocked me with proper titles. This is a proper use of AI in a browser.

And speaking of AI, there is no sign of Copilot branding here, which, in my opinion, is how Microsoft should have marketed its AI-powered features all along. Excessive Copilot branding has led many Windows users to despise the name “Copilot”. In case you are wondering, the Organize tabs feature categorizes Gemini, ChatGPT, and Copilot under AI tools:

Tab Group names given by Organize tab feature

Features in Microsoft Edge Organize tabs

Apart from solid grouping and naming capabilities, what I like most about Organize tabs is the customizability. To be fair, these options are all part of the standard Tab grouping feature. You can add a new tab to an existing group with a small plus icon, and clicking the pencil icon lets you change the name and color of the group.

And it’s not just a generic selection of popular colors; you can choose any color you want with a slider, and even a color picker tool. The default colors are soothing to the eyes as well, so it’s alright if you don’t want to spend time customizing Tab groups.

Change color of Tab group

Another neat feature is “Move to a new window”, which moves all tabs in a particular group to a new Edge window. Ungrouping is easy with the “Ungroup” button. My only nitpick is that the “Close grouped tabs” and “Delete group” buttons both do the same thing, which is deleting the group. Close grouped tabs should instead close all the tabs without deleting the Tab group itself.

Microsoft Edge has the best tab grouping feature

Tab grouping is an underrated feature, especially for people like me who switch between 50 and 100 tabs for work and personal use. But I also don’t have the mental capacity to sort through each tab and add it to a group. Chrome’s manual tab groups look barebones compared to the AI-powered Organize tabs feature in Edge.

Manual Tab Grouping in Google Chrome

Currently, no other browser has this feature, and after testing it, I’m sure that I’ll continue using Organize tabs every day. For once, I’m fully satisfied with an AI feature from Microsoft.

The post I tested Microsoft Edge’s AI tab organizer, and it’s shockingly good appeared first on Windows Latest

Read the whole story
alvinashcraft
7 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

VS Code v1.110 Insiders: AI Agents Gain Native Browser Access and Global Instructions

The VS Code 1.110 cycle is putting more "hands-on" capabilities into chat, led by native browser integration that lets AI agents interact with page elements, capture screenshots, and pull real-time console logs from inside the editor.

An Exploit … in CSS?!


Ok, take a deep breath.

We’ll have some fun understanding this vulnerability once you make sure your browser isn’t affected, using the table below.

Chromium-based browser: Am I safe?

  • Google Chrome: Ensure you’re running version 145.0.7632.75 or later. Go to Settings > About Chrome and check for updates.
  • Microsoft Edge: Ensure you’re running version 145.0.3800.58 or later. Click the three dots (…) on the far right of the window, then click Help and Feedback > About Microsoft Edge.
  • Vivaldi: Ensure you’re running version 7.8 or later. Click the V icon (menu) in the top-left corner and select Help > About.
  • Brave: Ensure you’re running version 1.87.188 or later. Click the hamburger menu in the top right and select Help > About Brave.

So, you updated your browser and said a prayer. When you’re able to string whole sentences together again, your first question is: Has CSS really had the dubious honor of being the cause of the first zero-day exploit in Chromium-based browsers for 2026?

I mean, the Chrome update channel says they fixed a high-severity vulnerability described as “[u]se after free in CSS” … on Friday the 13th no less! If you can’t trust a release with a description and date like that, what can you trust? Google credits security researcher Shaheen Fazim with reporting the exploit. The dude’s LinkedIn says he’s a professional bug hunter, and I’d say he deserves the highest possible bug bounty for finding something that, according to a government agency, “in CSS in Google Chrome before 145.0.7632.75 allowed a remote attacker to execute arbitrary code inside a sandbox via a crafted HTML page.”

Is this really a CSS exploit?

Something doesn’t add up. Even this security researcher swears by using CSS instead of JavaScript, so her security-minded readers don’t need to enable JavaScript when they read her blog. She trusts the security of CSS, even though she understands it enough to create a pure CSS x86 emulator (sidenote: woah). So far, most of us have taken for granted that the possible security issues in CSS are relatively tame. Surely we don’t suddenly live in a world where CSS can hijack someone’s OS, right?

Well, in my opinion, the headlines describing the bug as a CSS exploit in Chrome are a bit clickbait-y, because they make it sound like a pure CSS exploit, as though malicious CSS and HTML would be enough to perform it. If I’m being honest, when I first skimmed those articles in the morning before rushing out to catch the train to work, the way the articles were worded made me imagine malicious CSS like:

.malicious-class {
  vulnerable-property: 'rm -rf *';
}

In the fictional, nightmare version of the bug that my misinformed imagination had conjured, some such CSS could be “crafted” to inject that shell command somewhere it would run on the victim’s machine. Even re-reading the reports more carefully, they feel intentionally misleading, and it wasn’t just me. My security-minded friend’s first question was, “But… isn’t CSS, like, super validatable?” And then I dug deeper and found out that the CSS in the proof of concept isn’t the malicious bit, which is why CSS validation wouldn’t have helped!

It doesn’t help the misunderstanding when the SitePoint article about CVE-2026-2441 bizarrely lies to its readers about what this exploit is, instead describing a different medium-severity bug that allows sending the rendered value of an input field to a malicious server by loading images in CSS. That is not what this vulnerability is.

It’s not really a CSS exploit, in the sense that JavaScript is the part that actually exploits the bug. I’ll concede that the line of code that creates the condition necessary for a malicious script to perform this attack was in the Blink CSS engine component of Google Chrome, but the CSS involved isn’t the malicious part.

So, how did the exploit work?

The CSS involvement in the exploit lies in the way Chrome’s rendering engine turns certain CSS into a CSS object model. Consider the CSS below:

@font-feature-values VulnTestFont {
  @styleset {
    entry_a: 1;
    entry_b: 2;
    entry_c: 3;
    entry_d: 4;
    entry_e: 5;
    entry_f: 6;
    entry_g: 7;
    entry_h: 8;
  }
}

When this CSS is parsed, a CSSFontFeaturesValueMap is added to the collection of CSSRule objects in document.styleSheets[0].cssRules. There was a bug in the way Chrome managed the memory for the HashMap data structure underlying the JavaScript representation of the CSSFontFeaturesValueMap, which inadvertently allowed a malicious script to access memory it shouldn’t be able to. On its own, this can do little more than crash the browser, but it can form the basis of a Use After Free (UAF) exploit.
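To make that concrete without a browser, here is a small, harmless Node sketch (not the exploit) that rebuilds the key/value entries the styleset map holds after a rule like the one above is parsed. In a real page you would reach the rule through document.styleSheets[0].cssRules rather than parsing text; the shortened stylesheet and the regex here are purely for illustration.

```javascript
// Harmless sketch: reconstruct styleset entries outside a browser.
// In Chrome, this data lives behind document.styleSheets[0].cssRules.
const css = `
@font-feature-values VulnTestFont {
  @styleset {
    entry_a: 1;
    entry_b: 2;
  }
}`;

function parseStyleset(source) {
  // Grab the body of the @styleset block and split it into declarations.
  const body = source.match(/@styleset\s*{([^}]*)}/)[1];
  const map = new Map();
  for (const decl of body.split(";")) {
    const [name, value] = decl.split(":").map(s => s.trim());
    if (name) map.set(name, Number(value)); // e.g. "entry_a" -> 1
  }
  return map;
}
```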

Chrome’s description of the patch mentions that “Google is aware that an exploit for CVE-2026-2441 exists in the wild,” although for obvious reasons, they are coy about the details of a full end-to-end exploit. Worryingly, @font-feature-values isn’t new — it’s been available since early 2023 — but the discovery of an end-to-end UAF exploit may be recent. It would make sense if the code that created the possibility of this exploit is old, but someone only pulled off a working exploit recently. If you look at this detailed explanation of a 2020 Use After Free vulnerability in Chrome’s WebAudio API, you get the sense that accessing freed memory is only one piece of the puzzle in getting a UAF exploit working. Modern operating systems create hoops that attackers have to jump through, which can make this kind of attack quite hard.

Real-world examples of this kind of vulnerability get complex, especially in a Chrome vulnerability where you can only trigger the low-level code indirectly. But if you know C and want to understand the basic principles with a simplified example, you can try this coding challenge. Another way to grasp the idea is this Medium post about the recent Chrome CSSFontFeaturesValueMap exploit, which includes a cute analogy: the pointer to the object is like a leash you are still holding even after you freed your dog, except an attacker has hooked the leash to a cat instead (known as type confusion), so when you command your “dog” to bark, the attacker’s cat does something malicious instead.
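Sticking with that analogy, the dangling-pointer mechanics can be mimicked in any language. The toy JavaScript below fakes a heap as an array of slots; nothing here is unsafe or related to Chrome’s actual code, it only shows why a stale “pointer” plus slot reuse adds up to type confusion.

```javascript
// Toy model of use-after-free + type confusion. Real UAF corrupts native
// memory; this array-of-slots "heap" only mimics the mechanics.
class ToyHeap {
  constructor() {
    this.slots = [];
    this.freeList = [];
  }
  alloc(obj) {
    // Reuse a freed slot if one exists, just like a real allocator might.
    const ptr = this.freeList.length ? this.freeList.pop() : this.slots.length;
    this.slots[ptr] = obj;
    return ptr; // the "pointer" is just a slot index
  }
  release(ptr) {
    this.slots[ptr] = undefined;
    this.freeList.push(ptr);
  }
  deref(ptr) {
    return this.slots[ptr];
  }
}

const heap = new ToyHeap();
const dogPtr = heap.alloc({ kind: "dog", bark: () => "woof" });
heap.release(dogPtr); // the dog is freed, but we still hold the leash (dogPtr)
const catPtr = heap.alloc({ kind: "cat", bark: () => "something malicious" });
// The freed slot was reused, so the stale pointer now refers to the cat:
const confused = heap.deref(dogPtr);
```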

The world is safe again, but for how long?

The one-line fix I mentioned was to change the Blink code to work with a deep copy of the HashMap that underlies the CSSFontFeaturesValueMap rather than a pointer to it, so there is no possibility of referencing freed memory. By contrast, Firefox rewrote its CSS renderer in Rust, which enforces memory safety at the language level. Chromium has supported the use of Rust since 2023. One of the motivations mentioned was “safer (less complex C++ overall, no memory safety bugs in a sandbox either)” code and to “improve the security (increasing the number of lines of code without memory safety bugs, decreasing the bug density of code) of Chrome.” Since the UAF class of exploit has recurred in Chromium over the years, and these vulnerabilities tend to be high-severity when discovered, a more holistic approach to defending against them might be needed, so I don’t have to freak you out with another article like this.


An Exploit … in CSS?! originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.


The Intelligent OS: Making AI agents more helpful for Android apps


Posted by Matthew McCullough, VP of Product Management, Android Development


User expectations for AI on their devices are fundamentally shifting how they interact with their apps. Instead of opening apps to do tasks step-by-step, they're asking AI to do the heavy lifting for them. In this new interaction model, success is shifting from getting users to open your app, to successfully fulfilling their tasks and helping them get more done faster. 

To help you evolve your apps for this agentic future, we're introducing early stage developer capabilities that bridge the gap between your apps and agentic apps and personalized assistants, such as Google Gemini. While we are in the early, beta stages of this journey, we’re designing these features with privacy and security at their core as our first step in exploring this paradigm shift as an app ecosystem.

Empowering apps with AppFunctions

Android AppFunctions allows apps to expose data and functionality directly to AI agents and assistants. With the AppFunctions Jetpack library and platform APIs, developers can create self-describing functions that agentic apps can discover and execute via natural language. Mirroring how backend capabilities are declared via MCP cloud servers, AppFunctions provides an on-device solution for Android apps. Much like WebMCP, it executes these functions locally on the device rather than on a server.

The Samsung Gallery integration with Gemini on the Galaxy S26 series showcases AppFunctions in action. Instead of manually scrolling through photo albums, you can now simply ask Gemini to "Show me pictures of my cat from Samsung Gallery." Gemini takes the user query, intelligently identifies and triggers the right function, and presents the returned photos from Samsung Gallery directly in the Gemini app, so users never need to leave. This experience is multimodal and can be done via voice or text. Users can even use the returned photos in follow-up conversations, like sending them to friends in a text message.



This integration is currently available on the Galaxy S26 series and will soon expand to Samsung devices running OneUI 8.5 and higher. Through AppFunctions, Gemini can already automate tasks across app categories like Calendar, Notes, and Tasks, on devices from multiple manufacturers. Whether it’s coordinating calendar events, organizing notes, or setting to-do reminders, users can streamline daily activities in one place.

Enabling agentic apps with intelligent UI automation

While AppFunctions provides a structured framework and more control for apps to communicate with AI agents and assistants, we know that not every interaction has a dedicated integration yet. We’re also developing a UI automation framework that lets AI agents and assistants intelligently execute generic tasks in users’ installed apps, with user transparency and control built in. Here the platform does the heavy lifting, so developers get agentic reach with zero code, extending their apps’ reach without a major engineering lift.

To get feedback as we refine this framework, we’re starting with an early preview on the Galaxy S26 series and select Pixel 10 devices, where users will be able to delegate multi-step tasks to Gemini with just a long press of the power button. Launching as a beta feature in the Gemini app, this will support a curated selection of apps in the food delivery, grocery, and rideshare categories in the US and Korea to start. Whether users need to place a complex pizza order for their family members with particular tastes, coordinate a multi-stop rideshare with co-workers, or reorder their last grocery purchase, Gemini can help complete tasks using the context already available from your apps, without any developer work needed.




Users are in control while a task is being actioned in the background through UI automation. For any automation action, users have the option to monitor a task’s progress via notifications or "live view" and can switch to manual control at any point to take over the experience. Gemini is also designed to alert users before completing sensitive tasks, such as making a purchase.

Looking ahead


In Android 17, we’re looking to broaden these capabilities to reach even more users, developers, and device manufacturers.

We are currently building experiences with a small set of app developers, focusing on high-quality user experiences as the ecosystem evolves. We plan to share more details later this year on how you can use AppFunctions and UI automation to enable agentic integrations for your app. Stay tuned for updates.



Microsoft Sovereign Cloud adds governance, productivity, and support for large AI models securely running even when completely disconnected


Microsoft Sovereign Cloud's expanded capabilities include Azure Local disconnected operations, Microsoft 365 Local disconnected, and new large-model and modern infrastructure capabilities in Microsoft Foundry.

The post Microsoft Sovereign Cloud adds governance, productivity, and support for large AI models securely running even when completely disconnected appeared first on Microsoft 365 Blog.


Migration, Modernization & Agentic Tools


This video covers Migration, Modernization, and Agentic tools. Agentic tools introduce autonomy, continuous optimization, and context‑aware decision‑making into the migration lifecycle. Instead of treating migration as a one‑time lift‑and‑shift, they operate as ongoing systems that:

  • Discover and map environments dynamically
  • Recommend modernization paths based on real telemetry
  • Automate execution steps end‑to‑end
  • Continuously validate, optimize, and remediate after landing in Azure

This shifts migration from a one-time project to a self‑improving system.

This video provides an overview of new tools in Azure Copilot and GitHub Copilot that you can use when migrating and modernizing. These tools provide the following benefits:

  • Agents can classify workloads into migrate/modernize/rebuild patterns based on performance, code structure, and operational signals.
  • Agents can execute migration waves automatically—copying data, validating cutovers, sequencing dependencies, and rolling back if needed.
  • Agentic tools can continuously tune cost, performance, resiliency, and security posture using telemetry and policy-driven actions.
  • Agentic tools can ensure governance is embedded into the migration engine—ensuring workloads land compliant, secure, and aligned with enterprise standards.
  • Autonomous discovery and automated execution remove weeks of manual effort.
  • Parallelized migration waves become safe because the system understands dependencies.
  • Automated validation reduces human error during cutover.
  • Refactoring recommendations are grounded in code and performance analysis.
  • Agentic tools keep optimizing cost, security, and resilience—closing the loop between migration and operations.
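As a concrete illustration of the dependency-aware wave planning described above, here is a small sketch. Everything in it is hypothetical: the workload names and dependency data are invented and no Azure Copilot or GitHub Copilot API is involved; it only shows why understanding dependencies makes parallel waves safe.

```javascript
// Hypothetical sketch: batch workloads into parallel migration waves so a
// workload migrates only after everything it depends on has landed.
function planWaves(deps) {
  // deps maps each workload to the workloads it depends on.
  const waves = [];
  const landed = new Set();
  const pending = new Set(Object.keys(deps));
  while (pending.size > 0) {
    // A workload is ready when all of its dependencies have landed.
    const wave = [...pending].filter(w => deps[w].every(d => landed.has(d)));
    if (wave.length === 0) throw new Error("dependency cycle detected");
    for (const w of wave) {
      landed.add(w);
      pending.delete(w);
    }
    waves.push(wave); // everything within a wave can migrate in parallel
  }
  return waves;
}
```

Validation and rollback would hang off each wave boundary, which is where the automated cutover checks mentioned above fit in.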

 
