Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Meta just killed native WhatsApp on Windows 11, now it opens WebView, uses 1GB RAM all the time

WhatsApp on Windows 11 has just got a ‘major’ upgrade, and you’re probably going to hate it, because the app now simply loads web.whatsapp.com in a WebView2 container. This means WhatsApp on Windows 11 is cooked, and it’s back to being absolute garbage in terms of performance.

WhatsApp is one of those Windows apps that went from being a web wrapper to a native app and then back to the web again after all these years of investment.

WhatsApp for Windows 11

WhatsApp for Windows was originally an Electron app, and it was eventually replaced with UWP after years of investment. Four years later, WhatsApp is going back to WebView2, abandoning the original WinUI/UWP native idea.

I blame the layoffs

My understanding is that the recent layoffs at Mark Zuckerberg-headed Meta likely disbanded the entire team behind the native WhatsApp. I don’t see any other reason why Meta would abandon its native app for Windows. Meta will save costs by maintaining only the web app codebase on Windows, but you’re going to hate the experience.

How bad is the new WhatsApp for Windows 11?

Our tests showed that the new Chromium/WebView2-based WhatsApp for Windows 11 uses up to 300MB of RAM when you are sitting on the login screen doing nothing. The old/native WhatsApp, on the other hand, uses just 18MB of RAM and even drops below 10MB when left idle on the login screen.

WhatsApp WebView2 RAM usage

After logging in, the new WhatsApp’s memory usage climbed to 2GB while it tried to load all my chats. On average, it used 1.2GB when left idle in the background.

You’ll realise how bad this is when you see the benchmarks for the native WhatsApp for comparison. I tested the old/native app, and it uses just 190MB most of the time, dropping to less than 100MB when it’s completely idle. At worst, it reaches 300MB, and that happens only when a chat is really active.

WhatsApp for Windows RAM usage
“WhatsApp” is the new version and “WhatsApp Beta” is the old UWP/WinUI app in the screenshot

By the looks of things, this new WhatsApp for Windows 11 can touch 3GB RAM if you have too many active conversations.
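
For anyone who wants to reproduce this kind of measurement, here is a rough sketch using Python’s psutil package. The process-name filters are assumptions; the new app also runs msedgewebview2.exe helper processes, so you may need to count those to capture the full footprint.

```python
# Rough sketch: add up the working-set (RSS) memory of every process whose
# name contains a given fragment. Requires: pip install psutil
import psutil

def total_memory_mb(name_fragment: str) -> float:
    total_bytes = 0
    for proc in psutil.process_iter():
        try:
            if name_fragment.lower() in proc.name().lower():
                total_bytes += proc.memory_info().rss
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue
    return total_bytes / (1024 * 1024)

if __name__ == "__main__":
    # Process names are assumptions; check Task Manager for the exact ones.
    for fragment in ("WhatsApp", "msedgewebview2"):
        print(f"{fragment}: {total_memory_mb(fragment):.0f} MB")
```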

It’s absolute garbage, and it should not be allowed in the Microsoft Store. You’re better off using WhatsApp on the web (Edge/Chrome) than updating to or downloading this new WebView2-based app.

In fact, it appears that WhatsApp web (web.whatsapp.com) in any browser is less terrible than this WebView2 container.

New WhatsApp is a performance nightmare

An app using a lot of memory does not necessarily make it a performance nightmare, but the new WhatsApp also feels sluggish. You’re going to notice slow loading times and other performance issues when browsing between conversations.

We also noticed that it does not work well with Windows notifications: it struggles with Windows 11’s Do Not Disturb mode and Active Hours, and notifications are often delayed.

Can you avoid this new WhatsApp upgrade on Windows 11? Yes, but not for long

Windows Latest found that WhatsApp version 2.2584.3.0 replaces the native (WinUI/UWP) app and is rolling out in all regions via the Microsoft Store. Do not download it, and you might still be able to use the native app for the next few days.

WhatsApp native app
Image Courtesy: WindowsLatest.com

However, Windows Latest has learned that all users will be logged out eventually and forced to use the WebView2-based WhatsApp.

This ‘upgrade’ ships as the WhatsApp native experience rolls out on Apple Watch, which has 115 million users, while Windows has over one billion monthly active devices. Clearly, numbers are not always enough, and I am not sure I can really blame Meta when Microsoft itself no longer makes native apps for Windows.

The post Meta just killed native WhatsApp on Windows 11, now it opens WebView, uses 1GB RAM all the time appeared first on Windows Latest

Node.js v24.11.1 (LTS)

Node.js v25.2.0 (Current)

Driving ROI with Azure AI Foundry and UiPath: Intelligent agents in real-world healthcare workflows

Across industries, organizations are moving from experimentation with AI to operationalizing it within business-critical workflows. At Microsoft, we are partnering with UiPath—a preferred enterprise agentic automation platform on Azure—to empower customers with integrated solutions that combine automation and AI at scale.

One example is Azure AI Foundry agents and UiPath agents (built on Azure AI Foundry), orchestrated by UiPath Maestro™ within business processes, ensuring AI insights flow seamlessly into automated workflows that deliver measurable value.

From insight to action: Managing incidental findings in healthcare

In healthcare, where every insight can influence a life, the ability of AI to connect information and trigger timely action is especially transformative. Incidental findings in radiology reports—unexpected abnormalities uncovered during imaging studies like CT or MRI scans—represent one of the most challenging and overlooked gaps in patient care.

As the volume of patient data grows, overlooked incidental findings outside the original imaging scope can delay care, raise costs, and increase liability risks.

This is where AI steps in. In this workflow, Azure AI Foundry agents and UiPath agents—orchestrated by UiPath Maestro™—work together to operationalize this process in healthcare:

  1. Radiology reports are generated and finalized in existing systems.
  2. UiPath medical record summarization (MRS) agents review reports, flagging incidental findings.
  3. Azure AI Foundry imaging agents analyze historical PACS images and radiology data, comparing prior results with the current incidental findings.
  4. UiPath agents aggregate all results—including pertinent EMR history, prior imaging, and AI-generated imaging insights—into a comprehensive follow-up report.
  5. The aggregated information is forwarded to the original ordering care provider in addition to the primary radiology report, eliminating the need to manually comb through the chart and prior exams for pertinent information. This creates both a secondary notification of the incidental finding and puts the summarized, relevant patient information in the clinicians’ hands, efficiently supporting the provision of safe, timely care.
  6. UiPath Maestro™ orchestrates the business process, routing the consolidated packet to the ordering physician or specialist for next steps.

The combination of UiPath and Azure AI Foundry agents turns siloed data into precise documentation that can be used to create actionable care pathways—accelerating clinical decision making, reducing physician workload, and improving patient outcomes.
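
The orchestration itself is modeled in UiPath Maestro rather than written by hand, but as a rough mental model of the six steps above, here is a hypothetical Python sketch with mocked agent calls. None of the function names or data shapes below are real UiPath or Azure AI Foundry APIs; they only illustrate the ordering and hand-offs.

```python
# Hypothetical sketch of the incidental-findings flow. Every function is a mock
# standing in for an agent that UiPath Maestro would orchestrate.

def summarize_report(report_text: str) -> dict:
    """UiPath MRS agent (step 2): flag incidental findings in a finalized report."""
    return {"incidental_findings": ["4 mm pulmonary nodule"], "report": report_text}

def analyze_prior_imaging(patient_id: str, findings: list) -> dict:
    """Azure AI Foundry imaging agent (step 3): compare prior PACS studies with current findings."""
    return {"patient_id": patient_id, "prior_studies": 2, "interval_change": False, "relevant_to": findings}

def build_followup_packet(summary: dict, imaging: dict, emr_history: str) -> dict:
    """UiPath agent (step 4): aggregate EMR history, prior imaging, and AI insights."""
    return {"summary": summary, "imaging": imaging, "emr_history": emr_history}

def route_to_provider(packet: dict, ordering_physician: str) -> None:
    """Maestro routing (steps 5-6): deliver the packet alongside the primary report."""
    print(f"Follow-up packet sent to {ordering_physician}: "
          f"{packet['summary']['incidental_findings']}")

# Step 1 happens upstream: a radiology report is finalized in the existing system.
report = "CT chest without contrast: ... incidental 4 mm right upper lobe nodule noted ..."
summary = summarize_report(report)
if summary["incidental_findings"]:
    imaging = analyze_prior_imaging("patient-123", summary["incidental_findings"])
    packet = build_followup_packet(summary, imaging, emr_history="No prior oncology history on file.")
    route_to_provider(packet, ordering_physician="the ordering provider on record")
```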

This scenario is enabled by:

  • UiPath Maestro™: Orchestrates complex workflows that span multiple agents, systems, and data sources; and integrates natively with Azure AI Foundry and UiPath Agents, providing tracing capabilities that create business trust in underlying AI agents.
  • UiPath agents: Extract and summarize structured and unstructured data from EMRs, reports, and historical records.
  • Azure AI Foundry agents: Analyze medical images and generate AI-powered diagnostic insights with healthcare-specific models on Azure AI Foundry that provide secure data access through DICOMweb APIs and FHIR standards, ensuring compliance and scalability.

Together, this creates an agentic ecosystem on Azure where AI insights are not isolated but operationalized directly within end-to-end business processes.

Delivering customer value

By embedding AI into automated workflows, customers see tangible ROI:

  • Improved outcomes: Faster detection and follow-up on incidental findings.
  • Efficiency gains: Automated data collection, summarization, and reporting reduce manual physician workload.
  • Cost savings: Early detection helps prevent expensive downstream interventions.
  • Trust and compliance: Built on Azure & UiPath’s security, privacy, and healthcare data standards.

This is the promise of combining enterprise-grade automation with enterprise-ready AI.

What customers are saying about AI automation in healthcare

AI-powered automation is redefining how healthcare operates. At Mercy, we are beginning to partner with Microsoft and UiPath which will allow us to move beyond data silos and create intelligent workflows that truly serve patients. This is the future of care—where insights instantly translate into action.

Robin Spraul, Automation Manager-Automation Opt & Process Engineering, Mercy

Partnership perspectives

With UiPath Maestro and Azure AI Foundry working together, we’re helping enterprises operationalize AI across workflows that matter most. This is how we turn intelligence into impact.

Asha Sharma, Corporate Vice President, Azure AI Platform

Healthcare is just the beginning. UiPath and Microsoft are empowering organizations everywhere to unlock ROI by bringing automation and AI together in real-world business processes.

Graham Sheldon, Chief Product Officer, UiPath

Looking ahead

This healthcare scenario is one of many where UiPath and Azure AI Foundry are transforming operations. From finance to supply chain to customer service, organizations can now confidently scale AI-powered automation with UiPath Maestro™ on Azure.

At Microsoft, we believe AI is only as valuable as the outcomes it delivers. Together with UiPath, we are enabling enterprises to achieve those outcomes today.

The post Driving ROI with Azure AI Foundry and UiPath: Intelligent agents in real-world healthcare workflows appeared first on Microsoft Azure Blog.

Powering Distributed AI/ML at Scale with Azure and Anyscale

The path from prototype to production for AI/ML workloads is rarely straightforward. As data pipelines expand and model complexity grows, teams can find themselves spending more time orchestrating distributed compute than building the intelligence that powers their products. Scaling from a laptop experiment to a production-grade workload still feels like reinventing the wheel. What if scaling AI workloads felt as natural as writing in Python itself? That’s the idea behind Ray, the open-source distributed computing framework born at UC Berkeley’s RISELab, and now, it’s coming to Azure in a whole new way.

Today, at Ray Summit, we announced a new partnership between Microsoft and Anyscale, the company founded by Ray’s creators, to bring Anyscale’s managed Ray service to Azure as a first-party offering in private preview. This new managed service will deliver the simplicity of Anyscale’s developer experience on top of Azure’s enterprise-grade Kubernetes infrastructure, making it possible to run distributed Python workloads with native integrations, unified governance, and streamlined operations, all inside your Azure subscription.

Ray: Open-Source Distributed Computing for Python
Ray reimagines distributed systems for the Python ecosystem, making it simple for developers to scale code from a single laptop to a large cluster with minimal changes. Instead of rewriting applications for distributed execution, Ray offers Pythonic APIs that allow functions and classes to be transformed into distributed tasks and actors without altering core logic. Its smart scheduling seamlessly orchestrates workloads across CPUs, GPUs, and heterogeneous environments, ensuring efficient resource utilization.
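
For readers who have not used Ray before, here is a minimal, self-contained sketch of the task and actor APIs described above. It assumes only `pip install ray` and runs locally; on a cluster, `ray.init()` would be pointed at the cluster address instead.

```python
import ray

ray.init()  # local Ray runtime; pass a cluster address here to scale out

# A task: an ordinary function turned into a distributed unit of work.
@ray.remote
def square(x: int) -> int:
    return x * x

# An actor: an ordinary class whose instance lives on a worker and holds state.
@ray.remote
class Counter:
    def __init__(self) -> None:
        self.total = 0

    def add(self, value: int) -> int:
        self.total += value
        return self.total

# .remote() schedules work and returns futures; ray.get() collects the results.
futures = [square.remote(i) for i in range(8)]
print(ray.get(futures))  # [0, 1, 4, 9, 16, 25, 36, 49]

counter = Counter.remote()
for result in ray.get(futures):
    counter.add.remote(result)
print(ray.get(counter.add.remote(0)))  # running total: 140
```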

Developers can also build complete AI systems using Ray’s native libraries—Ray Train for distributed training, Ray Data for data processing, Ray Serve for model serving, and Ray Tune for hyperparameter optimization—all fully compatible with frameworks like PyTorch and TensorFlow. By abstracting away infrastructure complexity, Ray lets teams focus on model performance and innovation.
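
As a small taste of those libraries, the sketch below uses Ray Data for a distributed preprocessing step. It assumes a recent Ray 2.x release, where `ray.data.range()` produces rows shaped like `{"id": <int>}`; the transformation itself is arbitrary.

```python
import ray

ray.init()

# Build a distributed dataset of 10,000 rows and transform it in parallel.
ds = ray.data.range(10_000)  # rows look like {"id": 0}, {"id": 1}, ...
ds = ds.map(lambda row: {"id": row["id"], "squared": row["id"] ** 2})

print(ds.take(3))   # first few transformed rows
print(ds.count())   # 10000
```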

Anyscale: Enterprise Ray on Azure
Ray makes distributed computing accessible; Anyscale running on Azure takes it to the next level of enterprise readiness. At the heart of this offering is RayTurbo, Anyscale’s high-performance runtime for Ray. RayTurbo is designed to maximize cluster efficiency and accelerate Python workloads, enabling teams on Azure to:

  • Spin up Ray clusters in minutes, without Kubernetes expertise, directly from the Azure portal or CLI.
  • Dynamically allocate tasks across CPUs, GPUs, and heterogeneous nodes, ensuring efficient resource utilization and minimizing idle time.
  • Run large experiments quickly and cost-effectively with elastic scaling, GPU packing, and native support for Azure spot VMs.
  • Run reliably at production scale with automatic fault recovery, zero-downtime upgrades, and integrated observability.
  • Maintain control and governance; clusters run inside your Azure subscription, so data, models, and compute stay secure, with unified billing and compliance under Azure standards.

By combining Ray’s flexible APIs with Anyscale’s managed platform and RayTurbo’s performance, Python developers can move from prototype to production faster, with less operational overhead, and at cloud scale on Azure.

Kubernetes for Distributed Computing
Under the hood, Azure Kubernetes Service (AKS) powers this new managed offering, providing the infrastructure foundation for running Ray at production scale. AKS handles the complexity of orchestrating distributed workloads while delivering the scalability, resilience, and governance that enterprise AI applications require.

AKS delivers:

  • Dynamic resource orchestration: Automatically provision and scale clusters across CPUs, GPUs, and mixed configurations as demand shifts.
  • High availability: Self-healing nodes and failover keep workloads running without interruption.
  • Elastic scaling: Scale from development clusters to production deployments spanning hundreds of nodes.
  • Integrated Azure services: Native connections to Azure Monitor, Microsoft Entra ID, Blob Storage, and policy tools streamline governance across IT and data science teams.

AKS gives Ray and Anyscale a strong foundation—one that’s already trusted for enterprise workloads and ready to scale from small experiments to global deployments.

Enabling teams with Anyscale running on Azure
With this partnership, Microsoft and Anyscale are bringing together the best of open-source Ray, managed cloud infrastructure, and Kubernetes orchestration. By pairing Ray’s distributed computing platform for Python with Anyscale’s management capabilities and AKS’s robust orchestration, Azure customers gain flexibility in how they can scale AI workloads. Whether you want to start small with rapid experimentation or run mission-critical systems at global scale, this offering gives you the choice to adopt distributed computing without the complexity of building and managing infrastructure yourself.

You can leverage Ray’s open-source ecosystem, integrate with Anyscale’s managed experience, or combine both with Azure-native services, all within your subscription and governance model. This optionality means teams can choose the path that best fits their needs: prototype quickly, optimize for cost and performance, or standardize for enterprise compliance.

Together, Microsoft and Anyscale are removing operational barriers and giving developers more ways to innovate with Python on Azure, so they can move faster, scale smarter, and focus on delivering breakthroughs. Read the full release here.

Get started
Learn more about the private preview and how to request access at https://aka.ms/anyscale or subscribe to Anyscale in the Azure Marketplace.

The post Powering Distributed AI/ML at Scale with Azure and Anyscale appeared first on Microsoft Azure Blog.

Build Your Own Custom Copilots with Microsoft Copilot Studio and Oracle Database@Azure

Enterprises have long relied on Oracle Databases to run mission-critical workloads across finance, HR, supply chain, manufacturing, and many other sectors. Now, with Oracle Database@Azure, they can modernize these workloads directly within the Microsoft Cloud, combining Oracle’s proven reliability and performance with Azure’s scalability, security, and AI-driven innovation.

The next phase of this evolution brings AI copilots closer to enterprise data. With Microsoft Copilot Studio, organizations can now easily build custom copilots that securely connect to live Oracle data, providing natural language access to insights and actions within Microsoft Teams, Microsoft 365, or any business application. 

 

From Data Modernization to AI Innovation 

Oracle Database@Azure provides customers the flexibility to run Oracle workloads on Azure using Exadata, Autonomous, Exascale, or Base Database services without compromising performance or compliance. Once this data foundation is established, Copilot Studio becomes the bridge that turns structured Oracle data into conversational, intelligent experiences. 

Within minutes, organizations can design copilots that interpret business questions, query live Oracle data, and return contextual insights. No dashboards, no manual queries, no waiting for weekly or monthly reports: just trusted intelligence from the source of truth.

This integration transforms how employees interact with enterprise systems, shifting from static reporting to dynamic, AI-driven dialogue in real time. 

 

Enterprise-Grade Security, Built In 

Security and compliance are at the core of every copilot built in Copilot Studio. Each copilot automatically inherits Microsoft’s trusted governance framework: 

  • Microsoft Entra ID ensures identity protection through Multi Factor Authentication, conditional access, and centralized authentication. 
  • Microsoft Purview extends data governance with sensitivity labels, Data Loss Prevention (DLP) policies, and audit trails even for AI conversations. 
  • Role-Based Access Control (RBAC) enables granular control over who can create, modify, or publish copilots, aligning with enterprise compliance policies. 
  • Microsoft Defender and Microsoft Sentinel further strengthen this ecosystem by monitoring Oracle Database@Azure environments for threats, vulnerabilities, and anomalous behavior, enabling proactive detection and response. 

These layers ensure that copilots accessing Oracle Database@Azure or hybrid Oracle environments remain secure, governed, and compliant by design, which is critical for regulated industries such as finance, healthcare, and telecommunications.

 

How to Get Started 

Creating an Oracle data powered copilot in Copilot Studio is simple and fast: 

1. Create a new copilot in Microsoft Copilot Studio by selecting the “New Agent” option. 

2. Connect your Oracle Database as a knowledge source through a secure connection: Add knowledge → Oracle Database. 

After the connection is established, the available tables within the schema will appear for selection. Once you select the desired tables, the copilot becomes grounded with relevant data from your Oracle Database. 

3. Define topics and natural-language prompts that map to live queries.

4. Apply Purview labels and Entra ID access policies to enforce governance. 

5. Publish directly into Microsoft Teams or Microsoft 365 for business-user access and collaboration. 

This workflow allows teams to move from structured data to governed, conversational intelligence, all within Microsoft’s AI-native ecosystem.

 

Example: Sales Insights Copilot 

A finance department using Oracle Database@Azure can rapidly build a Sales Insights Copilot to simplify performance tracking and decision-making. 

When a leader asks: 

What are the top five regions driving profit growth this quarter? 

The copilot connects securely to Oracle data using the connector, executes the query, and delivers the answer instantly in Microsoft Teams, Microsoft 365, or any other supported channel. It respects existing data-governance controls and never exposes sensitive information. The result is faster decision-making and greater agility without the overhead of new dashboards or manual analysis.
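
Copilot Studio’s Oracle connector generates and executes the query on the user’s behalf, but for intuition, the question above maps to roughly the SQL below. This sketch runs an equivalent query directly with the python-oracledb driver; the connection details and the table and column names (sales_by_region, region, profit_growth, quarter) are entirely hypothetical.

```python
# Hypothetical illustration only: in practice, Copilot Studio's Oracle connector
# issues the query for you. Table and column names below are made up.
import oracledb

conn = oracledb.connect(user="report_user", password="********",
                        dsn="dbhost.example.com/orclpdb")

sql = """
    SELECT region, profit_growth
      FROM sales_by_region
     WHERE quarter = :quarter
     ORDER BY profit_growth DESC
     FETCH FIRST 5 ROWS ONLY
"""

with conn.cursor() as cursor:
    cursor.execute(sql, quarter="2025-Q4")  # bind the quarter by name
    for region, growth in cursor:
        print(region, growth)

conn.close()
```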

Why It Matters 

Together, Oracle Database@Azure and Microsoft Copilot Studio enable organizations to go beyond modernization to innovation. They empower enterprises to:

  • Run Oracle workloads on Azure with low latency and enterprise-grade performance. 
  • Build AI copilots that bring Oracle data to life through conversational access. 
  • Maintain security, privacy, and compliance through Purview, Entra ID, and RBAC. 

By uniting trusted Oracle data with Microsoft’s AI-native tools, enterprises can reimagine how data is consumed, turning it into actionable, intelligent copilots that operate securely at scale.

 

Get Started Today 

Now is the time to modernize your Oracle environments with Oracle Database@Azure, delivering greater agility, AI-driven insights, and enterprise-grade security.  

Click here to get started today and connect with your local Microsoft sales team.  

We encourage you to try out these new features and let us know what you think:  

Follow our LinkedIn community for the latest updates, feature highlights, and best practices on building AI copilots with Microsoft Copilot Studio, Azure AI Foundry, and Oracle Database@Azure.
