Towards Humanist Superintelligence | Microsoft AI

A humanist future

Here’s a question that’s not getting the attention it deserves: what kind of AI does the world really want? I think it’s probably the most important question of our time.

For several years now, progress has been phenomenal. We’re breezing past the great milestones. The Turing Test, a guiding inspiration for many in the field for 70 years, was effectively passed without fanfare and with hardly any acknowledgement. With the arrival of thinking and reasoning models, we’ve crossed an inflection point on the journey towards superintelligence. If AGI is the point at which an AI can match human performance at all tasks, then superintelligence is the point at which it can go far beyond that performance.

Instead of endlessly debating capabilities or timing, it’s time to think hard about the purpose of technology, what we want from it, what its limitations should be, and how we’re going to ensure this incredible tech always benefits humanity.

At Microsoft AI, we’re working towards Humanist Superintelligence (HSI): incredibly advanced AI capabilities that always work for people, in service of humanity more generally. We think of it as systems that are problem-oriented and tend towards the domain-specific. Not an unbounded and unlimited entity with high degrees of autonomy, but AI that is carefully calibrated, contextualized, and kept within limits. We want to explore and prioritize how the most advanced forms of AI can keep humanity in control while accelerating our path towards tackling our most pressing global challenges.

To do this we have formed the MAI Superintelligence Team, led by me as part of Microsoft AI. We want it to be the world’s best place to research and build AI, bar none. I call it humanist superintelligence to make clear this isn’t about some directionless technological goal, an empty challenge, a mountain for its own sake. We are doing this to solve real, concrete problems, and to do it in such a way that the technology remains grounded and controllable. We are not building an ill-defined and ethereal superintelligence; we are building a practical technology explicitly designed only to serve humanity.

In doing this we reject narratives about a race to AGI, and instead see it as part of a wider and deeply human endeavour to improve our lives and future prospects. We also reject binaries of boom and doom; we’re in this for the long haul to deliver tangible, specific, safe benefits for billions of people. We feel a deep responsibility to get this right.

The story of humanism is one of an enduring ability to fight off orthodoxy, totalitarian tendencies and pessimism, and to preserve human dignity and the freedom to reason in pursuit of moral human progress. In that spirit, we think this approach will help humanity unlock almost all the benefits of AI, while avoiding the most extreme risks.

Climbing the exponential slope

The rate of progress has been eye-watering. This year it feels like everyone in AI is talking about the dawn of superintelligence. Such a system will have an open-ended capacity for “learning to learn”, the ultimate meta-skill. It would therefore likely continue improving, going far beyond human-level performance across all conceivable activities. It will be more valuable than anything we’ve ever known.

But to what end?

The prize for humanity is enormous. A world of rapid advances in living standards and science, and a time of new art forms, culture and growth. It’s a truly inspiring mission, and one that has motivated me for decades. We should celebrate and accelerate technology because it’s been the greatest engine of human progress in history. That’s why we need much, much more of it.

In the last 250 years, our intelligence has driven a beautiful process of scientific discovery and entrepreneurial application that has more than doubled life expectancy, from 30 to 75. It’s our intelligence, and the technologies we’ve invented, that has delivered food, light, shelter, healthcare, entertainment and knowledge to a population that grew from one billion to eight billion people in that period.

It’s technology that enables us to fly around the globe, treat an infection with antibiotics, stare into the furthest reaches of outer space, and, yes, share a cat meme with millions of people we’ve never met. Walk into any modern supermarket, hospital, school or office and what you’re seeing is a marvel of human ingenuity. AI is the next phase in this journey. This is what Satya means when he talks about increasing global GDP growth to 10%; a transformative boost. As a platform of platforms, this is core to Microsoft’s mission of enabling others to create and invent at global scale.

When you hear about AI, then, this is what it’s worth keeping in mind. This is about making us collectively the best version of ourselves. AI is the path to better healthcare for everyone. AI is how our society levels up, escapes an increasingly zero-sum world. It’s how we grow the economy to increase wealth broadly, and enable a higher standard of living across society. Or let me put it another way: take AI out of the picture and the gains over the next decades look much harder to come by. It’s the next step on the long road of human creativity and invention, pushing the boundaries of what we can make, think and do. It’s how we discover new kinds of energy generation, new modes of entertainment.

AI – HSI – is how we rebuild.

Containment is necessary

At the same time we have to ask ourselves, how are we going to contain (secure and control), let alone align (make it “care” enough about humans not to harm us) a system that is – by design – intended to keep getting smarter than us? We simply don’t know what might emerge from autonomous, constantly evolving and improving systems that know every aspect of our science and society.

And since this kind of superintelligence can continuously improve itself, we’ll need to contain and align it not just once, but constantly, in perpetuity.

And it gets more complicated. It’s not just the “we” in today’s frontier AI research labs that has to do it. All of humanity needs to do it, together, all the time. Every commercial lab, every startup, every government needs to be constantly alert and engaged in a project of alignment and containment, and that’s before we even deal with the bad actors and the crazy garage tinkerers.

No AI developer, no safety researcher, no policy expert, no person I’ve encountered has a reassuring answer to this question. How do we guarantee it’s safe? If you think that’s overly dramatic, I’d love to hear your rebuttal. Perhaps I’m missing something.

Creating superintelligence is one thing; but creating provable, robust containment and alignment alongside it is the urgent challenge facing humanity in the 21st century. And until we have that answer, we need to understand all the avenues facing us – both towards and away from superintelligence, or perhaps to an altogether alternative form of it.

The purpose of technology

Technology’s purpose is to help advance human civilization. It should help everyone live happier, healthier lives. It should help us invent a future where humanity and our environment truly prosper.

I think Albert Einstein put it best when he said: “The concern for man and his destiny must always be the chief interest of all technical effort… in order that the creations of our mind shall be a blessing and not a curse to mankind.”

Any technology that doesn’t achieve this is a failure. And we should reject it.

That remains the test of the coming wave of superintelligence and it’s the question we must ask over and over: how do we know, for sure, that this technology will do much more good than harm? As we get closer to superintelligence in the coming years, how certain are we that we won’t lose control? And who makes that assessment? And most importantly, amid the uncertainty of that question, what kind of superintelligence should we build, with what limitations and guardrails?

These questions are central to everything we do at the MAI Superintelligence Team and guide us day to day as we make decisions. The core, long term interests of human beings should be clearly prioritized over any research and development agenda.

Towards humanist superintelligence

I think we technologists need to do a better job of imagining a future that most people in the world actually want to live in.

Humanist superintelligence (HSI) offers an alternative vision, anchored in both a non-negotiable human-centrism and a commitment to accelerating technological innovation… but in that order. The order is key. It means proactively avoiding harm first, and then accelerating.

Instead of being designed to beat all humans at all tasks and dominate everything, HSI starts from specific societal challenges where AI can improve human well-being. Our recent paper on expert AI medical diagnosis is a good directional example of this (more on it below).

That work clearly shows progress towards a medical superintelligence, and when it makes its way into production it will be truly transformational. And yet, because it’s envisaged as a more focused series of domain-specific superintelligences, it poses less severe alignment and containment challenges.

Quite simply, HSI is built to get all the goodness of science and invention without the “uncontrollable risks” part. It is, we hope, a common-sense approach to the field.

It may seem absurd to have to declare it, but HSI is a vision to ensure humanity remains at the top of the food chain. It’s a vision of AI that’s always on humanity’s side. That always works for all of us. That helps support and grow human roles, not take them away; that makes us smarter, not the opposite, as some increasingly fear. That always serves our interests, makes our planet healthier and wealthier, and protects our fragile natural environment, regardless of the status of frontier safety and alignment research.

We owe it to the future to deliver a palpably improved world from the one we inherited. Sometimes it’s easy to overlook the amazing things technology has already delivered. When you put a jacket on because the office AC is too low or get frustrated by the lines at airport check-in during the holidays or agonize about what to watch on your smart TV: that’s the extraordinary privilege afforded to us by technology. Each moment would have bewildered our ancestors. And so would our grumbling. If we get this right, something similar is possible again.

Where Humanist Superintelligence will count

Here are three application domains that inspire us at Microsoft AI. There are, however, many more, and I’ll be outlining them in future.

An AI companion for everyone – Everyone who wants one will have a perfect and cheap AI companion helping them learn, act, be productive and feel supported. Many of us feel ground down by the everyday mental load; overwhelmed and distracted; rattled by a persistent drumbeat of information and pressures that never seems to stop. If we get it right, an AI companion will help shoulder that load, get things done, and be a personal and creative sounding board. AI companions will be personalized, adapting to the contours of your life but not afraid to push back in your best interests; built to support, rather than replace, human connection; and designed with trust and responsibility at their heart.

AI companions will also have a profound impact on how we learn. They’ll work with the strengths and weaknesses of every student, alongside teachers, to ensure each student can achieve their full potential, and they’ll encourage intellectual curiosity. That means tailored learning methods, adaptive curricula, completely customized exercises. “One size fits all” education will seem as bizarre to the next generation as rote-learning Latin does to us.

Medical Superintelligence – We will see the arrival of medical superintelligence in the next few years. This is the kind of domain-specific humanist superintelligence we need more than anything. We’ll have expert-level performance across the full range of diagnostics, alongside highly capable planning and prediction in operational clinical settings. For as long as I’ve been working in AI, solving this challenge has been my passion. It will mean world-class clinical knowledge, intervention and treatment are available everywhere.

As I mentioned above, our recent work demonstrates the value of this narrower form of domain-specific superintelligence. The New England Journal of Medicine includes a Case Challenge in every issue – a list of symptoms and a patient to diagnose. It’s fiendishly difficult, with pass rates in the low single digits even for domain experts, let alone the average doctor. Our orchestrator, MAI-DxO, managed to reach 85% across the Case Challenges. Human doctors max out at about 20%, and need to order many more expensive tests. In our view, clinicians and patients alike would welcome the extra support. This work only hints at the potential to revolutionize healthcare.

Plentiful clean energy – Energy drives the cost of everything. We need more of it, more cheaply and more cleanly. Electricity consumption is estimated to rise 34% through 2050, driven in no small part by rising datacentre demand. I predict we will have cheap and abundant renewable generation and storage before 2040, and AI will play a big part in delivering it. It will help create and manage the workflows that design and deploy new scientific breakthroughs. These advances will help produce everything from new carbon-negative materials, to far cheaper and lighter batteries, to far more efficient utilization of existing resources like grid infrastructure, water systems, manufacturing processes and supply chains. It will suggest and help implement viable carbon removal strategies at meaningful scale. And AI will also help push the breakthroughs that finally crack fusion power.

These breakthroughs alongside many others are coming with HSI, and they’ll profoundly improve our civilization. They will make a transformative difference to billions of people. This next decade may well be the most productive in history. And yet, the risks are growing faster than ever before.

A safer superintelligence

Alongside spelling out very precisely the kind of superintelligence we should build, the time has come to also consider what societal boundaries, norms and laws we want around this process. At MAI this is a discussion, and a set of actions, that we welcome.

Doing this requires real trade-offs and tough decisions, made in an environment of immense competitive pressure and immense opportunity. There are numerous challenges and obstacles to delivering the vision while avoiding the downsides, including recruitment, security, mindset, the structure of the market, and the calibration of optimal research paths that steer between harnessing the upside and avoiding the harms. There is at present a collective action problem: less safe models of superintelligence can potentially be developed faster and operate more freely.

Overcoming this, as with all such problems, is an immense challenge that will require meaningful coordination across companies, governments and beyond. But it starts, I believe, with a willingness to be open about our vision, and open to conversations with others in the field, with regulators, and with the public. That’s why I’m publishing this – to start a process, and to make clear that we are not building a superintelligence at any cost, with no limits. There’s a lot more to say (and of course do) on all of it, and over the next months and years you can expect more from me and MAI as we candidly explain and explore our work in this area.

Humans matter more than AI

Ultimately what HSI requires is an industry shift in approach. Are those building AI optimizing for AI or for humanity, and who gets to judge? At Microsoft AI, we believe humans matter more than AI. We want to build AI that deeply reflects our wider mission to empower every person on the planet.

Humanist superintelligence keeps us humans at the centre of the picture. It’s AI that’s on humanity’s team: a subordinate, controllable AI, one that won’t, and can’t, open Pandora’s box. Contained, value-aligned, safe – these are the basics, but they’re not enough. HSI keeps humanity in the driving seat, always. Optimized for specific domains, with real restrictions on autonomy, my hope is that this approach can avoid some of the risks and leave precious space for human flourishing, for us to keep improving, engaging and trying, as we always have.

Unlocking the true benefits of the most advanced forms of AI is not something we can do alone. Accountability and oversight are to be welcomed when the stakes are this high. Superintelligence could be the best invention ever – but only if it puts the interests of humans above everything else. Only if it’s in service to humanity.

This – humanist, applied – is the superintelligence I believe the world wants. It’s the superintelligence I want to build. And it’s the superintelligence we’re going to build on the MAI Superintelligence Team.

Meta’s chief AI scientist Yann LeCun reportedly plans to leave to build his own startup

Yann LeCun, Meta’s chief AI scientist, is reportedly planning to leave the company to build his own startup, which will focus on continuing his work on world models.

Celebrities Fight Sora + Amazon’s Secret Automation Plans + ChatGPT Gets a Browser

“Now we are just seeing OpenAI do the full Facebook when it comes to content policy.”

With AI, What Should the Government's Role Be?

From: AIDailyBrief
Duration: 15:10
Views: 1,123

Brought to you by:
KPMG – Go to www.kpmg.us/ai to learn more about how KPMG can help you drive value with our AI solutions.
Vanta – Simplify compliance – https://vanta.com/nlw

The AI Daily Brief helps you understand the most important news and discussions in AI.
Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Get it ad free at
Join our Discord: https://bit.ly/aibreakdown

CNCF Launches Certified Kubernetes AI Conformance Program to Standardize AI Workloads on Kubernetes

New initiative targets cloud native AI portability and reliability across environments

Key Highlights

  • CNCF and the Kubernetes open source community are launching the Certified Kubernetes AI Conformance Program to create open, community-defined standards for running AI workloads on Kubernetes.
  • As organizations increasingly move AI workloads into production, they need consistent and interoperable infrastructure. This initiative helps reduce fragmentation and ensures reliability across environments.
  • Platform vendors, infrastructure teams, enterprise AI practitioners, and open source contributors in Kubernetes and across the cloud native ecosystem need a common foundation for interoperable and production-ready AI deployments.
  • The program is now available and being developed in the open to encourage broader participation. 

KUBECON + CLOUDNATIVECON NORTH AMERICA, ATLANTA — Nov. 11, 2025 — The Cloud Native Computing Foundation® (CNCF®), which builds sustainable ecosystems for cloud native software, today announced the launch of the Certified Kubernetes AI Conformance Program at KubeCon + CloudNativeCon North America. The new program introduces a community-led effort to define and validate standards for running artificial intelligence (AI) workloads reliably and consistently on Kubernetes.

The program outlines a minimum set of capabilities and configurations required to run widely used AI and machine learning frameworks on Kubernetes. The initiative seeks to give enterprises confidence in deploying AI on Kubernetes while providing vendors a common baseline for compatibility.

Announced in its beta phase at KubeCon + CloudNativeCon Japan in June, the Kubernetes AI Conformance Program has successfully certified its initial participants with a v1.0 release, and work has started on a roadmap for a v2.0 release next year.

The growing use of Kubernetes for AI workloads highlights the importance of common standards. According to Linux Foundation Research on Sovereign AI, 82% of organizations are already building custom AI solutions, and 58% use Kubernetes to support those workloads. With 90% of enterprises identifying open source software as critical to their AI strategies, the risk of fragmentation, inefficiencies, and inconsistent performance is rising. The Certified Kubernetes AI Conformance Program responds directly to this need by providing shared standards for AI on Kubernetes.

“As AI in production continues to scale and take advantage of multiple clouds and systems, teams need consistent infrastructure they can rely on,” said Chris Aniszczyk, CTO of CNCF. “This conformance program will create shared criteria to ensure AI workloads behave predictably across environments. It builds on the same successful community-driven process we’ve used with Kubernetes to bring consistency across more than 100 Kubernetes systems as AI adoption scales.”

The Certified Kubernetes AI Platform Conformance Program builds on CNCF’s ongoing efforts to support work that ensures consistency and portability in cloud native environments as AI adoption accelerates. It draws from CNCF’s established Certified Kubernetes Conformance Program, which brought together more than 100 certified distributions and platforms across every major cloud, on-premises solution, and vendor offering. The Certified Kubernetes Conformance Program has been instrumental in making Kubernetes a reliable and interoperable solution across the industry.

This new initiative sees CNCF applying its proven model to AI infrastructure. The goal is to reduce confusion and inconsistency by setting clear requirements for running AI tasks that follow Kubernetes principles, using open, standard APIs and interfaces. By creating clear standards, testing for compliance, and building agreement within the community, CNCF aims to speed up the use of AI while also reducing risks. This approach mirrors the successful strategy employed for Kubernetes, now adapted for the rapidly advancing domain of AI.

Supporting Quotes

“Responsible AI depends on clear, trusted standards. They’re what turn innovation into something scalable and real. This certification gives enterprises confidence in deploying AI on Kubernetes and provides vendors a unified framework to ensure their solutions work together. Building on that foundation, Akamai’s Kubernetes-native platform on the Akamai Inference Cloud supports everything from DIY to fully managed cloud-to-edge deployments and turns those standards into production-ready infrastructure built to handle the scale and speed AI inference demands.”

— Alex Chircop, chief architect, Akamai

“The Certified Kubernetes AI Conformance Program is exactly the kind of community-driven standardization effort that helps move the industry forward. At AWS, we’ve long believed that standards are the foundation for true innovation and interoperability—and this is especially critical as customers increasingly scale AI workloads into production. In achieving AI Conformance certification for Amazon EKS, we’re demonstrating our commitment to providing customers with a verified, standardized platform for running AI workloads on Kubernetes. This certification validates our comprehensive AI capabilities, including built-in resource management for GPUs, support for distributed AI workload scheduling, intelligent cluster scaling for accelerators, and integrated monitoring for AI infrastructure. AWS is proud to help establish a foundation for reliable and interoperable AI infrastructure that organizations of all sizes can build upon with confidence. We look forward to contributing to this important initiative.”

— Eswar Bala, director of container services, AWS

“Broadcom is excited to announce that VMware vSphere Kubernetes Service (VKS) is now a Certified Kubernetes AI Conformant Platform. This milestone reinforces our commitment to open standards and helps customers innovate freely, knowing their AI platforms are built on a consistent, interoperable foundation. It’s another example of how Broadcom continues to contribute across the CNCF ecosystem to accelerate community-driven innovation.”

—Dilpreet Bindra, senior director, engineering, VMware Cloud Foundation Division at Broadcom

“This initiative marks an important step forward for the AI ecosystem, aligning the community around shared standards that make deploying AI at scale more consistent and reliable. At CoreWeave, Kubernetes has always been central to how we build. Its flexibility and scalability are what make it possible to deliver the performance and reliability modern AI demands. We’re excited about this milestone—it reflects what we value most: openness, performance, and enabling developers to spend more time innovating.”

—Chen Goldberg, senior vice president, engineering, CoreWeave

“One of our core beliefs at Giant Swarm is empowering end users through open source and community standards. The Kubernetes AI Conformance Program is one of the most timely standardization efforts of the last 10 years. With demand at its peak it is important for end users to be able to rely on standardized platforms to become successful with their AI investments.”

—Puja Abbassi, VP product at Giant Swarm

“Google Cloud has certified for Kubernetes AI Conformance because we believe consistency and portability are essential for scaling AI. By aligning with this standard early, we’re making it easier for developers and enterprises to build AI applications that are production-ready, portable, and efficient—without reinventing infrastructure for every deployment.”

—Jago Macleod, Kubernetes & GKE engineering director at Google Cloud

“The future of AI will be built on open standards, not walled gardens. This conformance program is the foundation for that future, ensuring a level playing field where innovation and portability win.”

—Sebastian Scheele, CEO & co-founder of Kubermatic

“AI is transforming every industry, and Kubernetes is at the center of this shift. The new Kubernetes AI Conformance Program gives customers confidence that providers can run AI workloads reliably in production. Microsoft Azure is pleased to help direct this effort, ensuring Kubernetes delivers the portability, security, and performance businesses need to innovate with AI at scale.” 

—Brendan Burns, corporate vice president, cloud-native compute at Microsoft and co-creator of Kubernetes

“At Oracle, we see open standards as critical for sustainable and scalable AI innovation. The AI/ML space is dynamic, and our users are looking for consistency. By supporting Kubernetes AI Conformance, we’re helping to reduce fragmentation of architectures and technologies to ensure developers and enterprises have a foundation to run production-ready AI/ML workloads reliably and efficiently.”

—Sudha Raghavan, senior vice president, AI Infrastructure, Oracle Cloud Infrastructure and CNCF board member

“Red Hat has always believed that open standards are the key to true enterprise adoption. The Certified Kubernetes AI Conformance Program extends the existing Kubernetes conformance to meet the complex demands of AI/ML workloads. This community-driven standard is essential to preventing vendor lock-in and helping ensure that the AI workloads our customers deploy using Red Hat OpenShift and Red Hat OpenShift AI are truly portable, reliable, and production-ready, whether they run on-premises, across multiple public clouds, or at the edge.”

—Yuan Tang, senior principal software engineer, Red Hat and co-chair of Kubernetes AI Conformance Working Group 

“Conformance is a critical component of an open ecosystem because it allows any company to create compatible software to democratize access to new technologies. Sidero Labs is excited to participate in the Kubernetes AI Conformance program to enable companies large and small to own their critical AI infrastructure.”

—Justin Garrison, head of product, Sidero Labs

The Certified Kubernetes AI Conformance Program is being developed in the open at github.com/cncf/ai-conformance and is guided by the AI Conformance Working Group. The group, operating under an openly published charter, is focused on creating a conformance standard and validation suite to ensure AI workloads on Kubernetes are interoperable, reproducible, and portable. Its scope includes defining a reference architecture, framework support requirements, and test criteria for key capabilities such as GPU integration, volume handling, and job-level networking.

For more context on the initiative’s objectives, see the Kubernetes AI Conformance planning document.

About Cloud Native Computing Foundation
Cloud native computing empowers organizations to build and run scalable applications with an open source software stack in public, private, and hybrid clouds. The Cloud Native Computing Foundation (CNCF) hosts critical components of the global technology infrastructure, including Kubernetes, Prometheus, and Envoy. CNCF brings together the industry’s top developers, end users, and vendors and runs the largest open source developer conferences in the world. Supported by more than 800 members, including the world’s largest cloud computing and software companies, as well as over 200 innovative startups, CNCF is part of the nonprofit Linux Foundation. For more information, please visit www.cncf.io.

###

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page. Linux is a registered trademark of Linus Torvalds.

Media Contact

Kaitlin Thornhill

The Linux Foundation

PR@CNCF.io 

Effectively Monitoring Web Performance

This article is sponsored by DebugBear

There’s no single way to measure website performance. That said, the Core Web Vitals metrics that Google uses as a ranking factor are a great starting point, as they cover different aspects of visitor experience:

  • Largest Contentful Paint (LCP): Measures the initial page load time.
  • Cumulative Layout Shift (CLS): Measures if content is stable after rendering.
  • Interaction to Next Paint (INP): Measures how quickly the page responds to user input.

There are also many other web performance metrics that you can use to track technical aspects, like page weight or server response time. While these often don’t matter directly to the end user, they provide you with insight into what’s slowing down your pages.
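
Both of those technical metrics are exposed by built-in browser APIs. Here’s a minimal sketch (assuming a modern browser; note that transferSize is reported as 0 for cross-origin resources that don’t send a Timing-Allow-Origin header):

```ts
// Read server response time and approximate page weight from the
// built-in Navigation Timing and Resource Timing APIs.
const nav = performance.getEntriesByType(
  "navigation"
)[0] as PerformanceNavigationTiming;

// Time from sending the request until the first byte of the HTML response.
const serverResponseTime = nav.responseStart - nav.requestStart;

// Approximate page weight: bytes transferred for the document plus all subresources.
const resources = performance.getEntriesByType(
  "resource"
) as PerformanceResourceTiming[];
const pageWeightBytes =
  nav.transferSize + resources.reduce((total, r) => total + r.transferSize, 0);

console.log(`Server response time: ${serverResponseTime.toFixed(0)} ms`);
console.log(`Approximate page weight: ${(pageWeightBytes / 1024).toFixed(0)} KiB`);
```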

You can also use the User Timing API to track page load milestones that are important on your website specifically.
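
For example, here’s a minimal User Timing sketch that tracks a hypothetical “product list rendered” milestone (the mark and measure names are made up for illustration):

```ts
// Mark the start of a site-specific task...
performance.mark("product-list-start");

// ...render the product list here...

// ...then mark the end and create a named measure between the two marks.
performance.mark("product-list-rendered");
performance.measure("product-list", "product-list-start", "product-list-rendered");

// The measure shows up in performance tooling and can also be read back directly.
const [measure] = performance.getEntriesByName("product-list", "measure");
console.log(`Product list took ${measure.duration.toFixed(0)} ms`);
```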

Synthetic And Real User Data

There are two different types of web performance data:

  • Synthetic tests are run in a controlled test environment.
  • Real user data is collected from actual website visitors.

Synthetic monitoring can provide super-detailed reports to help you identify page speed issues. You can configure exactly how you want to collect the data, picking a specific network speed, device size, or test location.

Get a hands-on feel for synthetic monitoring by using the free DebugBear website speed test to check on your website.

That said, your synthetic test settings might not match what’s typical for your real visitors, and you can’t script all of the possible ways that people might interact with your website.

That’s why you also need real user monitoring (RUM). Instead of looking at one experience, you see different load times and how specific visitor segments are impacted. You can review specific page views to identify what caused poor performance for a particular visitor.

At the same time, real user data isn’t quite as detailed as synthetic test reports, due to web API limitations and performance concerns.

DebugBear offers both synthetic monitoring and real user monitoring:

  • To set up synthetic tests, you just need to enter a website URL, and
  • To collect real user metrics, you need to install an analytics snippet on your website.
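
To give a feel for what an analytics snippet does under the hood, here is a generic sketch (not DebugBear’s actual snippet) that collects Core Web Vitals with the open source web-vitals library and beacons them to a hypothetical collection endpoint:

```ts
import { onCLS, onINP, onLCP } from "web-vitals";

const RUM_ENDPOINT = "https://rum.example.com/collect"; // hypothetical endpoint

function sendMetric(metric: { name: string; value: number; id: string }) {
  const body = JSON.stringify({ ...metric, page: location.pathname });
  // sendBeacon survives page unloads; fall back to fetch with keepalive.
  if (!navigator.sendBeacon(RUM_ENDPOINT, body)) {
    fetch(RUM_ENDPOINT, { method: "POST", body, keepalive: true });
  }
}

// Each callback fires when the metric value is final (or when the page is hidden).
onLCP(sendMetric);
onCLS(sendMetric);
onINP(sendMetric);
```
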
Three Steps To A Fast Website

Collecting data helps you throughout the lifecycle of your web performance optimizations. You can follow this three-step process:

  1. Identify: Collect data across your website and identify slow visitor experiences.
  2. Diagnose: Dive deep into technical analysis to find optimizations.
  3. Monitor: Check that optimizations are working and get alerted to performance regressions.

Let’s take a look at each step in detail.

Step 1: Identify Slow Visitor Experiences

What’s prompting you to look into website performance issues in the first place? You likely already have some specific issues in mind, whether that’s from customer reports or because of poor scores in the Core Web Vitals section of Google Search Console.

Real user data is the best place to check for slow pages. It tells you whether the technical issues on your site actually result in poor user experience. It’s easy to collect across your whole website (while synthetic tests need to be set up for each URL). And, you can often get a view count along with the performance metrics. A moderately slow page that gets two visitors a month isn’t as important as a moderately fast page that gets thousands of visits a day.

The Web Vitals dashboard in DebugBear’s RUM product checks your site’s performance health and surfaces the most-visited pages and URLs where many visitors have a poor experience.

You can also run a website scan to get a list of URLs from your sitemap and then check each of these pages against real user data from Google’s Chrome User Experience Report (CrUX). However, this will only work for pages that meet a minimum traffic threshold to be included in the CrUX dataset.
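
If you want to query CrUX yourself, Google exposes the dataset through a public API. Here’s a minimal sketch, assuming you have an API key from Google Cloud (the p75 value is the 75th-percentile experience for that page):

```ts
const CRUX_API = "https://chromeuxreport.googleapis.com/v1/records:queryRecord";

// Fetch the 75th-percentile LCP (in milliseconds) for one URL, or null if
// the page doesn't meet CrUX's traffic threshold (the API then returns 404).
async function getCruxLcp(url: string, apiKey: string): Promise<number | null> {
  const res = await fetch(`${CRUX_API}?key=${apiKey}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ url, formFactor: "PHONE" }),
  });
  if (!res.ok) return null;
  const data = await res.json();
  return data.record?.metrics?.largest_contentful_paint?.percentiles?.p75 ?? null;
}
```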

The scan result highlights pages with poor web vitals scores where you might want to investigate further.

If no real-user data is available, you can use a scanning tool called Unlighthouse, which is based on Google’s Lighthouse. It runs synthetic tests for each page and lets you filter the results to identify pages that need to be optimized.

Step 2: Diagnose Web Performance Issues

Once you’ve identified slow pages on your website, you need to look at what’s actually happening on your page that is causing delays.

Debugging Page Load Time

If there are issues with page load time metrics — like the Largest Contentful Paint (LCP) — synthetic test results can provide a detailed analysis. You can also run page speed experiments to try out and measure the impact of certain optimizations.

Real user data can still be important when debugging page speed, as load time depends on many user- and device-specific factors. For example, depending on the size of the user’s device, the page element that’s responsible for the LCP can vary. RUM data can provide a breakdown of possible influencing factors, like CSS selectors and image URLs, across all visitors, helping you zero in on what exactly needs to be fixed.

Debugging Slow Interactions

RUM data is also generally needed to properly diagnose issues related to the Interaction to Next Paint (INP) metric. Specifically, real user data can provide insight into what causes slow interactions, which helps you answer questions like:

  • What page elements are responsible?
  • Is time spent processing already-active background tasks or handling the interaction itself?
  • What scripts contribute the most to overall CPU processing time?

You can view this data at a high level to identify trends, as well as review specific page views to see what impacted a specific visitor experience.
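
Under the hood, this kind of data comes from the browser’s Event Timing API. Here’s a minimal sketch that logs slow interactions and splits each one into input delay, processing time, and presentation delay (the 100 ms threshold is an arbitrary choice for this example):

```ts
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as PerformanceEventTiming[]) {
    const inputDelay = entry.processingStart - entry.startTime; // main thread busy?
    const processing = entry.processingEnd - entry.processingStart; // handler cost
    const presentation = entry.duration - inputDelay - processing; // rendering work

    console.log(
      entry.name, // e.g. "click" or "keydown"
      (entry.target as Element | null)?.tagName, // which element was interacted with
      { inputDelay, processing, presentation }
    );
  }
});

// Only report events slower than 100 ms, including ones from before observe().
observer.observe({ type: "event", durationThreshold: 100, buffered: true });
```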

Step 3: Monitor Performance & Respond To Regressions

Continuous monitoring of your website performance lets you track whether the performance is improving after making a change, and alerts you when scores decline.

How you respond to performance regressions depends on whether you’re looking at lab-based synthetic tests or real user analytics.

Synthetic Data

Test settings for synthetic tests are standardized between runs. While infrastructure changes, like browser upgrades, occasionally shift results, performance is generally determined by the resources the website loads and the code it runs.

When a metric changes, DebugBear lets you view a before-and-after comparison between the two test results. For example, the next screenshot displays a regression in the First Contentful Paint (FCP) metric. The comparison reveals that new images were added to the page, competing for bandwidth with other page resources.

From the report, it’s clear that a CSS file that previously took 255 milliseconds to load now takes 915 milliseconds. Since stylesheets are required to render page content, this means the page now loads more slowly, giving you better insight into what needs optimization.

Real User Data

When you see a change in real user metrics, there can be two causes:

  1. A shift in visitor characteristics or behavior, or
  2. A technical change on your website.

Launching an ad campaign, for example, often increases redirects, reduces cache hits, and shifts visitor demographics. When you see a regression in RUM data, the first step is to find out whether the change happened on your website or in who is visiting it. Check for view count changes across ad campaigns, referrer domains, or network speeds to get a clearer picture.

If those visits show different performance compared to your typical visitors, that suggests the regression is not due to a change on your website. However, you may still need to make changes on your website to better serve these visitor cohorts and deliver a good experience for them.

To identify the cause of a technical change, take a look at component breakdown metrics, such as LCP subparts. This helps you narrow down the cause of a regression, whether it is due to changes in server response time, new render-blocking resources, or the LCP image.
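
As a rough illustration of how LCP subparts are derived, here’s a simplified sketch that computes them from raw browser timing entries (RUM libraries handle edge cases this ignores, such as text-based LCP elements, redirects, and cross-origin timing restrictions):

```ts
new PerformanceObserver((list) => {
  const lcp = list.getEntries().at(-1) as any; // LargestContentfulPaint entry
  const nav = performance.getEntriesByType(
    "navigation"
  )[0] as PerformanceNavigationTiming;
  const ttfb = nav.responseStart; // server response subpart

  // If the LCP element is an image, look up its resource timing entry.
  const resource = lcp.url
    ? (performance.getEntriesByName(lcp.url, "resource")[0] as
        | PerformanceResourceTiming
        | undefined)
    : undefined;

  const loadStart = resource ? Math.max(resource.requestStart, ttfb) : ttfb;
  const loadEnd = resource ? resource.responseEnd : ttfb;

  console.log({
    ttfb,
    resourceLoadDelay: loadStart - ttfb, // waiting before the LCP image starts loading
    resourceLoadTime: loadEnd - loadStart, // downloading the LCP image
    elementRenderDelay: lcp.startTime - loadEnd, // waiting for it to be painted
  });
}).observe({ type: "largest-contentful-paint", buffered: true });
```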

You can also check for shifts in page view properties, like different LCP element selectors or specific scripts that cause poor performance.

Conclusion

One-off page speed tests are a great starting point for optimizing performance. However, a monitoring tool like DebugBear can form the basis for a more comprehensive web performance strategy that helps you stay fast for the long term.

Get a free DebugBear trial on our website!