Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

With AI, What Should the Government's Role Be?

From: AIDailyBrief
Duration: 15:10
Views: 1,123

Brought to you by:
KPMG – Go to www.kpmg.us/ai to learn more about how KPMG can help you drive value with our AI solutions.
Vanta - Simplify compliance - https://vanta.com/nlw

The AI Daily Brief helps you understand the most important news and discussions in AI.
Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Get it ad free at
Join our Discord: https://bit.ly/aibreakdown


CNCF Launches Certified Kubernetes AI Conformance Program to Standardize AI Workloads on Kubernetes


New initiative targets cloud native AI portability and reliability across environments

Key Highlights

  • CNCF and the Kubernetes open source community are launching the Certified Kubernetes AI Conformance Program to create open, community-defined standards for running AI workloads on Kubernetes.
  • As organizations increasingly move AI workloads into production, they need consistent and interoperable infrastructure. This initiative helps reduce fragmentation and ensures reliability across environments.
  • Platform vendors, infrastructure teams, enterprise AI practitioners, and open source contributors in Kubernetes and across the cloud native ecosystem seeking interoperable and production-ready AI deployments need a common foundation.
  • The program is now available and being developed in the open to encourage broader participation. 

KUBECON + CLOUDNATIVECON NORTH AMERICA, ATLANTA — Nov. 11, 2025 — The Cloud Native Computing Foundation® (CNCF®), which builds sustainable ecosystems for cloud native software, today announced the launch of the Certified Kubernetes AI Conformance Program at KubeCon + CloudNativeCon North America. The new program introduces a community-led effort to define and validate standards for running artificial intelligence (AI) workloads reliably and consistently on Kubernetes.

The program outlines a minimum set of capabilities and configurations required to run widely used AI and machine learning frameworks on Kubernetes. The initiative seeks to give enterprises confidence in deploying AI on Kubernetes while providing vendors a common baseline for compatibility.

Announced in its beta phase at KubeCon + CloudNativeCon Japan in June, the Kubernetes AI Conformance Program has successfully certified its initial participants with a v1.0 release and has started work on a roadmap for a v2.0 release next year.

The growing use of Kubernetes for AI workloads highlights the importance of common standards. According to Linux Foundation Research on Sovereign AI, 82% of organizations are already building custom AI solutions, and 58% use Kubernetes to support those workloads. With 90% of enterprises identifying open source software as critical to their AI strategies, the risk of fragmentation, inefficiencies, and inconsistent performance is rising. The Certified Kubernetes AI Conformance Program responds directly to this need by providing shared standards for AI on Kubernetes.

“As AI in production continues to scale and take advantage of multiple clouds and systems, teams need consistent infrastructure they can rely on,” said Chris Aniszczyk, CTO of CNCF. “This conformance program will create shared criteria to ensure AI workloads behave predictably across environments. It builds on the same successful community-driven process we’ve used with Kubernetes to help bring consistency across more than 100 Kubernetes systems as AI adoption scales.”

The Certified Kubernetes AI Conformance Program builds on CNCF’s ongoing efforts to support work that ensures consistency and portability in cloud native environments as AI adoption accelerates. It draws from CNCF’s established Certified Kubernetes Conformance Program, which brought together more than 100 certified distributions and platforms across every major cloud, on-premises solution, and vendor offering. The Certified Kubernetes Conformance Program has been instrumental in making Kubernetes a reliable and interoperable solution across the industry.

This new initiative sees CNCF applying its proven model to AI infrastructure. The goal is to reduce confusion and inconsistency by setting clear requirements for running AI tasks that follow Kubernetes principles, using open, standard APIs and interfaces. By creating clear standards, testing for compliance, and building agreement within the community, CNCF aims to speed up the use of AI while also reducing risks. This approach mirrors the successful strategy employed for Kubernetes, now adapted for the rapidly advancing domain of AI.

Supporting Quotes

“Responsible AI depends on clear, trusted standards. They’re what turn innovation into something scalable and real. This certification gives enterprises confidence in deploying AI on Kubernetes and provides vendors a unified framework to ensure their solutions work together. Building on that foundation, Akamai’s Kubernetes-native platform on the Akamai Inference Cloud supports everything from DIY to fully managed cloud-to-edge deployments and turns those standards into production-ready infrastructure built to handle the scale and speed AI inference demands.”

— Alex Chircop, chief architect, Akamai

“The Certified Kubernetes AI Conformance Program is exactly the kind of community-driven standardization effort that helps move the industry forward. At AWS, we’ve long believed that standards are the foundation for true innovation and interoperability—and this is especially critical as customers increasingly scale AI workloads into production. In achieving AI Conformance certification for Amazon EKS, we’re demonstrating our commitment to providing customers with a verified, standardized platform for running AI workloads on Kubernetes. This certification validates our comprehensive AI capabilities, including built-in resource management for GPUs, support for distributed AI workload scheduling, intelligent cluster scaling for accelerators, and integrated monitoring for AI infrastructure.  AWS is proud to help establish a foundation for reliable and interoperable AI infrastructure that organizations of all sizes can build upon with confidence. We look forward to contributing to this important initiative.”

— Eswar Bala, director of container services, AWS

“Broadcom is excited to announce that  VMware vSphere Kubernetes Service (VKS) is now a Certified Kubernetes AI Conformant Platform. This milestone reinforces our commitment to open standards and helps customers innovate freely, knowing their AI platforms are built on a consistent, interoperable foundation. It’s another example of how Broadcom continues to contribute across the CNCF ecosystem to accelerate community-driven innovation.”

— Dilpreet Bindra, senior director, engineering, VMware Cloud Foundation Division at Broadcom

“This initiative marks an important step forward for the AI ecosystem, aligning the community around shared standards that make deploying AI at scale more consistent and reliable. At CoreWeave, Kubernetes has always been central to how we build. Its flexibility and scalability are what make it possible to deliver the performance and reliability modern AI demands. We’re excited about this milestone—it reflects what we value most: openness, performance, and enabling developers to spend more time innovating.”

—Chen Goldberg, senior vice president, engineering, CoreWeave

“One of our core beliefs at Giant Swarm is empowering end users through open source and community standards. The Kubernetes AI Conformance Program is one of the most timely standardization efforts of the last 10 years. With demand at its peak it is important for end users to be able to rely on standardized platforms to become successful with their AI investments.”

—Puja Abbassi, VP product at Giant Swarm

“Google Cloud has certified for Kubernetes AI Conformance because we believe consistency and portability are essential for scaling AI. By aligning with this standard early, we’re making it easier for developers and enterprises to build AI applications that are production-ready, portable, and efficient—without reinventing infrastructure for every deployment.”

—Jago Macleod, Kubernetes & GKE engineering director at Google Cloud

“The future of AI will be built on open standards, not walled gardens. This conformance program is the foundation for that future, ensuring a level playing field where innovation and portability win.”

—Sebastian Scheele, CEO & co-founder of Kubermatic

“AI is transforming every industry, and Kubernetes is at the center of this shift. The new Kubernetes AI Conformance Program gives customers confidence that providers can run AI workloads reliably in production. Microsoft Azure is pleased to help direct this effort, ensuring Kubernetes delivers the portability, security, and performance businesses need to innovate with AI at scale.” 

— Brendan Burns, corporate vice president, cloud-native compute at Microsoft and co-creator of Kubernetes

“At Oracle, we see open standards as critical for sustainable and scalable AI innovation. The AI/ML space is dynamic, and our users are looking for consistency. By supporting Kubernetes AI Conformance, we’re helping to reduce fragmentation of architectures and technologies to ensure developers and enterprises have a foundation to run production-ready AI/ML workloads reliably and efficiently.”

—Sudha Raghavan, senior vice president, AI Infrastructure, Oracle Cloud Infrastructure and CNCF board member

“Red Hat has always believed that open standards are the key to true enterprise adoption. The Certified Kubernetes AI Conformance Program extends the existing Kubernetes conformance to meet the complex demands of AI/ML workloads. This community-driven standard is essential to preventing vendor lock-in and helping ensure that the AI workloads our customers deploy using Red Hat OpenShift and Red Hat OpenShift AI are truly portable, reliable, and production-ready, whether they run on-premises, across multiple public clouds, or at the edge.”

—Yuan Tang, senior principal software engineer, Red Hat and co-chair of Kubernetes AI Conformance Working Group 

“Conformance is a critical component of an open ecosystem because it allows any company to create compatible software to democratize access to new technologies. Sidero Labs is excited to participate in the Kubernetes AI Conformance program to enable companies large and small to own their critical AI infrastructure.”

—Justin Garrison, head of product, Sidero Labs

The Certified Kubernetes AI Conformance Program is being developed in the open at github.com/cncf/ai-conformance and is guided by the AI Conformance Working Group. The group, operating under an openly published charter, is focused on creating a conformance standard and validation suite to ensure AI workloads on Kubernetes are interoperable, reproducible, and portable. Its scope includes defining a reference architecture, framework support requirements, and test criteria for key capabilities such as GPU integration, volume handling, and job-level networking.

For more context on the initiative’s objectives, see the Kubernetes AI Conformance planning document.

About Cloud Native Computing Foundation
Cloud native computing empowers organizations to build and run scalable applications with an open source software stack in public, private, and hybrid clouds. The Cloud Native Computing Foundation (CNCF) hosts critical components of the global technology infrastructure, including Kubernetes, Prometheus, and Envoy. CNCF brings together the industry’s top developers, end users, and vendors and runs the largest open source developer conferences in the world. Supported by more than 800 members, including the world’s largest cloud computing and software companies, as well as over 200 innovative startups, CNCF is part of the nonprofit Linux Foundation. For more information, please visit www.cncf.io.

###

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page. Linux is a registered trademark of Linus Torvalds.

Media Contact

Kaitlin Thornhill

The Linux Foundation

PR@CNCF.io 


Effectively Monitoring Web Performance


This article is sponsored by DebugBear

There’s no single way to measure website performance. That said, the Core Web Vitals metrics that Google uses as a ranking factor are a great starting point, as they cover different aspects of visitor experience:

  • Largest Contentful Paint (LCP): Measures the initial page load time.
  • Cumulative Layout Shift (CLS): Measures if content is stable after rendering.
  • Interaction to Next Paint (INP): Measures how quickly the page responds to user input.

There are also many other web performance metrics that you can use to track technical aspects, like page weight or server response time. While these often don’t matter directly to the end user, they provide you with insight into what’s slowing down your pages.
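If you want to see how these metrics are surfaced in the browser, the standard PerformanceObserver API exposes the underlying entries. The TypeScript snippet below logs LCP candidates and a simplified CLS total; it is only a sketch of the raw APIs, since RUM tools and libraries such as web-vitals handle session windows, back/forward cache restores, and other edge cases for you.

// Log LCP candidates; the last entry reported before the page is hidden
// is the final LCP for this page view.
new PerformanceObserver((list) => {
  const entries = list.getEntries();
  const latest = entries[entries.length - 1];
  console.log('LCP candidate (ms):', latest.startTime);
}).observe({ type: 'largest-contentful-paint', buffered: true });

// Approximate CLS by summing layout shifts that had no recent user input.
// (The official metric groups shifts into session windows, so this is simplified.)
let clsTotal = 0;
new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as any[]) {
    if (!entry.hadRecentInput) clsTotal += entry.value;
  }
  console.log('CLS so far:', clsTotal);
}).observe({ type: 'layout-shift', buffered: true });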

You can also use the User Timing API to track page load milestones that are important on your website specifically.
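For example, a storefront might mark when its hero image becomes visible and when its product list has rendered, then measure the gap between the two (the mark names below are hypothetical):

// Mark site-specific milestones.
performance.mark('hero-image-visible');

// ...later in the page lifecycle:
performance.mark('product-list-rendered');

// Measure the time between the two marks.
performance.measure('hero-to-product-list', 'hero-image-visible', 'product-list-rendered');

// Measures can be read back (or picked up by a RUM snippet) like any other entry.
for (const entry of performance.getEntriesByType('measure')) {
  console.log(entry.name, Math.round(entry.duration), 'ms');
}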

Synthetic And Real User Data

There are two different types of web performance data:

  • Synthetic tests are run in a controlled test environment.
  • Real user data is collected from actual website visitors.

Synthetic monitoring can provide super-detailed reports to help you identify page speed issues. You can configure exactly how you want to collect the data, picking a specific network speed, device size, or test location.
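To get a sense of what a controlled lab test looks like under the hood, here is a minimal Node sketch using the open-source Lighthouse module and chrome-launcher. This is not how DebugBear runs its tests, and it assumes the lighthouse and chrome-launcher packages are installed.

// Run a single lab test against one URL with a headless Chrome instance.
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

async function runTest(url: string) {
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  try {
    // Device emulation, network throttling, and test location would be pinned
    // down here (or by your monitoring tool) to keep runs comparable.
    const result = await lighthouse(url, {
      port: chrome.port,
      onlyCategories: ['performance'],
    });
    const lcp = result?.lhr.audits['largest-contentful-paint'].numericValue;
    console.log(`${url} LCP: ${Math.round(lcp ?? 0)} ms`);
  } finally {
    await chrome.kill();
  }
}

runTest('https://example.com/').catch(console.error);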

Get a hands-on feel for synthetic monitoring by using the free DebugBear website speed test to check on your website.

That said, your synthetic test settings might not match what’s typical for your real visitors, and you can’t script all of the possible ways that people might interact with your website.

That’s why you also need real user monitoring (RUM). Instead of looking at one experience, you see different load times and how specific visitor segments are impacted. You can review specific page views to identify what caused poor performance for a particular visitor.

At the same time, real user data isn’t quite as detailed as synthetic test reports, due to web API limitations and performance concerns.

DebugBear offers both synthetic monitoring and real user monitoring:

  • To set up synthetic tests, you just need to enter a website URL, and
  • To collect real user metrics, you need to install an analytics snippet on your website (a simplified sketch of what such a snippet does follows below).
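As a rough illustration, a RUM snippet essentially observes the Web Vitals in the visitor's browser and beacons them to a collection endpoint. The sketch below uses the open-source web-vitals library and a hypothetical /rum-collect endpoint; a real product's snippet does considerably more (sampling, sessionization, attribution data, and so on).

import { onLCP, onCLS, onINP } from 'web-vitals';

// Hypothetical collection endpoint; a RUM product provides its own.
const ENDPOINT = '/rum-collect';

function send(metric: { name: string; value: number; id: string }) {
  const body = JSON.stringify({
    name: metric.name,
    value: metric.value,
    id: metric.id,
    page: location.pathname,
  });
  // sendBeacon survives page unloads; fall back to fetch with keepalive.
  if (!navigator.sendBeacon?.(ENDPOINT, body)) {
    fetch(ENDPOINT, { method: 'POST', body, keepalive: true }).catch(() => {});
  }
}

onLCP(send);
onCLS(send);
onINP(send);
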
Three Steps To A Fast Website

Collecting data helps you throughout the lifecycle of your web performance optimizations. You can follow this three-step process:

  1. Identify: Collect data across your website and identify slow visitor experiences.
  2. Diagnose: Dive deep into technical analysis to find optimizations.
  3. Monitor: Check that optimizations are working and get alerted to performance regressions.

Let’s take a look at each step in detail.

Step 1: Identify Slow Visitor Experiences

What’s prompting you to look into website performance issues in the first place? You likely already have some specific issues in mind, whether that’s from customer reports or because of poor scores in the Core Web Vitals section of Google Search Console.

Real user data is the best place to check for slow pages. It tells you whether the technical issues on your site actually result in poor user experience. It’s easy to collect across your whole website (while synthetic tests need to be set up for each URL). And, you can often get a view count along with the performance metrics. A moderately slow page that gets two visitors a month isn’t as important as a moderately fast page that gets thousands of visits a day.

The Web Vitals dashboard in DebugBear’s RUM product checks your site’s performance health and surfaces the most-visited pages and URLs where many visitors have a poor experience.

You can also run a website scan to get a list of URLs from your sitemap and then check each of these pages against real user data from Google’s Chrome User Experience Report (CrUX). However, this will only work for pages that meet a minimum traffic threshold to be included in the CrUX dataset.
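If you want to query that dataset yourself, the CrUX API accepts a URL and form factor and returns field data histograms and percentiles. A minimal sketch follows; the API key is a placeholder, and URLs below the traffic threshold return an error.

// Query the Chrome UX Report API for one URL's field data.
const API_KEY = 'YOUR_API_KEY'; // placeholder

async function queryCrux(url: string) {
  const res = await fetch(
    `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${API_KEY}`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ url, formFactor: 'PHONE' }),
    }
  );
  if (!res.ok) throw new Error(`CrUX query failed: ${res.status}`);
  const data = await res.json();
  // data.record.metrics holds histograms and percentiles for LCP, INP, CLS, etc.
  return data.record?.metrics;
}

queryCrux('https://example.com/').then(console.log).catch(console.error);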

The scan result highlights pages with poor web vitals scores where you might want to investigate further.

If no real user data is available, you can use a scanning tool called Unlighthouse, which is based on Google’s Lighthouse. It runs synthetic tests for each page and lets you filter the results to identify pages that need to be optimized.

Step 2: Diagnose Web Performance Issues

Once you’ve identified slow pages on your website, you need to look at what’s actually happening on your page that is causing delays.

Debugging Page Load Time

If there are issues with page load time metrics — like the Largest Contentful Paint (LCP) — synthetic test results can provide a detailed analysis. You can also run page speed experiments to try out and measure the impact of certain optimizations.

Real user data can still be important when debugging page speed, as load time depends on many user- and device-specific factors. For example, depending on the size of the user’s device, the page element that’s responsible for the LCP can vary. RUM data can provide a breakdown of possible influencing factors, like CSS selectors and image URLs, across all visitors, helping you zero in on what exactly needs to be fixed.
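One way RUM tools capture that breakdown is by recording which element produced the LCP for each page view. A simplified in-browser sketch:

// Record the LCP element (and image URL, if any) for this page view so it
// can be reported alongside the metric and broken down across devices.
new PerformanceObserver((list) => {
  const entries = list.getEntries() as any[]; // LCP entries expose `element` and `url`
  const last = entries[entries.length - 1];
  const el: Element | null = last.element ?? null;
  const selector = el
    ? [
        el.tagName.toLowerCase(),
        el.id ? `#${el.id}` : '',
        typeof el.className === 'string' && el.className
          ? '.' + el.className.trim().split(/\s+/).join('.')
          : '',
      ].join('')
    : '(element no longer in DOM)';
  console.log('LCP element:', selector, 'image URL:', last.url || '(none)');
}).observe({ type: 'largest-contentful-paint', buffered: true });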

Debugging Slow Interactions

RUM data is also generally needed to properly diagnose issues related to the Interaction to Next Paint (INP) metric. Specifically, real user data can provide insight into what causes slow interactions, which helps you answer questions like:

  • What page elements are responsible?
  • Is time spent processing already-active background tasks or handling the interaction itself?
  • What scripts contribute the most to overall CPU processing time?

You can view this data at a high level to identify trends, as well as review specific page views to see what impacted a specific visitor experience.
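Under the hood, this kind of data comes from the Event Timing API, which INP is built on. The sketch below logs slow interactions with their input delay, processing time, and total duration; production RUM scripts go further and attribute the CPU work to specific scripts, which is beyond this illustration.

// Observe slow interactions (>= 40 ms here) via the Event Timing API.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as any[]) {
    if (!entry.interactionId) continue; // skip events that aren't discrete interactions
    console.log({
      type: entry.name, // e.g. 'pointerdown', 'click', 'keydown'
      target: entry.target?.tagName, // element that received the event, if still attached
      inputDelay: entry.processingStart - entry.startTime, // waiting on other tasks
      processingTime: entry.processingEnd - entry.processingStart, // event handler work
      duration: entry.duration, // full time until the next paint
    });
  }
}).observe({ type: 'event', durationThreshold: 40, buffered: true });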

Step 3: Monitor Performance & Respond To Regressions

Continuous monitoring of your website performance lets you track whether performance is improving after you make a change and alerts you when scores decline.

How you respond to performance regressions depends on whether you’re looking at lab-based synthetic tests or real user analytics.

Synthetic Data

Test settings for synthetic tests are standardized between runs. While infrastructure changes, like browser upgrades, occasionally shift results, performance is generally determined by the resources the website loads and the code it runs.

When a metric changes, DebugBear lets you view a before-and-after comparison between the two test results. For example, the next screenshot displays a regression in the First Contentful Paint (FCP) metric. The comparison reveals that new images were added to the page, competing for bandwidth with other page resources.

From the report, it’s clear that a CSS file that previously took 255 milliseconds to load now takes 915 milliseconds. Since stylesheets are required before page content can render, the page now displays more slowly, and the comparison tells you exactly what needs optimization.
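You can run a similar, much rougher check yourself in the browser with the Resource Timing API, which reports how long each stylesheet request took:

// List stylesheet requests and their load times for the current page.
const cssRequests = performance
  .getEntriesByType('resource')
  .filter((e) => (e as PerformanceResourceTiming).initiatorType === 'link' && e.name.includes('.css'));

for (const entry of cssRequests as PerformanceResourceTiming[]) {
  console.log(
    entry.name,
    Math.round(entry.responseEnd - entry.startTime), 'ms',
    // renderBlockingStatus is currently reported by Chromium-based browsers only.
    (entry as any).renderBlockingStatus ?? 'unknown'
  );
}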

Real User Data

When you see a change in real user metrics, there can be two causes:

  1. A shift in visitor characteristics or behavior, or
  2. A technical change on your website.

Launching an ad campaign, for example, often increases redirects, reduces cache hits, and shifts visitor demographics. When you see a regression in RUM data, the first step is to find out if the change was on your website or in your visitors' browsers. Check for changes in view counts broken down by ad campaign, referrer domain, or network speed to get a clearer picture.

If those visits have different performance compared to your typical visitors, then that suggests the regression is not due to a change on your website. However, you may still need to make changes on your website to better serve these visitor cohorts and deliver a good experience for them.

To identify the cause of a technical change, take a look at component breakdown metrics, such as LCP subparts. This helps you narrow down the cause of a regression, whether it is due to changes in server response time, new render-blocking resources, or the LCP image.

You can also check for shifts in page view properties, like different LCP element selectors or specific scripts that cause poor performance.
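The web-vitals library's attribution build exposes the same kind of breakdown in the browser, which is handy if you want to sanity-check what a RUM product reports. A small sketch (exact attribution field names vary slightly between web-vitals versions):

import { onLCP } from 'web-vitals/attribution';

onLCP((metric) => {
  // attribution includes the LCP element and URL plus subpart timings such as
  // time to first byte, resource load delay/duration, and element render delay.
  console.log('LCP:', Math.round(metric.value), 'ms', metric.attribution);
});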

Conclusion

One-off page speed tests are a great starting point for optimizing performance. However, a monitoring tool like DebugBear can form the basis for a more comprehensive web performance strategy that helps you stay fast for the long term.

Get a free DebugBear trial on our website!




Connect to Remote MCP Servers with OAuth in Docker


In just a year, the Model Context Protocol (MCP) has become the standard for connecting AI agents to tools and external systems. The Docker MCP Catalog now hosts hundreds of containerized local MCP servers, enabling developers to quickly experiment and prototype locally.

We have now added support for remote MCP servers to the Docker MCP Catalog. These servers function like local MCP servers but run over the internet, making them easier to access from any environment without the need for local configuration.

With the latest update, the Docker MCP Toolkit now supports remote MCP servers with OAuth, making it easier than ever to securely connect to external apps like Notion and Linear, right from your Docker environment. Plus, the Docker MCP Catalog just grew by 60+ new remote MCP servers, giving you an even wider range of integrations to power your workflows and accelerate how you build, collaborate, and automate.

As remote MCP servers gain popularity, we’re excited to make this capability available to millions of developers building with Docker.

In this post, we’ll explore what this means for developers, why OAuth support is a game-changer, and how you can get started with remote MCP servers with just two simple commands.

Connect to Remote MCP Servers: Securely, Easily, Seamlessly

Goodbye Manual Setup, Hello OAuth Magic

Figuring out how to find and generate API tokens for a service is often tedious, especially for beginners. Tokens also tend to expire unpredictably, breaking existing MCP connections and requiring reconfiguration.

With OAuth built directly into Docker MCP, you’ll no longer need to juggle tokens or manually configure connections. You can securely connect to remote MCP servers in seconds – all while keeping your credentials safe. 

60+ New Remote MCP Servers, Instantly Available

From project management to documentation and issue tracking, the expanded MCP Catalog now includes integrations for Notion, Linear, and dozens more. Whatever tools your team depends on, they’re now just a command away. We will continue to expand the catalog as new remote servers become available.


Figure 1: Some examples of remote MCP servers that are now part of the Docker MCP Catalog

Easy to use via the CLI or Docker Desktop 

No new setup. No steep learning curve. Just use your existing Docker CLI and get going. Enabling and authorizing remote MCP servers is fully integrated into the familiar command-line experience you already love. You can also install servers with one click in Docker Desktop.

Two Commands to Connect and Authorize Remote MCP Servers: It’s That Simple

Using Docker CLI

Step 1: Enable Your Remote MCP Server

Pick your server, and enable it with one line:

docker mcp server enable notion-remote

This registers the remote server and prepares it for OAuth authorization.

Step 2: Authorize Securely with OAuth

Next, authorize your connection with:

docker mcp oauth authorize notion-remote

This launches your browser with an OAuth authorization page.

Using Docker Desktop

Step 1: Enable Your Remote MCP Server

If you prefer to use Docker Desktop instead of the command line, open the Catalog tab and search for the server you want to use. The cloud icon indicates that it’s a remote server. Click the “+” button to enable the server.


Figure 2: Enabling the Linear remote MCP server is just one click.

Step 2: Authorize Securely with OAuth

Open the OAuth tab and click the “Authorize” button next to the MCP Server you want to authenticate with.


Figure 3: Built-in OAuth flow for Linear remote MCP servers. 

Once authorized, your connection is live. You can now interact with Notion, Linear, or any other supported MCP server directly through your Docker MCP environment.

Why This Update Matters for Developers

Unified Access Across Your Ecosystem

Developers rely on dozens of tools every day across different MCP clients. The Docker MCP Toolkit brings them together under one secure, unified interface – helping you move faster without manually configuring each MCP client. This means you don’t need to log in to the same service multiple times across Cursor, Claude Code, and other clients you may use.

Unlock AI-Powered Workflows

Remote MCP servers make it really easy to bridge data, tools, and AI. They are always up to date with the latest tools and are faster to use because they don’t run any code on your computer. With OAuth support, your connected apps can now securely provide context to AI models, unlocking powerful new automation possibilities.

Building the Future of Developer Productivity

This update is more than just an integration boost – it’s the foundation for a more connected, intelligent, and automated developer experience. And this is only the beginning.

Conclusion

The addition of OAuth for remote MCP servers makes Docker MCP Toolkit the most powerful way to securely connect your tools, workflows, and AI-powered automations.

With 60+ new remote servers now available and growing, developers can bring their favorite services, like Notion and Linear, directly into the Docker MCP Toolkit.

Learn more


The AI Paradox: GitLab finds faster coding is slowing teams down

GitLab has published new findings highlighting what it calls the “AI Paradox,” whereby artificial intelligence is speeding up coding but introducing new productivity barriers as a result. The company’s 2025 Global DevSecOps Report, conducted with The Harris Poll, surveyed 3,266 professionals working in software development, IT operations, and security. The results suggest that while teams deploy faster than ever, they are losing time to inefficiencies that AI alone cannot fix. According to the study, DevSecOps professionals lose about seven hours per week to inefficient processes. The main causes include fragmented toolchains and collaboration gaps between teams. Sixty percent of respondents… [Continue Reading]

How Can I Participate On The Copilot Studio User Group On The Microsoft Tech Community?


This is an official Copilot Studio User Group help article on how to participate in the Copilot Studio User Group on the Microsoft Tech Community. It is not an official Copilot Studio User Group policy or guideline.

How to Participate in the Copilot Studio User Group

There are many practical ways to participate in the Copilot Studio User Group. Here is how you can get involved in the Microsoft Copilot Studio User Group:

  1. Participate in virtual chat-based events
  2. Start discussions focused on Microsoft Copilot Studio
  3. Introduce yourself on the thread
  4. Post comments on discussions about Microsoft Copilot Studio
  5. Show the Copilot Studio User Group what copilot you’re building or working on
