Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

🏥 RepoDoctor - AI-Powered Repository Health Analysis with GitHub Copilot CLI


This is a submission for the GitHub Copilot CLI Challenge

What I Built

RepoDoctor - Your Repository's AI Doctor 🩺

RepoDoctor is a Copilot-first CLI tool that revolutionizes how developers analyze and maintain their codebases. Instead of relying on rigid, hardcoded rules like traditional static analysis tools, RepoDoctor acts as an intelligent orchestrator that delegates all analysis logic to GitHub Copilot CLI.

Think of it as having an AI-powered code doctor that:

  • 🍔 Diagnoses bloat - Identifies large files, build artifacts, and missing hygiene files
  • 🌎 Creates onboarding guides - Generates comprehensive TOUR.md files for new contributors
  • 🐳 Audits Dockerfiles - Provides security and optimization recommendations
  • 💀 Detects dead code - Finds unused code with confidence levels
  • 🔬 Performs health scans - Multi-module analysis with overall health scoring
  • 📋 Generates reports - Beautiful markdown reports from scan results

What makes RepoDoctor special?

Traditional tools like ESLint, Pylint, or SonarQube use static rules that can't understand context. They tell you what's wrong but not why or how to fix it in your specific situation.

RepoDoctor is different because:

  • Contextual AI Analysis - Understands your tech stack, project structure, and patterns
  • Actionable Recommendations - Not just "this is bad," but "here's how to improve it"
  • Zero Configuration - No complex rule files or configuration needed
  • Extensible Prompts - Easy to add new analysis types with prompt templates
  • Human-Readable Output - Generates documentation, not just error lists

Why it matters to me:

As a developer, I've spent countless hours:

  • Onboarding new team members who struggle to understand large codebases
  • Debugging performance issues caused by bloated repositories
  • Reviewing PRs with potential security issues in Docker configurations
  • Hunting for dead code that clutters the codebase

RepoDoctor solves these pain points by leveraging AI to provide intelligent, context-aware analysis that actually helps developers improve their code quality.

Project Links

Demo

repodoc

repodoc-docker

repodoc-diet

repodoc-report

repodoc-deadcode

Quick Installation & Usage

# Install from PyPI
pip install repodoc

# Install with uv
uv tool install repodoc

# Use repodoc without installation
uv tool run repodoc

# Analyze repository bloat
repodoc diet

# Generate onboarding guide
repodoc tour

# Run full health scan
repodoc scan

# Generate beautiful report
repodoc report

My Experience with GitHub Copilot CLI

Building RepoDoctor was a transformative experience that fundamentally changed how I approach software development. GitHub Copilot CLI wasn't just a tool I used—it became the core architecture of my entire application.

The Copilot-First Architecture

Instead of building traditional static analysis with hardcoded rules, I had a radical idea: What if the AI itself is the analysis engine?

This led to RepoDoctor's unique architecture:

  1. Prompt Templates - Each analysis type (diet, tour, docker, etc.) has a carefully crafted prompt
  2. Workflow Orchestration - RepoDoctor manages file discovery, data collection, and output formatting
  3. AI Delegation - All actual analysis logic is delegated to GitHub Copilot CLI
  4. Schema Validation - Pydantic schemas ensure the AI returns structured, reliable data
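
As a rough illustration of step 4, here is a minimal sketch of validating the AI's JSON output before trusting it. RepoDoctor uses Pydantic models for this; stdlib dataclasses are used below so the sketch runs anywhere, and the field names are hypothetical, not RepoDoctor's actual schema:

```python
import json
from dataclasses import dataclass, field

# Hypothetical shape of a "diet" scan result; RepoDoctor's real schemas
# are Pydantic models, but the validation idea is the same.
@dataclass
class DietFinding:
    path: str
    size_bytes: int
    reason: str

@dataclass
class DietReport:
    findings: list[DietFinding] = field(default_factory=list)

def parse_ai_output(raw: str) -> DietReport:
    """Parse and validate the JSON the AI returned before trusting it."""
    data = json.loads(raw)
    findings = [
        DietFinding(path=f["path"], size_bytes=int(f["size_bytes"]), reason=f["reason"])
        for f in data.get("findings", [])
    ]
    return DietReport(findings=findings)

raw = '{"findings": [{"path": "dist/app.bin", "size_bytes": 10485760, "reason": "build artifact"}]}'
report = parse_ai_output(raw)
```

A malformed response fails fast at `json.loads` or the key lookups instead of flowing silently into a report.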

How I Used GitHub Copilot CLI

1. Prompt Engineering as Code

I created a sophisticated prompt template system that provides Copilot CLI with:

  • Context - File listings, directory structure, key metrics
  • Instructions - Clear objectives and output format requirements
  • Constraints - What to focus on, what to ignore
  • Examples - Sample outputs to guide the AI
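
A minimal sketch of what such a template might look like. The template text and variable names here are invented for illustration, not RepoDoctor's actual prompts:

```python
from string import Template

# Hypothetical "diet" analysis prompt: context slot, instructions,
# constraints, and a required output format, as described above.
DIET_PROMPT = Template("""\
You are auditing a repository for bloat.

Context:
$file_listing

Instructions: identify large files and build artifacts.
Constraints: ignore anything under .git/.
Respond ONLY with JSON: {"findings": [{"path": "...", "size_bytes": 0, "reason": "..."}]}
""")

prompt = DIET_PROMPT.substitute(
    file_listing="dist/app.bin  10 MB\nsrc/main.py  4 KB"
)
```

Keeping prompts as versioned template files is what makes new analysis types cheap to add: a new command mostly means a new template plus a matching output schema.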

2. Iterative Development with Copilot CLI

During development, Copilot CLI helped me:

Code Generation:

  • Generated boilerplate for CLI commands with Typer
  • Created Pydantic schemas for each analysis type
  • Built async orchestration code for subprocess management
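
The async orchestration piece can be sketched as follows, under the assumption that the external CLI reads a prompt on stdin and writes a result to stdout. The real `copilot` invocation is replaced here with a Python one-liner so the sketch is self-contained:

```python
import asyncio
import sys

async def run_analysis(prompt: str) -> str:
    """Send a prompt to an external CLI asynchronously and return its output."""
    # Stand-in for the copilot CLI: a child that upper-cases its stdin.
    proc = await asyncio.create_subprocess_exec(
        sys.executable, "-c", "import sys; print(sys.stdin.read().upper())",
        stdin=asyncio.subprocess.PIPE,
        stdout=asyncio.subprocess.PIPE,
    )
    out, _ = await proc.communicate(prompt.encode("utf-8"))
    return out.decode("utf-8").strip()

result = asyncio.run(run_analysis("scan this repo"))
```

With `asyncio`, several analysis subprocesses can run concurrently via `asyncio.gather`, which matters when a full health scan fans out across multiple modules.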

Debugging:

# When tests failed, I asked Copilot:
copilot "Why is this pytest fixture not mocking shutil.which correctly?"
copilot "How do I handle UTF-8 encoding on Windows for subprocess output?"
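
The Windows encoding question above has a short answer in code: decode subprocess output explicitly instead of trusting the platform default. A stand-in child process replaces the copilot CLI here so the sketch runs anywhere:

```python
import subprocess
import sys

# Stand-in for the copilot CLI: a child process that emits UTF-8 bytes.
child = [sys.executable, "-c",
         "import sys; sys.stdout.buffer.write('h\\u00e9llo'.encode('utf-8'))"]

# On Windows the default console encoding is often a legacy code page,
# so request UTF-8 explicitly; errors='replace' avoids hard crashes on
# any stray undecodable bytes.
result = subprocess.run(child, capture_output=True,
                        encoding="utf-8", errors="replace")
```

Setting `encoding="utf-8"` on `subprocess.run` sidesteps the locale entirely, which is usually the whole fix.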

Architecture Decisions:

copilot "Should I use sync or async for subprocess calls to GitHub Copilot CLI?"
copilot "What's the best way to cache analysis results for the report command?"

3. AI-Powered Documentation Generation

The tour command is my favorite feature—it uses Copilot CLI to generate comprehensive onboarding guides by analyzing:

  • Project structure
  • Code patterns
  • Tech stack
  • Dependencies
  • Common workflows

This would have taken hours to write manually, but Copilot CLI generates it in seconds with context-aware understanding of the codebase.

Leveraging Agent Skills

One of my secret weapons during development was my Virtual Company project—a collection of 27 specialized agent skills that enhance AI agents with domain-specific expertise.

These skills act as expert personas that guide GitHub Copilot CLI through complex workflows.

These skills transformed GitHub Copilot CLI from a general assistant into a team of specialized experts, each bringing deep domain knowledge to different aspects of the project. It's like having a senior developer, tech writer, QA engineer, and DevOps specialist all working together!

Virtual Company is open source: https://github.com/k1lgor/virtual-company

Impact on My Development Experience

Before GitHub Copilot CLI:

  • ❌ Spent hours writing static analysis rules
  • ❌ Struggled with complex regex patterns
  • ❌ Wrote boilerplate code manually
  • ❌ Context-switched between docs and coding

With GitHub Copilot CLI + Virtual Company Skills:

  • 10x faster development - Instant code generation and debugging
  • Expert-level guidance - Each skill provides specialized domain knowledge
  • Better architecture - Copilot with tech-lead skill suggested async patterns I wouldn't have considered
  • Fewer bugs - AI-reviewed code with bug-hunter skill before I even ran it
  • More creativity - Spent time on features, not implementation details
  • Continuous learning - Copilot taught me new Python patterns and best practices
  • Comprehensive testing - Test-genius skill helped achieve 48 passing tests with proper mocking

The Meta Experience

The most mind-bending part? I built a tool powered by GitHub Copilot CLI, while using GitHub Copilot CLI to build it. 🤯

It was like:

  • Using Copilot CLI to debug Copilot CLI integration
  • Asking Copilot to generate prompts for Copilot
  • Having Copilot help me test code that invokes Copilot

This recursive AI-assisted development felt like a glimpse into the future of software engineering.

Key Takeaways

  1. AI-First Architecture is Real - RepoDoctor proves you can build production tools with AI as the core logic engine
  2. Prompt Engineering Matters - The quality of your prompts directly impacts output quality
  3. Specialized Skills Amplify AI - Using domain-specific agent skills (Virtual Company) accelerates development exponentially
  4. Copilot CLI for Everything - From code generation to debugging to documentation
  5. Ship Faster, Iterate Smarter - Copilot CLI enabled rapid prototyping and validation
  6. The Future is AI-Native - Tools will increasingly delegate intelligence to AI rather than hardcode it

Try RepoDoctor Today!

uv tool install repodoc
cd your-project
repodoc scan

GitHub: https://github.com/k1lgor/RepoDoctor

PyPI: https://pypi.org/project/repodoc/

Built with ❤️ and 🤖 by @k1lgor using GitHub Copilot CLI

Read the whole story
alvinashcraft
25 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Welcome to the Eternal September of open source. Here’s what we plan to do for maintainers.


Open collaboration runs on trust. For a long time, that trust was protected by a natural, if imperfect filter: friction.

If you were on Usenet in 1993, you’ll remember that every September a flood of new university students would arrive online, unfamiliar with the norms, and the community would patiently onboard them. Then mainstream dial-up ISPs became popular and a continuous influx of new users came online. It became the September that never ended.

Today, open source is experiencing its own Eternal September. This time, it’s not just new users. It’s the sheer volume of contributions.

When the cost to contribute drops

In the era of mailing lists, contributing to open source required real effort. You had to subscribe, lurk, understand the culture, format a patch correctly, and explain why it mattered. The effort didn’t guarantee quality, but it filtered for engagement. Most contributions came from someone who had genuinely engaged with the project.

It also excluded people. The barrier to entry was high. Many projects worked hard to lower it in order to make open source more welcoming.

A major shift came with the pull request. Hosting projects on GitHub, using pull requests, and labeling “Good First Issues” reduced the friction needed to contribute. Communities grew and contributions became more accessible.

That was a good thing.

But friction is a balancing act. Too much keeps people and their ideas out; too little can strain the trust open source depends on.

Today, a pull request can be generated in seconds. Generative AI makes it easy for people to produce code, issues, or security reports at scale. The cost to create has dropped but the cost to review has not.

It’s worth saying: most contributors are acting in good faith. Many want to help projects they care about. Others are motivated by learning, visibility, or the career benefits of contributing to widely used open source. Those incentives aren’t new and they aren’t wrong.

The challenge is what happens when low-quality contributions arrive at scale. When volume accelerates faster than review capacity, even well-intentioned submissions can overwhelm maintainers. And when that happens, trust, the foundation of open collaboration, starts to strain.

The new scale of noise

It is tempting to frame “low-quality contributions” or “AI slop” contributions as a unique recent phenomenon. It isn’t. Maintainers have always dealt with noisy inbound.

  • The Linux kernel operates under a “web of trust” philosophy and formalized its SubmittingPatches guide and introduced the Developer Certificate of Origin (DCO) in 2004 for a reason.
  • Mozilla and GNOME built formal triage systems around the reality that most incoming bug reports needed filtering before maintainers invested deeper time.
  • Automated scanners: Long before GenAI, maintainers dealt with waves of automated security and code quality reports from commercial and open source scanning tools.

The question from maintainers has often been the same: “Are you really trying to help me, or just help yourself?”

Just because a tool—whether a static analyzer or an LLM—makes it easy to generate a report or a fix doesn’t mean the contribution is valuable to the project. The ease of creation often shifts a burden onto the maintainer because the benefits are imbalanced: the contributor may get the credit (or the CVE, or the visibility), while the maintainer gets the maintenance burden.

Maintainers are feeling that directly. For example:

  • curl ended its bug bounty program after AI-generated security reports exploded, each taking hours to validate.
  • Projects like Ghostty are moving to invitation-only contribution models, requiring discussion before accepting code contributions.
  • Multiple projects are adopting explicit rules about AI-generated contributions.

These are rational responses to an imbalance.

What we’re doing at GitHub

At GitHub, we aren’t just watching this happen. Maintainer sustainability is foundational to open source, and foundational to us. As the home of open source, we have a responsibility to help you manage what comes through the door.

We are approaching this from multiple angles: shipping immediate relief now, while building toward longer-term, systemic improvements. Some of this is about tooling. Some is about creating clearer signals so maintainers can decide where to spend their limited time.

Features we’ve already shipped

  • Pinned comments on issues: You can now pin a comment to the top of an issue from the comment menu.
  • Banners to reduce comment noise: Experience fewer unnecessary notifications with a banner that encourages people to react or subscribe instead of leaving noise like “+1” or “same here.”
  • Pull request performance improvements: Pull request diffs have been optimized for greater responsiveness and large pull requests in the new files changed experience respond up to 67% faster.
  • Faster issue navigation: Easier bug triage thanks to significantly improved speeds when browsing and navigating issues as a maintainer.
  • Temporary interaction limits: You can temporarily enforce a period of limited activity for certain users on a public repository.

These improvements focus on reducing review overhead.

Features we’ll be shipping soon

  • Repo-level pull request controls: Gives maintainers the option to limit pull request creation to collaborators or disable pull requests entirely. While the introduction of the pull request was fundamental to the growth of open source, maintainers should have the tools they need to manage their projects.
  • Pull request deletion from the UI: Remove spam or abusive pull requests so repositories can stay more manageable.

Exploring next steps

We know that walls don’t build communities. As we explore next steps, our focus is on giving maintainers more control while helping protect what makes open source communities work.

Some of the directions we’re exploring in consultation with maintainers include:

  • Criteria-based gating: Requiring a linked issue before a pull request can be opened, or defining rules that contributions must meet before submission.
  • Improved triage tools: Potentially leveraging automated triage to evaluate contributions against a project’s own guidelines (like CONTRIBUTING.md) and surface which pull requests should get your attention first.

These tools are meant to support decision-making, not replace it. Maintainers should always remain in control.

We are also aware of tradeoffs. Restrictions can disproportionately affect first-time contributors acting in good faith. That’s why these controls are optional and configurable.

The community is building ladders

One of the things I love most about open source is that when the community hits a wall, people build ladders. We’re seeing a lot of that right now.

Maintainers across the ecosystem are experimenting with different approaches. Some projects have moved to invitation-only workflows. Others are building custom GitHub Actions for contributor triage and reputation scoring.

Mitchell Hashimoto’s Vouch project is an interesting example. It implements an explicit trust management system where contributors must be vouched for by trusted maintainers before they can participate. It’s experimental and some aspects will be debated, but it fits a longer lineage, from Advogato’s trust metric to Drupal’s credit system to the Linux kernel’s Signed-off-by chain.

At the same time, many communities are investing heavily in education and onboarding to widen who can contribute while setting clearer expectations. The Python community, for example, emphasizes contributor guides, mentorship, and clearly labeled entry points. Kubernetes pairs strong governance with extensive documentation and contributor education, helping new contributors understand not just how to contribute, but what a useful contribution looks like.

These approaches aren’t mutually exclusive. Education helps good-faith contributors succeed. Guardrails help maintainers manage scale.

There is no single correct solution. That’s why we are excited to see maintainers building tools that match their project’s specific values. The tools communities build around the platform often become the proving ground for what might eventually become features. So we’re paying close attention.

Building community, not just walls

We also need to talk about incentives. If we only build blocks and bans, we create a fortress, not a bazaar.

Right now, the concept of “contribution” on GitHub still leans heavily toward code authorship. WordPress, for example, uses manually written “props”: credit given not just for code, but for writing, reproduction steps, user testing, and community support. It recognizes the many forms of contribution that move a project forward.

We want to explore how GitHub can better surface and celebrate those contributions. Someone who has consistently triaged issues or merged documentation PRs has proven they understand your project’s voice. These are trust signals we should be surfacing to help you make decisions faster.

Tell us what you need

We’ve opened a community discussion to gather feedback on the directions we’re exploring: Exploring Solutions to Tackle Low-Quality Contributions on GitHub.

We want to hear from you. Share what is working for your projects, where the gaps are, and what would meaningfully improve your experience maintaining open source.

Open source’s Eternal September is a sign of something worth celebrating: more people want to participate than ever before. The volume of contributions is only going to grow — and that’s a good thing. But just as the early internet evolved its norms and tools to sustain community at scale, open source needs to do the same. Not by raising the drawbridge, but by giving maintainers better signals, better tools, and better ways to channel all that energy into work that moves their projects forward.

Let’s build that together.

The post Welcome to the Eternal September of open source. Here’s what we plan to do for maintainers. appeared first on The GitHub Blog.


42% of Code Is Now AI-Assisted!


In 2025, Sonar surveyed over 1,100 developers worldwide to see how software engineering is evolving. The findings show AI is now central to development – but it hasn’t made the job easier, only reshaped it.

Developers now write code faster than ever. At the same time, they spend more time questioning, reviewing, and validating what gets shipped. Productivity has increased. Confidence has not kept pace. This tension defines the current “State of Code” research.

AI is no longer experimental, it’s operational!

  • 72% of developers who use AI coding tools rely on them daily
  • Developers estimate that 42% of the code they commit is AI-assisted
  • They expect that number to rise to 65% by 2027

AI-assisted coding has shifted from novelty to routine. Among developers who have tried AI tools, most now use them daily. Many estimate that more than 40% of the code they commit includes AI assistance. They expect that percentage to grow significantly in the next two years.

Developers use AI across the full spectrum of work: prototypes, internal tools, customer-facing products, and even business-critical systems. AI no longer supports side experiments. It participates directly in production workflows.

This level of integration signals a structural shift in software engineering. Teams no longer ask whether to use AI. They focus on how to use it responsibly.

Do developers trust AI?

  • 58% use it in business-critical services
  • 88% use AI for prototypes and proof-of-concept work
  • 83% use it for internal production systems
  • 73% integrate it into customer-facing applications

Developers consistently report that AI makes them faster. Most say AI helps them solve problems more efficiently and reduces time spent on repetitive tasks. Many even report increased job satisfaction because they can offload boilerplate and mechanical work.

However, speed does not equal certainty. Nearly all developers (96%) express doubts about the reliability of AI-generated code. They acknowledge that AI often produces output that looks correct at first glance but contains subtle errors or hidden flaws. This creates a trust deficit.

Despite this skepticism, not all developers consistently verify AI output before committing it. The pressure to ship features quickly often outweighs the discipline of thorough review. As a result, teams face a new bottleneck: verification.

Reviewing AI-generated code frequently demands more effort than reviewing human-written code. Developers must reconstruct intent, validate assumptions, and check edge cases without knowing how the model arrived at its solution. AI compresses the time spent writing code but expands the time required to evaluate it.

In this environment, confidence – not velocity – becomes the true measure of engineering maturity.

AI doesn’t remove toil, it changes it!

75% believe AI reduces toil. However, time allocation data reveals a more complex picture:

  • Developers still spend roughly 23–25% of their work week on low-value or repetitive tasks
  • This percentage remains consistent regardless of AI usage frequency

One of the promises of AI tools involved reducing developer toil: repetitive, frustrating tasks such as writing documentation, generating tests, or navigating poorly structured codebases. Many developers believe AI reduces certain kinds of toil. They report improvements in documentation quality, test coverage, and refactoring efficiency.

However, the overall proportion of time developers spend on low-value work has not meaningfully decreased. Instead, AI shifts the nature of toil. Developers now spend less time writing boilerplate and more time validating AI suggestions. They spend less time drafting documentation and more time correcting generated code. Traditional frustrations have not disappeared; they have transformed.

This dynamic challenges simplistic narratives about AI productivity gains. AI does not eliminate friction. It redistributes it.

Teams that recognize this reality can adapt workflows accordingly. Teams that assume AI automatically saves time risk underestimating the verification cost.

Technical debt: reduction and acceleration at once

Positive impact:

  • 93% report at least one improvement related to technical debt
  • 57% see better documentation
  • 53% report improved testing or debugging
  • Nearly half report easier refactoring

Negative impact:

  • 88% report at least one negative consequence
  • 53% say AI generates code that appears correct but is unreliable
  • 40% say it produces unnecessary or duplicative code

AI influences technical debt in both directions. On the positive side, developers use AI to modernize legacy code, generate missing tests, improve documentation, and refactor inefficient structures. These activities reduce long-standing debt and improve maintainability.

On the negative side, AI sometimes generates redundant, overly verbose, or structurally weak code. Developers frequently encounter code that appears correct but introduces subtle reliability problems. When teams integrate such code without rigorous review, they create new debt.

AI therefore acts as both a debt reducer and a debt accelerator. Managing this tension requires deliberate governance. Teams must treat AI contributions as first-class code changes subject to the same standards as human-written code. Automated testing, static analysis, and clear architectural principles become even more critical in this environment.

Technical debt already ranks among developers’ top frustrations. Uncontrolled AI usage can amplify that burden rather than alleviate it.

What is shadow AI?

Most used tools:

  • GitHub Copilot (75%)
  • ChatGPT (74%)
  • Claude (48%)
  • Google Codey/Duet (37%)
  • Cursor (31%)

GitHub Copilot and ChatGPT dominate AI-assisted coding usage, but developers rely on a growing ecosystem of tools. Many teams use multiple AI platforms simultaneously, selecting each for specific strengths. This fragmentation creates flexibility but introduces complexity.

A critical risk signal:

  • 35% of developers use AI tools through personal accounts
  • 52% of ChatGPT users access it outside company-managed environments
  • 57% worry about exposing sensitive data

Developers often access AI tools through personal accounts rather than company-approved environments. This behavior reflects demand for productivity but introduces governance risks. When developers paste proprietary code into unmanaged AI systems, organizations lose visibility and control.

Security concerns rank among the most significant worries associated with AI adoption. Developers themselves recognize the risk of exposing sensitive data.

Organizations must respond pragmatically. Banning AI rarely works. Developers adopt tools that help them work faster. Instead, companies should provide secure, sanctioned AI environments and define clear usage guidelines. Enablement, not prohibition, produces safer outcomes.

The experience divide

Less experienced developers:

  • Use AI for understanding codebases
  • Rely more heavily on AI for implementation
  • Estimate a higher percentage of AI-assisted code

Senior developers:

  • Use AI more selectively
  • Focus on review, optimization, and refactoring
  • Express higher skepticism

AI does not affect all developers equally. Less-experienced developers tend to adopt new AI tools more aggressively. They use AI to understand unfamiliar codebases, generate implementations, and explore new frameworks. For them, AI functions as both assistant and tutor.

More experienced developers integrate AI differently. They rely on it to review code, optimize performance, and assist with maintenance tasks. Their experience enables them to identify flaws more quickly and apply AI selectively.

Junior developers often trust AI more readily. Senior developers approach it more cautiously. This difference does not reflect competence; it reflects perspective.

High-performing teams combine both approaches. Junior engineers introduce experimentation and speed. Senior engineers provide oversight and architectural discipline. Together, they mitigate risk while capturing gains.

What this means for engineering teams

The State of Code 2025 reveals a profession in transition. AI increases output but raises the bar for validation. It reduces certain manual burdens while introducing new forms of cognitive load. It offers opportunities to address legacy debt while risking new complexity.

The central lesson does not concern speed. It concerns confidence. Developers must treat AI as a powerful collaborator that requires supervision. Engineering leaders must invest in testing infrastructure, review practices, and secure tooling environments. Organizations must acknowledge that productivity improvements depend on disciplined integration, not blind adoption.

Developers no longer compete on how quickly they can type code. They compete on how effectively they can evaluate, refine, and ship trustworthy systems.

The tools have changed. The responsibility has not. Teams that focus on clarity, code quality, and secure AI integration will thrive in this new environment. Teams that chase velocity without verification will accumulate invisible risk.

The post 42% of Code Is Now AI-Assisted! appeared first on ShiftMag.


How to Build WordPress Plugins with AI: Claude Code + WordPress Studio Setup Guide


Want to build WordPress plugins using AI? 

This guide shows you how to set up Claude Code and WordPress Studio to create working plugins with text prompts.

Claude Code is Anthropic’s AI coding assistant. WordPress Studio is a free local WordPress environment. Together, they let you go from idea to a working plugin in minutes — no deep coding knowledge required.

This walkthrough covers the complete setup and shows you how to build your first plugin.

1. Install Claude Code

Head to Claude Code and sign up for an account — you can choose any paid plan available. 

Run the native installer from the setup page and follow the on-screen instructions to complete the installation.

The installation runs for a minute or two. When it finishes, Claude Code is ready to use.

Screenshot showing the installation of Claude Code

2. Install WordPress Studio

Download WordPress Studio — it’s completely free and works on both Mac and Windows.

Install it, then create a new site. Give it any name you want — e.g., “My WordPress Website” works fine.

Screenshot of the WordPress Studio website.

Because Studio runs locally on your computer, everything you build stays safely contained on your machine — so you can experiment with AI-generated plugins without risking a live website.

3. Start Claude Code

Using the Open in… options on the Overview tab in Studio, click Terminal. This will open a terminal window at your project file’s location.

Screenshot of the WordPress Studio Overview screen.

Then, type claude. If it’s your first time, you’ll be prompted to log in to your Claude account and confirm that you trust the files in this folder.

Click Enter/Return on your keyboard to trust the folder, and you’ll see the welcome message.

Screenshot of the Claude Code welcome message.

4. Build your first plugin

In the Claude terminal, describe what you want. Give it some context about where you are and what you need. For example:

“We are in the root of the WordPress site folder. I want a simple plugin that prints out ‘Hello [Your Name]’ in the admin of the site.”

Screenshot of a prompt instructing Claude Code to create a plugin.

From here, Claude will ask some follow-up questions, create a plugin folder, and generate the complete plugin file with proper WordPress structure.

Screenshot of the Claude Code plugin creation confirmation message.

6. Activate and test

Go back to WordPress Studio and open your WordPress admin. Navigate to Plugins, find your new plugin, and activate it.

Screenshot of the Plugins page in WordPress Admin.

If the plugin works correctly, your custom message will appear at the top of the admin area — in our case, “Hello Nick” shows up as an admin notice. 

If you haven’t changed your name, you may see it say “Hello admin.” Simply go to your Users list and change the name of your default user.

Screenshot showing the output of the plugin created with Claude Code.

This is the simplest plugin possible, but it shows how fast you can build with Claude and WordPress.

6. Keep building

From here, you can add more features. 

Go back to the Terminal in your editor and ask Claude to add new functionality — settings pages, custom blocks, whatever you need.

As with any AI tool, experimenting with prompting will help you achieve better results:

  • Give Claude context about where you are in the file structure
  • Be specific about what you want the feature to do
  • Break complex features into smaller steps
  • Ask Claude to explain the generated code if something doesn’t make sense

Bonus tip: Telex — an alternative for WordPress blocks

Telex is another tool that generates WordPress blocks with AI — and it’s completely free to use.

Telex is another unique tool that helps you generate WordPress blocks with AI — and it’s completely free to use.

Just describe what WordPress block you want, and Telex builds it with a live preview in WordPress Playground.

Screenshot of the Telex AI-powered WordPress block generator.

Test it, refine it with follow-up prompts, then download it as a plugin and install it on your WordPress site.

Screenshot of a pricing table block created with Telex.

You’re ready to build with AI

You now have an AI-powered setup for building plugins for your WordPress site.

  • Describe the plugin you want to build, and watch Claude generate working code inside your WordPress Studio site.
  • Experiment with Telex to create entire WordPress blocks.
  • Keep experimenting and trying new things.

Start simple, then tackle more complex projects as you get comfortable.

And if you build something fun, share it in the comments — we’d love to see what you make.





Read the whole story
alvinashcraft
28 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Announcing Interop 2026


Exciting news for web developers, designers, and browser enthusiasts alike — Interop 2026 is here, continuing the mission of improving cross-browser interoperability. For the fifth year in a row, we are pleased to collaborate with Google, Igalia, Microsoft, and Mozilla to make web technology more consistent and reliable across our browsers.

Introducing Interop 2026

Making your website work in every browser can be a challenge, especially if browser engines have implemented the same web technology in slightly different ways. The Interop Project tackles this challenge by bringing the major browser engines together to improve the same set of features during the same year. Each feature is judged on whether or not it fully aligns with its official web standard — the formal technical specifications that define how each web technology should work. This helps accelerate progress toward a more reliable, consistent platform to build on.

Safari has already implemented many of the features included in Interop 2026. In fact, we were the first browser to ship contrast-color(), Media pseudo-classes, shape(), and Scoped Custom Element Registries. Plus, we have support for Anchor Positioning, Style Queries, Custom Highlights, Scroll Snap, View Transitions and much more. We’re excited that these technologies are being included as focus areas in Interop 2026, ensuring they get implemented across all browsers and any remaining interoperability gaps are closed.

We will also be focused on adding support for the following features: advanced attr(), the getAllRecords() method for IndexedDB, WebTransport, and the JavaScript Promise Integration API for Wasm. Together, these four areas make up 20% of the Interop 2026 score. They are exciting new features that solve real needs.

Focus Areas for 2026

The Interop Project measures interoperability through Web Platform Tests — automated tests that check whether browsers conform to web standards. Interop 2026 is ambitious, covering twenty focus areas. Fifteen are brand new. And five are carryovers from Interop 2025.

Anchor positioning

Anchor positioning is a carryover from Interop 2025, where significant progress was made to empower developers to position elements relative to each other. This year’s focus will be on clarifying the spec, resolving test issues, and increasing the reliability of this powerful layout feature.

Advanced attr()

The CSS attr() function lets you bridge the gap between structural data and visual presentation by pulling values directly from HTML attributes into your CSS, making styles more dynamic and context-aware without the overhead of JavaScript. While attr() has long been supported for the content property, advanced attr() extends it to work across all CSS properties with type conversion — letting you use HTML attribute values as colors, lengths, angles, and other data types. Now that security concerns have been worked through in the specification, browser makers are united in our excitement to ship this long-awaited capability with strong interoperability.
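As a sketch of the newer syntax (the attribute names and values here are made up), the type() marker tells the browser how to parse the attribute, with a fallback after the comma:

```css
/* Assumes markup like <span class="badge" data-accent="#0a7" data-pad="12"> */
.badge {
  /* Parse data-accent as a <color>, falling back to gray */
  background: attr(data-accent type(<color>), gray);

  /* Treat data-pad as a number of px, falling back to 8px */
  padding: attr(data-pad px, 8px);
}
```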

Container style queries

Style queries let you apply styles conditionally, based on the value of a custom property (aka, variable) as defined at a certain container. Similar to how Container size queries let your CSS respond to the size of the container, style queries let it respond to theme values, state flags, and other contextual data.

@container style(--theme: dark) {
  .card {
    background: #1a1a1a;
    color: #ffffff;
  }
}

Style queries started shipping in recent years, including in Safari 18.0. Interop 2026 will help ensure this powerful tool works consistently everywhere.

contrast-color()

The contrast-color() function in CSS returns a color — either black or white. It puts the burden on the browser to choose whichever has higher contrast with the color specified in the function.

.button {
  background: var(--brand-color);
  color: contrast-color(var(--brand-color));
}

By having the browser make the choice, you can architect your design system in a simpler fashion. You don’t need to manually define every color pairing. Safari and Firefox both shipped support in 2025, and now Interop 2026 will ensure this powerful function works consistently across all browsers.

Note, contrast-color() does not magically solve all accessibility concerns. Read about all the details in How to have the browser pick a contrasting color in CSS.

Custom Highlights

The CSS Custom Highlight API lets you style arbitrary text ranges without adding extra elements to the DOM. Using JavaScript, you create a highlight range, then style it with the pseudo-elements.

The ::highlight() pseudo-element is perfect for highlighting in-page search results, customizing syntax highlighting in code editors, creating an app that allows collaborative editing with user cursors, or any situation where you need to visually mark text without changing the document structure. The ::target-text pseudo-element styles the text that’s scrolled to when a user taps a link with a text fragment.
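A minimal sketch, assuming a highlight named search-results has already been registered from JavaScript with CSS.highlights.set('search-results', new Highlight(range)):

```css
/* Style every range in the registered "search-results" highlight */
::highlight(search-results) {
  background-color: yellow;
  color: black;
}

/* Style the text a link's #:~:text= fragment scrolls to */
::target-text {
  background-color: gold;
}
```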

With implementations progressing across browsers, Interop 2026 ensures these highlighting capabilities work consistently, giving you reliable tools for text-based interactions.

Dialog and popover additions

The <dialog> element and popover attribute have transformed how developers build overlays on the web. Dialog was part of Interop 2022 and Popover was in Interop 2024. This year, three recent enhancements to these features make up this focus area for Interop 2026.

The closedby attribute lets you control how users can dismiss dialogs:

<dialog closedby="any">
<!-- Can be closed by clicking outside or pressing Escape -->
</dialog>

The popover="hint" attribute creates subordinate popovers that don’t dismiss other auto popovers — perfect for tooltips:

<div popover="hint" id="tooltip">
  This tooltip won’t close the menu!
</div>

The :open pseudo-class matches elements with open states, working with <dialog>, <details>, and <select>:

dialog:open {
  animation: slideIn 0.3s;
}

Together, these additions make building accessible, user-friendly UI overlays easier than ever.

Fetch uploads and ranges

The fetch() method is getting three new powerful capabilities for handling uploads and partial content.

ReadableStream request bodies enable true streaming uploads, letting you upload large files or real-time data without loading everything into memory first:

await fetch('/upload', {
  method: 'POST',
  body: readableStream,
  duplex: 'half'
});

Enhanced FormData support improves multipart uploads and responses.

Range header support allows partial content requests, essential for video streaming and resumable downloads:

fetch('/video.mp4', {
  headers: { 'Range': 'bytes=0-1023' }
});

These enhancements bring fetch() up to par with more specialized APIs, reducing the need for custom solutions.

getAllRecords() for IndexedDB

IndexedDB is a low-level API that lets you store large amounts of structured data in the browser, including files and blobs. It’s been supported in browsers for many years.

Now, IndexedDB is getting a significant performance boost with the new getAllRecords() methods for IDBObjectStore and IDBIndex. These methods allow you to retrieve records in batches and in reverse order:

const records = await objectStore.getAllRecords({
  query: IDBKeyRange.bound('A', 'M'),
  count: 100,
  direction: 'prev'
});

Only this new method is included in Interop 2026. The score reports the percentage of getAllRecords() tests that pass — not all IndexedDB tests.

JSPI for Wasm

WebAssembly has opened the door for running high-performance applications in the browser — games, productivity tools, scientific simulations, and more. But there’s been a fundamental mismatch. Many of these applications were originally written for environments where operations like file I/O or network requests are synchronous (blocking), while the web is fundamentally asynchronous.

The JavaScript Promise Integration API (JSPI) bridges this gap. It lets WebAssembly code that expects synchronous operations work smoothly with JavaScript’s Promise-based async APIs, without requiring you to rewrite the entire application. This means you can port existing C, C++, or Rust applications to the web more easily, unlocking a wider range of software that can run in the browser.

Interop 2026 will ensure JSPI works consistently across browsers, making WebAssembly a more viable platform for complex applications.

Media pseudo-classes

We’ve proposed media pseudo-classes for inclusion in the Interop Project for many years in a row. We are excited that it’s being included this year!

Seven CSS pseudo-classes let you apply CSS based on the playback state of <audio> and <video> elements: :playing, :paused, :seeking, :buffering, :stalled, :muted, and :volume-locked.

These all shipped in Safari many years ago, but without support in any other browser, most developers don’t use them — or even know they exist. Instead, developers need JavaScript to sync UI state with media playback state.

It’s far simpler and more efficient to use media state pseudo-classes in CSS.

video:buffering::after {
  content: "Loading...";
}
audio:muted {
  opacity: 0.5;
}

They are especially powerful combined with :has(), since it unlocks the ability to style anything on the page based on playback state, not just elements that are descendants of the media player.

article:has(video:playing) {
  background-color: var(--backgroundColor); 
  color: contrast-color(var(--backgroundColor));
  transition: background-color 0.5s ease;
}

Learn more about the power of :has() in Using :has() as a CSS Parent Selector and much more.

Navigation API

If you’ve built single-page applications, you may have experienced the pain of managing navigation state with history.pushState() and popstate events. Navigation API gives you a cleaner, more powerful way to intercept and control navigation.

This focus area is a continuation of Interop 2025, where significant progress was made to empower developers to initiate, intercept, and modify browser navigation actions. This year continues work on interoperability, to get the overall score up from the 92.3% test pass result during Interop 2025. Plus, there’s one new feature being added — the precommitHandler option. It lets you defer navigation until critical resources are ready, preventing jarring flashes of incomplete content.

navigation.addEventListener('navigate', (e) => {
  e.intercept({
    async precommitHandler() {
      // Load critical resources before commit
      await loadCriticalData();
    },
    async handler() {
      // Render the new view
      renderPage();
    }
  });
});

Interop 2026 will ensure Navigation API works reliably across browsers, a solid foundation for web applications.

Scoped custom element registries

Working with web components, you may have run into a frustrating limitation: the global customElements registry only allows one definition per tag name across your entire application. When two different libraries both define a <my-button> component, they conflict.

The CustomElementRegistry() constructor solves this by letting you create scoped registries. Different parts of your application — or different shadow roots — can have their own definitions for the same tag name.

const registry = new CustomElementRegistry();
registry.define('my-button', MyButtonV2);
shadowRoot.registry = registry;

This is especially valuable for microfrontends, component libraries, and any situation where you’re integrating third-party web components.

Safari 26.0 was the first browser to ship Scoped custom element registries. Inclusion in Interop 2026 will help ensure this capability works consistently across all browsers.

Scroll-driven Animations

Scroll-driven animations let you more easily create animations that respond to scroll position, now entirely in CSS. As a user scrolls, the animation progresses — no JavaScript needed. You can build scroll-triggered reveals, progress indicators, parallax effects, and interactive storytelling experiences.

Define animations with standard CSS keyframes, then connect them to scroll using animation-timeline:

.reveal {
  animation: fade-in linear forwards;
  animation-timeline: view();
  animation-range: entry 0% entry 100%;
}

@keyframes fade-in {
  from { opacity: 0; }
  to { opacity: 1; }
}

Use view() to trigger animations as elements enter and exit the viewport, or scroll() to tie animations to a scrolling container’s position. Learn much more in A guide to Scroll-driven Animations with just CSS.
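For the scroll() side, a common pattern is a reading-progress bar. This sketch (class name and styling are illustrative) ties a scaleX() animation to the root scroller:

```css
.reading-progress {
  position: fixed;
  top: 0;
  left: 0;
  width: 100%;
  height: 4px;
  background: rebeccapurple;
  transform-origin: 0 50%;
  animation: grow-progress linear;
  /* Must come after the animation shorthand, which resets it */
  animation-timeline: scroll(root block);
}

@keyframes grow-progress {
  from { transform: scaleX(0); }
  to   { transform: scaleX(1); }
}
```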

We shipped support for scroll-driven animations in Safari 26.0. Interop 2026 will help ensure this feature works consistently across all browsers.

Scroll Snap

CSS Scroll Snap controls the panning and scrolling behavior within a scroll container, creating carousel-like experiences:

.carousel {
  scroll-snap-type: x mandatory;
  overflow-x: scroll;
}
.carousel > * {
  scroll-snap-align: center;
}

Scroll Snap has been supported in all modern browsers for many years. But like many of the older CSS specifications, multiple rounds of changes to the specification while early versions were already shipping in browsers created a deep lack of interoperability. With a far more mature web standard, it’s time to circle back and improve interoperability. This is the power of the Interop Project — focusing all the browser teams on a particular feature, and using automated tests to find inconsistencies and disagreements.

shape()

For years, when you wanted to create a complex clipping path to use with clip-path or shape-outside, you’ve been limited to polygon(), which only supports straight lines, or SVG paths, which aren’t responsive to element size changes.

Now, the shape() function lets you create complex shapes with path-like commands (move, line, curve). It gives you the best of both worlds — curves like SVG paths, but with percentage-based coordinates that adapt as elements resize.

.element {
  clip-path: shape(
    from 0% 0%,
    line to 100% 0%,
    line to 100% 100%,
    curve to 0% 100% with 50% 150%,
    close
  );
}

We shipped support for the shape() function in Safari 18.4. And we look forward to Interop 2026 improving browser implementations so you can confidently use it to render complex, responsive curves.

View transitions

View Transitions was a focus area in Interop 2025, narrowly defined to include same-document view transitions and view-transition-class. These features allow for smooth, animated transitions between UI states within a single page, as well as flexible control over styling those transitions.

While Safari finished Interop 2025 with a score of 99.2% for view transitions, the overall interoperability score is at 90.8% — so the group decided to continue the effort, carrying over the tests from 2025.

For Interop 2026, the focus area expands to also include cross-document view transitions. This allows you to create smooth, animated transitions in the moments between pages as users navigate your site, rather than an abrupt jump when the new page loads. Cross-document view transitions shipped in Safari 18.2. Learn more about it in Two lines of Cross-Document View Transitions code you can use on every website today.

Web Compat

Web compatibility refers to whether or not a real-world website works correctly in a particular browser. When a site works in one browser, but not another — that’s a “compat” problem. This focus area is made up of a small collection of Web Platform Tests, selected because their failures in some browsers cause real websites to break — creating problems for both web developers and users.

Each time Web Compat has been a focus area as part of the Interop Project, it’s targeted a different set of compat challenges. This year, Interop 2026’s web compatibility work targets a new set of these challenges.

WebRTC

WebRTC (Web Real-Time Communication) enables real-time audio, video, and data communication directly between browsers, without requiring plugins or intermediate servers. You can build video conferencing apps, live streaming platforms, peer-to-peer file sharing, and collaborative tools.

Having reached a 91.6% pass rate, WebRTC continues as a focus area in 2026, building on the progress made during Interop 2025. We’re looking forward to fixing the long tail of interop issues of the main spec for WebRTC.

WebTransport

WebTransport provides a modern way to transmit data between client and server using the HTTP/3 protocol. It gives you low-latency bidirectional communication with multiple streams over a single connection. You get both unreliable datagram support (like UDP) for speed and reliable stream support (like TCP) for guaranteed delivery.

const transport = new WebTransport('https://example.com/endpoint');
await transport.ready;
const stream = await transport.createBidirectionalStream();
// Stream data efficiently

WebTransport is ideal for gaming, real-time collaboration tools, and applications where you need more control than WebSocket provides but don’t want to manage WebRTC’s complexity. Being part of Interop 2026 ensures WebTransport works consistently across all browsers, making it a reliable choice for real-time data transmission.

CSS Zoom

The CSS zoom property scales an element and its contents, affecting layout and making the element take up more (or less) space. Unlike transform: scale(), which is purely visual, zoom changes how the element participates in layout.

.card {
  zoom: 1.5; /* Element is 150% larger and takes up more space */
}

While zoom was supported in browsers for years as a non-standard property, it’s been plagued by inconsistencies in edge cases and how it interacts with other layout features. Now that it’s standardized, CSS zoom returns as a focus area in Interop 2026, continuing from 2025.

Investigation Efforts: A Look Ahead

In addition to the focus areas, the Interop Project includes four investigation areas. These are projects where teams gather to assess the current state of testing infrastructure and sort through issues that are blocking progress.

Accessibility testing

Continuing from previous years, the Accessibility Testing investigation aims to work towards generating consistent accessibility trees across browsers. This effort will improve the WPT testing infrastructure for accessibility on top of the foundation from Interop 2024. This work ensures that accessibility features are reliable and consistent, helping developers create more inclusive web experiences.

JPEG XL

JPEG XL is a next-generation raster graphics format that supports animation, alpha transparency, and lossy as well as lossless compression. We shipped support for it in Safari 17.0. This investigation will focus on making the feature properly testable by developing comprehensive test suites, opening up the possibility that JPEG XL could be a focus area in the future.

Mobile testing

The Mobile Testing investigation continues work started in 2025. This year, we will focus on improving infrastructure for mobile-specific features like dynamic viewport changes, which are crucial for building the responsive mobile web experiences that billions of users rely on every day.

WebVTT

Continuing from 2025, the WebVTT investigation addresses a critical challenge facing the web platform. Developers cite WebVTT’s inconsistent behavior across browsers as a major reason for choosing other subtitling and captioning solutions. Our investment in WebVTT last year primarily consisted of validating and fixing the existing test suite, as well as making any necessary spec changes along the way. We are excited to continue that effort this year to ensure synchronized text tracks and closed captioning work seamlessly across the web.

A more interoperable web

Interop 2026 brings together twenty focus areas that matter to you as a web developer. Some, like attr() and contrast-color(), give you more flexible ways to architect your CSS. Others, like Scroll-Driven Animations and View Transitions, let you create smoother, more engaging experiences without reaching for JavaScript. Features like WebTransport and the Navigation API give you more powerful tools for building modern web applications.

Just as important are the focus areas working to fix long-standing inconsistencies — ensuring Scroll Snap works reliably, bringing all browsers up to speed on shape(), and solving real-world compatibility problems that have been frustrating developers and breaking sites.

The WebKit team is committed to making these features work consistently across all browsers. Whether you’re building a design system, a single-page application, a video streaming platform, or anything in between, Interop 2026 is working to give you a more reliable foundation to build on.

Here’s to another year of making the web better, together!


Launching Interop 2026


The Interop Project is a cross-browser initiative to improve web compatibility in areas that offer the most benefit to both users and developers.

The group, including Apple, Google, Igalia, Microsoft, and Mozilla, takes proposals of features that are well defined in a sufficiently stable web standard, and have good test suite coverage. Then, we come up with a subset of those proposals that balances web developer priorities (via surveys and bug reports) with our collective resources.

We focus on features that are well-represented in Web Platform Tests as the pass-rate is how we measure progress, which you can track on the Interop dashboard.

Once we have an agreed set of focus areas, we use those tests to track progress in each browser throughout the year. And after that, we do it all again!

But, before we talk about 2026, let’s take a look back at Interop 2025…

Interop 2025

Interop dashboard showing browser scores. At the top are two large circles: ‘Interop’ with a score of 95 in green, and ‘Investigations’ with a score of 36 in orange. Below are four browser scores in green circles: Chrome 99, Edge 98, Firefox 99, and Safari 98, each shown with their respective browser icons.

Firefox started Interop 2025 with a score of 46, so we’re really proud to finish the cycle on 99. But the number that really matters is the overall Interop score, which is a combined score for all four browsers – and the higher this number is, the fewer developer hours are lost to frustrating browser differences.

The overall Interop score started at 25, and it’s now 95. As a result, huge web platform features became available cross-browser, such as Same-Document View Transitions, CSS Anchor Positioning, the Navigation API, CSS @scope, and the URLPattern API.

That’s the headline-grabbing part, but in my experience, it’s way more frustrating when a feature is claimed to be supported, but doesn’t work as expected. That’s why Interop 2025 also focused on improving the reliability of existing features like WebRTC, CSS Flexbox, CSS Grid, Pointer Events, CSS backdrop-filter, and more.

But it’s not just about passing tests

With some focus areas, in particular CSS Anchor Positioning and the Navigation API, we noticed that it was possible to achieve a good score on the tests while having inconsistent behavior compared to other browsers.

In some cases this was due to missing tests, but in some cases the tests contradicted the spec. This usually happens when tests are written against a particular implementation, rather than the specified behavior.

I experienced this personally before I joined Mozilla – I tried to use CSS Anchor Positioning back when it was only shipping in Chrome and Safari, and even with simple use-cases, the results were wildly inconsistent.

Although it caused delays in these features landing in Firefox, we spent time highlighting these problems by filing issues against the relevant specs, and ensured they got priority in their working groups. As a result, specs became less ambiguous, tests were improved, and browser behavior became more reliable for developers.

Okay, that’s enough looking at the past. Let’s move on to…

Interop 2026

Over 150 proposals were submitted for Interop 2026. We looked through developer feedback, on the issues themselves, and developer surveys like The State of HTML and The State of CSS. As an experiment for 2026, we at Mozilla also invited developers to stack-rank the proposals, the results of which we used in combination with the other data to compare developer preferences between individual features – this is something we want to expand on in the future.

After carefully examining all the proposals, the Interop group has agreed on 20 focus areas (formed of 33 proposals) and 4 investigation areas. See the Interop repository for the full list, but here are the highlights:

New features

As with 2025, part of the effort is about bringing new features to all browser engines.

Cross-document View Transitions allow transitions to work across documents, without any JavaScript. The sub-features rel="expect" and blocking="render" are included in this focus area.

Scroll-driven animations allow you to drive animations based on the user’s scroll position. This replaces heavy JavaScript solutions that run on the main thread.

WebTransport provides a low-level API over HTTP/3, allowing for multiple unidirectional streams, and optional out-of-order delivery. This is a modern alternative to WebSockets.

CSS container style queries allow you to apply a block of styles depending on the computed values of custom properties on the nearest container. This means, for example, you can have a simple --theme property that impacts a range of other properties.

JavaScript Promise Integration for Wasm allows WebAssembly to asynchronously ‘suspend’, waiting on the result of an external promise. This simplifies the compilation of languages like C/C++ which expect APIs to run synchronously.

CSS attr() has been supported across browsers for over 15 years, but only for pseudo-element content. For Interop 2026, we’re focusing on more recent changes that allow attribute values to be used in most CSS values (with URLs being an exception).

CSS custom highlights let you register a bunch of DOM ranges as a named highlight, which you can style via the ::highlight(name) pseudo-element. The styling is limited, but it means these ranges can span between elements, don’t impact layout, and don’t disrupt things like text selection.

Scoped Custom Element Registries allow different parts of your DOM tree (such as a shadow root) to use a different set of custom elements definitions, meaning the same tag name can refer to different custom elements depending on where they are in the DOM.

CSS shape() is a reimagining of path() that, rather than using SVG path syntax, uses a CSS syntax, allowing for mixed units and calc(). In practice, this makes it much easier to design responsive clip-paths and offset-paths.

And more, including CSS contrast-color(), accent-color, dialog closedby, popover="hint", fetch upload streams, IDB getAllRecords(), media pseudo-classes such as :playing, and the Navigation API’s precommitHandler.

Existing feature reliability improvements

Like in previous years, the backbone of Interop is in improving the reliability of existing features, removing frustrations for web developers.

In 2026, we’ll be focusing these efforts on particular edge cases in:

  • Range headers & form data in fetch
  • The Navigation API
  • CSS scroll snap & scroll events
  • CSS anchor positioning
  • Same-document View Transitions
  • JavaScript top-level await
  • The event loop
  • WebRTC
  • CSS user-select
  • CSS zoom

Some of these are carried over from 2025 focus areas, as shortcomings in the tests and specs were fixed, but too late to be included in Interop 2025.

Again, these are less headline-grabbing than the shiny new features, but it’s these edge cases where us web developers lose hours of our time. Frustrating, frustrating, hours.

Interop investigations

Sometimes, we see a focus area proposal that’s clearly important, but doesn’t fit the requirements of Interop. This is usually because the tests for the feature aren’t sufficient, are in the wrong format, or browsers are missing automation features that are needed to make the feature testable.

In these cases, we identify what’s missing, and set up an investigation area.

For Interop 2026, we’re looking at…

Accessibility. This is a continuation of work in 2025. Ultimately, we want browsers to produce consistent accessibility trees from the same DOM and CSS, but before we can write tests for this, we need to improve our testing infrastructure.

Mobile testing. Another continuation from 2025. In particular, in 2026, we want to figure out an approach for testing viewport changes caused by dynamic UI, such as the location bar and virtual keyboard.

JPEG XL. The current tests for this are sparse. Existing decoders have more comprehensive test suites, but we need to figure out how these relate to browsers. For example, progressive rendering is an important feature for developers, but how and when browsers should do this (to avoid performance issues) is currently being debated.

WebVTT. This feature allows for text to be synchronised to video content. The investigation is to go through the test suite and ensure it’s fit for purpose, and amend it where necessary.

It begins… again

The selected focus areas mean we’ve committed to more work compared to the other browsers, which is quite the challenge being the only engine that isn’t owned by billionaires. But it’s a challenge we’re happy to take on!

Together with other members of the Interop group, we’re looking forward to delivering features and fixes over the next year. You can follow along with the progress of all browsers on the Interop dashboard.

If your favorite feature is missing from Interop 2026, that doesn’t mean it won’t be worked on. JPEG XL is a good example of this. The current test suite meant it wasn’t a good fit for Interop 2026, but we’ve challenged the JPEG XL team at Google Research to build a memory-safe decoder in Rust, which we’re currently experimenting with in Firefox, as is Chrome.

Interop isn’t the limit of what we’re working on, but it is a cross-browser commitment.

If you’re interested in details of features as they land in Firefox, and discussions of future features from spec groups, you can follow us on:

Partner Announcements

This is a team effort, and we’ve all made announcement posts like this one. Get other members’ take on it:

The post Launching Interop 2026 appeared first on Mozilla Hacks - the Web developer blog.
