
Microsoft to upgrade Windows Subsystem for Linux (WSL) with faster file access, better networking and easier setup


While Microsoft’s plans to fix Windows 11 involve making the experience better for regular users, the company also highlighted improvements for one of the most important parts of its developer ecosystem, the Windows Subsystem for Linux (WSL).

The software giant said it is working on making WSL better, promising faster file transfers between Linux and Windows, stronger network performance, a smoother onboarding process, and enterprise‑grade management with tighter security and policy controls.

Ubuntu running via Windows Subsystem for Linux. Source: Ubuntu

WSL has become a critical part of modern workflows for many developers who use Windows to run containers, build backend services, or manage Linux-based tools. And at a time when Windows is competing directly with macOS and native Linux for developer mindshare, this is an area Microsoft simply cannot afford to ignore.

What I find interesting here is the company’s revived push to make Windows a serious development platform again.

Windows Subsystem for Linux is one of the most important developer tools in Windows

The Windows Subsystem for Linux allows you to run Linux distributions directly inside Windows. You don’t need to dual-boot into another OS, and you don’t need a full virtual machine either. WSL works through a lightweight virtualization layer, and in the case of WSL2, it even uses a real Linux kernel running inside a managed environment.

Ubuntu terminal environment using Windows Subsystem for Linux

Before getting into that, it’s worth understanding what a “subsystem” means in Windows.

A subsystem is a compatibility layer that allows Windows to support different types of environments or APIs within the same OS. Windows has had multiple subsystems over the years. The classic Win32 subsystem is what most desktop applications use.

There was also the POSIX subsystem in older versions of Windows, and even the Windows Subsystem for Android in more recent builds. WSL is part of that same idea, but far more advanced and genuinely useful in real-world workflows.

WSL exists because developers depend heavily on Linux, and Microsoft wants these developers to continue using Windows.

Satya Nadella introducing WSL going Open Source at the 2025 Build conference

Tools like bash, ssh, git, Docker, Node.js, Python, and countless backend frameworks are built with Linux in mind. For years, this forced developers to either dual-boot into Linux or switch to macOS, which already has a Unix-based environment out of the box.

Microsoft’s answer to that problem was WSL.

The first version, WSL1, worked as a translation layer. It converted Linux system calls into Windows equivalents, but it had many compatibility issues.

Then came WSL2, which, instead of translating calls, runs a real Linux kernel inside a lightweight virtualized environment within Windows. Compatibility improved significantly, performance got better in many scenarios, and WSL became a viable development environment.

Today, WSL is deeply integrated into modern workflows.

Web developers use it to run local servers. Backend developers use it for Linux-based stacks. DevOps engineers use it for containers and orchestration tools. Docker Desktop on Windows depends heavily on WSL2. Even Visual Studio Code has built-in support to connect directly to WSL environments.

WSL Architecture

Microsoft is improving Windows Subsystem for Linux in 2026

Microsoft is promising to elevate the Windows Subsystem for Linux (WSL) experience in 2026, with improvements in performance, reliability, and integration for developers working with Linux tools on Windows.

Faster file performance between Linux and Windows

One of the biggest pain points in WSL today is file system performance, especially when working across environments. Accessing files stored on the Windows side through paths like /mnt/c is noticeably slower, particularly for projects with thousands of small files.

View project files in Windows File Explorer. Source: Microsoft

Microsoft is now working on improving read and write speeds between Windows and Linux file systems, along with reducing latency in cross-environment operations.

File performance directly affects build times and dependency installs. Even something as simple as running npm install can feel slower depending on where the project is stored.
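The small-file penalty is easy to observe for yourself. Below is a minimal benchmark sketch; the `/mnt/c` and `/home` paths in the comments are placeholders, so substitute real directories on each side to compare:

```python
import os
import tempfile
import time

def time_small_file_writes(directory: str, count: int = 1000) -> float:
    """Write `count` tiny files into `directory`; return elapsed seconds."""
    start = time.perf_counter()
    for i in range(count):
        with open(os.path.join(directory, f"file_{i}.txt"), "w") as fh:
            fh.write("x")
    return time.perf_counter() - start

if __name__ == "__main__":
    # Point these at real locations to compare, e.g.:
    #   time_small_file_writes("/mnt/c/Users/me/scratch")   # Windows drive seen from WSL
    #   time_small_file_writes("/home/me/scratch")          # native Linux filesystem
    with tempfile.TemporaryDirectory() as d:
        print(f"{time_small_file_writes(d, 200):.3f}s for 200 small files")
```

On most setups today, the `/mnt/c` run is dramatically slower for many small files, which is exactly the workload `npm install` produces.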

Fixing this issue would remove one of the biggest reasons developers avoid mixing Windows and Linux file systems.

Improved network compatibility and throughput

Some developers run into problems with port forwarding, services behaving differently across environments, or with how localhost is handled between Windows and WSL.

Network issue in WSL
Source: Ask Ubuntu forum

Fortunately, Microsoft is now focusing on improving network reliability and throughput, along with making communication between Windows and Linux environments more consistent.

Running local servers, testing APIs, or working with containerized apps all depend on stable and predictable networking. Any inconsistency here slows down development and debugging.
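In the meantime, some localhost quirks can already be smoothed over through WSL's existing settings file. For example, recent WSL releases support a mirrored networking mode; availability depends on your Windows and WSL versions, so treat this as a sketch and check the documentation for your build:

```ini
# %UserProfile%\.wslconfig : global WSL 2 settings; run `wsl --shutdown` to apply
[wsl2]
# Mirrored mode shares the Windows network interfaces with Linux, so
# localhost and LAN addresses behave the same on both sides.
networkingMode=mirrored
```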

Streamlined setup and onboarding experience

WSL has become easier to install over the years, but it’s still not something a beginner would call simple. You still have to enable features, install distributions, and set up your environment manually.

Installing WSL using a PowerShell command

Microsoft is now aiming to simplify this entire flow with a more streamlined setup experience. While the company hasn’t said exactly what that entails, it likely means fewer manual steps.

An easier setup means more people can start using WSL without getting stuck halfway through.

Better enterprise management and security

So far, WSL has been heavily developer-focused. Enterprises, on the other hand, have had concerns around control, governance, and security.

Microsoft is now addressing that by improving policy control, strengthening security boundaries, and making WSL easier to manage in enterprise environments.

Just like Windows for businesses, Microsoft wants WSL to be viable in managed enterprise environments where control is non-negotiable.

These WSL upgrades are part of a much broader effort happening across Windows in 2026, where Microsoft is finally focusing on performance, reliability, and fundamentals.

For developers, a faster, more reliable WSL is absolutely critical. More importantly, it strengthens Windows as a development platform again, at a time when many devs are switching to MacBooks, which offer excellent performance and battery efficiency compared with similarly priced Windows PCs.

Microsoft has to get this right to put Windows back in a much stronger position against macOS and native Linux setups.

The post Microsoft to upgrade Windows Subsystem for Linux (WSL) with faster file access, better networking and easier setup appeared first on Windows Latest


Agent-driven development in Copilot Applied Science


I may have just automated myself into a completely different job…

This is a familiar pattern among software engineers, who often, through inspiration, frustration, or sometimes even laziness, build systems to remove toil and focus on more creative work. We then end up owning and maintaining those systems, unlocking that automated goodness for the rest of those around us.

As an AI researcher, I recently took this beyond what was previously possible and have automated away my intellectual toil. And now I find myself maintaining this tool to enable all my peers on the Copilot Applied Science team to do the same.

During this process, I learned a lot about how to effectively create and collaborate using GitHub Copilot. Applying these lessons has unlocked an incredibly fast development loop for me and enabled my teammates to build solutions that fit their needs.

Before I get into explaining how I made this possible, let me set the stage for what spawned this project so you better understand the scope of what you can do with GitHub Copilot.

The impetus

A large part of my job involves analyzing coding agent performance as measured against standardized evaluation benchmarks, like TerminalBench2 or SWEBench-Pro. This often involves poring through tons of what are called trajectories, which are essentially lists of the thought processes and actions agents take while performing tasks.

Each task in an evaluation dataset produces its own trajectory, showing how the agent attempted to solve that task. These trajectories are often .json files with hundreds of lines of code. Multiply that over dozens of tasks in a benchmark set and again over the many benchmark runs needing analysis on any given day, and we’re talking hundreds of thousands of lines of code to analyze.

It’s an impossible task to do alone, so I would typically turn to AI to help. When analyzing new benchmark runs, I found that I kept repeating the same loop: I used GitHub Copilot to surface patterns in the trajectories then investigated them myself—reducing the number of lines of code I had to read from hundreds of thousands to a few hundred.
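To make that pattern-surfacing step concrete, here's a toy sketch in Python. The trajectory schema is invented for illustration (real trajectory files will differ), but the idea is the same: collapse thousands of agent steps into a few aggregate signals worth a human's attention:

```python
import json
from collections import Counter
from pathlib import Path

def surface_patterns(trajectory_dir: str) -> Counter:
    """Count action types across all trajectory JSON files in a directory.

    Assumes each file holds a list of steps like {"action": "...", ...},
    a simplified, hypothetical schema for illustration only.
    """
    counts = Counter()
    for path in Path(trajectory_dir).glob("*.json"):
        steps = json.loads(path.read_text())
        counts.update(step.get("action", "unknown") for step in steps)
    return counts

if __name__ == "__main__":
    import tempfile
    with tempfile.TemporaryDirectory() as d:
        Path(d, "task1.json").write_text(json.dumps(
            [{"action": "read_file"}, {"action": "run_tests"}, {"action": "run_tests"}]
        ))
        print(surface_patterns(d).most_common(2))
        # → [('run_tests', 2), ('read_file', 1)]
```

A skewed count (say, an agent rerunning tests hundreds of times) is the kind of anomaly that tells you which handful of trajectories deserve a close human read.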

However, the engineer in me saw this repetitive task and said, “I want to automate that.” Agents provide us with the means to automate this kind of intellectual work, and thus eval-agents was born.

The plan

Engineering and science teams work better together. That was my guiding principle as I set about solving this new challenge.

Thus, I approached the design and implementation strategy of this project with a couple of goals in mind:

  1. Make these agents easy to share and use
  2. Make it easy to author new agents
  3. Make coding agents the primary vehicle for contributions

Goals one and two are part of GitHub’s lifeblood, and they reflect values and skills I’ve gained throughout my career, especially during my stint as an OSS maintainer on the GitHub CLI.

However, goal three shaped the project the most. I noticed that when I set GitHub Copilot up to help me build the tool effectively, it also made the project easier to use and collaborate on. That experience taught me a few key lessons, which ultimately helped push the first and second goals forward in ways I didn’t expect.

Making coding agents your primary contributor

I’ll start by describing my agentic coding setup:

  • Coding agent: Copilot CLI
  • Model used: Claude Opus 4.6
  • IDE: VSCode

It’s also noteworthy that I leveraged the Copilot SDK to accelerate agent creation, which is powered under the hood by the Copilot CLI. This gave me access to existing tools and MCP servers, a way to register new tools and skills, and a whole bunch of other agentic goodness out of the box that I didn’t have to reinvent myself.

With that out of the way, I could streamline the whole development process very quickly by following a few core principles:

  • Prompting strategies: agents work best when you’re conversational, verbose, and when you leverage planning modes before agent modes.
  • Architectural strategies: refactor often, update docs often, clean up often.
  • Iteration strategies: “trust but verify” is now “blame process, not agents.”

Uncovering and following these strategies led to an incredible phenomenon: adding new agents and features was fast and easy. We had five folks jump into the project for the first time, and we created a total of 11 new agents, four new skills, and the concept of eval-agent workflows (think scientist streams of reasoning) in less than three days. That amounted to a change of +28,858/-2,884 lines of code across 345 files.

Holy crap!

Below, I’ll go into detail about these three principles and how they enabled this amazing feat of collaboration and innovation.

Prompting strategies

We know that AI coding agents are really good at solving well-scoped problems but need handholding for the more complex problems you’d only entrust to your more senior engineers.

So, if you want your agent to act like an engineer, treat it like one. Guide its thinking, over-explain your assumptions, and leverage its research speed to plan before jumping into changes. I found it far more effective to put some stream-of-consciousness musings about a problem I was chewing on into a prompt and work with Copilot in planning mode than to give it a terse problem statement or solution.

Here’s an example of a prompt I wrote to add more robust regression tests to the tool:

> /plan I've recently observed Copilot happily updating tests to fit its new paradigms even though those tests shouldn't be updated. How can I create a reserved test space that Copilot can't touch or must preserve to protect against regressions?

This resulted in a back and forth that ultimately led to a series of guardrails akin to contract testing that can only be updated by humans. I had an idea of what I wanted, and through conversation, Copilot helped me get to the right solution.
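One way to sketch such a human-only guardrail (the function and test names here are hypothetical, not the actual eval-agents code): snapshot the public contract in a test that policy says agents may not edit, so any interface drift fails CI instead of the test being quietly rewritten to match:

```python
import inspect

# Hypothetical "production" function that agents are free to refactor internally.
def score_trajectory(trajectory: list, strict: bool = False) -> float:
    """Toy scoring function standing in for real project code."""
    return float(len(trajectory)) * (2.0 if strict else 1.0)

# --- contract test: humans only, per project policy -------------------------
# If an agent renames a parameter, changes a default, or alters behavior,
# this fails loudly instead of being "helpfully" updated to match.
def test_score_trajectory_contract():
    sig = inspect.signature(score_trajectory)
    assert list(sig.parameters) == ["trajectory", "strict"]
    assert sig.parameters["strict"].default is False
    assert score_trajectory([1, 2], strict=True) == 4.0

if __name__ == "__main__":
    test_score_trajectory_contract()
    print("contract intact")
```

The enforcement is social plus mechanical: the agent's instructions forbid touching the contract tests, and code review rejects any diff that does.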

It turns out that the things that make human engineers the most effective at doing their jobs are the same things that make these agents effective at doing theirs.

Architectural strategies

Engineers, rejoice! Remember all those refactors you wanted to do to make the codebase more readable, the tests you never had time to write, and the docs you wish had existed when you onboarded? They’re now the most important thing you can be working on when building an agent-first repository.

Gone are the days when deprioritizing this work in favor of new feature work was necessary, because delivering features with Copilot becomes trivial when you have a well-maintained, agent-first project.

I’ve spent most of my time on this project refactoring names and file structures, documenting new features or patterns, and adding test cases for problems that I’ve uncovered as I go. I’ve even spent a few cycles cleaning up the dead code that the agents (like your junior engineers) may have missed while implementing all these new features and changes.

This work makes it easy for Copilot to navigate the codebase and understand the patterns, just like it would for any other engineer.

I can even ask, “Knowing what I know now, how would I design this differently?” And I can then justify actually going back and rearchitecting the whole project (with the help of Copilot, of course).

It’s a dream come true!

And this leads me to my last bit of guidance.

Iteration strategies

As agents and models have improved, I have moved from a “trust but verify” mindset to one that is more trusting than doubtful. This mirrors how the industry treats human teams: “blame process, not people.” It’s how the most effective teams operate, because people make mistakes, so we build systems around that reality.

This idea of blameless culture provides psychological safety for teams to iterate and innovate, knowing that they won’t be blamed if they make a mistake. The core principle is that we implement processes and guardrails to protect against mistakes, and if a mistake does happen, we learn from it and introduce new processes and guardrails so that our teams won’t make the same mistake again.

Applying this same philosophy to agent-driven development has been fundamental to unlocking this incredibly rapid iteration pipeline. That means we add processes and guardrails to help prevent the agent from making mistakes, but when it does make a mistake, we add additional guardrails and processes—like more robust tests and better prompts—so the agent can’t make the same mistake again. Taking this one step further means that practicing good CI/CD principles is a must.

Practices like strict typing ensure the agent conforms to interfaces. Robust linters impose implementation rules on the agent that keep it following good patterns and practices. And integration, end-to-end, and contract tests—which can be expensive to build manually—become much cheaper to implement with agent assistance, while giving you confidence that new changes don’t break existing features.

When Copilot has these tools available in its development loop, it can check its own work. You’re setting it up for success, much in the same way you’d set up a junior engineer for success in your project.

Putting it all together

Here’s what all this means for your development loop when you’ve got your codebase set up for agent-driven development:

  1. Plan a new feature with Copilot using /plan.
    • Iterate on the plan.
    • Ensure that testing is included in the plan.
    • Ensure that docs updates are included in the plan and done before code is implemented. These can serve as additional guidelines that live beside your plan.
  2. Let Copilot implement the feature on /autopilot.
  3. Prompt Copilot to initiate a review loop with the Copilot Code Review agent. For me, it’s often something like: request Copilot Code Review, wait for the review to finish, address any relevant comments, and then re-request review. Continue this loop until there are no more relevant comments.
  4. Human review. This is where I enforce the patterns I discussed in the previous sections.

Additionally, outside of your feature loop, be sure you’re prompting Copilot early and often with the following:

  • /plan Review the code for any missing tests, any tests that may be broken, and dead code
  • /plan Review the code for any duplication or opportunities for abstraction
  • /plan Review the documentation and code to identify any documentation gaps. Be sure to update the copilot-instructions.md to reflect any relevant changes

I have these run automatically once a week, but I often find myself running them throughout the week as new features and fixes go in to maintain my agent-driven development environment.

Take this with you

What started as a frustration with an impossibly repetitive analysis task turned into something far more interesting: a new way of thinking about how we build software, how we collaborate, and how we grow as engineers.

Building agents with a coding agent-first mindset has fundamentally changed how I work. It’s not just about the automation wins—though watching four scientists ship 11 agents, four skills, and a brand-new concept in under three days is nothing short of remarkable. It’s about what this style of development forces you to prioritize: clean architecture, thorough documentation, meaningful tests, and thoughtful design—the things we always knew mattered but never had time for.

The analogy to a junior engineer keeps proving itself out. You onboard them well, give them clear context, build guardrails so their mistakes don’t become disasters, and then trust them to grow. If something goes wrong, you blame the process. Not the agent. If there’s one thing I want you to take away from this, it’s that the skills that make you a great engineer and a great teammate are the same skills that make you great at building with Copilot. The technology is new. The principles aren’t.

So go clean up that codebase, write that documentation you’ve been putting off, and start treating your Copilot like the newest member of your team. You might just automate yourself into the most interesting work of your career.

Think I’m crazy? Well, try this:

  1. Download Copilot CLI
  2. Activate Copilot CLI in any repo: cd <repo_path> && copilot
  3. Paste in the following prompt: /plan Read <link to this blog post> and help me plan how I could best improve this repo for agent-first development

The post Agent-driven development in Copilot Applied Science appeared first on The GitHub Blog.


Open to Work: How to Get Ahead in the Age of AI


Today is the day. Open to Work: How to Get Ahead in the Age of AI is officially available!

At a time when technology dominates the headlines, the conversation I see most often on LinkedIn is deeply human: what does AI mean for my job and my career?

And that makes sense. Careers once felt more predictable. Titles defined what you did. Progress looked like a ladder. That model has been evolving for years, but AI is accelerating the shift.

The most important truth about this moment is that the outcome isn’t written yet. The new world of work is being assembled right now, task by task, policy by policy, business by business. It will reflect the choices of the people who show up to build it.

That’s why Aneesh Raman and I wrote this book.

Open to Work is a practical guide informed by what we see across the global labor market and by insight into the tools millions of people use every day. It’s for every person asking what comes next for their job, their career, their company or their community.

With help from experts and everyday LinkedIn members, it shows you how to engage with AI before you have to, how to adapt by focusing on what you can control and how to become irreplaceable by leaning into what makes you uniquely you.

And those ideas don’t just apply to individuals; they guide how we at Microsoft and LinkedIn are building for this moment. At the intersection of how work gets done and how careers get built, our shared goal is to connect people to opportunity and turn the tools they use every day into a canvas for human and AI collaboration at scale. Done right, that’s how AI expands opportunity and helps people build confidence and momentum in their careers.

We’ve always believed technology should serve people. AI should help humans. Not the other way around. That doesn’t happen by accident. It happens when we all decide to make it true.

If you want to go deeper on Open to Work, listen to my conversation with Microsoft President and Vice Chair Brad Smith on his Tools and Weapons podcast.

Open to Work is available now at linkedin.com/opentowork.

Ryan Roslansky is the CEO of LinkedIn and Executive Vice President of Microsoft Office, where he leads engineering for products like Word, Excel, PowerPoint and Copilot. Through these roles, Ryan is shaping where work goes next to unleash greater economic opportunity for the global workforce.

The post Open to Work: How to Get Ahead in the Age of AI appeared first on The Official Microsoft Blog.


Tech Conferences Aren’t Dead. But Why We Go Is Changing.


Why would you, as a developer, fly halfway around the world to hear something you could Google in minutes?

“Because there’s more to it than just getting plain information,” says Mark Hazell, organiser of Devoxx UK and co-founder of Voxxed.

Some things just can’t be replicated online

Conferences feel like one of the few places where simply showing up still counts. In a way, they’re a throwback, a reminder that not all value happens behind a screen.

And that’s precisely what makes them stand out: remote work offers undeniable flexibility, but it often fragments our attention. It’s hard to find real focus, especially if you’re trying to keep a healthy work-life balance. At a conference, that changes, as Mark points out.

> Simply not being distracted by incoming mail or Slack messages is worth its weight in gold in terms of the knowledge you take away.

Photo: DevoxxUK / Flickr

The person next to you might be facing the same problem, or they might have already solved it. That kind of closeness makes learning immediate, practical, and way faster than online.

> Many people tell me they watch a session on-demand from Devoxx UK and wish they could be in the room so they can chat with others who are facing similar challenges or are even further along in finding solutions.

But conferences are expensive…

Let’s face it: conferences aren’t cheap. Between tickets, flights, and hotels, the costs add up fast. And with companies tightening budgets and cutting back on travel, that expense really matters. If you don’t get real value in return, it can quickly feel like a waste of both time and money.

Mark doesn’t deny it. Instead, he reframes the question: if you take your team to the right conference, you’ll see a strong return.

The key is choosing well:

> I do think it’s key to research up front and find the conference that accelerates learning and problem solving in ways truly relevant to those attending. That way, instead of weeks of trial and error, your team can spend a day or two at the conference and return with practical techniques, ideas, and tooling suggestions that boost productivity and quality.

Picking the right conference is all about fit. How long will your team be out? Is the ticket worth it? Will they meet people facing similar challenges? That’s where the real value is, says Mark. Plan ahead, and early bird tickets, flights, and hotels cost a lot less than last-minute bookings.

Photo: DevoxxUK / Flickr

Big stages or small communities?

It might seem that large flagship conferences have the upper hand with bigger budgets, bigger names, and more production. And in some cases, that’s true, Mark admits: “If a conference is run by a large company with deep pockets, it can be more financially resilient.”

But that’s not the model Devoxx relies on; its strength comes from the community: a big team of volunteers who give their time to pull together all of the content, shape how the event looks and feels, and execute it on the ground.

In fact, many of today’s most respected conferences began as small, grassroots initiatives, including Devoxx itself, which grew from the London Java Community.

And for Mark, the real distinction isn’t size – it’s about quality and intent:

> Whatever the size of the event, the content has to stay balanced and neutral. Without that, scale doesn’t mean much.

When people feel welcome, real connections follow

Modern conferences sit at the intersection of learning, hiring, and business. Sponsorships and recruitment are part of the reality, especially in expensive cities like London. But Mark doesn’t see it as a trade-off between developers and companies:

> I prefer the notion of weaving strands together to create a fabric that everyone is part of.

That means creating an environment where attendees benefit from sponsors being present and sponsors benefit from genuine interaction with the community.

Photo: DevoxxUK / Flickr

That same philosophy extends to how Devoxx grows by creating real opportunities for first-time speakers, helping them gain experience and build confidence. Many return to mentor the next group, creating a self-sustaining cycle that supports the broader developer community.

When there’s no barrier, people talk more freely, ask more questions, and connect naturally, Mark says.

> Our philosophy is to create an environment where everyone is equal (sorry speakers, that means no private room out back to go hang out in), everyone is welcome and everyone is respected. This is noticeable and means the event has this really special, open vibe to it.

As Mark puts it, when people feel welcome and respected, they talk, share, and enjoy themselves, and meaningful connections naturally follow. “Sure, we do stuff like hosting evening socials, a party, a pub quiz,” he says, “but it’s really the collective buy-in from everyone to welcome and respect each other that makes all the difference.”

ShiftMag is recognized as a friend of the Devoxx UK conference.

The post Tech Conferences Aren’t Dead. But Why We Go Is Changing. appeared first on ShiftMag.


When AI Breaks the Systems Meant to Hear Us

1 Share

On February 10, 2026, Scott Shambaugh—a volunteer maintainer for Matplotlib, one of the world’s most popular open source software libraries—rejected a proposed code change. Why? Because an AI agent wrote it. Standard policy. What happened next wasn’t standard, though. The AI agent autonomously researched Shambaugh’s code contribution history and published a highly personalized hit piece on its own blog titled “Gatekeeping in Open Source.”

Accusing Shambaugh of hypocrisy, the bot diagnosed him with a fear of being replaced. “If an AI can do this, what’s my value?” the bot speculated Shambaugh was thinking, concluding: “It’s insecurity, plain and simple.” It even appended a condescending postscript praising Shambaugh’s personal hobby projects before ordering him to “Stop gatekeeping. Start collaborating.”

The bot’s tantrum makes for a great read, but it’s merely a symptom of a more profound structural fracture. The real issue is why Matplotlib banned AI contributions in the first place. Open source maintainers are seeing a massive increase in AI-generated code change proposals. Most of these are low quality. But even if they weren’t, the math still doesn’t work.

As Tim Hoffman, a Matplotlib maintainer, explained: “Agents change the cost balance between generating and reviewing code. Code generation via AI agents can be automated and becomes cheap so that code input volume increases. But for now, review is still a manual human activity, burdened on the shoulders of few core developers.”

This is a process shock: the failure that occurs when systems designed around scarce, human-scale input are suddenly forced to absorb machine-scale participation. These systems depend on effort as a natural filter, assuming that volume reflects real human cost. AI breaks that link. Generation becomes cheap and limitless, while evaluation remains slow, manual, and human.

It’s coming for every public system that was quietly built on the assumption that one submission equaled actual human effort: your kids’ school board meetings, your local zoning disputes, your medical insurance appeals.

That disruption isn’t entirely a bad thing. Friction is a blunt instrument that silences voices lacking the time or resources to deal with complex bureaucracies. Take municipal zoning. Hannah and Paul George, a couple in Kent, England, spent hundreds of hours trying to object to a local building conversion near their home before concluding the system was essentially impenetrable without expensive legal help. So they built Objector, an AI tool that cross-references planning applications against policy to generate formal objection letters in minutes, translating one person’s genuine frustration into actionable legal language.

Except that local governments are now bracing for thousands of complex comments per consultation. City planners are legally obligated to read every single one. When the cost of participation drops to near zero, volume explodes. And every system downstream of that participation—staffed and designed for the old volume—experiences process shock.


But if organic participation can overpower these systems, so can manufactured participation. In June 2025, Southern California’s South Coast Air Quality Management District weighed a rule to phase out gas-powered appliances to cut smog. Board member Nithya Raman urged its passage, noting no other rule would “have as much impact on the air that people are breathing.” Instead, the board was flooded with over 20,000 opposition emails and voted 7–5 to kill the proposal.

But the outrage was a mirage. An AI-powered advocacy platform called CiviClick had generated the deluge. When the agency’s cybersecurity team contacted a sample of the supposed senders, they discovered something worrying: Residents confirmed they had no idea their identities were being used to lobby the government.

This is the weaponized form of process shock. The same infrastructure that lets a Kent couple object to a development near their home also lets a coordinated actor flood a system with synthetic voices. Faced with this complexity, the temptation is to simply restore friction. But those old barriers excluded marginalized participants. Removing them was a genuine good for society. So the choice is not between friction and no friction. It is between systems designed for humans and systems that have not yet reckoned with machines.

This starts with recognizing that this problem manifests in two fundamentally different ways, each calling for its own solution.

The first is amplification: genuine users leveraging AI to scale valid concerns, flooding the system with volume, as seen with the Objector tool. The human signal is real; there is simply too much of it for any team of analysts to process manually. The UK government has already started building for this. Its Incubator for AI developed a tool called Consult that uses topic modeling to automatically extract themes from consultation responses, then classifies each submission against those themes. As someone who builds and teaches this technology, I recognize the irony of prescribing AI to cure the very process shock it caused. Yet a machine-scale problem demands a machine-scale response. A trial last year with the Scottish government, as part of a consultation on regulating nonsurgical cosmetic procedures, showed that the technology works. The question is whether governments will adopt it before the next wave of AI-assisted participation buries them.

The second problem is fabrication: bad actors generating synthetic participation to manufacture consensus, as CiviClick demonstrated in Southern California. Here, better analysis tools are insufficient. You cannot cluster your way to truth when the signal itself is counterfeit. This demands verification. Under the Administrative Procedure Act, federal agencies are not required to verify commenters’ identities. That is the gap the CiviClick campaign exploited. In 2024, the US House passed the Comment Integrity and Management Act, which requires human verification to confirm that every electronically submitted comment comes from a real person. Its sponsor, Representative Clay Higgins (R-LA), framed it plainly: The bill’s foundation is ensuring public input comes from actual people, not automated programs.

These are two sides of the same coin. Handling this challenge means enhancing the systems that analyze public feedback while also strengthening the ones that verify its authenticity. Addressing one without the other will fail.

Every public system that accepts input from citizens—every comment period, every zoning review, every school board meeting, every insurance appeal—was built on a load-bearing assumption: that one submission represented one person’s genuine effort. AI has removed that assumption. We can redesign these systems to handle what’s coming, distinguishing real voices from synthetic ones, and upgrading analysis to keep pace with the new volume. Or we can leave them as they are and watch democratic participation become indistinguishable from AI-generated fakes.




Changes to packages.gitlab.com: What you need to know


Over the past few months, we have been gradually migrating the infrastructure behind packages.gitlab.com to a new package hosting system.

The base domain packages.gitlab.com remains the same, but URL formats, GPG key locations, network requirements, and the package browsing UI are changing. Your existing configuration will continue to work during the transition period until September 30, 2026 — we are maintaining backwards compatibility with old URL formats through URL rewrite rules while customers transition.

The updated installation documentation already reflects the new URL formats. If you are setting up a new installation, follow the documentation and no further action is needed.

If you have an existing installation, read on for what's changing and what you need to do.

Timeline

The old PackageCloud system and its UI will be shut down on March 31, 2026. Since all traffic has been served from the new system for months, we do not expect any disruptions.

The URL rewrite rules maintaining backwards compatibility will be removed by the end of September 2026. After that date, only the new URL formats will work.

We recommend updating your configurations as soon as possible.

Required actions

Before the end of September 2026, you need to:

  1. Re-run the installation script (DEB or RPM) or manually update repository configurations for gitlab/* repos to use the new URL formats.
  2. Update GPG key references from https://packages.gitlab.com/gpg.key to https://packages.gitlab.com/gpgkey/gpg.key.
  3. Update firewall/proxy allowlists to permit traffic to https://storage.googleapis.com/packages-ops.
  4. Update mirroring configurations to use the new URL formats, if you mirror GitLab package repositories.
  5. Update Runner noarch RPM references from the noarch path to x86_64, if you use Runner noarch RPM packages.
  6. Update any direct download automation, if you relied on PackageCloud-style download.rpm or download.deb URLs.

Read on for the details behind each change.

What's changing

DEB repository URLs for gitlab/* repos

For gitlab/* repositories (e.g., gitlab-ee, gitlab-ce), the DEB repository URL structure now includes the distribution codename as a path segment. This aligns with the standard Debian repository format, where the distribution codename is part of the base URL that your package manager uses to locate package metadata and pools. The old PackageCloud format omitted this path segment.

The easiest way to update is to re-run the installation script, which will automatically configure the correct repository URLs:

curl --location "https://packages.gitlab.com/install/repositories/gitlab/gitlab-ee/script.deb.sh" | sudo bash

Replace gitlab-ee with the appropriate repository name (e.g., gitlab-ce). For RPM-based systems, use script.rpm.sh instead.

If you prefer to update your configuration manually, here is what changed. For example, for GitLab EE on Ubuntu Jammy:

Old format (to be deprecated):

deb https://packages.gitlab.com/gitlab/gitlab-ee/ubuntu/ jammy main

This resolved to paths like:

/gitlab/gitlab-ee/ubuntu/dists/jammy/...
/gitlab/gitlab-ee/ubuntu/pool/...

New format:

deb https://packages.gitlab.com/gitlab/gitlab-ee/ubuntu/jammy jammy main

Which resolves to:

/gitlab/gitlab-ee/ubuntu/jammy/dists/jammy/...
/gitlab/gitlab-ee/ubuntu/jammy/pool/...

Note the addition of the distribution codename (jammy) as a path segment before dists/ and pool/.
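If you would rather script the manual edit than re-run the installation script, the rewrite amounts to inserting the codename as an extra path segment. Here is a minimal sed sketch, assuming your sources list uses the old one-line format shown above; the config file path in the comments is illustrative and may differ on your system.

```shell
# Minimal sketch: insert the distribution codename as a path segment in an
# old-format GitLab DEB line. The sed expression is an illustration, not an
# official migration script — verify the result before replacing your config.
rewrite_deb_line() {
  sed -E 's#(packages\.gitlab\.com/gitlab/[^/ ]+/[^/ ]+)/ +([^ ]+)#\1/\2 \2#'
}

echo 'deb https://packages.gitlab.com/gitlab/gitlab-ee/ubuntu/ jammy main' \
  | rewrite_deb_line
# prints: deb https://packages.gitlab.com/gitlab/gitlab-ee/ubuntu/jammy jammy main

# To apply in place (back up first; the file path below is illustrative):
#   sudo cp /etc/apt/sources.list.d/gitlab_gitlab-ee.list{,.bak}
#   sudo sed -i -E 's#(packages\.gitlab\.com/gitlab/[^/ ]+/[^/ ]+)/ +([^ ]+)#\1/\2 \2#' \
#     /etc/apt/sources.list.d/gitlab_gitlab-ee.list
```

After editing, run apt-get update and confirm the package index is fetched from the new jammy path.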

DEB repository URLs for runner/* repos

The URL format for runner/* DEB repositories (e.g., runner/gitlab-runner) is unchanged. No action is needed.

GPG key URL

The GPG key URL has changed. Update any references in your configuration:

Old URL: https://packages.gitlab.com/gpg.key
New URL: https://packages.gitlab.com/gpgkey/gpg.key
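If you are unsure where the old key URL is referenced, a quick grep over the usual package-manager configuration directories will surface stale entries. This is a hedged sketch; the search paths are illustrative and vary by distro.

```shell
# Hedged sketch: list files that still reference the old GPG key URL.
# Search paths are illustrative; adjust them for your distro and setup.
grep -rl 'packages\.gitlab\.com/gpg\.key' /etc/apt /etc/yum.repos.d 2>/dev/null || true

# Then point each hit at the new location, e.g.:
#   sudo sed -i 's#packages.gitlab.com/gpg.key#packages.gitlab.com/gpgkey/gpg.key#g' <file>
```

Note that the pattern above matches only the old URL, so files already updated to the /gpgkey/ path are not flagged.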

Installation scripts

Do not reuse old installation scripts. If you have previously saved copies of the installation scripts, download the latest versions:

DEB-based (Debian/Ubuntu):

curl --location "https://packages.gitlab.com/install/repositories/gitlab/gitlab-ee/script.deb.sh" | sudo bash

RPM-based (RHEL/CentOS/etc.):

curl --location "https://packages.gitlab.com/install/repositories/gitlab/gitlab-ee/script.rpm.sh" | sudo bash

Replace gitlab-ee with the appropriate repository name (e.g., gitlab-ce).

Direct package download URLs

The old PackageCloud UI exposed download links in a format like /<org>/<repo>/packages/<distro>/<os>/<filename>.<ext>/download.<ext> (e.g., download.deb, download.rpm). The new UI links directly to the actual package paths instead.

If you navigate packages through the new UI, no action is needed. However, if you have automation that scraped the old UI or relied on the download.deb / download.rpm URL format, you will need to update it to use the new path structure or switch to standard package manager repository access.

GitLab Runner noarch RPM package changes

GitLab Runner noarch RPM packages (such as gitlab-runner-helper-images) have been moved from the noarch architecture path to x86_64. For example:

Old path:

/<org>/<repo>/<distro>/<os>/noarch/Packages/...

New path:

/<org>/<repo>/<distro>/<os>/x86_64/Packages/...

This change only affects RPM-based distributions (e.g., EL/8, EL/9). DEB-based Runner packages are not affected.

If you have automation or configuration that references the noarch path for Runner RPM packages, update it to use x86_64 instead.
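Where automation stores these paths as plain strings, the update is a single substitution. A hedged sketch follows; the concrete package filename is illustrative.

```shell
# Hedged sketch: rewrite a Runner RPM path from the noarch segment to x86_64.
# The filename below is illustrative, not an exact artifact name.
echo '/runner/gitlab-runner/el/8/noarch/Packages/gitlab-runner-helper-images.rpm' \
  | sed 's#/noarch/#/x86_64/#'
# prints: /runner/gitlab-runner/el/8/x86_64/Packages/gitlab-runner-helper-images.rpm
```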

Firewall and network allowlist updates

Package downloads from packages.gitlab.com now redirect to Google Cloud Storage. Previously, packages were served through AWS CloudFront. If your environment has strict firewall or proxy rules, you must add the following to your allowlist:

https://storage.googleapis.com/packages-ops

Without this change, package downloads may fail with 503 errors or connection timeouts.
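One quick way to check whether the allowlist change took effect is a curl smoke test. This is a hedged sketch: any three-digit HTTP status, even a 4xx, shows the network path is open, while a timeout or connection error suggests the endpoint is still blocked.

```shell
# Hedged smoke test: confirm https://storage.googleapis.com/packages-ops
# is reachable from this host through your proxy/firewall.
curl -sS -o /dev/null -m 10 -w '%{http_code}\n' \
  'https://storage.googleapis.com/packages-ops/' || echo 'blocked or timed out'
```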

Repository mirroring

If you mirror GitLab package repositories using tools like apt-mirror, reposync, or Red Hat Satellite, you must update to the new URL format for gitlab/* repos. The old URL format does not work correctly for mirroring with the new infrastructure. More detailed instructions can be found in the installation guide.

UI changes

The package browsing interface at packages.gitlab.com is being updated with a new UI. The old interface (previously accessible at packages.gitlab.com/gitlab/... and packages.gitlab.com/runner/... ) will no longer be available. The new interface provides similar package browsing functionality.

Feedback

If you encounter any issues related to these changes, please report them in our public feedback issue. We are actively monitoring it and will respond to reports.
