Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.
150769 stories
·
33 followers

Azure Developer CLI (azd)

1 Share
Learn how to use the Azure Developer CLI as an alternative deployment path for Aspire applications.
Read the whole story
alvinashcraft
20 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Apple at 50: the good and the bad

1 Share

We're spending the week documenting and analyzing the first half-century of Apple's existence, from the oft-overlooked creation of QuickTime to the iconic MacBook Air to Apple's veering into antitrust territory. We're also ranking the best Apple products of all time, and it's causing us all to feel many feelings.

Verge subscribers, don't forget you get exclusive access to ad-free Vergecast wherever you get your podcasts. Head here. Not a subscriber? You can sign up here.

On this episode of The Vergecast, we begin by stepping back a bit to ask a big question: How is Apple doing right now? Obviously, by many measures, Apple's doing great …

Read the full story at The Verge.

Read the whole story
alvinashcraft
21 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Mary Jo Foley: What the heck is going on with Microsoft lately?

1 Share
Satya Nadella in November 2016, in his honeymoon period as Microsoft CEO. (GeekWire File Photo)

[Editor’s Note: We’re excited to welcome Mary Jo Foley as a GeekWire contributor. Mary Jo has been one of the sharpest watchers of Microsoft for many years, currently as Editor in Chief at Directions on Microsoft, an IT planning and advisory service. She’ll be offering her take for GeekWire periodically on the latest developments in Redmond, starting with this piece.]

Reorgs are a way of life at Microsoft. But the pace of them over the last couple of months has led many to wonder what the heck is happening in Redmond — especially when coupled with the company’s stock price having its worst quarter in years.

During the past couple of months, Microsoft has made a noticeable number of organizational changes:

Is this just the usual Microsoft fiscal-year-end housekeeping, or is something different? A blip that will pass, or a new AI-centric reality for the Satya Nadella era?

It’s a mix of both, I’d argue.

The current wave of churn, at least in part, can be attributed to Microsoft’s corporate calendar. Its fourth quarter ends June 30 and new fiscal year kicks off on July 1. Microsoft often reorgs and does layoffs in the months leading up to this as a way to reset for the coming year.

The company also is taking actions to reduce hierarchy and make the corporate structure flatter, as are a number of tech companies, in the hopes of becoming nimbler.

A year ago, Chief Financial Officer Amy Hood proclaimed that Microsoft was “increasing our agility by reducing layers with fewer managers.” With moves like replacing 35-year veteran Executive Vice President Jha with a new gang of four, rather than just another single uber-boss, Microsoft is following through on those promises.

It’s not all mundane matters at play, however.

Thanks to AI, the way companies are prioritizing and following through on their strategies is different. Microsoft isn’t immune to the market’s jitters around capex overspending on AI when ROI still remains questionable. Its no-longer-exclusive partnership with OpenAI has people inside and outside the company worried, too, as does the fact that a whopping 45 percent of its unfulfilled Azure backlog last quarter was attributable to OpenAI.

Investor pressure on the company to keep its Azure business growing during a time of admitted capacity challenges also can’t be dismissed as contributing to the current churn. As a result, Microsoft travel budgets, new-hire spending, and investments in unproven areas are all on the chopping block.

Almost nothing (except towels, maybe) is immune from scrutiny with the goal of freeing up more dollars to pay for AI and cloud build-out.

But those reasons alone may not be enough to explain why Microsoft is looking like the least magnificent of the so-called Magnificent Seven tech leaders right now.

Microsoft continues to struggle in the consumer space, and not just with Xbox. Most of the company’s revenues have been and continue to be from sales to commercial customers. That consumer weakness is especially apparent when it comes to AI.

Microsoft recently disclosed only three percent of its Microsoft 365 customers are paying for Microsoft 365 Copilot. But its adoption rate for its consumer Copilot is even worse, and far lower than the rates for OpenAI’s ChatGPT and Google Gemini.

The decision earlier this month to remove AI CEO Mustafa Suleyman from his consumer AI product responsibilities and into more of a research role is Microsoft’s latest attempt to adjust its consumer bets.

Suleyman’s reassignment came later than some expected (and hoped), given the starts and stops with Microsoft’s consumer AI efforts. Mico, a ghost-like Clippy wannabe, seems to be in limbo. Microsoft’s push to make voice one of the main ways users interact with AI on their PCs, when people don’t talk to PCs like they do phones, seems to be falling flat.

Meanwhile, the Windows organization is trying to right the ship by backing out of some of its over-zealous AI plans. Instead of trying to force AI into Notepad and Photos, execs said they instead will focus on some top consumer requests, ranging from taskbar customization, to adding the ability to pause updates at will.

Microsoft shows no signs of giving up on the consumer space. Maybe new blood will find new ways to harness the company’s enterprise tactics to boost its consumer share? If not, there’s always the next reorg. …

Read the whole story
alvinashcraft
21 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Microsoft to upgrade Windows Subsystem for Linux (WSL) with faster file access, better networking and easier setup

1 Share

While Microsoft’s plans to fix Windows 11 involve making the experience better for regular users, the company also highlighted improvements for one of the most important parts of its developer ecosystem, the Windows Subsystem for Linux (WSL).

The software giant said it is working on making WSL better, promising faster file transfers between Linux and Windows, stronger network performance, a smoother onboarding process, and enterprise‑grade management with tighter security and policy controls.

Ubuntu running via Windows Subsystem for Linux
Ubuntu running via Windows Subsystem for Linux. Source: Ubuntu

WSL has become a critical part of modern workflows for several developers who use Windows to run containers, build backend services, or manage Linux-based tools. And at a time when Windows is competing directly with macOS and native Linux for developer mindshare, this is an area Microsoft simply cannot afford to ignore.

What I find interesting here is the company’s revived push to make Windows a serious development platform again.

Windows Subsystem for Linux is one of the most important developer tools in Windows

The Windows Subsystem for Linux allows you to run Linux distributions directly inside Windows. You don’t need to dual-boot into another OS, and you don’t need a full virtual machine either. WSL works through a lightweight virtualization layer, and in the case of WSL2, it even uses a real Linux kernel running inside a managed environment.

Ubuntu terminal environment that using Windows Subsystem for Linux

Before getting into that, it’s worth understanding what a “subsystem” means in Windows.

A subsystem is a compatibility layer that allows Windows to support different types of environments or APIs within the same OS. Windows has had multiple subsystems over the years. The classic Win32 subsystem is what most desktop applications use.

There was also the POSIX subsystem in older versions of Windows, and even the Windows Subsystem for Android in more recent builds. WSL is part of that same idea, but far more advanced and genuinely useful in real-world workflows.

WSL exists because developers depend heavily on Linux, and Microsoft wants these developers to continue using Windows.

Satya Nadella introducing WSL going Open Source at the 2025 Build conference
Satya Nadella introducing WSL going Open Source at the 2025 Build conference

Tools like bash, ssh, git, Docker, Node.js, Python, and countless backend frameworks are built with Linux in mind. For years, this forced developers to either dual-boot into Linux or switch to macOS, which already has a Unix-based environment out of the box.

Microsoft’s answer to that problem was WSL.

The first version, WSL1, worked as a translation layer. It converted Linux system calls into Windows equivalents, but it had many compatibility issues.

Then came WSL2, which, instead of translating calls, runs a real Linux kernel inside a lightweight virtualized environment within Windows. Compatibility improved significantly, performance got better in many scenarios, and WSL became a viable development environment.

Today, WSL is deeply integrated into modern workflows.

Web developers use it to run local servers. Backend developers use it for Linux-based stacks. DevOps engineers use it for containers and orchestration tools. Docker Desktop on Windows depends heavily on WSL2. Even Visual Studio Code has built-in support to connect directly to WSL environments.

WSL Architecture

Microsoft is improving Windows Subsystem for Linux in 2026

Microsoft is promising to elevate the Windows Subsystem for Linux (WSL) experience in 2026, with improvements in performance, reliability, and integration for developers working with Linux tools on Windows.

Faster file performance between Linux and Windows

One of the biggest pain points in WSL today is file system performance, especially when working across environments. Accessing files stored on the Windows side through paths like /mnt/c is noticeably slower, particularly for projects with thousands of small files.

View project files in Windows File Explorer
View project files in Windows File Explorer. Source: Microsoft

Microsoft is now working on improving read and write speeds between Windows and Linux file systems, along with reducing latency in cross-environment operations.

File performance directly affects build times and dependency installs. Even something as simple as running npm install can feel slower depending on where the project is stored.

Fixing this issue would remove one of the biggest reasons developers avoid mixing Windows and Linux file systems.

Improved network compatibility and throughput

Some developers run into issues with port forwarding, services acting differently across environments, or issues with how localhost is handled between Windows and WSL.

Network issue in WSL
Source: Ask Ubuntu forum

Fortunately, Microsoft is now focusing on improving network reliability and throughput, along with making communication between Windows and Linux environments more consistent.

Running local servers, testing APIs, or working with containerized apps all depend on stable and predictable networking. Any inconsistency here slows down development and debugging.

Streamlined setup and onboarding experience

WSL has become easier to install over the years, but it’s still not something a beginner would call simple. You still have to enable features, install distributions, and set up your environment manually.

Installing WSL using PowerShell command

Microsoft is now aiming to simplify this entire flow with a more streamlined setup experience. While they haven’t mentioned what it means, we think it includes fewer manual steps.

An easier setup means more people can start using WSL without getting stuck halfway through setup.

Better enterprise management and security

So far, WSL has been heavily developer-focused. Enterprises, on the other hand, have had concerns around control, governance, and security.

Microsoft is now addressing that by improving policy control, strengthening security boundaries, and making WSL easier to manage in enterprise environments.

Just like Windows for businesses, Microsoft wants WSL to be viable in managed enterprise environments where control is non-negotiable.

All improvements to WSL are part of a much larger improvement happening across Windows in 2026, where Microsoft is finally focusing on performance, reliability, and fundamentals.

For developers, a faster, more reliable WSL is absolutely critical. And more importantly, it strengthens Windows as a development platform again, considering how many devs are now switching to MacBooks that already have impeccable performance and battery efficiency when compared to similarly priced Windows PCs.

Microsoft has to get this right to put Windows back in a much stronger position against macOS and native Linux setups.

The post Microsoft to upgrade Windows Subsystem for Linux (WSL) with faster file access, better networking and easier setup appeared first on Windows Latest

Read the whole story
alvinashcraft
22 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Agent-driven development in Copilot Applied Science

1 Share

I may have just automated myself into a completely different job…

This is a familiar pattern among software engineers, who often, through inspiration, frustration, or sometimes even laziness, build systems to remove toil and focus on more creative work. We then end up owning and maintaining those systems, unlocking that automated goodness for the rest of those around us.

As an AI researcher, I recently took this beyond what was previously possible and have automated away my intellectual toil. And now I find myself maintaining this tool to enable all my peers on the Copilot Applied Science team to do the same.

During this process, I learned a lot about how to effectively create and collaborate using GitHub Copilot. Applying these learnings has unlocked an incredibly fast development loop for myself as well as enabled my team mates to build solutions to fit their needs.

Before I get into explaining how I made this possible, let me set the stage for what spawned this project so you better understand the scope of what you can do with GitHub Copilot.

The impetus

A large part of my job involves analyzing coding agent performance as measured against standardized evaluation benchmarks, like TerminalBench2 or SWEBench-Pro. This often involves poring through tons of what are called trajectories, which are essentially lists of the thought processes and actions agents take while performing tasks.

Each task in an evaluation dataset produces its own trajectory, showing how the agent attempted to solve that task. These trajectories are often .json files with hundreds of lines of code. Multiply that over dozens of tasks in a benchmark set and again over the many benchmark runs needing analysis on any given day, and we’re talking hundreds of thousands of lines of code to analyze.

It’s an impossible task to do alone, so I would typically turn to AI to help. When analyzing new benchmark runs, I found that I kept repeating the same loop: I used GitHub Copilot to surface patterns in the trajectories then investigated them myself—reducing the number of lines of code I had to read from hundreds of thousands to a few hundred.

However, the engineer in me saw this repetitive task and said, “I want to automate that.” Agents provide us with the means to automate this kind of intellectual work, and thus eval-agents was born.

The plan

Engineering and science teams work better together. That was my guiding principle as I set about solving this new challenge.

Thus, I approached the design and implementation strategy of this project with a couple of goals in mind:

  1. Make these agents easy to share and use
  2. Make it easy to author new agents
  3. Make coding agents the primary vehicle for contributions

Bullets one and two are in GitHub’s lifeblood and are values and skills I’ve gained throughout my career, especially during my stint as an OSS maintainer on the GitHub CLI.

However, goal three shaped the project the most. I noticed that when I set GitHub Copilot up to help me build the tool effectively, it also made the project easier to use and collaborate on. That experience taught me a few key lessons, which ultimately helped push the first and second goals forward in ways I didn’t expect.

Making coding agents your primary contributor

I’ll start by describing my agentic coding setup:

  • Coding agent: Copilot CLI
  • Model used: Claude Opus 4.6
  • IDE: VSCode

It’s also noteworthy that I leveraged the Copilot SDK to accelerate agent creation, which is powered under the hood by the Copilot CLI. This gave me access to existing tools and MCP servers, a way to register new tools and skills, and a whole bunch of other agentic goodness out of the box that I didn’t have to reinvent myself.

With that out of the way, I could streamline the whole development process very quickly by following a few core principles:

  • Prompting strategies: agents work best when you’re conversational, verbose, and when you leverage planning modes before agent modes.
  • Architectural strategies: refactor often, update docs often, clean up often.
  • Iteration strategies: “trust but verify” is now “blame process, not agents.”

Uncovering and following these strategies led to an incredible phenomenon: adding new agents and features was fast and easy. We had five folks jump into the project for the first time, and we created a total of 11 new agents, four new skills, and the concept of eval-agent workflows (think scientist streams of reasoning) in less than three days. That amounted to a change of +28,858/-2,884 lines of code across 345 files.

Holy crap!

Below, I’ll go into detail about these three principles and how they enabled this amazing feat of collaboration and innovation.

Prompting strategies

We know that AI coding agents are really good at solving well-scoped problems but need handholding for the more complex problems you’d only entrust to your more senior engineers.

So, if you want your agent to act like an engineer, treat it like one. Guide its thinking, over-explain your assumptions, and leverage its research speed to plan before jumping into changes. I found it far more effective to put some stream-of-consciousness musings about a problem I was chewing on into a prompt and working with Copilot in planning mode than to give it a terse problem statement or solution.

Here’s an example of a prompt I wrote to add more robust regression tests to the tool:

> /plan I've recently observed Copilot happily updating tests to fit its new paradigms even though those tests shouldn't be updated. How can I create a reserved test space that Copilot can't touch or must reserve to protect against regressions?

This resulted in a back and forth that ultimately led to a series of guardrails akin to contract testing that can only be updated by humans. I had an idea of what I wanted, and through conversation, Copilot helped me get to the right solution.

It turns out that the things that make human engineers the most effective at doing their jobs are the same things that make these agents effective at doing theirs.

Architectural strategies

Engineers, rejoice! Remember all those refactors you wanted to do to make the codebase more readable, the tests you never had time to write, and the docs you wish had existed when you onboarded? They’re now the most important thing you can be working on when building an agent-first repository.

Gone are the days where deprioritizing this work over new feature work was necessary, because delivering features with Copilot becomes trivial when you have a well-maintained, agent-first project.

I’ve spent most of my time on this project refactoring names and file structures, documenting new features or patterns, and adding test cases for problems that I’ve uncovered as I go. I’ve even spent a few cycles cleaning up the dead code that the agents (like your junior engineers) may have missed while implementing all these new features and changes.

This work makes it easy for Copilot to navigate the codebase and understand the patterns, just like it would for any other engineer.

I can even ask, “Knowing what I know now, how would I design this differently?” And I can then justify actually going back and rearchitecting the whole project (with the help of Copilot, of course).

It’s a dream come true!

And this leads me to my last bit of guidance.

Iteration strategies

As agents and models have improved, I have moved from a “trust but verify” mindset to one that is more trusting than doubtful. This mirrors how the industry treats human teams: “blame process, not people.” It’s how the most effective teams operate, because people make mistakes, so we build systems around that reality.

This idea of blameless culture provides psychological safety for teams to iterate and innovate, knowing that they won’t be blamed if they make a mistake. The core principle is that we implement processes and guardrails to protect against mistakes, and if a mistake does happen, we learn from it and introduce new processes and guardrails so that our teams won’t make the same mistake again.

Applying this same philosophy to agent-driven development has been fundamental to unlocking this incredibly rapid iteration pipeline. That means we add processes and guardrails to help prevent the agent from making mistakes, but when it does make a mistake, we add additional guardrails and processes—like more robust tests and better prompts—so the agent can’t make the same mistake again. Taking this one step further means that practicing good CI/CD principles is a must.

Practices like strict typing ensure the agent conforms to interfaces. Robust linters impose implementation rules on the agent that keep it following good patterns and practices. And integration, end-to-end, and contract tests—which can be expensive to build manually—become much cheaper to implement with agent assistance, while giving you confidence that new changes don’t break existing features.

When Copilot has these tools available in its development loop, it can check its own work. You’re setting it up for success, much in the same way you’d set up a junior engineer for success in your project.

Putting it all together

Here’s what all this means for your development loop when you’ve got your codebase set up for agent-driven development:

  1. Plan a new feature with Copilot using /plan.
    • Iterate on the plan.
    • Ensure that testing is included in the plan.
    • Ensure that docs updates are included in the plan and done before code is implemented. These can serve as additional guidelines that live beside your plan.
  2. Let Copilot implement the feature on /autopilot.
  3. Prompt Copilot to initiate a review loop with the Copilot Code Review agent. For me, it’s often something like: request Copilot Code Review, wait for the review to finish, address any relevant comments, and then re-request review. Continue this loop until there are no more relevant comments.
  4. Human review. This is where I enforce the patterns I discussed in the previous sections.

Additionally, outside of your feature loop, be sure you’re prompting Copilot early and often with the following:

  • /plan Review the code for any missing tests, any tests that may be broken, and dead code
  • /plan Review the code for any duplication or opportunities for abstraction
  • /plan Review the documentation and code to identify any documentation gaps. Be sure to update the copilot-instructions.md to reflect any relevant changes

I have these run automatically once a week, but I often find myself running them throughout the week as new features and fixes go in to maintain my agent-driven development environment.

Take this with you

What started as a frustration with an impossibly repetitive analysis task turned into something far more interesting: a new way of thinking about how we build software, how we collaborate, and how we grow as engineers.

Building agents with a coding agent-first mindset has fundamentally changed how I work. It’s not just about the automation wins—though watching four scientists ship 11 agents, four skills, and a brand-new concept in under three days is nothing short of remarkable. It’s about what this style of development forces you to prioritize: clean architecture, thorough documentation, meaningful tests, and thoughtful design—the things we always knew mattered but never had time for.

The analogy to a junior engineer keeps proving itself out. You onboard them well, give them clear context, build guardrails so their mistakes don’t become disasters, and then trust them to grow. If something goes wrong, you blame the process. Not the agent. If there’s one thing I want you to take away from this, it’s that the skills that make you a great engineer and a great teammate are the same skills that make you great at building with Copilot. The technology is new. The principles aren’t.

So go clean up that codebase, write that documentation you’ve been putting off, and start treating your Copilot like the newest member of your team. You might just automate yourself into the most interesting work of your career.

Think I’m crazy? Well, try this:

  1. Download Copilot CLI
  2. Activate Copilot CLI in any repo: cd <repo_path> && copilot
  3. Paste in the following prompt: /plan Read <link to this blog post> and help me plan how I could best improve this repo for agent-first development

The post Agent-driven development in Copilot Applied Science appeared first on The GitHub Blog.

Read the whole story
alvinashcraft
22 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Open to Work: How to Get Ahead in the Age of AI

1 Share

Today is the day. Open to Work: How to Get Ahead in the Age of AI is officially available!

At a time when technology dominates the headlines, the conversation I see most often on LinkedIn is deeply human: what does AI mean for my job and my career?

And that makes sense. Careers once felt more predictable. Titles defined what you did. Progress looked like a ladder. That model has been evolving for years, but AI is accelerating the shift.

The most important truth about this moment is that the outcome isn’t written yet. The new world of work is being assembled right now, task by task, policy by policy, business by business. It will reflect the choices of the people who show up to build it.

That’s why Aneesh Raman and I wrote this book.

Open to Work is a practical guide informed by what we see across the global labor market and insight into the tools millions of people use every day. It’s for every person asking what comes next for their job, their career, their company or their community.

With help from experts and everyday LinkedIn members, it shows you how to engage with AI before you have to, how to adapt by focusing on what you can control and how to become irreplaceable by leaning into what makes you uniquely you.

And those ideas don’t just apply to individuals, they guide how we as Microsoft and LinkedIn are building for this moment. At the intersection of how work gets done and how careers get built, our shared goal is to connect people to opportunity and turn the tools they use every day into a canvas for human and AI collaboration at scale. Done right, that’s how AI expands opportunity and helps people build confidence and momentum in their careers.

We’ve always believed technology should serve people. AI should help humans. Not the other way around. That doesn’t happen by accident. It happens when we all decide to make it true.

If you want to go deeper on Open to Work, listen to my conversation with Microsoft President and Vice Chair Brad Smith on his Tools and Weapons podcast.

Open to Work is available now at linkedin.com/opentowork.

Ryan Roslansky is the CEO of LinkedIn and Executive Vice President of Microsoft Office, where he leads engineering for products like Word, Excel, PowerPoint and Copilot. Through these roles, Ryan is shaping where work goes next to unleash greater economic opportunity for the global workforce.

The post Open to Work: How to Get Ahead in the Age of AI appeared first on The Official Microsoft Blog.

Read the whole story
alvinashcraft
22 minutes ago
reply
Pennsylvania, USA
Share this story
Delete
Next Page of Stories