Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

The AI RAM Shortage is Also Driving Up SSD Prices

In 2024, The Verge's consumer tech reporter paid $173 for a WD Black SN850X 2TB SSD. But "now that same SSD costs $649..." "Like with RAM, demand from the AI industry is swallowing up supply from a limited number of manufacturers, leading to a drastic reduction in the inventory that's available to consumers" — and skyrocketing prices:

The price on my WD Black drive nearly quadrupled since November 2025, and consumer SSDs across the board are seeing similar increases, much like with RAM. The 4TB version of the popular Samsung 990 Pro SSD previously cost $320, but will now run you nearly $1,000. External SanDisk SSDs saw a 200 percent price hike at the Apple Store in March... According to price trends from PC Part Picker, NVMe SSD prices began ticking upward in December 2025, with prices on 256GB to 4TB SSDs now double or triple what they were just a few months ago, and continuing to climb.

Read more of this story at Slashdot.

Read the whole story
alvinashcraft
21 minutes ago
Pennsylvania, USA

Riding the rails — over a floating bridge: GeekWire Podcast takes the train across the lake to Microsoft

GeekWire co-founders Todd Bishop, left, and John Cook on Sound Transit’s 2 Line. (GeekWire Photo / Kurt Schlosser)

This week on the GeekWire Podcast: we take the show on the road — or rather, on the rails — recording on Sound Transit’s 2 Line as we ride the world’s first light rail on a floating bridge from Seattle’s Northgate neighborhood to Microsoft’s campus in Redmond.

It’s an engineering marvel decades in the making — the bridge, that is, not the podcast. That said, juggling a couple of handheld mics and a portable recorder on a crowded train, we did have to overcome some logistical challenges to make it happen.

John and Todd interview Sound Transit Public Information Officer Henry Bendon on the 2 Line. (GeekWire Photo / Kurt Schlosser)

Along the way, we chat with fellow passengers and talk about the week’s headlines, including Anduril’s autonomous warship facility on Seattle’s ship canal, and golf star Bryson DeChambeau’s acquisition of Bellevue-based Sportsbox AI ahead of the Masters.

Then we get a behind-the-scenes look at the engineering from Sound Transit’s Henry Bendon. He explains how engineers solved the unprecedented challenge of running 55 mph trains on a bridge that constantly moves with wind, waves, and changing lake levels.

Bendon describes the surge in ridership since the Crosslake Connection opened on March 28, and what the line means for connecting the tech hubs on both sides of the lake.

After arriving in Redmond, we sit down with Microsoft President Brad Smith to talk about the company’s two-decade role in making the Crosslake Connection a reality.

Smith says the line gives people “a choice they didn’t have a month ago.”

We ask what it says about how this region builds big things that the line took nearly 60 years to go from idea to reality. “What really matters is people stuck with it,” he says.

Todd Bishop with Microsoft’s Brad Smith in Redmond after riding the 2 Line across Lake Washington. (GeekWire Photo / Curt Milton)

We discuss the unlikely duo of Microsoft and Amazon — fierce competitors in cloud computing and AI — collaborating on regional transit and civic issues. “When it comes to local issues, we’re not competing with Amazon, we’re working together,” Smith says.

And finally, we challenge him with a trivia question that hits close to home.

Subscribe to GeekWire in Apple Podcasts, Spotify, or wherever you listen.


BONUS: Why a Distinguished Engineer Stopped Reading Code — Lights-Out Codebases and the End of the IC with Philip Su



Philip Su has spent two decades at the highest levels of software engineering — Microsoft, Meta (where he reached Distinguished Engineer, IC9), OpenAI, and now building his own product solo with AI. In this episode, he makes a provocative case: the individual contributor role as we know it is over, code reviews are becoming a liability, and the best engineers are already managing AI agents instead of writing code themselves.

From Amazon Warehouse Floors to OpenAI

"Every day at work, I lifted six tons of packages with my arms. No one learned my name. And it was the structure — the ability to leave work behind when I clocked out — that pulled me out of a spiral."

 

Philip's path through tech is anything but typical. After scaling Facebook's London engineering office from a dozen engineers to 500+, he stepped away from Big Tech entirely. During Peak 2021, he worked the floor at Amazon's flagship warehouse south of Seattle — 11-hour shifts, processing 15,000 packages a day. He documented the experience in his Peak Salvation podcast, exploring depression, the divide between the wealthy and the working class, and the maddening inefficiencies inside one of the world's largest employers. That experience reshaped how he thinks about work, systems, and what actually matters when you strip away titles and stock options. He later joined OpenAI as an individual contributor — going from leading hundreds of engineers to writing code again — before leaving to build Superphonic, an AI-powered podcast player.

No More Code Reviews: The Lights-Out Codebase

"We'll one day be scared, positively petrified, to use any mission-critical software known to have allowed human interference in its codebase."

 

Philip borrows the concept of "lights-out" from data centers that run with zero human workers and applies it to codebases. A lights-out codebase is one where no human ever sees or edits the code. He's already built two apps this way — Tanya's Snowfield and OTD: On This Day — without looking at a single line of code from repository creation through production release. His argument is not just about efficiency. Code reviewers are becoming the bottleneck. The volume of AI-generated code is already too high for humans to keep up, and the same LLM that wrote the code often catches bugs that another instance of itself introduced. Philip has been running both Codex and Cursor as PR reviewers on GitHub, and has been surprised by how often they identify issues in both human- and AI-generated code. He believes we are approaching a threshold where human intervention in codebases will be seen as risky and irresponsible — not the other way around.

AI Killed the Individual Contributor

"You're not building the thing anymore. You're pondering and tweaking the machine that builds the thing."

 

In his widely discussed essay "AI Killed the Individual Contributor", Philip argues that maximizing productivity with AI now requires engineers to spend their time on what are essentially management tasks: setting priorities, resolving conflicts, delegating to agents, reviewing output, and giving feedback. The IC role isn't disappearing because AI codes better — it's disappearing because the highest-leverage use of an engineer's time has shifted from writing code to orchestrating the systems that write code. Right now, it feels like managing a team of barely competent interns. But Philip expects that to change fast. Soon it will feel like managing high performers who are faster and more capable than you — and the engineers who thrive will be the ones who learned to let go of the keyboard and focus on judgment, direction, and taste.

Building Solo with AI: The Superphonic Experiment

"20x productivity means we have 20x fewer PMs than we need."

 

Philip is putting his thesis to the test with Superphonic, an AI-powered podcast player he's building essentially as a solo founder. What would have required a team two years ago, he now ships alone — leveraging AI agents for coding, testing, and review. But the productivity multiplier creates its own problems. When you can build 20x faster, the bottleneck shifts from engineering capacity to product judgment. You need to know what to build, not just how to build it. Philip's reference to The Mythical Man-Month is deliberate: adding more people (or agents) doesn't solve the fundamental challenge of building the right thing. The hardest part of being both the architect and the manager of your AI agents is knowing when the model breaks down — when you need to step in and do the work yourself rather than delegating.

What Teams Get Wrong About AI Integration

"There is a lot more that can be done to increase the quality of AI output even if all progress on foundation models stops."

 

For Scrum Masters and agile coaches helping teams adopt AI tools, Philip's warning is clear: don't treat AI as just another developer on the team. The integration requires rethinking how work is structured, how quality is assured, and what it means to be an engineer. Teams that bolt AI onto existing workflows without changing the underlying process will get marginal gains at best. The ones that redesign their workflows around AI capabilities — including accepting that humans may not need to review every line of code — will see transformational results. Philip's practical advice: do the work yourself first. Understand what the AI is doing before you delegate wholesale. The engineers who skip this step lose the judgment they need to manage the output effectively.

About Philip Su

Philip Su is a Distinguished Engineer (IC9) who scaled Facebook's London office from a dozen engineers to 500+, served as site lead at OpenAI, and now builds Superphonic — an AI-powered podcast player. He writes about the future of software work at Molochinations on Substack.

 

You can link with Philip Su on LinkedIn.





Download audio: https://traffic.libsyn.com/secure/scrummastertoolbox/20260411_Philip_Su_BONUS.mp3?dest-id=246429

Automate Astro Upgrades with GitHub Agentic Workflows


I opened GitHub on my phone before my morning coffee had finished brewing. There it was — a pull request, freshly opened, titled "chore: upgrade astro to v6.1.2". I hadn't asked anyone to do it. I hadn't filed an issue, assigned a task, or written a single command. An agent had woken up, checked the npm registry, read the Astro changelog, inspected my codebase, applied the changes, run pnpm install, and handed me a PR to review. All I had to do was drink my coffee and click Merge.

This is the promise of what GitHub Next is calling Continuous AI — and it's already working on my blog.
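Such agents are defined declaratively and run on a schedule. The sketch below uses the markdown-with-frontmatter format from GitHub Next's gh-aw project; the file location, cron schedule, and frontmatter keys are illustrative assumptions rather than verified syntax, so check the gh-aw documentation for the exact schema:

```markdown
---
on:
  schedule:
    - cron: "0 6 * * 1"    # wake the agent up once a week
permissions:
  contents: read           # the agent itself only reads the repo
safe-outputs:
  create-pull-request:     # its only write path is opening a PR for review
---

Check the npm registry for a newer version of Astro. If one exists, read the
Astro changelog, apply the upgrade to this repository, run `pnpm install`,
and open a pull request summarizing the changes for human review.
```

The design property that matters here is `safe-outputs`: the agent's only write path is opening a pull request, so the human review step (clicking Merge over coffee) stays in the loop.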


Integration Testing Azure Functions with Reqnroll and C#, Part 1 - Introduction


TL;DR - This series of posts shows how you can integration test Azure Functions projects using the open-source Corvus.Testing.AzureFunctions.ReqnRoll library and walks through the different ways you can use it in your Reqnroll projects to start and stop function app instances for your scenarios and features.

If you use Azure Functions on a regular basis, you'll likely have grappled with the challenge of testing them. The testing story for functions is not hugely well defined. If you're building your functions well, then there won't be a lot of code in them - they will be thin facades calling into code that does the bulk of the work, in which case you will likely have used a standard unit testing approach on that code. Nevertheless, that likely leaves some functionality untested - for example, ensuring your models are correctly bound to input, and ensuring that correct status codes, headers and so on are returned from your requests.

As such, it becomes necessary to step up a level and look at how to test the functions as a whole. There are two options for this:

  • You can test in-process, using the approach defined in Microsoft's docs (note that this doesn't seem to have been updated to take account of functions that use instance methods, but that's unlikely to affect the approach). This is good, but doesn't ensure that your function is configured correctly, and if you're using automatic model binding, it doesn't test that this is working as you expect.
  • Alternatively, you can test out of process, either against a deployed instance of the function or against one that's running locally. Testing against a deployed instance is a great idea, but this is normally reserved for another level of testing, meant to ensure that things are working as expected in a deployed environment. It doesn't address the needs of the developer as the feedback loop from making a change to deploying a function to Azure is likely just too long. This leaves us with the challenge of testing against a function running locally.

So, how do we go about this?

Before I continue I should note that while I'm specifically addressing how to do this with Reqnroll, a very similar approach can be taken with other frameworks. Reqnroll is the community-driven successor to SpecFlow, created by the original SpecFlow creator after SpecFlow reached end-of-life. If you're migrating from SpecFlow, the Reqnroll migration guide is a great place to start.

Goals

As always, it's worth starting with what we want to achieve:

  1. We want a way of automatically starting a function, and then shutting it down once the test is completed.
  2. We want this to work in as close a way as possible to a deployed function.
  3. Ideally, we want to be able to capture the output from the function while it's running.
  4. It's useful to be able to easily affect the configuration of the function under test.
  5. We want an approach that can work as part of a CI pipeline.

So, let's have a look at how we achieve these goals.

Running the function locally

When you hit F5 to run a function in Visual Studio, it uses a copy of the Azure Functions Core Tools that's managed by Visual Studio. Normally they get automatically installed into C:\Users\username\AppData\Local\AzureFunctionsTools\Releases and Visual Studio selects the correct version to use based on your project's runtime.

However, this is an internal detail of how Visual Studio implements the Functions SDK, so it's not really something we can rely on. Fortunately you can install and use Azure Functions Core Tools directly.

We recommend using Azure Functions v4 with the isolated worker model and .NET 8 or later. The isolated worker model is the recommended approach for new Azure Functions projects, and in-process support is scheduled to end in November 2026.

To get the tools installed, you have a few choices. If you're on Windows, you can use winget:

winget install Microsoft.Azure.FunctionsCoreTools

or Chocolatey:

choco install azure-functions-core-tools

Otherwise, you'll need npm:

npm i -g azure-functions-core-tools@4 --unsafe-perm true

This will install the tools locally - you can verify the installation by running the func command from the command prompt. If you do this, you'll see all the things you can do with it - scaffolding new function apps and functions, and running functions locally. The latter is what we're concerned with - you'll see that you can start a function using the command func start, providing a port number and other details as part of the command. This is what we're going to use when setting up our test.

Introducing Corvus.Testing

The code to start, stop and manage functions as part of a Reqnroll test is part of the endjin-sponsored Corvus.Testing libraries. The original Corvus.Testing repository has been split into separate, focused repos.

The classes that we're interested in are part of Corvus.Testing.AzureFunctions.ReqnRoll and are:

FunctionsController.cs - this contains methods to start a new functions instance, and to tear down all functions it manages. It's intended to live for the lifetime of the test, as it captures the output and error streams from the function and writes them all to the Console when the functions are terminated. When running in Reqnroll, this results in that information being written to the test's output.

FunctionConfiguration.cs - this is part of the mechanism by which the test project can provide settings to the function under test.

FunctionsBindings.cs - this provides a couple of standard step bindings that can be used as part of a scenario to start a function.

This code is all open source, and contributions are accepted. It's available under the Apache 2.0 open source license, meaning you're free to use and modify the code as you see fit. The license does impose some conditions around retaining copyright attributions and so on - you can read the full details here.

This code ticks the boxes for the first four of the five goals I set out above, providing mechanisms to keep functions running for the duration of test execution, as well as a way to supply additional configuration. The next few sections explain the different ways of using this.

I'll be doing this with reference to the demo projects that are part of the Corvus.Testing.AzureFunctions.ReqnRoll codebase. Before continuing, I recommend downloading the project so you can examine the code. There are two demo functions projects — Corvus.Testing.AzureFunctions.Demo.InProcess for the in-process model and Corvus.Testing.AzureFunctions.Demo.Isolated for the isolated worker model — that contain a slightly modified version of code that's generated when you create a new HTTP-triggered function in Visual Studio. They accept GET and POST requests, looking for a parameter called name in either the querystring or request body, and returning a configurable string containing that parameter.

It also contains a Reqnroll test project, Corvus.Testing.AzureFunctions.ReqnRoll.Demo.Specs, with feature files relating to the next few posts in this series.
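To give a flavour of where this series is heading, a feature in the Specs project exercises the demo function end to end. The Gherkin below is an illustrative sketch rather than the library's exact built-in step text (the real bindings are the subject of the next posts), and the port number is arbitrary:

```gherkin
Feature: Demo HTTP function

Scenario: Greet the caller by name
    Given the demo function app is running on port 7071
    When I send a GET request with the name 'World'
    Then the response should contain a greeting for 'World'
```

Behind a step like the Given above sits the FunctionsController, starting the app with func start before the scenario runs and tearing it down afterwards.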

In the next post, I'll show you how you can add steps to your Reqnroll scenarios to run your functions apps.


Developers are System Thinkers, not Code Monkeys


There's a lot of concern amongst developers at the moment that AI is going to take our jobs. And I get it - things are moving fast. But I think a lot of this fear comes from a misunderstanding of what we actually do. We're not typists. We're not "code monkeys". We are system thinkers.

First of all though - I'm going to be blunt. If you're not heavily leveraging AI right now - learning it, playing with it, embracing it - then you're going to get left behind. And not just by other developers - by non-developers who are! That should be a wake-up call.

Right, let's talk about systems. As developers, we are very good at systems. Think about it - we work with them all day long...

  • Your code is a system (consisting of functions, classes, modules, and so on)
  • A database is a system
  • A message broker is a system
  • Your CI/CD pipeline is a system
  • Git is a system
  • JIRA is a system
  • Those automation scripts you wrote to save yourself time? Systems.

And all of these things combine together in different ways to make bigger systems.

We've spent our entire careers taking complex problems, breaking them down into smaller pieces, and building systems to solve them. We understand how to integrate different components together. We understand how things fit together.

Make no mistake - AI is going to dramatically change our industry. It already is. The way we build software, the way we work, the tools we use - none of this is going to look the same in a few years. But at its core, AI is still a system. A very powerful system that's evolving at a frightening pace - but still a system. And who better to understand, integrate, and leverage a new system than the people who've been doing exactly that for years?

The developers who will thrive are the ones who recognise that this is their real skill. Not typing code - but understanding systems and how to orchestrate them. If you can design how an AI agent fits into a larger architecture, break down what it should and shouldn't do, and integrate it alongside your existing systems - you're in a really strong position.

So don't be the developer who's afraid of AI. Be the developer who treats it as the latest system to learn. Play with it. Experiment. Build things with it. The more you understand how it works, the better you'll be at leveraging it - and that's exactly what we've always done.
