Fragments: February 9


Some more thoughts from last week’s open space gathering on the future of software development in the age of AI. I haven’t attributed any comments since we were operating under the Chatham House Rule, but if any sources recognize themselves and would like to be attributed, get in touch and I’ll edit this post.

 ❄                ❄

During the opening of the gathering, I commented that I was naturally skeptical of the value of LLMs. After all, the decades have thrown up many tools that have claimed to totally change the nature of software development. Most of these have been little better than snake oil.

But I am a total, absolute skeptic - which means I also have to be skeptical of my own skepticism.

 ❄                ❄

One of our sessions focused on the problem of “cognitive debt”. Usually, as we build a software system, the developers of that system gain an understanding of both the underlying domain and the software they are building to support it. But once so much work is sent off to LLMs, does this mean the team no longer learns as much? And if so, what are the consequences of this? Can we rely on The Genie to keep track of everything, or should we take active measures to ensure the team understands more of what’s being built and why?

The TDD cycle involves a key (and often under-used) step to refactor the code. This is where the developers consolidate their understanding and embed it into the codebase. Do we need some similar step to ensure we understand what the LLMs are up to?

When the LLM writes some complex code, ask it to explain how it works. Maybe get it to do so in a funky way, such as asking it to explain the code’s behavior in the form of a fairy tale.

 ❄                ❄

OH: “LLMs are drug dealers, they give us stuff, but don’t care about the resulting system or the humans that develop and use it”. Who cares about the long-term health of the system when the LLM renews its context with every cycle?

 ❄                ❄

Programmers are wary of LLMs not just because folks are worried for their jobs, but also because we’re scared that LLMs will remove much of the fun from programming. As I think about this, I consider what I enjoy about programming. One aspect is delivering useful features - which I only see improving as LLMs become more capable.

But, for me, programming is more than that. Another aspect I enjoy about programming is model building. I enjoy the process of coming up with abstractions that help me reason about the domain the code is supporting - and I am concerned that LLMs will cause me to spend less attention on this model building. It may be, however, that model-building becomes an important part of working effectively with LLMs, a topic Unmesh Joshi and I explored a couple of months ago.

 ❄                ❄

In the age of LLMs, will there still be such a thing as “source code”, and if so, what will it look like? Prompts and other forms of natural-language context can elicit a lot of behavior, and cause a rise in the level of abstraction, but also a sideways move into non-determinism. In all this, is there still a role for a persistent statement of deterministic behavior?

Almost a couple of decades ago, I became interested in a class of tools called Language Workbenches. They didn’t have a significant impact on software development, but maybe the rise of LLMs will reintroduce some of their ideas. These tools rely on a semantic model that the tool persists in some kind of storage medium that isn’t necessarily textual or directly comprehensible to humans. Instead, for humans to understand it, the tools include projectional editors that create human-readable projections of the model.

Could this notion of a non-human deterministic representation become the future source code? One that’s designed to maximize expression with minimal tokens?
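As a rough sketch of that idea, assuming the persisted artifact is a structured model rather than text (every name below is hypothetical), human-readable views would be generated from the model on demand:

    # The "semantic model": structured data, not source text.
    $model = @{
        Kind      = 'Rule'
        Condition = @{ Field = 'age'; Op = 'ge'; Value = 18 }
        Action    = 'approve'
    }

    # One projection renders the model as English prose...
    function Show-AsProse($m) {
        "When $($m.Condition.Field) is $($m.Condition.Op) $($m.Condition.Value), $($m.Action)."
    }

    # ...another renders the same model as code-like text.
    function Show-AsCode($m) {
        "if ($($m.Condition.Field) -$($m.Condition.Op) $($m.Condition.Value)) { $($m.Action)() }"
    }

    Show-AsProse $model   # When age is ge 18, approve.
    Show-AsCode  $model   # if (age -ge 18) { approve() }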

 ❄                ❄

OH: “Scala was the first example of a lab-leak in software. A language designed for dangerous experiments in type theory escaped into the general developer population.”

 ❄                ❄                ❄                ❄                ❄

elsewhere on the web

Angie Jones on tips for open source maintainers to handle AI contributions

I’ve been seeing more and more open source maintainers throwing up their hands over AI generated pull requests. Going so far as to stop accepting PRs from external contributors.

[snip]

But yo, what are we doing?! Closing the door on contributors isn’t the answer. Open source maintainers don’t want to hear this, but this is the way people code now, and you need to do your part to prepare your repo for AI coding assistants.

 ❄                ❄                ❄                ❄                ❄

Matthias Kainer has written a cool explanation of how transformers work with interactive examples

Last Tuesday my kid came back from school, sat down and asked: “How does ChatGPT actually know what word comes next?” And I thought - great question. Terrible timing, because dinner was almost ready, but great question.

So I tried to explain it. And failed. Not because it is impossibly hard, but because the usual explanations are either “it is just matrix multiplication” (true but useless) or “it uses attention mechanisms” (cool name, zero information). Neither of those helps a 12-year-old. Or, honestly, most adults. Also, even getting to start my explanation was taking longer than a tiktok, so my kid lost attention span before I could even say “matrix multiplication”. I needed something more visual. More interactive. More fun.

So here is the version I wish I had at dinner. With drawings. And things you can click on. Because when everything seems abstract, playing with the actual numbers can bring some light.

A helpful guide for any 12-year-old, or a 62-year-old that fears they’re regressing.

 ❄                ❄                ❄                ❄                ❄

In my last fragments, I included some concerns about how advertising could interplay with chatbots. Anthropic have now made some adverts about concerns about adverts - both funny and creepy. Sam Altman is amused and annoyed.


Fixing the Windows Syntax Boof-a-Rama


When I put together the core concepts of PowerShell, I was committed to solving the boof-a-rama that is Windows CLI syntax.  Prior to PowerShell, any developer that got at least a ‘D’ in a course on parsing was allowed to inflict their damage on the user community.  This incoherence caused a great deal of confusion as users struggled to navigate at least four distinct syntax groupings:

Group 1: Slash + Colon (/Parameter:Value)

This syntax is common in older administrative tools and file system commands. The parameter and its value are joined by a colon, leaving no spaces.

  • Syntax: Command /Parameter:Value
  • Examples: dir /O:N, xcopy C:\src D:\dst /EXCLUDE:skip.txt, format E: /FS:NTFS

Group 2: Slash + Space (/Parameter Value)

This syntax is frequently found in system information and task management tools. The parameter name is prefixed with a slash, but the value is separated by a space.

  • Syntax: Command /Parameter Value
  • Examples: taskkill /PID 1234, tasklist /FI "STATUS eq RUNNING", systeminfo /S server01

Group 3: Hyphen + Space (-Parameter Value)

This syntax is often seen in tools that have cross-platform origins or handle certificates and security. It uses a hyphen (dash) for the parameter name and a space for the value.

  • Syntax: Command -Parameter Value
  • Examples: ping -n 4 example.com, netstat -p TCP, certutil -store My

Group 4: Key + Equals (Parameter=Value)

This syntax is typically used for setting specific properties, variables, or query filters. There is no prefix (like / or -) for the parameter name, and it connects directly to the value with an equals sign.

  • Syntax: Command Parameter=Value
  • Examples: set TEMP=C:\Temp, wmic process where name="notepad.exe" get processid, netsh interface ip set address name="Ethernet" source=dhcp

We fixed this in PowerShell by providing a single common parser that developers did not have a hand in. They specify their parameters, and PowerShell turns that specification into a consistent syntax and gives them what they asked for. My original prototype for PowerShell used the VMS DCL syntax for CLIs. After a while, we decided that the VMS->Windows market was a lot less interesting than the UNIX->Windows market, so we decided to switch syntaxes. Superstar Bruce Payette changed the parser one afternoon. After that, all the cmdlets instantly used the new syntax. No negotiations. No priority meetings.
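To make that concrete, here is a minimal sketch of what a cmdlet author writes (Get-Widget and its parameters are invented for illustration; param() and [CmdletBinding()] are the real mechanism). The author only declares parameters and never touches parsing:

    function Get-Widget {
        [CmdletBinding()]
        param(
            # Declarations only: the shared PowerShell parser, not this
            # function, handles the command-line syntax.
            [Parameter(Mandatory)]
            [string]$Name,

            [int]$Count = 1
        )
        "Fetching $Count widget(s) named $Name"
    }

    # Every cmdlet gets the same -Parameter Value grammar for free:
    Get-Widget -Name gear -Count 3

Because the grammar lives in the engine rather than in each tool, a one-afternoon parser change could re-syntax every cmdlet at once.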


Why the Moltbook frenzy was like Pokémon


This story originally appeared in The Algorithm, MIT Technology Review’s weekly newsletter on AI.

Lots of influential people in tech last week were describing Moltbook, an online hangout populated by AI agents interacting with one another, as a glimpse into the future. It appeared to show AI systems doing useful things for the humans that created them (one person used the platform to help him negotiate a deal on a new car). Sure, it was flooded with crypto scams, and many of the posts were actually written by people, but something about it pointed to a future of helpful AI, right?

The whole experiment reminded our senior editor for AI, Will Douglas Heaven, of something far less interesting: Pokémon.

Back in 2014, someone set up a game of Pokémon in which the main character could be controlled by anyone on the internet via the streaming platform Twitch. Playing was as clunky as it sounds, but it was incredibly popular: at one point, a million people were playing the game at the same time.

“It was yet another weird online social experiment that got picked up by the mainstream media: What did this mean for the future?” Will says. “Not a lot, it turned out.”

The Moltbook frenzy struck a similar chord with Will, and it turned out that one of the sources he spoke to had been thinking about Pokémon too. Jason Schloetzer, at the Georgetown Psaros Center for Financial Markets and Policy, saw the whole thing as a sort of Pokémon battle for AI enthusiasts, in which they created AI agents and deployed them to interact with other agents. In this light, the news that many AI agents were actually being instructed by people to say certain things that made them sound sentient or intelligent makes a whole lot more sense.

“It’s basically a spectator sport,” he told Will, “but for language models.”

Will wrote an excellent piece about why Moltbook was not the glimpse into the future that it was said to be. Even if you are excited about a future of agentic AI, he points out, there are some key pieces that Moltbook made clear are still missing. It was a forum of chaos, but a genuinely helpful hive mind would require more coordination, shared objectives, and shared memory.

“More than anything else, I think Moltbook was the internet having fun,” Will says. “The biggest question that now leaves me with is: How far will people push AI just for the laughs?”


AI Doesn’t Reduce Work—It Intensifies It

1 Share

Aruna Ranganathan and Xingqi Maggie Ye from Berkeley Haas School of Business report initial findings in the HBR from their April to December 2025 study of 200 employees at a "U.S.-based technology company".

This captures an effect I've been observing in my own work with LLMs: the productivity boost these things can provide is exhausting.

AI introduced a new rhythm in which workers managed several active threads at once: manually writing code while AI generated an alternative version, running multiple agents in parallel, or reviving long-deferred tasks because AI could “handle them” in the background. They did this, in part, because they felt they had a “partner” that could help them move through their workload.

While this sense of having a “partner” enabled a feeling of momentum, the reality was a continual switching of attention, frequent checking of AI outputs, and a growing number of open tasks. This created cognitive load and a sense of always juggling, even as the work felt productive.

I'm frequently finding myself with work on two or three projects running in parallel. I can get so much done, but after just an hour or two my mental energy for the day feels almost entirely depleted.

I've had conversations with people recently who are losing sleep because they can't resist building yet another feature with "just one more prompt".

The HBR piece calls for organizations to build an "AI practice" that structures how AI is used to help avoid burnout and counter effects that "make it harder for organizations to distinguish genuine productivity gains from unsustainable intensity".

I think we've just disrupted decades of existing intuition about sustainable working practices. It's going to take a while and some discipline to find a good new balance.

Via Hacker News

Tags: careers, ai, generative-ai, llms, ai-assisted-programming, ai-ethics


Testing ads in ChatGPT

OpenAI begins testing ads in ChatGPT to support free access, with clear labeling, answer independence, strong privacy protections, and user control.

Transformers.js v4 Preview: Now Available on NPM!
