Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Fragments: April 21

1 Share

Last week Thoughtworks released the 34th volume of our Technology Radar. This radar is our biannual survey of our experience of the technology scene, highlighting tools, techniques, platforms, and languages that we’ve used or that have otherwise caught our eye. This edition contains 118 blips, each briefly describing our impressions of one of these elements.

As we would expect, the radar is dominated by AI-oriented topics. Part of this is revisiting familiar ground with LLM-assisted eyes:

An interesting consequence of AI in software development is that it’s not only forcing us to look to the future; it’s also pushing us to revisit the foundations of our craft. While assembling this edition, we found ourselves returning to many established techniques, from pair programming to zero trust architecture, and from mutation testing to DORA metrics. We also revisited core principles of software craftsmanship, such as clean code, deliberate design, testability and accessibility as a first-class concern. This is not nostalgia, but a necessary counterweight to the speed at which AI tools can generate complexity. We also observed a resurgence of the command line: After years of abstracting it away in the name of usability, agentic tools are bringing developers back to the terminal as a primary interface.

I was especially happy to see my colleague Jim Gumbley added to the writing team; he’s been a regular source of security information for me over the years, including working on this site’s Threat Modeling Guide. Having a strong security presence on the radar team is especially important given the serious security concerns around using LLMs. One of the themes of the radar is securing “permission hungry” agents:

“Permission hungry” describes the bind at the heart of the current agent moment: the agents worth building are the ones that need access to everything. OpenClaw and Claude Cowork supervise real work tasks; Gas Town coordinates agent swarms across entire codebases. These agents require broad access to private data, external communication and real systems — each arguing that the payoff justifies it.

However, like a skier who’s just learned to turn and confidently points themselves at the hardest black run, the safeguards haven’t caught up with that ambition. The appetite for access collides with unsolved problems. Prompt injection means models still can’t reliably distinguish trusted instructions from untrusted input.

Given all of this, many of this radar’s blips are about Harness Engineering; indeed, the radar meeting was a major source of ideas for Birgitta’s excellent article on the subject. The radar includes several blips suggesting the guides and sensors necessary for a well-fitting harness. I expect that when the next radar appears in six months’ time, that list will grow.

 ❄                ❄                ❄                ❄                ❄

Mike Mason looks at what happens when developers aren’t reading the code.

The Python codebase Claude produced was largely working. Unit tests passed, and a few hours of real-world testing showed it was successfully managing a fairly complex piece of my infrastructure. But somewhere around 100KB of total code I noticed something: the main file had grown to about 50KB (2,000 lines) and Claude Code, when it needed to make edits, had started reaching for sed to find and modify code within that file. When I saw that, it was a serious alarm bell.

As well as the experience of “a friend”, he ponders the 500,000 lines of Claude Code after the leak.

Both things are true: there is good architecture in Claude Code, and there is also an incomprehensible mess. That’s actually the point. You don’t get to know which is which without reading the code.

His conclusion is a rough framework. Throw-away analysis scripts are fine to vibe away. Tooling you need to maintain, and durable code in general, needs regular human review - even if that’s just a human asking a model to evaluate the code, with some hints as to what good code looks like.

The moment you say “I’m getting uncomfortable with how big this is getting, can we do something better?” it does the right thing: sensible decomposition, new classes, sometimes even unit tests for the new thing. It knew, it just didn’t volunteer it.

He does recommend being serious with CLAUDE.md; I don’t know if he’s tried many of the patterns that Rahul Garg has recently posted to break out of a similar frustration loop.

 ❄                ❄                ❄                ❄                ❄

Dan Davies poses an annoying philosophy thought experiment to make us consider how we feel about LLMs indulging in ghostwriting.

 ❄                ❄                ❄                ❄                ❄

DOGE dismantled many useful things during their brief period with the wood chipper. One of these was Direct File, a government program that supported people filing their taxes online. Don Moynihan, who has talked to many folks involved in Direct File, has penned a worthwhile essay that isn’t just relevant to Direct File and other U.S. government technology projects, but to any technology initiative in a large organization.

Moynihan highlights:

a paradox of government reform: the simpler a potential change appears, the more likely that it has not been implemented because it features deceptive complexity that others have tried and failed to resolve.

I’ve heard that tale in many a large corporation too

One way government initiatives are different is that, at their best, they’re built on an attitude of public service:

Many who worked on Direct File drew a sharp contrast with DOGE and their approach to building tech products. One point of distinction was DOGE’s seeming disinterest in public interest goals and of the public itself: “if you do not think government has a responsibility to serve people, I think it draws into question how good are you going to be at making government work better for people if you just don’t believe in that underlying principle”

The tragedy for U.S. taxpayers like me is that we’ve lost an effective way to go through the annual hassle of taxes. In addition, the IRS is much weaker - it has lost 25% of its staff, and its budget is 40% below what it was in 2010. Much though we hate tax collectors, this isn’t a good thing. An efficient tax system is an important part of national security; many historians consider the ability to raise taxes effectively an important reason why Britain won its century-long struggle with France in the eighteenth century. A wonky tax system is also a major reason why the French monarchy, so powerful at the start of that century, fell to revolution. Indeed, there is considerable evidence that increasing the budget of the IRS would more than pay for itself by increasing revenue.

Read the whole story
alvinashcraft
40 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Azure DevOps MCP Server April Update


This update brings a set of improvements and changes across both local and remote Azure DevOps MCP Servers.

Here’s a summary of what’s changed.

Query work items with WIQL

We’ve introduced a new wit_query_by_wiql tool that enables users to construct and run work item WIQL queries. For our remote MCP, to ensure reliability and performance, access to this tool is currently limited to users with the Insiders feature enabled. Learn more.

As we gather usage telemetry and validate query performance, we plan to make it broadly available.
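For readers unfamiliar with it, WIQL (Work Item Query Language) is a SQL-like query language; a typical query passed to a tool like this might look roughly as follows (an illustrative sketch, not an example taken from the tool’s documentation):

```sql
SELECT [System.Id], [System.Title], [System.State]
FROM WorkItems
WHERE [System.TeamProject] = @project
  AND [System.WorkItemType] = 'Bug'
  AND [System.State] <> 'Closed'
ORDER BY [System.ChangedDate] DESC
```

The bracketed names are standard Azure DevOps field reference names, and @project is a built-in WIQL macro that resolves to the current project.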

Remote MCP Server

Annotations

MCP Annotations are metadata tags that help LLMs understand how to safely and effectively use external tools by providing a shared vocabulary for behavior, context, and risk. We’re implementing annotations for read-only, destructive, and openWorld tools to clearly signal how each tool operates and ensure safer, more reliable interactions.
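As a rough sketch of what this looks like in a tool definition (the field names come from the MCP specification’s tool annotations; the exact values this server emits are an assumption here):

```json
{
  "name": "repo_get_file_content",
  "description": "Read a file from a repository",
  "annotations": {
    "readOnlyHint": true,
    "destructiveHint": false,
    "openWorldHint": false
  }
}
```

A client or LLM can use these hints to, for example, require explicit confirmation before invoking anything not marked read-only.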

Missing tools

There are still a few gaps between the local and remote MCP servers. We’ve recently added support for repo_get_file_content, repo_list_directory, and repo_vote_pull_request, and will continue closing these gaps by introducing additional tools in the coming weeks.

Tool restructuring

One of the key challenges in building an Azure DevOps MCP Server is the sheer surface area that Azure DevOps covers. At the same time, both clients and LLMs tend to perform better with a smaller, more focused set of tools. To address this, we are beginning to consolidate related tools. With our remote MCP Server still in public preview, now is an ideal time to make these improvements. We’re starting incrementally, beginning with the wiki tools, to evaluate performance and usability before expanding further. Here is what you can expect:

- wiki (read-only): consolidates the actions get_page, list_pages, list_wikis, and get_wiki; replaces wiki_get_page, wiki_get_page_content, wiki_list_pages, wiki_list_wikis, and wiki_get_wiki
- wiki_upsert_page (write): a single operation with no action parameter; replaces wiki_create_or_update_page
- search_wiki (search): replaces search_wiki, unchanged

There will be more changes to come. Keep an eye on the documentation for updates.

Local MCP Server

Personal Access Token Support

Personal access tokens are now supported for authentication, simplifying the experience for users integrating the Azure DevOps MCP Server with external services and clients such as GitHub Copilot. Learn more

Elicitations

Elicitations are guided prompts that help ensure the correct information is provided when performing a task. For example, since most operations require a project, we’ve added elicitation support for project selection across the core, work, and work items toolsets.
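In MCP terms, an elicitation is a server-initiated request for structured input from the user. A project-selection elicitation might look roughly like this on the wire (the method and shape follow the MCP elicitation specification; the message text and schema are invented for illustration):

```json
{
  "method": "elicitation/create",
  "params": {
    "message": "Which Azure DevOps project should this operation target?",
    "requestedSchema": {
      "type": "object",
      "properties": {
        "project": { "type": "string", "description": "Project name" }
      },
      "required": ["project"]
    }
  }
}
```

The client renders the prompt, collects the value, and returns it to the server, so the tool call proceeds with an explicit project rather than a guess.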

While elicitations can be helpful, we haven’t yet seen strong demand from the community. As a result, we are experimenting with a limited rollout to evaluate their effectiveness. We would love your feedback. Please share your thoughts in an issue or comment and let us know if you would like to see broader support across more tools and parameters.

MCP Apps (Experimental)

MCP Apps are an experimental feature that enables packaging and executing common workflows directly within the MCP Server environment. Rather than manually chaining multiple tools together, MCP Apps provide a more structured and repeatable way to perform tasks such as querying or updating work items.

This approach reduces setup time and helps maintain consistency across users and scenarios.

For example, you can use the mcp_app_my_work_item tool to access a self-contained work item experience that allows you to view work items assigned to you, filter results, and open and edit work items.


To try it out, use the mcp-apps-poc branch.

Then update your mcp.json configuration to include the mcp-apps domain:

{
  "servers": {
    "ado": {
      "type": "stdio",
      "command": "mcp-server-azuredevops",
      "args": ["contoso", "-d", "core", "work", "work-items", "mcp-apps"]
    }
  }
}

We’d love your feedback on MCP Apps. If you find them useful, let us know. Your input will help shape whether this capability is brought into the main local and remote MCP Servers.

Feedback

Stay tuned, more updates are on the way. In the meantime, we’d love your feedback. Please leave a comment on this post or create an issue in the MCP Server repository.

The post Azure DevOps MCP Server April Update appeared first on Azure DevOps Blog.


How do developers define their worth when code is written by AI?


Lately I’ve been in a few podcasts and interviews and one question came up almost every time:

What is left for developers to care about or define themselves with when all the code is written by AI?

Here is the quick answer: being a developer was never about writing code. Code is a tool to achieve the thing we really care about: solving problems. Every developer I know loves solving problems, and when there aren’t any problems to solve, we invent them.

This is the reason why so many frameworks and libraries exist. We take a coding solution we built and make it generic so that it can deal with whatever problem is thrown at it. And then we lose interest and start all over again.

Don’t get me wrong, writing code is great fun. Having witnessed and helped new languages and environments evolve and change from an OK concept to a platform almost every software solution relies on is also great. Squeezing the last bit of optimisation out of a script whilst keeping it understandable and maintainable is a great feeling. But the code is not the end goal. If we find a tool that does the job as well, we will use that.

Every great developer I know is open to change and eager to learn about new things to do and try out. Asking if the code is what defines us is a sign that people still do not see programming as a normal thing for humans to do. We’re not some freaks in the corner who nobody understands and who stand just outside of “normal” society.

We’re doing a job and we are honing our craft constantly and to find better ways to make computers help others simplify their lives. Creative people thrive doing the thing that makes them happy. Writers write although the web is 90% AI generated and algorithm optimised slop. Musicians play in their garage and then pubs with 10 people because they like making the music they do. Painters paint although a prompt could give you a seeminlgy perfect picture. People knit, sew and weave although there is already far too much fashion available to ever wear in a lifetime.

Developers use code as a tool to create. So when you ask me if I feel threatened by AI and agents I can safely say that I am not. These things can take the task and the typing and the releasing from me, but I still feel a lot of joy popping open the hood and looking at things the machines created knowing that I can read and understand it. I can take it apart and put it back together. I can make it do things that the machine didn’t think of. I can make it better. I can make it mine. And so can you.


The Business and Politics of Platform Status Page Details


I like GitHub’s recent blog post on transparency around their status page. Status pages are human and machine-readable properties I’ve tracked on for API providers as part of my APIs.json work for over a decade now. API status pages emerged as a standard shortly after we became dependent on these platforms and their APIs, and have become an expected building block of any serious platform. Seeing more evolution, discussion, and transparency around our platforms is a good thing, and something we should see more of.

You can see GitHub wrestling with the technical details and the best way to communicate around real or perceived instability. They have added a new “degraded performance” state, per-service uptime metrics, and clearer communication around model availability. They are separating things into three buckets now - degraded performance, partial outage, and major outage - with per-service metrics determining these states.

The GitHub status page breaks things down by Git operations, webhooks, API requests, issues, pull requests, Actions, Packages, Pages, Codespaces, and now Copilot, which lets you assess whether the part of the platform you care about is up or down, and separates the API from the rest of the platform. Each platform will have its own way of breaking things down, but I will be looking for the common patterns in what status pages report, which services they use, and the separation of the API from the rest of the platform.
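As a sense of what consuming such a page programmatically looks like, here is a minimal sketch that summarizes a Statuspage-style components payload by status (the field layout follows the common Statuspage.io v2 format, which GitHub’s status page exposes; the sample data below is invented, not real GitHub status output):

```python
from collections import Counter

def summarize_components(payload: dict) -> Counter:
    """Count components by status, e.g. operational, degraded_performance,
    partial_outage, major_outage."""
    return Counter(c["status"] for c in payload.get("components", []))

# Invented sample resembling a /api/v2/components.json response body.
sample = {
    "components": [
        {"name": "API Requests", "status": "operational"},
        {"name": "Git Operations", "status": "degraded_performance"},
        {"name": "Webhooks", "status": "operational"},
    ]
}

print(summarize_components(sample))
```

In practice you would fetch the live payload over HTTP; a summary like this is one way a consumer could decide whether the specific services it depends on are healthy, regardless of the overall platform state.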

I’ll have to step back and do some thinking about the business and politics of this transparency from GitHub. What does the introduction of “degraded performance” do when it comes to service level agreements or other general expectations? I am curious to think more about what they might be getting ahead of here. But I don’t want to assume any ill intent behind their blog post. I just get nervous when I see “transparency” used, as I watch transparency pages evolve from something helpful to something that was used to split the business and political hairs in favor of platforms.

I just wanted to write about this so I have a timestamp in the blog, and it is something I can revisit and look at across other providers–then I will likely understand the bigger picture and how GitHub’s changes fit into things.




AI Coding, Streamer Maps, Glitches, and more

From: Fritz's Tech Tips and Chatter
Duration: 0:00
Views: 1

Let's have some more fun building a map for streamers


Agent Building Trends

From: AIDailyBrief
Views: 15

In this Operator's Bonus episode, NLW zooms out from the Agent Madness bracket to share the patterns emerging across nearly 100 agent submissions — from the shift toward AI org charts and "markets of one" software, to the memory gap holding the whole field back. He also previews the Elite Eight matchups.

The AI Daily Brief helps you understand the most important news and discussions in AI.
Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Get it ad free at http://patreon.com/aidailybrief
Learn more about the show https://aidailybrief.ai/
