
1.0.3


2026-03-09

  • Enable alternate screen buffer by default for staff users
  • Extensions are now available as an experimental feature — ask Copilot to write custom tools and hooks for itself using @github/copilot-sdk
  • Document GH_HOST, HTTP_PROXY, HTTPS_PROXY, NO_COLOR, and NO_PROXY environment variables in help
  • Read MCP server configuration from .devcontainer/devcontainer.json
  • Add --binary-version flag to query the CLI binary version without launching
  • Add /restart command to hot restart the CLI while preserving your session
  • Background task notifications display in timeline with expandable detail
  • Type 'quit' to exit the CLI, in addition to 'exit'
  • Add extraKnownMarketplaces repository setting to replace marketplaces
  • Add Windows Terminal support to /terminal-setup command
  • /reset-allowed-tools now fully undoes /allow-all and re-triggers the autopilot permission dialog
  • Improved handling of batched queries in the SQL tool
  • Login flow no longer hangs on Ubuntu when system keyring is unresponsive
  • Terminal is properly reset when CLI crashes unexpectedly
  • Table disables borders in screen reader mode to prevent announcing decorative characters
  • MCP servers with non-conforming outputSchema are now accessible
  • /plugin update now works for GitHub-installed plugins
  • /add-dir directories persist across session changes like /clear and /resume
  • Prevent env command from being treated as safe to allow without approval
  • Placeholder text color displays correctly when wrapping in narrow terminals
  • /plugin update now works with marketplaces defined in project settings
  • Retry status messages now display to show progress during server error recovery
  • Show loading spinner in diff mode while fetching changes
  • Suppress /init suggestion when .github/instructions/ contains instructions
  • Rename merge_strategy config to mergeStrategy for consistency
  • Suppress unknown field warnings in skill and command frontmatter
  • Trust safe sed commands to run without confirmation

EA Lays Off Staff Across All Battlefield Studios Following Record-Breaking Battlefield 6 Launch

Electronic Arts has laid off staff across multiple Battlefield studios despite Battlefield 6 being the best-selling game in the U.S. in 2025 and the "biggest launch in franchise history." According to IGN, the layoffs include workers at Criterion, Dice, Ripple Effect, and Motive Studios. From the report: Individuals are being informed that the layoffs are taking place as part of a "realignment" across the Battlefield studios, as the team continues its ongoing, live service support for Battlefield 6 following launch. All four studios will remain operational, though the layoffs seem to be impacting a variety of teams across multiple studios and offices. IGN asked EA for comment on total number and types of roles impacted, as well as for the specific reasons for the layoffs. An EA spokesperson told IGN: "We've made select changes within our Battlefield organization to better align our teams around what matters most to our community. Battlefield remains one of our biggest priorities, and we're continuing to invest in the franchise, guided by player feedback and insights from Battlefield Labs."



Live Nation Avoids Ticketmaster Breakup By 'Open Sourcing' Their Ticketing Model

Live Nation reached a settlement with the U.S. Department of Justice that avoids breaking up its dominant live events empire with Ticketmaster. Instead, the deal requires changes like "open sourcing" their ticketing model and divesting some venues. NBC News reports: The company and the Justice Department reached a settlement on Monday, following a week of testimony during an antitrust trial that threatened to potentially separate the world's largest live entertainment company. [...] On a background call with reporters Monday, a senior justice official said the deal will drive down prices by giving both artists and consumers more choice. As part of the agreement, Ticketmaster will provide a standalone ticketing system that will allow third-party companies like SeatGeek and StubHub to offer primary tickets through the platform. The senior justice official described it as "open sourcing" their ticketing model. The company will also divest up to 13 amphitheaters and reserve 50% of tickets for nonexclusive venues. Ticketmaster is also prohibited from retaliating against a venue that selects another primary ticket distributor, among other requirements. Although a group of states have joined the DOJ in signing the agreement, other states can continue to press their own claims.



The government shutdown is hitting airports — but not ICE

Department of Homeland Security. | Image: The Verge

Chaos reigned at airports across the country last weekend, with thousands of travelers reportedly waiting in hours-long security lines thanks to staffing shortages. Transportation Security Administration (TSA) and Coast Guard workers have turned to food banks for assistance after weeks without pay. But amid a partial government shutdown aimed at curtailing the Department of Homeland Security's mass arrests and deportations, federal agents have continued their anti-immigrant crackdown unabated - and for now, there's not much anyone can do.

DHS has gone without funding for four weeks in a standoff over immigration enforcement. Congressional D …

Read the full story at The Verge.


Nvidia Is Planning to Launch an Open-Source AI Agent Platform

Ahead of its annual developer conference, Nvidia is readying a new approach to software that embraces AI agents similar to OpenClaw.

Your Data is Made Powerful By Context (so stop destroying it already) (xpost)


In logs as in life, the relationships are the most important part. AI doesn’t fix this. It makes it worse.

(cross-posted)

After twenty years of devops, most software engineers still treat observability like a fire alarm — something you check when things are already on fire.

Not a feedback loop you use to validate every change after shipping. Not the essential, irreplaceable source of truth on product quality and user experience.

This is not primarily a culture problem, or even a tooling problem. It’s a data problem. The dominant model for telemetry collection stores each type of signal in a different “pillar”, which rips the fabric of relationships apart — irreparably.

Your observability data is self-destructing at write time

The three pillars model works fine for infrastructure [1], but it is catastrophic for software engineering use cases, and will not serve for agentic validation.

But why? It’s a flywheel of compounding factors, not just one thing, but the biggest one by far is this:

✨Data is made powerful by context✨

The more context you collect, the more powerful it becomes

Your data does not become linearly more powerful as you widen the dataset; it becomes exponentially more powerful. Or, if you really want to get technical, it becomes combinatorially more powerful as you add more context.

I made a little Netlify app here where you can enter how many attributes you store per log or trace, to see how powerful your dataset is.

  • 4 fields? 6 pairwise combos, 15 possible combinations.
  • 8 fields? 28 pairwise combos, 255 possible combinations.
  • 50 fields? 1.2K pairwise combos, 1.1 quadrillion (2^50) possible combinations.

When you add another attribute to your structured log events, it doesn’t just give you “one more thing to query”. It gives you new combinations with every other field that already exists.

The wider your data is, the more valuable it becomes. Go futz around with the sliders yourself.

Note that this math is exclusively concerned with attribute keys. Once you account for values, the precision of your tooling goes higher still, especially if you handle high cardinality data.
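
To make that arithmetic concrete, here is a minimal Python sketch of the same counts the app computes: C(n, 2) pairwise combinations and 2^n - 1 non-empty subsets of fields. (The code is illustrative only; it is not the app's actual implementation.)

```python
from math import comb

def dataset_power(num_fields: int) -> tuple[int, int]:
    """Count pairwise combos and non-empty subsets for a given field count."""
    pairwise = comb(num_fields, 2)   # C(n, 2): two-field combinations
    subsets = 2**num_fields - 1      # every non-empty combination of fields
    return pairwise, subsets

for n in (4, 8, 50):
    pairs, combos = dataset_power(n)
    print(f"{n} fields: {pairs} pairwise combos, {combos:,} possible combinations")
```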

Data is made valuable by relationships

“Data is made valuable by context” is another way of saying that the relationships between attributes are the most important part of any data set.

This should be intuitively obvious to anyone who uses data. How valuable is the string “Mike Smith”, or “21 years old”? Stripped of context, they hold no value.

By spinning your telemetry out into siloes based on signal type, the three pillars model ends up destroying the most valuable part of your data: its relational seams.
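
To sketch what those seams look like (field names invented for illustration), compare one wide event, where any attribute can be queried against any other, with the same request after it has been split into pillars:

```python
# One wide structured event: every attribute keeps its relationships.
wide_event = {
    "timestamp": "2026-03-09T14:02:11Z",
    "service": "payments",
    "endpoint": "/charge",
    "user_id": "u_1234",
    "build_id": "a1b2c3d",
    "feature_flag.partner_payments_v2": True,
    "duration_ms": 842,
    "status_code": 200,
}

# The same request after the three pillars split: a metric with no user,
# a log line with no duration. You can no longer ask "what was the p95
# latency for users on this flag and build, broken down by endpoint?"
metric = ("payments.charge.count", 1)
log_line = "14:02:11 payments POST /charge status=200"
```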

AI-SRE agents don’t seem to like three pillars data

I posted something on LinkedIn yesterday and got a pile of interesting comments. One came from Kyle Forster, founder of an AI-SRE startup called RunWhen, who linked to an article he wrote called "Do Humans Still Read Logs?"

Humpty Dumpty traced every span, Humpty Dumpty had a great plan.

In his article, he noted that fewer than 30% of their AI SRE's tool calls went to "traditional observability data", i.e., metrics, logs, and traces. Instead, they used the instrumentation generated by other AI tools to wrap calls and queries. His takeaway:

Good AI reasoning turns out to require far less observability data than most of us thought when it has other options.

My takeaway is slightly different. After all, the agent still needed instrumentation and telemetry in order to evaluate what was happening. That’s still observability, right?

But as Kyle tells it, the agents went searching for a richer signal than the three pillars were giving them. They went back to the source to get the raw, pre-digested telemetry with all its connective tissue intact. That’s how important it was to them.

Huh.

You can’t put Humpty back together again

I’ve been hearing a lot of “AI solves this”, and “now that we have MCPs, AI can do joins seamlessly across the three pillars”, and “this is a solved problem”.

Mmm. Joins across data siloes can be better than nothing, yes. But they don’t restore the relational seams. They don’t get you back to the mathy good place where every additional attribute makes every other attribute exponentially more valuable. At agentic speed, that reconstruction becomes a bottleneck and a failure surface.

Humpty Dumpty stored all the state, Humpty Dumpty forgot to replicate.

Our entire industry is trying to collectively work out the future of agentic development right now. The hardest and most interesting problems (I think) are around validation. How do we validate a change rate that is 10x, 100x, 1000x greater than before?

I don’t have all the answers, but I do know this: agents are going to need production observability with speed, flexibility, TONS of context, and some kind of ontological grounding via semantic conventions.

In short: agents are going to need precision tools. And context (and cardinality) are what feed precision.

Production is a very noisy place

Production is a noisy, rowdy place of chaos, particularly at scale. If you are trying to do anomaly detection with no a priori knowledge of what to look for, the anomaly has to be fairly large to be detected. (Or else you’re detecting hundreds of “anomalies” all the time.)

But if you do have some knowledge of intent, along with precision tooling, these anomalies can be tracked and validated even when they are exquisitely minute. Like even just a trickle of requests [2] out of tens of millions per second.

Let’s say you work for a global credit card provider. You’re rolling out a code change to partner payments, which are “only” tens of thousands of requests per second — a fraction of your total request volume of tens of millions of req/sec, but an important one.

This is a scary change, no matter how many tests you ran in staging. To test this safely in production, you decide to start by rolling the new build out to a small group of employee test users, and oh, what the hell — you make another feature flag that lets any user opt in, and flip it on for your own account.

You wait a few days. You use your card a few times. It works (thank god).

On Monday morning you pull up your observability data and select all requests containing the new build_id or commit hash, as well as all of the feature flags involved. You break down by endpoint, then start looking at latency, errors, and distribution of request codes for these requests, comparing them to the baseline.
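
A minimal sketch of that Monday-morning breakdown, assuming the wide events from the earlier example land in a pandas DataFrame (the column names and the DataFrame itself are stand-ins for whatever query layer your observability tooling actually exposes):

```python
import pandas as pd

def canary_vs_baseline(events: pd.DataFrame, new_build: str) -> pd.DataFrame:
    """Compare latency and error rate per endpoint, canary vs. everything else."""
    # == True treats missing flag values as False, keeping the mask boolean.
    flagged = events["feature_flag.partner_payments_v2"] == True  # noqa: E712
    is_canary = (events["build_id"] == new_build) | flagged

    def summarize(df: pd.DataFrame, cohort: str) -> pd.DataFrame:
        out = df.groupby("endpoint").agg(
            p95_ms=("duration_ms", lambda s: s.quantile(0.95)),
            error_rate=("status_code", lambda s: (s >= 500).mean()),
            requests=("status_code", "size"),
        )
        out["cohort"] = cohort
        return out

    return pd.concat([
        summarize(events[is_canary], "canary"),
        summarize(events[~is_canary], "baseline"),
    ])
```

Any gap between the canary rows and the baseline rows is exactly the kind of needle the rest of this scenario chases down.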

Hm — something doesn’t seem quite right. Your test requests aren’t timing out, but they are taking longer to complete than the baseline set. Not for all requests, but for some.

Further exploration lets you isolate the affected requests to a set with a particular query hash. Oops… how'd that n+1 query slip in undetected??

You quickly submit a fix, ship a new build_id, and roll your change out to a larger group: this time, it’s going out to 1% of all users in a particular region.

The anomalous requests may have been only a few dozen per day, spread across many hours, in a system that served literally billions of requests in that time.

Humpty Dumpty: assembled, redeployed,
A patchwork of features half-built, half-destroyed.
"It's not what we planned," said the architect, grim.
"But the monster is live — and the monster is him."

Precision tooling makes them findable. Imprecise tooling makes them unfindable.

How do you expect your agents to validate each change, if the consequences of each change cannot be found? [3]

Well, one might ask, how have we managed so far? The answer is: by using human intuition to bridge the gaps. This will not work for agents. Our wisdom must be encoded into the system, or it does not exist.

Agents need speed, flexibility, context, and precision to validate in prod

In the past, excruciatingly precise staged rollouts like these have been mostly the province of your Googles and Facebooks. Progressive deployments have historically required a lot of tooling and engineering resources.

Agentic workflows are going to make these automated validation techniques much easier and more widely used; at the exact same time, agents developing to spec are going to require a dramatically higher degree of precision and automated validation in production.

It is not just the width of your data that matters when it comes to getting great results from AI. There’s a lot more involved in optimizing data for reasoning, attribution, or anomaly detection. But capturing and preserving relationships is at the heart of all of it.

In this situation, as in so many others, AI is both the sickness and the cure [4]. Better get used to it.


1 — Infrastructure teams use the three pillars for one extremely good reason: they have to operate a lot of code they did not write and cannot change. They have to slurp up whatever metrics or logs the components emit and store them somewhere.

2 — Yes, there are some complications here that I am glossing past, ones that start with ‘s’ and rhyme with “ampling”. However, the rich data + sampling approach to the cost-usability balance is generally satisfied by dropping the least valuable data. The three pillars approach to the cost-usability problem is generally satisfied by dropping the MOST valuable data: cardinality and context.

3 — The needle-in-a-haystack is one visceral illustration of the value of rich context and precision tooling, but there are many others. Another example: wouldn’t it be nice if your agentic task force could check up on any diffs that involve cache key or schema changes, say, once a day for the next 6-12 months? These changes famously take a long time to manifest, by which time everyone has forgotten that they happened.

4 — One sentence I have gotten a ton of mileage out of lately: “AI, much like alcohol, is both the cause of and solution to all of life’s problems.”
