Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.


Your Data Is Made Powerful By Context (so stop destroying it already)

In logs as in life, the relationships are the most important part. AI doesn’t fix this. It makes it worse.

(cross-posted)

After twenty years of devops, most software engineers still treat observability like a fire alarm — something you check when things are already on fire.

Not a feedback loop you use to validate every change after shipping. Not the essential, irreplaceable source of truth on product quality and user experience.

This is not primarily a culture problem, or even a tooling problem. It’s a data problem. The dominant model for telemetry collection stores each type of signal in a different “pillar”, which rips the fabric of relationships apart — irreparably.

Your observability data is self-destructing at write time

The three pillars model works fine for infrastructure[1], but it is catastrophic for software engineering use cases, and will not serve for agentic validation.

But why? It’s a flywheel of compounding factors, not just one thing, but the biggest one by far is this:

✨Data is made powerful by context✨

The more context you collect, the more powerful it becomes

Your data does not become linearly more powerful as you widen the dataset, it becomes exponentially more powerful. Or if you really want to get technical, it becomes combinatorially more powerful as you add more context.

I made a little Netlify app here where you can enter how many attributes you store per log or trace, to see how powerful your dataset is.

  • 4 fields? 6 pairwise combos, 15 possible combinations.
  • 8 fields? 28 pairwise combos, 255 possible combinations.
  • 50 fields? 1.2K pairwise combos, 1.1 quadrillion (2^50) possible combinations, as seen in the screenshot below.
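
The numbers in that list are plain combinatorics: pairwise combos are "n choose 2", and the total is every non-empty subset of fields. A quick sketch:

```python
from math import comb

def dataset_power(n_fields):
    """Pairwise field combos and count of all non-empty field subsets."""
    pairwise = comb(n_fields, 2)
    subsets = 2 ** n_fields - 1
    return pairwise, subsets

for n in (4, 8, 50):
    print(n, dataset_power(n))
# 4  -> (6, 15)
# 8  -> (28, 255)
# 50 -> (1225, 1125899906842623)   # ~1.2K pairwise, ~1.1 quadrillion subsets
```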

When you add another attribute to your structured log events, it doesn’t just give you “one more thing to query”. It gives you new combinations with every other field that already exists.

The wider your data is, the more valuable the data becomes. Click on the image to go futz around with the sliders yourself.

Note that this math is exclusively concerned with attribute keys. Once you account for values, the precision of your tooling goes higher still, especially if you handle high cardinality data.
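
To see why values push this further, here is a toy calculation (the field names and cardinalities are invented for illustration): each pair of fields offers as many distinct slices to examine as the product of their cardinalities.

```python
from math import comb

# Invented fields and per-field cardinalities, purely for illustration.
cardinalities = {"region": 12, "build_id": 200, "status_code": 7, "user_id": 1_000_000}

fields = list(cardinalities)
# Sum, over every pair of fields, of the product of their cardinalities.
pairwise_slices = sum(
    cardinalities[a] * cardinalities[b]
    for i, a in enumerate(fields)
    for b in fields[i + 1:]
)
print(comb(len(fields), 2), "field pairs ->", pairwise_slices, "distinct value slices")
# 6 field pairs -> 219003884 distinct value slices
```

The high-cardinality field (user_id) dominates the total, which is exactly why tools that drop cardinality drop most of the precision.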

Data is made valuable by relationships

“Data is made valuable by context” is another way of saying that the relationships between attributes are the most important part of any data set.

This should be intuitively obvious to anyone who uses data. How valuable is the string “Mike Smith”, or “21 years old”? Stripped of context, they hold no value.

By spinning your telemetry out into siloes based on signal type, the three pillars model ends up destroying the most valuable part of your data: its relational seams.

AI-SRE agents don’t seem to like three pillars data

I posted something on LinkedIn yesterday, and got a pile of interesting comments. One came from Kyle Forster, founder of an AI-SRE startup called RunWhen, who linked to an article he wrote called “Do Humans Still Read Logs?”

Humpty Dumpty traced every span, Humpty Dumpty had a great plan.

In his article, he noted that fewer than 30% of their AI SRE tools’ queries went to “traditional observability data”, i.e. metrics, logs, and traces. Instead, they used the instrumentation generated by other AI tools to wrap calls and queries. His takeaway:

Good AI reasoning turns out to require far less observability data than most of us thought when it has other options.

My takeaway is slightly different. After all, the agent still needed instrumentation and telemetry in order to evaluate what was happening. That’s still observability, right?

But as Kyle tells it, the agents went searching for a richer signal than the three pillars were giving them. They went back to the source to get the raw, undigested telemetry with all its connective tissue intact. That’s how important it was to them.

Huh.

You can’t put Humpty back together again

I’ve been hearing a lot of “AI solves this”, and “now that we have MCPs, AI can do joins seamlessly across the three pillars”, and “this is a solved problem”.

Mmm. Joins across data siloes can be better than nothing, yes. But they don’t restore the relational seams. They don’t get you back to the mathy good place where every additional attribute makes every other attribute exponentially more valuable. At agentic speed, that reconstruction becomes a bottleneck and a failure surface.

Humpty Dumpty stored all the state, Humpty Dumpty forgot to replicate.

Our entire industry is trying to collectively work out the future of agentic development right now. The hardest and most interesting problems (I think) are around validation. How do we validate a change rate that is 10x, 100x, 1000x greater than before?

I don’t have all the answers, but I do know this: agents are going to need production observability with speed, flexibility, TONS of context, and some kind of ontological grounding via semantic conventions.

In short: agents are going to need precision tools. And context (and cardinality) are what feed precision.

Production is a very noisy place

Production is a noisy, rowdy place of chaos, particularly at scale. If you are trying to do anomaly detection with no a priori knowledge of what to look for, the anomaly has to be fairly large to be detected. (Or else you’re detecting hundreds of “anomalies” all the time.)

But if you do have some knowledge of intent, along with precision tooling, these anomalies can be tracked and validated even when they are exquisitely minute. Like even just a trickle of requests[2] out of tens of millions per second.

Let’s say you work for a global credit card provider. You’re rolling out a code change to partner payments, which are “only” tens of thousands of requests per second — a fraction of your total request volume of tens of millions of req/sec, but an important one.

This is a scary change, no matter how many tests you ran in staging. To test this safely in production, you decide to start by rolling the new build out to a small group of employee test users, and oh, what the hell — you make another feature flag that lets any user opt in, and flip it on for your own account.

You wait a few days. You use your card a few times. It works (thank god).

On Monday morning you pull up your observability data and select all requests containing the new build_id or commit hash, as well as all of the feature flags involved. You break down by endpoint, then start looking at latency, errors, and distribution of request codes for these requests, comparing them to the baseline.

Hm — something doesn’t seem quite right. Your test requests aren’t timing out, but they are taking longer to complete than the baseline set. Not for all requests, but for some.

Further exploration lets you isolate the affected requests to a set with a particular query hash. Oops... how’d that n+1 query slip in undetected?
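
That isolation step is just a group-by over wide events. A minimal sketch, with invented field names and numbers (build_id, query_hash, and the events themselves are all hypothetical):

```python
from statistics import median

# Hypothetical wide events; real ones would carry dozens of attributes each.
events = [
    {"build_id": "b42", "query_hash": "qh1", "duration_ms": 40},
    {"build_id": "b42", "query_hash": "qh7", "duration_ms": 900},
    {"build_id": "b42", "query_hash": "qh7", "duration_ms": 850},
    {"build_id": "b41", "query_hash": "qh7", "duration_ms": 45},
]

def median_by(events, key, **filters):
    """Median duration grouped by `key`, over events matching `filters`."""
    groups = {}
    for e in events:
        if all(e.get(k) == v for k, v in filters.items()):
            groups.setdefault(e[key], []).append(e["duration_ms"])
    return {k: median(v) for k, v in groups.items()}

# Break down the canary build's latency by query hash.
print(median_by(events, "query_hash", build_id="b42"))
# {'qh1': 40, 'qh7': 875.0}
```

The breakdown only works because build_id and query_hash live on the same event; split them across pillars and this one-liner becomes a cross-store join.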

You quickly submit a fix, ship a new build_id, and roll your change out to a larger group: this time, it’s going out to 1% of all users in a particular region.

The anomalous requests may have been only a few dozen per day, spread across many hours, in a system that served literally billions of requests in that time.

Humpty Dumpty: assembled, redeployed, A patchwork of features half-built, half-destroyed. “It’s not what we planned,” said the architect, grim. “But the monster is live — and the monster is him.”

Precision tooling makes them findable. Imprecise tooling makes them unfindable.

How do you expect your agents to validate each change, if the consequences of each change cannot be found?[3]

Well, one might ask, how have we managed so far? The answer is: by using human intuition to bridge the gaps. This will not work for agents. Our wisdom must be encoded into the system, or it does not exist.

Agents need speed, flexibility, context, and precision to validate in prod

In the past, excruciatingly precise staged rollouts like these have been mostly the province of your Googles and Facebooks. Progressive deployments have historically required a lot of tooling and engineering resources.

Agentic workflows are going to make these automated validation techniques much easier and more widely used; at the exact same time, agents developing to spec are going to require a dramatically higher degree of precision and automated validation in production.

It is not just the width of your data that matters when it comes to getting great results from AI. There’s a lot more involved in optimizing data for reasoning, attribution, or anomaly detection. But capturing and preserving relationships is at the heart of all of it.

In this situation, as in so many others, AI is both the sickness and the cure[4]. Better get used to it.


1 — Infrastructure teams use the three pillars for one extremely good reason: they have to operate a lot of code they did not write and cannot change. They have to slurp up whatever metrics or logs the components emit and store them somewhere.

2 — Yes, there are some complications here that I am glossing past, ones that start with ‘s’ and rhyme with “ampling”. However, the rich data + sampling approach to the cost-usability balance is generally satisfied by dropping the least valuable data. The three pillars approach to the cost-usability problem is generally satisfied by dropping the MOST valuable data: cardinality and context.

3 — The needle-in-a-haystack is one visceral illustration of the value of rich context and precision tooling, but there are many others. Another example: wouldn’t it be nice if your agentic task force could check up on any diffs that involve cache key or schema changes, say, once a day for the next 6-12 months? These changes famously take a long time to manifest, by which time everyone has forgotten that they happened.

4 — One sentence I have gotten a ton of mileage out of lately: “AI, much like alcohol, is both the cause of and solution to all of life’s problems.”

Read the whole story
alvinashcraft
35 seconds ago
reply
Pennsylvania, USA
Share this story
Delete

Security Was Already a Mess. Generative AI Is About to Prove It.


I was thinking about some of the points from the Polyglot Conf list of predictions for Gen AI, titled “Second Order Effects of AI Acceleration: 22 Predictions from Polyglot Conference Vancouver”. One thing that stands out to me is the scenario, which I’m sure many of you have read about, of misplaced keys, tokens, passwords and usernames, or whatever other security collateral gets left in a repo. It’s been such an issue that orgs like AWS have set up triggers: when they find keys on the internet, they trace them back and try to alert their users (i.e. if a user of theirs has stuck account keys in a repo). It’s wild how big of a problem this is.

Once you’ve spent any serious amount of time inside corporate IT, you eventually come to a slightly uncomfortable realization. Exponentially so if you focus on InfoSec or other security related things. Security, broadly speaking, is not in a particularly great state.

That might sound dramatic, but it’s not really; it is the standard modus operandi of corporate IT. The cost of really good security is too high for most corporations to focus where they should, and when corporations do focus on security, they often miss the forest for the trees. There are absolutely teams doing excellent security work, so don’t get the idea I’m saying there aren’t solid people doing the work to secure systems and environments. Some organizations invest heavily in it. There are people in security roles who take the mission extremely seriously and do very good engineering.

A lot of what passes for security is really just a mixture of documentation, policy, and a little bit of obscurity. Systems are complicated enough that people assume things are protected. Access is restricted mostly because people don’t know where to look. Credentials are hidden in configuration files or environment variables that nobody outside the team sees.

And that becomes the de facto security posture.

Not deliberate protection.

Just… quiet obscurity.

I’ve lost count of the number of times I’ve been pulled into a system review, or some troubleshooting session, where a secret shows up in a place it absolutely shouldn’t be. An API key sitting in a script. A database password in a config file. An environment file committed to a repository six months ago that nobody noticed.

That sort of thing happens constantly. Not out of malice. Out of convenience. But now we’ve introduced something new into the environment.

Generative AI.

More importantly though, the agentic tooling built around it. Tooling that literally takes actions on your behalf. Tools that can read entire repositories, analyze logs, scan infrastructure configuration, generate code, and help debug systems in seconds. Tools that engineers increasingly rely on as a kind of external thinking partner while they work through problems.

All that benefit comes with AI tools. However, the AI doesn’t care about the secret; it’s just processing text. But the act of pasting it there matters. Because the moment that secret leaves your controlled environment, you no longer know exactly where it goes, how it’s stored, or how long it persists in the LLM provider’s systems.

The mental model a lot of people are using right now is wrong. They treat AI like a scratch pad or an extension of their own thoughts.

It isn’t.

The more accurate model is this: an AI tool is another resource participating in your workflow. Another staff member, effectively.

Except instead of being a person sitting at the desk next to you, it’s a system operated by someone else, running on infrastructure you don’t control, processing information you send to it. Including keys and secrets.

Once you start looking at it that way, a few things become obvious. You wouldn’t casually hand a contractor your production API keys while asking them to help debug something. You wouldn’t drop a full .env file containing service credentials into a conversation with someone who doesn’t actually need those values.

Yet that is exactly the pattern that is quietly emerging with generative AI tools. Especially among new users of said tools! Developers paste configuration files, snippets of infrastructure code, environment variables, connection strings, and logs directly into prompts because it’s the fastest way to get an answer.

It feels harmless. But secrets have a way of spreading through systems once they start moving.

The real issue here is that generative AI doesn’t create security problems. It amplifies the ones that already exist. Problems that the industry has failed (miserably might I add) at solving. If an organization already has sloppy credential management, AI just gives those credentials another place to leak. If engineers already pass secrets around informally to get work done, AI becomes another convenient channel for that behavior.

And because AI tools accelerate everything, they accelerate the consequences too. What used to take hours of searching through documentation can now happen instantly. A repository full of configuration files can be analyzed in seconds. Systems that were once opaque are now far easier to reason about.

The Takeaway (Including secrets!)

The practical takeaway here isn’t that people should stop using AI tools. That’s not realistic, and frankly a career-limiting maneuver at this point. The tools are genuinely useful and they’re going to become a permanent part of how software gets built.

What needs to change – desperately – is operational discipline.

Secrets should never be treated casually, and that includes interactions with generative systems. API keys, tokens, passwords, certificates, environment files, connection strings—none of those belong in prompts or screenshots or debugging sessions with external tools.

If you need to ask an AI for help, scrub the sensitive pieces first. Replace real values with placeholders. Remove anything that grants access to a system. Set up ignore rules for your env files, and don’t let production env values (or vault values, whatever you’re using) leak into your generative AI systems.
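
A minimal pre-paste scrubbing sketch (the patterns are illustrative only; real scanners like gitleaks or trufflehog go far deeper):

```python
import re

# Illustrative patterns only, not a complete secret-detection ruleset.
PATTERNS = [
    (re.compile(r"AKIA[0-9A-Z]{16}"), "<AWS_ACCESS_KEY_ID>"),
    (re.compile(r"(?im)^(\s*\w*(?:api[_-]?key|token|password|secret)\w*\s*[=:]\s*)\S+"),
     r"\1<REDACTED>"),
]

def scrub(text):
    """Replace anything that looks like a credential before pasting it anywhere."""
    for pattern, replacement in PATTERNS:
        text = pattern.sub(replacement, text)
    return text

env = "DB_PASSWORD=hunter2\nAWS_KEY=AKIAABCDEFGHIJKLMNOP\nAPP_NAME=checkout"
print(scrub(env))
# DB_PASSWORD=<REDACTED>
# AWS_KEY=<AWS_ACCESS_KEY_ID>
# APP_NAME=checkout
```

Even a crude filter like this beats pasting a raw .env file into a prompt; a dedicated scanner in your clipboard or CI path is better still.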

Treat every AI interaction the same way you would treat a conversation with another engineer outside your organization, or better yet outside the company (or government, etc.) altogether. Someone you might ask for help, but not someone you hand the keys to the kingdom. Don’t hand them to your AI tooling either.


Expanded Version Control Support in Pulumi Cloud


Your version control provider shouldn’t limit your infrastructure workflows. Pulumi Cloud now works with GitHub, GitHub Enterprise Server, Azure DevOps, and GitLab. Every team gets the same deployment pipelines, PR previews, and AI-powered change summaries regardless of where their code lives.

Add account screen showing GitHub, GitLab, and Azure DevOps as VCS options

What your team can do

Deploy on every push

Connect a repository to a stack, and infrastructure deploys automatically when you push. Configure path filters to trigger only when relevant files change, and manage environment variables and secrets directly in Pulumi Cloud. No external CI/CD pipeline required.

Preview changes on pull requests

Every pull request gets an infrastructure preview so reviewers can see exactly what will change before merging. The preview runs the same Pulumi operations your deployment would, giving your team confidence that a merge won’t break anything.

Neo explains your changes

Neo posts AI-generated summaries on your pull requests explaining what infrastructure changes mean in plain language. Reviewers who aren’t Pulumi experts can still understand the impact of a change without reading resource diffs.

Neo posting an infrastructure change summary on a pull request

Let Neo open pull requests for you

Ask Neo to make infrastructure changes and it opens pull requests directly against your connected repositories. Describe what you want in natural language, and Neo writes the code, opens the PR, and kicks off a preview, all without leaving Pulumi Cloud.

Detect and fix drift

Schedule drift detection to catch out-of-band changes automatically. When someone modifies infrastructure outside of your Pulumi programs, drift detection flags the difference so your team can remediate before it causes issues.

Secure authentication

Pulumi Cloud authenticates with your VCS provider using OIDC or OAuth so no long-lived credentials need to be stored. Short-lived tokens keep your deployment pipelines secure without manual secret rotation.

Set up new projects from your VCS

The new project wizard discovers your organizations, repositories, and branches so you can scaffold and deploy a new stack without leaving Pulumi Cloud. Pick your repo, choose a branch, and you’re ready to deploy.

New project wizard showing repository settings

Getting started

  1. An org admin configures the integration under Settings > Version Control.
  2. Authorize with your VCS provider.
  3. Deploy infrastructure with first-class workflows.

For setup details, see the docs for GitHub, GitHub Enterprise Server, Azure DevOps, and GitLab.

Connect your VCS

The SQL Server Transaction Log, Part 3: The Circular Nature of the Log


(This post first appeared on SQLperformance.com four years ago as part of a blog series, before that website was mothballed later in 2022 and the series was curtailed. Reposted here with permission, with a few tweaks.)

In the second part of this series, I described the structural hierarchy of the transaction log. As this post is chiefly concerned with the Virtual Log Files (VLFs) I described, I recommend you read the second part before continuing.

When all is well, the transaction log will endlessly loop, reusing the existing VLFs. This behavior is what I call the circular nature of the log. Sometimes, however, something will happen to prevent this, and the transaction log grows and grows, adding more and more VLFs. In this post, I’ll explain how all this works, or sometimes doesn’t.

VLFs and Log Truncation

All VLFs have a header structure containing metadata about the VLF. One of the most important fields in the structure is the status of the VLF, and the values we’re interested in are zero, meaning the VLF is inactive, and two, meaning the VLF is active. It’s important because an inactive VLF can be reused, but an active one cannot. Note that a VLF is wholly active or wholly inactive.

A VLF will remain active while required log records are in it, so it can’t be reused and overwritten (I’ll cover log records themselves next time). Examples of reasons why log records may be required include:

  • There’s a long-running transaction the log records are part of, so they cannot be released until the transaction has committed or has finished rolling back
  • A log backup hasn’t yet backed up those log records
  • That portion of the log has not yet been processed by the Log Reader Agent for transactional replication or Change Data Capture
  • That portion of the log hasn’t yet been sent to an asynchronous database mirror or availability group replica

It’s important to note that if there are no reasons for a VLF to remain active, it won’t switch to being inactive again until a process called log truncation occurs – more on this below.

Using a simple hypothetical transaction log with only five VLFs and VLF sequence numbers starting at 1 (remember from last time that in reality, they never do), when the transaction log is created, VLF 1 is immediately marked as active, as there always has to be at least one active VLF in the transaction log—the VLF where log blocks are currently being written to. Our example scenario is shown in Figure 1 below.

(Figure 1: Hypothetical, brand-new transaction log with 5 VLFs, sequence numbers 1 through 5 (my image))

As more log records are created, and more log blocks are written to the transaction log, VLF 1 fills up, so VLF 2 has to become active for more log blocks to be written to, as shown in Figure 2 below.

(Figure 2: Activity moves through the transaction log (my image))

SQL Server tracks the start of the oldest uncommitted (active) transaction, and this LSN is persisted on disk every time a checkpoint operation occurs. The LSN of the most recent log record written to the transaction log is also tracked, but it’s only tracked in memory, as there’s no way to persist it to disk without running into various race conditions. That doesn’t matter, as it’s only used during crash recovery, and SQL Server can work out the LSN of the “end” of the transaction log during crash recovery. Checkpoints and crash recovery are topics for future posts in the series.

Eventually, VLF 2 will fill up, and VLF 3 will become active, and so on. The crux of the circular nature of the transaction log is that earlier VLFs in the transaction log become inactive so they can be reused. This is done by a process called log truncation, which is also commonly called log clearing. Unfortunately, both of these terms are terrible misnomers because nothing is actually truncated or cleared.

Log truncation is simply the process of examining all the VLFs in the transaction log and determining which active VLFs can now be marked as inactive again, as none of their contents are still required by SQL Server. When log truncation is performed, there’s no guarantee any active VLFs can be made inactive—it entirely depends on what’s happening with the database.

There are two common misconceptions about log truncation:

  1. The transaction log gets smaller (the “truncation” misconception). No, it doesn’t – there’s no size change from log truncation. The only thing capable of making the transaction log smaller is an explicit DBCC SHRINKFILE.
  2. The inactive VLFs are zeroed out in some way (the “clearing” misconception). No – nothing is written to the VLF when it’s made inactive except for a few fields in the VLF header.

Figure 3 below shows our transaction log where VLFs 3 and 4 are active, and log truncation was able to mark VLFs 1 and 2 inactive.

(Figure 3: Log truncation marks earlier VLFs as inactive (my image))

When log truncation occurs depends on which recovery model is in use for the database:

  • Simple model: log truncation occurs when a checkpoint operation completes
  • Full model or bulk-logged model: log truncation occurs when a log backup completes (as long as there isn’t a concurrent full or differential backup running, in which case log truncation is deferred until the data backup completes)

There are no exceptions to this.
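
Those two rules, plus the deferral case, fit in a few lines (a paraphrase of the rules above, not any actual API):

```python
def log_truncation_point(recovery_model, data_backup_running=False):
    """When log truncation is attempted, per the rules above (simplified)."""
    if recovery_model == "SIMPLE":
        return "when a checkpoint completes"
    if recovery_model in ("FULL", "BULK_LOGGED"):
        if data_backup_running:
            return "deferred until the full/differential backup completes"
        return "when a log backup completes"
    raise ValueError(f"unknown recovery model: {recovery_model!r}")

print(log_truncation_point("SIMPLE"))
print(log_truncation_point("FULL", data_backup_running=True))
```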

Circular Nature of the Log

To avoid the transaction log having to grow, log truncation must be able to mark VLFs inactive. The first physical VLF in the log must be inactive for the transaction log to have its circular nature.

Consider Figure 4 below, which shows VLFs 4 and 5 are in use and log truncation has marked VLFs 1 through 3 as inactive. More log records are generated, more log blocks are written into VLF 5, and eventually, it fills up.

(Figure 4: Activity fills up the highest physical VLF in the transaction log (my image))

At this point, the log manager for the database looks at the status of the first physical VLF in the transaction log, which in our example is VLF 1, with sequence number 1. VLF 1 is inactive, so the transaction log can wrap around and begin filling again from the start. The log manager changes the first VLF to active and increases its sequence number to be one higher than the current highest VLF sequence number. So it becomes VLF 6, and logging continues with log blocks being written into that VLF. This is the circular nature of the log, as shown below in Figure 5.

(Figure 5: The circular nature of the transaction log and VLF reuse (my image))
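
As a toy model (not actual SQL Server internals), the wraparound and renumbering described above can be sketched like this:

```python
# Toy model of circular VLF reuse. Status 0 = inactive, 2 = active,
# matching the VLF header values described earlier in the post.
class TransactionLog:
    def __init__(self, n_vlfs):
        self.status = [0] * n_vlfs
        self.seqno = [0] * n_vlfs
        self.status[0], self.seqno[0] = 2, 1   # one VLF is always active
        self.current = 0                       # VLF log blocks are written to

    def fill_current(self):
        """Current VLF is full: activate the next one, wrapping around."""
        nxt = (self.current + 1) % len(self.status)
        if self.status[nxt] != 0:
            raise RuntimeError("next VLF still active: the log must grow")
        self.status[nxt] = 2
        self.seqno[nxt] = max(self.seqno) + 1  # one higher than current max
        self.current = nxt

    def truncate(self, releasable):
        """Log truncation: mark no-longer-required active VLFs inactive."""
        for i in releasable:
            if i != self.current:
                self.status[i] = 0

log = TransactionLog(5)
for _ in range(4):           # activity fills VLFs 1-4, activating 2 through 5
    log.fill_current()
log.truncate([0, 1, 2])      # truncation frees the first three VLFs
log.fill_current()           # wraps: the first physical VLF is reused as VLF 6
print(log.seqno)             # [6, 2, 3, 4, 5]
```

If `truncate` had freed nothing, the final `fill_current` would hit the "log must grow" branch, which is exactly the growth scenario covered next.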

When Things Go Wrong

When the first physical VLF in the transaction log isn’t inactive, the transaction log cannot wrap around, so it will grow (as long as it’s configured to do so and there is sufficient disk space). This often happens because there’s something preventing log truncation from deactivating VLFs. If you find the transaction log for a database is growing, you can query SQL Server to find out if there’s a log truncation problem using this simple code below:

SELECT [log_reuse_wait_desc]
FROM [master].[sys].[databases]
WHERE [name] = N'MyDatabase';

If log truncation was able to deactivate one or more VLFs, then the result will be NOTHING. Otherwise, you’ll be given a reason why log truncation couldn’t deactivate any VLFs. There is a long list of possible reasons described here in the section Factors that can delay log truncation.

It’s important to understand the semantics of what the result is: it’s the reason log truncation couldn’t do anything the last time it tried to run. For instance, the result might be ACTIVE_BACKUP_OR_RESTORE, but you know that that long-running full backup has finished. This just means that the last time log truncation was attempted, the backup was still running.

In my experience, the most common reason for log truncation being prevented is LOG_BACKUP; i.e., go perform a log backup! But there’s also an interesting, weird behavior with LOG_BACKUP. If you continually see the result LOG_BACKUP but you know log backups are happening successfully, it’s because there is very little activity in the database and the current VLF is the same as it was the last time a log backup was performed. So, LOG_BACKUP means “go perform a log backup” or “all of the log records backed up are from the current VLF, so it couldn’t be deactivated.” When the latter happens, it can be confusing.

Circling Back…

Maintaining the circular nature of the transaction log is very important to avoid costly log growths and the need to take corrective action. Usually, this means ensuring log backups are happening regularly to facilitate log truncation and sizing the transaction log to be able to hold any large, long-running operations like index rebuilds or ETL operations without log growth occurring.

In the next part of the series, I’ll cover log records, how they work, and some interesting examples.

The post The SQL Server Transaction Log, Part 3: The Circular Nature of the Log appeared first on Paul S. Randal.


Daily Reading List – March 9, 2026 (#737)


I had a fun weekend, with no option to do anything work related. Baseball, boating, and shenanigans with family. I probably get more inspiration for things to do at work because I’m living an enjoyable life outside of it. At least that’s what I tell myself.

[blog] Designing MCP tools for agents: Lessons from building Datadog’s MCP server. Some hard-earned lessons here! There are at least 3-4 strong pieces of advice in this Datadog post.

[article] The revenge of SQL: How a 50-year-old language reinvents itself. Is SQL hot again? Is it about relational databases solving most use cases nowadays? There’s also SQL on the frontend, better SQL clients, and more.

[article] EY hit 4x coding productivity by connecting AI agents to engineering standards. Better models matter. But applying a smart context is a difference maker regardless of model.

[blog] Game on with Spanner: How Playstation achieves global scale with 91% less storage, 50% lower costs. Cool story. A high performing database engine can end up saving you a ton of money and complexity.

[blog] How Do Large Companies Manage CI/CD at Scale? Me building and running a simple deployment pipeline is not “scalable CI/CD.” What do teams do when they have lots of apps, pipelines, and targets? Some insight here.

[blog] Go for Backend Development — Why We Bet on It. Very strong, defensible case for using Go.

[article] When Using AI Leads to “Brain Fry.” Across roles, people are using AI past the point their brains can handle. What leads to brain fry, and how to prevent it?

[blog] Hardware-Enabled Software and the Next Generation of Vertical AI. I don’t pay a lot of attention to this space, so this was educational.

[blog] Firebase A/B Testing is now available for the web. The functionality was available to mobile devs for a while, and now web users can take advantage of this powerful system for running experiments.

[blog] Terminals Are Cool Again. Maybe we should be building more terminal apps? I’m not sold that they’re more accessible than a web or desktop app. But there’s no doubt they’re lighter weight and can be more efficient to use.

[blog] gRPC on GKE for Fun & Profit Part 1 — An Overview. gRPC is a key technology within Google, and also many other companies that care about performance between services. See part 2 of this series as well.

Want to get this update sent to you every day? Subscribe to my RSS feed or subscribe via email below:




New GUI in Windows Terminal to create Actions and more! - Developer News 10/2025

From: Noraa on Tech
Duration: 2:16
Views: 4

WinUI Gallery 2.5, Windows Terminal 1.22 Preview and more releases. All covered in this episode!

-----

Links

Visual Studio Code
• Visual Studio Code Release March 2026 (version 1.111) - https://code.visualstudio.com/updates/v1_111?WT.mc_id=MVP_274787
GitHub
• Copilot Memory now on by default for Pro and Pro+ users in public preview - https://github.blog/changelog/2026-03-04-copilot-memory-now-on-by-default-for-pro-and-pro-users-in-public-preview/
• Quick access to merge status in pull requests is in public preview - https://github.blog/changelog/2026-03-05-quick-access-to-merge-status-in-pull-requests-in-public-preview/
• Hierarchy view improvements and file uploads in issue forms - https://github.blog/changelog/2026-03-05-hierarchy-view-improvements-and-file-uploads-in-issue-forms/
Windows
• Announcing WinUI Gallery 2.8 - https://devblogs.microsoft.com/ifdef-windows/announcing-winui-gallery-2-8/?WT.mc_id=MVP_274787
• Windows Terminal Preview 1.25 Release - https://devblogs.microsoft.com/commandline/windows-terminal-preview-1-25-release/?WT.mc_id=MVP_274787

-----

🐦X: https://x.com/theredcuber
🐙Github: https://github.com/noraa-junker
📃My website: https://noraajunker.ch
