Microsoft's Customer Zero blog series gives an insider view of how Microsoft builds and operates Microsoft on our own trusted, enterprise-grade agentic platform. Learn from our engineering teams' real-world lessons, architectural patterns, and operational strategies for building, operating, and scaling pressure-tested AI apps and agent fleets across the organization.
As teams move from experimenting with AI agents to running them in production, the questions they ask begin to change. Early prototypes often focus on whether an agent can reason to generate useful output. But once agents are placed into real systems where they continuously need to serve users and respond to events, new concerns quickly take center stage: reliability, scale, observability, security, and long‑running operations.
A common misconception at this stage is to think of an agent as a simple chatbot wrapped around an API. In practice, an AI agent is something very different. It is a service that listens, thinks, and acts, ingesting unstructured inputs, reasoning over context, and producing outputs that may span multiple phases. Treating agents as services means teams often need more than they initially expect: dependable compute, strong security, and real-time visibility to run agents safely and effectively at scale.
When we kick off an agent loop, we provide input that informs the context it recalls for the task, the data it connects to, the tools it calls, and the reasoning steps it outlines for itself to generate an output. Agents differ from traditional services in hosting, scaling, identity, security, and observability: an agent is probabilistic by nature, yet it needs secure, auditable access to many resources while delivering the responsiveness users expect from any software.
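To make that loop concrete, here is a minimal, deliberately generic sketch of the listen-think-act cycle. It is not the Foundry Agent Service API; every type and member below (ITool, AgentLoop, the reason callback) is hypothetical and stands in for whatever model client and tool integrations your stack provides.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.Threading.Tasks;

    // Hypothetical tool abstraction: anything the agent can call during a run.
    public interface ITool
    {
        string Name { get; }
        Task<string> InvokeAsync(string arguments);
    }

    public class AgentLoop
    {
        private readonly IReadOnlyList<ITool> _tools;
        private readonly Func<string, Task<string>> _reason; // model call: context in, next step out

        public AgentLoop(IReadOnlyList<ITool> tools, Func<string, Task<string>> reason)
        {
            _tools = tools;
            _reason = reason;
        }

        public async Task<string> RunAsync(string input)
        {
            var context = new StringBuilder(input);
            while (true)
            {
                // "Think": ask the model for the next step given everything gathered so far.
                var step = await _reason(context.ToString());

                // "Act": if the step names a tool, call it and fold the observation back into
                // the context; otherwise treat the model output as the final answer.
                var tool = _tools.FirstOrDefault(t => step.StartsWith(t.Name));
                if (tool is null)
                    return step;

                context.AppendLine(await tool.InvokeAsync(step));
            }
        }
    }

Even in this toy form, the infrastructure implications are visible: each iteration may call out to external tools, the loop can run for an unpredictable number of turns, and the whole thing needs an identity, network access, and telemetry around it.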
This isn't the first time the software industry has needed to evolve its thinking around infrastructure. When modern application architectures began shifting from monolithic apps toward microservices, existing infrastructure wasn't built with that model in mind. As systems were decomposed into independent services, teams quickly discovered they needed new runtime architecture that properly accommodated microservice needs. The modern app era brought new levels of performance, reliability, and scalability, but it also required rebuilding app infrastructure around container orchestration and new operational patterns.
AI agents represent a similar inflection. Infrastructure designed for request-response applications or stateless workloads wasn't built with long-running, tool-calling, AI-driven workflows in mind. As the builders of Foundry Agent Service, we knew traditional architectures wouldn't hold up to bursty agentic workflows that need to aggregate data across sources, connect to several tools simultaneously, and reason through execution plans to produce output. Rather than build new infrastructure from scratch, we chose Azure Container Apps. With over a million apps hosted on Azure Container Apps, it was the tried-and-true solution we needed to keep our team focused on building agent intelligence and behavior instead of the plumbing underneath.
Foundry Agent Service is Microsoft’s fully managed platform for building, deploying, and scaling AI agents as production services. Builders start by choosing their preferred framework or immediately building an agent inside Foundry, while Foundry Agent Service handles the operational complexity required to run agents at scale.
Let’s use the example of a sales agent in Foundry Agent Service. You might have a salesperson who prompts a sales agent with “Help me prepare for my upcoming meeting with customer Contoso.” The agent is going to kick off several processes across data and tools to generate the best answer: Work IQ to understand Teams conversations with Contoso, Fabric IQ for current product usage and forecast trends, Foundry IQ to do an AI search over internal sales materials, and even GitHub Copilot SDK to generate and execute code that can draft PowerPoint and Word artifacts for the meeting. And this is just one agent; more than 20,000 customers rely on Foundry Agent Service.
At the core of Foundry Agent Service is a dedicated agent runtime built on Azure Container Apps that meets our demands for production agents. Running the agent runtime on flexible cloud infrastructure lets builders focus on making powerful agent experiences without worrying about the compute and configuration under the hood.
This runtime is built around five foundational pillars: reliability, scale, observability, security, and support for long-running operations.
Together, these pillars define what it means to run AI agents as first‑class production services.
Building and operating agent infrastructure from scratch introduces unnecessary complexity and risk. Azure Container Apps has been pressure-tested at Microsoft scale, proving to be a powerful, serverless foundation for running AI workloads, and it aligns naturally with the needs of an agent runtime.
It provides serverless, event‑driven scaling with fast startup and scale‑to‑zero, which is critical for agents with unpredictable execution patterns. Execution is secure by default, with built‑in identity, isolation, and security boundaries enforced at the platform layer. Azure Container Apps natively supports running MCP servers and executing full agent workflows, while Container Apps jobs enable on‑demand tool execution for discrete units of work without custom orchestration.
For scenarios involving AI‑generated or untrusted code, dynamic sessions allow execution in isolated sandboxes, keeping blast radius contained. Azure Container Apps also supports running model inference directly within the container boundary, helping preserve data residency and reduce unnecessary data movement.
Make infrastructure flexible with serverless architecture. AI systems move too fast to create infrastructure from scratch. With bursty, unpredictable agent workloads, sub‑second startup times and serverless scaling are critical.
Simplify heavy lifting. Developers should focus on agent behavior, tool invocation, and workflow design instead of infrastructure plumbing. Using trusted cloud infrastructure, pain points like making sure agents run in isolated sandboxes, properly applying security policy to agent IDs, and ensuring secure connections to virtual networks are already solved. When you simplify the operational overhead, you make it easier for developers to focus on meaningful innovation.
Invest in visibility and monitoring. Strong observability enables faster iteration, safer evolution, and continuous self‑correction for both humans and agents as systems adapt over time.
1184. This week, we look at the history of lingua francas, from the original mix of Italian, French, Spanish, Arabic, and Turkish used during the Crusades to today's global English. Plus, we look at whether it's wrong to use "who" for animals, "that" instead of "who" for people, and "whose" for inanimate objects.
The lingua franca segment was written by Alexandra Aikhenvald, a Professor and Australian Laureate Fellow at Jawun Research Institute, CQ University in Australia. It originally ran on The Conversation and appears here through a Creative Commons license.
You need a property that validates its input. In C# 13 and earlier, that means writing a private backing field, a get accessor that returns it, and a set accessor that validates the value before storing it. Three moving parts for one property:
public class Greeting
{
    private string _msg = "Hello";

    public string Message
    {
        get => _msg;
        set => _msg = value ?? throw new ArgumentNullException(nameof(value));
    }
}
One property, one validation rule, three lines of ceremony. The backing field _msg exists only to give set something to write to and get something to read from. It carries no meaning of its own.
Multiply this across a configuration class with five or six validated properties, and the noise adds up fast.
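For comparison, here is a hedged sketch of where the language takes this: assuming you can target C# 14 and its field contextual keyword, the compiler synthesizes the backing field for you while you keep the validating setter, so the same property needs no explicit _msg at all.

    public class Greeting
    {
        // 'field' refers to the compiler-generated backing field for this property.
        public string Message
        {
            get;
            set => field = value ?? throw new ArgumentNullException(nameof(value));
        } = "Hello";
    }

The validation rule stays exactly where it was; only the ceremony goes away.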
There are five ways to ingest SharePoint data into Microsoft Fabric: Dataflow Gen2, Copy Jobs, shortcuts, pipelines, and notebooks. Each makes different trade-offs around capacity cost, incremental loading, and how well they cope with multi-environment setups - and the right choice depends entirely on what you're trying to do.
This post walks through all five, with practical guidance on when to reach for each one. It's based on a great session by Laura Graham-Brown at SQLBits 2026 - and let's be honest, SharePoint isn't going anywhere in most organisations, so knowing how to get that data into Fabric efficiently is worth doing well.
Pretty much all of the ingestion options below can work with either a SharePoint list or a folder (which sometimes actually means an entire document library). Let's walk through each one.

Dataflow Gen2 provides a visual, low-code way to ingest and transform data from SharePoint. The demo in the session worked smoothly, and it's a familiar experience if you've used Power Query before.
One important thing to be aware of: Dataflows eat capacity. They're convenient, but if you're running a lot of them, the capacity consumption can add up. Keep this in mind if you're cost-conscious!
An interesting aside: you can actually use Dataflows to ingest from SharePoint on-premises (which is, apparently, still very much a thing in some organisations). This might be the easiest path if you're dealing with legacy on-prem SharePoint installations.
Copy Jobs are a relatively simple way to move data from SharePoint into Fabric. When you select a list copy, you can set up incremental ingestion - this uses the modified column to only update records that have changed, performing a merge on ID, as described in the Microsoft Fabric Copy Job documentation.
One thing to watch out for: the default schedule for Copy Jobs is every 15 minutes. Depending on your use case, this might be more frequent (and more expensive) than you need. Make sure to review and adjust the schedule to match your actual requirements.
You can create a shortcut directly to a SharePoint folder or document library. This makes the data available in your Lakehouse without physically copying it - the data stays in SharePoint and is referenced in place.
However, there's a significant limitation right now: SharePoint shortcuts can't use variable libraries. This means that if you're working across multiple environments (Dev, Test, Prod), you can't use variable libraries to point the shortcut to different SharePoint locations per environment. The current SharePoint shortcut capabilities and constraints are covered in the Microsoft Fabric OneLake shortcuts documentation.
Using pipelines, you can set up a copy activity to ingest SharePoint data. A neat trick highlighted in the session: if you reference a SharePoint list as a folder, it actually lists the files in that folder including the last modified date. This means you can build logic to re-ingest only files that have changed - essentially implementing your own incremental pattern.
Notebooks offer the most flexibility, with two main approaches:
Using the Microsoft Graph API - a code-first approach where you call Graph to access SharePoint data programmatically. If you've done something similar in Azure Synapse before, the pattern will be familiar (a hedged sketch of the Graph call follows below).
Using shortcuts via the Lakehouse - you can add a shortcut to a SharePoint folder or document library, and then read the data directly from the Lakehouse within your notebook. This combines the simplicity of shortcuts with the processing power of Spark.
The notebook approach is the most flexible, but also requires the most technical skill. It's a good fit when you need to do something more complex that the other options don't easily support.
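To make the Graph option a little more concrete, here is a hedged sketch of the call involved. In a Fabric notebook you would typically make the equivalent request from Python/PySpark; C# (with the Azure.Identity package) is used here only to illustrate the authentication flow and the endpoint, and the tenant, client, and site identifiers are placeholders for your own app registration and SharePoint site.

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Threading.Tasks;
    using Azure.Core;
    using Azure.Identity;

    class GraphSharePointListing
    {
        static async Task Main()
        {
            // Acquire an app-only token for Microsoft Graph (placeholders: supply your own values).
            var credential = new ClientSecretCredential("<tenant-id>", "<client-id>", "<client-secret>");
            var token = await credential.GetTokenAsync(
                new TokenRequestContext(new[] { "https://graph.microsoft.com/.default" }));

            using var http = new HttpClient();
            http.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", token.Token);

            // List the files in the site's default document library; each item carries a
            // lastModifiedDateTime you can use to build your own incremental pattern.
            var json = await http.GetStringAsync(
                "https://graph.microsoft.com/v1.0/sites/<site-id>/drive/root/children");
            Console.WriteLine(json);
        }
    }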
Here's a rough guide to choosing between the options: Dataflow Gen2 when you want a low-code, Power Query-style experience and can live with the capacity cost; Copy Jobs when you want a simple copy with built-in incremental ingestion; shortcuts when you'd rather reference the data in place than copy it (keeping the variable library limitation in mind); pipelines when you want to build your own incremental logic around copy activities; and notebooks when you need maximum flexibility and have the technical skills to match.
There are more options than you might expect for getting SharePoint data into Fabric, and each has its place. The right choice depends on your specific requirements around cost, complexity, change detection, and multi-environment support.
The biggest gap right now is the lack of variable library support for SharePoint shortcuts, which limits their usefulness in multi-environment setups. But given the pace of Fabric development, I'd expect this to be addressed soon.
As always, the practical reality of working with SharePoint is a bit messy - but at least we now have a decent range of tools to tame the mess!
In reply to an email asking about learning software design skills as a research physicist:
I was attached to a bioinformatics lab early in my career, so I think I understand what you are talking about, the phenomenon of “scientific code”! My thoughts:
First meta observation is that “software design” is something best learned by doing. While I had some formal “design” courses at the University, and I was even “an architect” for our course project, that stuff was mostly make-believe, kindergarteners playing fire-fighters. What really taught me how to do stuff was an accident of my career, where my second real project (IntelliJ Rust) propelled me to a position of software leadership, and made design my problem. I did make a few mistakes in IJ Rust, but nothing too horrible, and I learned a lot. So that’s good news — software engineering is simple enough that an inquisitive mind can figure it out from first principles (and reading random blog posts).
Second meta observation, the bad news: Conway’s law is important. Softwaregenesis repeats the social architecture of the organization producing software. Or, as put eloquently by neugierig,
If I were to summarize what I learned in a single sentence, it would be this: we talk about programming like it is about writing code, but the code ends up being less important than the architecture, and the architecture ends up being less important than social issues.
I suspect that the difference you perceive between industrial and scientific software is not so much about software-building knowledge, but rather about the field of incentives that compels people to produce the software. Something like “my PhD needs to publish a paper in three months” is perhaps a significant explainer?
Two things you can do here. One, at times you get a chance to design or nudge an incentive structure for a project. This happens once in a blue moon, but is very impactful. This is the secret sauce behind TIGER_STYLE, not the set of rules per se, but the social context that makes this set of rules a good idea.
Two, you can speedrun the four stages of grief to acceptance. Incentive structure is almost never what you want it to be, but, if you can’t change it, you can adapt to it. This is also true about most industrial software projects — there’s never a time to do a thing properly, you must do the best you can, given constraints.
Let me use rust-analyzer as an example. The physical reality of the project is that it’s simultaneously very deep (it’s a compiler! Yay!) and very wide (opposite to an LLM, a classical IDE is a lot of purpose-built special features). The social reality is that “deep compiler” can attract a few brilliant dedicated contributors, and that the “breadth features” can be a good fit for an army of weekend warriors, people who learn Rust, who don’t have sustained capacity to participate in the project, but who can sink an hour or two to scratch their own itch.
My insistence that rust-analyzer doesn't require building rustc, that it builds on stable, that it doesn't have any C dependencies, and that the entire test suite takes seconds, was in the service of the goal of attracting high-impact contributors. I was wrangling the build system to make sure people can work on the borrow checker without thinking about anything else.
To attract weekend warriors, the internals of rust-analyzer are split into multiple independent features, where each feature is guarded by catch_unwind at runtime. The thinking was that I explicitly don't want to care too much about quality there, that the bar for getting a feature PR in is "happy path works & tested". It's fine if the code crashes, it will only attract further contributors, provided that:
In contrast, when working on the core spine that supports those features, I was considerably more pedantic about quality.
A word of caution about adapting to, rather than fixing, the incentive structure: the future is uncertain, and tends to happen in the least convenient manner. The original motivation behind the rust-analyzer experiment was to avoid the need to write a parallel compiler (the one in IntelliJ Rust), and to prototype a better architecture for LSP, so that the learnings could be backported to rustc. So, even in core (especially in core), the code was very experimental. Oh well. Stuck with one more compiler now, I guess?
I might hazard a guess that something similar happened to the uutils project, which started as the primary destination for people learning Rust, and ended up as Ubuntu's coreutils implementation.
Third, now to some concrete recommendations. Sadly, I don’t know of a single book I can recommend which contains the truths. I suspect one can only find such a book in an apocryphal short story by Borges: practice seems to be an essential element here. But here are some things worth paying attention to:
The Boundaries talk by Gary Bernhardt is an all-time favorite. It contains solid object-level advice, and, for me, it triggered the meta inquiry.
How to Test is something I wish I had. I immediately understood the importance of testing, but it took me a long time to grow arrogant enough to admit that most widely-cited testing advice is shamanistic snake-oil, and to conceptualize what actually works.
The ØMQ guide and, more generally, the writings of Pieter Hintjens introduced me to Conway's Law thinking. That "feature development" architecture of rust-analyzer? Optimistic merging, applied.
Reflections on a decade of coding by Jamii is excellent, goes very meta. It is intentionally the first of my links.
Ted Kaminski's blog is the closest there is to a coherent theory of software development, appropriately framed as a set of notes for a non-existent book!
As for actual books, Software Engineering at Google and Ousterhout's A Philosophy of Software Design are often recommended. They are good. SWE at Google, in particular, helped me with a couple of important names. But they weren't groundbreaking for me.