Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Leaked images show Microsoft’s new Xbox Cloud Gaming controller


I revealed earlier this year that Microsoft was working on a new Xbox controller that includes Wi-Fi to connect directly to Xbox Cloud Gaming servers. It now looks like that controller has leaked, thanks to some new images from Brazil's Anatel regulator.

Tecnoblog published the images today, showing a new, smaller Xbox controller that looks similar to third-party options from 8BitDo and HyperX. The controller is made by Microsoft and includes 2.4GHz and 5GHz Wi-Fi connectivity, Bluetooth 5.3, and a USB-C port. There's also what appears to be some kind of pairing button at the top of the controller, as well as a D-Pad, bumpers, and triggers.

Read the full story at The Verge.


Why Doesn’t Anyone Teach Developers About Context Management?


This is the sixth article in a series on agentic engineering and AI-driven development. Read part one here, part two here, part three here, part four here, and part five here.

I think context management is one of the most important skills in AI-driven development, and it’s weird that compared to other AI-related topics, almost nobody talks about it. We talk about prompt engineering, about which model to use, about agentic workflows and tool use. But more than anything else, the thing that actually determines whether your AI session produces good work or mediocre work is how well you manage context (or if you even do it at all!).

A lot of developers using AI tools treat all this “context” talk as AI jargon that can be dismissed, and it’s not hard to understand why. AI development tools have gotten so easy that an experienced developer can be incredibly effective just by combining vibe coding with critical thinking (that’s the central idea behind the Sens-AI Framework), without really thinking about context at all. That’s ironic, because despite all the “I’m functionally illiterate but I just vibe coded an entire multitenant SaaS platform” articles, and despite everyone’s general concern that AI will put all developers out of work, the development skills you’ve been working on for years make you especially effective at writing code with AI—and context management is where those skills really shine.

Just to make sure we’re all on the same page, context is (basically) everything the AI is thinking about right now: your prompt, the conversation so far, the files it’s read, the decisions you’ve made together. When you start a fresh session with an AI, its context is wiped clean, and it starts over with just the initial instructions it’s been given. Managing context is central to building AI agents and skills. But it’s also really important when you’re using tools like Claude Code, Cursor, or Copilot for day-to-day development work. Context is typically measured in tokens, and there’s a finite amount of it. When the context window, or the maximum amount of information (input and output tokens) an AI model can process and retain at once, fills up, the AI starts losing track of things, and that’s when you start to see it give wrong and weird answers.

Unfortunately, a lot of developers read paragraphs like the last one and their eyes glaze over. Somehow it gets classified in the same part of our brains as learning how our build systems work: boring stuff we don’t really want to think about because it takes us away from “real” programming. That’s a shame, because when we don’t understand the basics of how context works, we waste a lot of time.

For example, here’s something I see developers do all the time that they absolutely shouldn’t. They’re deep into an AI coding session, and the AI has built up a detailed understanding of their codebase (e.g., it’s noticed patterns, it’s making good decisions, etc.). Then they start seeing “Compacting conversation” messages, or they notice the little context usage indicator in Cursor or Copilot filling up, and they don’t really know what that means. But they learned that closing the session and starting a new one seems to fix the problem. Unfortunately, all they’ve done is trade compaction for total amnesia. The new session just keeps going, producing output that looks fine, but it’s giving worse answers and generating worse code because it’s working from incomplete information.

The really weird thing is that I was writing about something similar all the way back in 2006, long before AI was around, in Applied Software Project Management: “Missing requirements are especially insidious because they’re difficult to spot.” I was writing about requirements, not AI context, but the problem is the same. I’ve written about how prompt engineering is requirements engineering, and this is another place where the parallel holds up. When a requirement is missing, there’s no artifact to flag it, you just end up with code that doesn’t do what it’s supposed to do. When context is missing from an AI session, there’s no error message telling you what the AI forgot; you just end up with worse answers.

The cost of poor context management is actually measurable. A developer on Microsoft’s Dev Blog recently timed his own reorientation overhead and found he was spending over an hour a day just reexplaining things to his AI that it had known in a previous session. He’s not alone. There are now entire frameworks and managed services dedicated to giving agents persistent memory, from lightweight CLIs that query Copilot’s local session database to managed memory services from Cloudflare. Some of these tools are genuinely useful, but they’re solutions you need to evaluate, integrate, and maintain before they help you.

My goal in this article and the next is to give you four specific things you can do today, using whatever AI tools you’re already working with. This article covers the problem: why context management matters and how context loss affects the quality of your AI’s output. The next article covers the specific practices that emerged from building the Quality Playbook and Octobatch, things you can bring back to your own prompts, skills, and agents immediately. I’ll use real examples from those projects, because I think they’ve got some good examples that you can draw on.

We get AI wrong in both directions

I think the through line in all of this is that developers both overestimate and underestimate AI. We overestimate how much it can hold in its memory and its ability to remember things and make decisions for us. So we’ll just cram a whole bunch of material into the context window and assume the AI will work it out, and then get annoyed when it hallucinates or forgets.

On the other hand, we massively underestimate its ability as an orchestrator. Your prompt doesn’t just have to ask a question or ask the AI to generate something. You can give it a multistep workflow where each step writes its results to files, and the AI will coordinate the whole thing, spinning off subtasks and picking up where it left off if something breaks.

When developers don’t take either of those things seriously (context management or orchestration), you get a specific cycle. They treat the context window as infinite and cram everything in. Then when the session gets too long and the AI starts losing track, they throw it all away and start fresh. They never consider the alternative, which is designing the workflow so the AI works from externalized files across independent sessions.

I discovered this while building the Quality Playbook. The context management was working so well inside my sessions that I realized the sessions themselves were the bottleneck. I was running the playbook in a single prompt. I think I had a record of over 15 million tokens in a single Copilot GPT-5.4 session that ran for hours, and I did eight of them in parallel. Which incidentally is why I got rate-limited for 54 hours from Copilot, which is completely fair.

The playbook was writing everything down to files as it went, which is why those runs could last that long at all. But I didn’t want that behavior. Running 15 million tokens in a single session is expensive, and if you’re on pay-as-you-go API tokens instead of a flat-rate plan like Copilot or Claude Max or Cursor, that kind of usage can be a real shock. I wanted to make the playbook available to developers who don’t want to burn that much at once. And because the context was already externalized to files, splitting into independent phases turned out to be easy.

Ask the AI to write its context down along the way

Before I get into how the pipeline splits things up, I want to talk about the practice that made the split possible in the first place: storing development context in files as you go.

I don’t mean asking the AI to export its notes at the end of a session, or writing up a “lessons learned” document after the fact. I mean baking it into the actual instructions you give the AI from the start, so it’s continually writing and updating context as it works. For Octobatch, the batch LLM orchestrator that was my first experiment in agentic engineering (I wrote about the development process in “The Accidental Orchestrator”), I had the AI write developer context in every folder, and that really made it easy to spin up a new session.

Here’s what that looks like in practice. Every new Claude Code session on Octobatch starts with a single line: “Read ai_context/DEVELOPMENT_CONTEXT.md and bootstrap yourself to continue development.” That file contains a loading sequence: read this first, then fan out to component-level CONTEXT.md files in scripts/, tui/, pipelines/, each describing its own subsystem at the right level of detail. By the time the AI finishes reading, it knows what the project is, how it’s built, what’s currently in progress, and what the active bugs are.
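As a rough sketch, the top of a bootstrap file like that might look something like this (the section names and wording here are illustrative, not the literal Octobatch file):

# DEVELOPMENT_CONTEXT.md

## Loading sequence
1. Read this file completely before doing anything else.
2. Then read scripts/CONTEXT.md, tui/CONTEXT.md, and pipelines/CONTEXT.md for subsystem-level detail.

## What this project is
...

## How it's built
...

## Current work in progress
...

## Active bugs
...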

I think of this as shifting left. Instead of putting constraints in every prompt (“don’t use additionalProperties: false,” “always test with --limit 3”), those rules live in the CONTEXT.md files. The prompt stays clean because the documentation does the heavy lifting.

And updating context files is part of every task. Before we commit anything, I have the AI review the context files and make sure they reflect what we just did. If we added a feature or fixed a bug, the context file should reflect that before we commit. Stale context causes the same kinds of problems as stale documentation, except it’s worse because the AI is actually relying on it to make decisions.

I want to be clear exactly what I mean by “development context.” Specifically, it’s the information a new AI session needs to get up to speed: what the project is, how it’s built, and what decisions have been made along the way. Tools like Claude Code read development context from files like AGENTS.md (and you can actually go to that website to learn more) at the start of every session, and if you do a thorough enough job of building up your development context and keeping it up-to-date, you can get them fully bootstrapped. They’re the blueprints for your AI sessions. I wrote in Applied Software Project Management that building software without requirements is similar to building a house without blueprints. Running AI sessions without externalized context is the same mistake. You’re relying on what’s in someone’s head instead of what’s written down. And when you’re working with AI, “someone’s head” is a context window that’s going to get compacted or thrown away.

The most important thing is that what’s in my head matches what’s in the AI’s head. The context file is just a convenient way to help us figure out whether or not we agree. When I start a new Claude Code session on a folder that has a good DEVELOPMENT_CONTEXT.md, the AI reads it and we’re immediately aligned. When I start a session without one, the AI has to rediscover everything from scratch, and it always misses things. Rediscovery is always lossy.

If you’re not already writing context files as part of your workflow, none of the fancier techniques I’m about to describe matter. This is the foundation.

Include the why, or the AI will undo your decisions

There’s a specific thing that has to go into these context files, and it took me a while to learn why it matters so much: the reasoning behind every decision.

Octobatch’s DEVELOPMENT_CONTEXT.md has a section called “Key Technical Learnings” with 49 entries, each in a specific format: What happened, Why it matters, When we discovered it, and Where in the code it applies. At the top of that section is a note in bold: “IMPORTANT: Always include the REASONING (the ‘Why’) for each learning. This prevents future sessions from ‘refactoring’ a deliberate decision.”

That note is there because without it, the AI will do exactly that. I had a case with Octobatch where we used recursive set_timer() instead of set_interval() for auto-refresh because Textual’s set_interval() callbacks aren’t reliably serviced on pushed screens. Without the “Why” in the context file, a future session would look at that code, see a “cleaner” alternative, and helpfully refactor it right back to the broken approach.
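As a sketch, an entry like that in the What/Why/When/Where format might read something like this (the wording is illustrative, not copied from the Octobatch file):

What: Auto-refresh uses recursive set_timer() calls instead of set_interval().
Why: Textual's set_interval() callbacks aren't reliably serviced on pushed screens, so the "cleaner" approach silently stops refreshing.
When: Discovered while debugging auto-refresh behavior on a pushed screen.
Where: The TUI code that schedules refreshes (tui/).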

The same principle applies to quality standards. Don’t just say “90% coverage for core logic.” Say “90% coverage for core logic, because expression evaluation touches randomness and seeding, where subtle bugs produce plausible-but-wrong output. The drunken sailor reseeding bug passed all visual inspection. Only statistical verification caught that sequential seeds created correlation bias (77.5% fell in water instead of a theoretical 50/50).” Without the “why,” a future AI session will argue the coverage target down. Any standard or architectural decision or unusual code pattern that doesn’t have its rationale attached is vulnerable to being optimized away by an AI that doesn’t know what problem it was solving.

The garbage collection problem

A lot of people like to talk about the context window as your AI’s short-term or working memory, and context that’s persisted to disk as long-term memory. Personally, I’m not sure those analogies to human memory work all that well. I think it’s a lot more useful to find ways to think about context that are similar to how we manage memory in our code.

I find it especially helpful to compare context compaction to garbage collection—again, not a perfect analogy but a useful one. When you look at a GC graph in Java, you see the memory slowly fill up and then suddenly drop after each GC. That drop is the runtime figuring out what’s still being referenced and freeing everything else.

The context window does the same thing. Your conversation accumulates tokens, the AI’s context window fills up, and then compaction happens. The tool (or the model) decides what to keep and what to throw away. Compaction is lossy and automatic, and you don’t control what survives.

Java developers spent decades learning to design their allocation patterns so garbage collection wouldn’t destroy anything important. AI developers need to learn the same thing, and the learning curve should be shorter because the concepts transfer directly.

When you ask the AI to write important state to files, you’re promoting it out of that volatile space. It’s surprisingly easy to do this. Just ask the AI to write its context to a Markdown file. For example, you can put all of the context related to a specific domain into a particular file: if the AI noticed a behavioral contract, you could have it write all the related context to a file called CONTRACTS.md. If it made a design decision, that could go into DEVELOPMENT_CONTEXT.md—that’s a pattern I use all the time to write down all the important context needed to bootstrap a new AI session to work on the code. Those files live on disk, outside the context window, and compaction can’t touch them. But if you start a new session without externalizing any of this, you’re shutting down the application and losing everything that was in memory.
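One way to bake that in from the start, as a sketch (the exact wording is up to you), is a standing instruction in your prompt or context file along these lines:

When you notice a behavioral contract, append it to ai_context/CONTRACTS.md before moving on. When we make a design decision, record it, along with the reasoning behind it, in ai_context/DEVELOPMENT_CONTEXT.md before we commit.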

The first time I built Octobatch’s batch orchestrator, it was a Python script with in-memory state and a lot of hope. It worked for small batches but fell apart at scale, which is pretty much what most developers are doing with their AI context right now: keeping everything in the context window and hoping it holds together, even though that stops working once sessions get long and codebases get complex.

It’s way too easy to fall into one context management extreme or the other

The Quality Playbook exists in part because of this problem. When I was building the requirements pipeline, I discovered that single-pass requirement generation runs out of attention after about 70 requirements. The model forgets behavioral contracts it noticed earlier. And it’s completely invisible. You don’t get a stack trace or an error message or any kind of warning, just incomplete output and no way to know what’s missing.

The longer a defect goes uncorrected, the more entrenched it becomes and the more things get built on top of it. Context drift works the same way. When the AI loses track of a design decision early in a session, everything built on that lost context compounds the error. And just like a late-discovered defect, you don’t know what went wrong because the original context is gone.

I had a concrete example when I was running the playbook against virtio-win. Version 1.3.32 found four bugs. Version 1.3.33, after some changes, found only one. That regression was only diagnosable because I had EXPLORATION.md, an externalized intermediate state file that captures what the AI observed during its exploration phase. Without it, the only observable output would have been “fewer bugs this time.” I had no way to tell whether the playbook was worse, or the bugs were harder, or it had just missed something. Without externalized state, I couldn’t have answered any of those questions.

The contracts file in the pipeline exists specifically to solve this. When the model forgets about a behavioral contract it noticed earlier, that forgetting is normally invisible. But with a contracts file, every observation is written down before any requirements work begins. If a contract is in the file but has no corresponding requirement, that’s a visible, greppable gap. You can see what was forgotten and fix it.

But it’s just as easy to overcompensate. If the LLM has to constantly hop between eight different reference files, its context window fragments and you start getting hallucinations. I’ve seen this happen. You load all your context files and requirements documents and design docs into the session, and the AI gets worse, not better. It spends all its attention navigating between reference files instead of thinking about the problem.

I hit this with the Quality Playbook when I expanded the scope of a run against virtio-win from 10 files to about 60. The result was 6x more files analyzed but 75% fewer bugs found. The model burned its context on device drivers instead of going deep on the transport layer where the bugs actually were. Wider scope meant shallower analysis.

The goal isn’t to save everything. You have to decide what to externalize, what to keep in context, and what to let go. The best context file contains exactly what the AI needs for this session and nothing more.

Helping your AI manage its context helps you too

The interesting thing about all of this is that good context management really makes use of your development expertise, and it’s one of those things that makes you a better developer the more you do it. Every practice I’ve described in this article (writing down your decisions, recording why you made them, being deliberate about what goes into a session and what doesn’t) is something developers have always been told to do. We write ADRs and design docs and inline comments explaining nonobvious choices, and we all know we should do more of it. When you’re working with AI, the cost of not doing it becomes immediate and visible. Your context files end up being the project documentation you should have been writing all along, except now there’s something on the other end that will actually go wrong if you skip it.

And once you start thinking about context as something you actively manage, you can start designing your workflows around it. That’s what happened with the Quality Playbook, when it went from a single 15-million-token session to a set of independent phases with clean handoffs between them, and the whole split worked on the first try because the context was already externalized to files.

In the next article, I’ll get into the specific techniques you can use today in your AI agents, but also in your day-to-day AI development work. The Quality Playbook is open source and works with GitHub Copilot, Cursor, and Claude Code. It’s also available as part of awesome-copilot.


Disclosure: Aspects of the approach described in this article are the subject of US Provisional Patent Application No. 64/044,178, filed April 20, 2026 by the author. The open-source Quality Playbook project (Apache 2.0) includes a patent grant to users of that project under the terms of the Apache 2.0 license.




Computing and Displaying Discounted Prices in CSS


CSS math isn’t just about how things look! It can also be used to work out useful numeric information. For instance, you could calculate and show the percentage of tasks completed in a to-do list with CSS, helping users keep track of their progress. No need for script or server computation. No latency. No use of additional browser resources.

Working with math has become much simpler and more flexible. I’m going to give you an example using CSS to calculate and display a discounted price whenever you need it, using the base price and discount provided. It’s the sort of thing you see often on e-commerce sites where heavy JavaScript is used to show a product’s full price, its discount amount, and its sale price.

[Screenshot from gap.com: a four-column row of product cards showing sale clothing, with model photos on top followed by the product name, price, and sale price.]

We can absolutely do that in CSS:

It does rely on some bleeding-edge features that are waiting to gain more browser support, but I think it’s still a good exercise to dig into how we will eventually be able to put these things into practice and use them in our everyday work.

Here’s how I put it together.

The initial markup

The interface in this specific demo displays a list of streaming services for the user to choose from — Netflix, Disney+, HBO, HBO Now, HBO Go, HBO Max, etc. There’s a student discount offer on each subscription that takes a certain percentage amount off the full price.

<li>
  <!-- Service name, base price, and selection toggle -->
  <label>
    <span>Netflix</span>
    <!-- data-price and data-discount store base price and discount offered -->
    <div class="ott-price" data-price="7.99" data-discount="0.2">$7.99</div>
    <!-- Checkbox to track if the user wants to add this service -->
    <input type="checkbox" class="is-ott-selected">
  </label>

  <!-- Toggle for the student discount -->
  <label>
    <span>Apply Student Discount <br> 20%</span>
    <input type="checkbox" class="is-ott-discounted">
  </label>
</li>

<!-- etc. -->

The base price and discount are included as data-* attributes in the element displaying the price. Just remember, the discount only kicks in when you select “Apply Student Discount,” and then you’ll see how much the price is after the discount is applied.

Calculating the price cut

When the discount kicks in, the first step is to slash the base price with a line across it.

/* When the discount toggle is checked inside the .ott container */
.ott:has(.is-ott-discounted:checked) {
  /* Strike through the original price */
  .ott-price {
    text-decoration: line-through;
  }
}

Next, let’s figure out the new discounted price using the data-price and data-discount values.

.ott:has(.is-ott-discounted:checked) {
  .ott-price {
    text-decoration: line-through;
    /* 
        Calculate the new price from the data-* attributes:
        Original Price * (1 - Discount Applied)
    */
    --n: calc(attr(data-price number) * (1 - attr(data-discount number)));
  }
}

The attr(<name> <type>) syntax is relatively new. The function used to only work with the content property, but now supports any CSS property… and parses values into a range of data types, whereas before they were always parsed as strings.

Those arguments:

  1. <name>: This is the name of the HTML attribute we want to look at (like href, data-count, or title).
  2. <type>: This tells CSS how to “read” the value (like a color, a number, or a length). It’s the newer superpower that makes the work we’re doing here possible.

In our case, we’re using the function to parse both data-price and data-discount into numbers, and then we subtract the discount from the price with CSS math-iness.

The upgraded attr() is super cool, but not Baseline as I’m writing this, so keep an eye on it.
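The new syntax also takes an optional fallback value after the type, which is used when the attribute is missing or can’t be parsed as that type. As a sketch (assuming the Level 5 fallback syntax), falling back to 0 for the discount means --n simply resolves to the full price:

.ott:has(.is-ott-discounted:checked) {
  .ott-price {
    /* If data-discount is missing or isn't a valid number, treat it as no discount */
    --n: calc(attr(data-price number) * (1 - attr(data-discount number, 0)));
  }
}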

Showing the discounted price

Here’s how we display the updated price once the discount is applied:

.ott:has(.is-ott-discounted:checked) {
  .ott-price {
    text-decoration: line-through;
    --n: calc(attr(data-price number) * (1 - attr(data-discount number)));

    &::after {
      display: inline-block;
      /* Splits the variable --n into two counters: 
          'a' for the whole number (in dollars) and 'b' for the decimals (in cents) */
      counter-set: a calc(round(down, var(--n))) b calc((mod(var(--n), 1)) * 100);
      /* Output: two spaces (\2000), a dollar sign ($), the number, a dot, and the decimals */
      content: "\2000\2000$" counter(a) "." counter(b, decimal-leading-zero);
    }
  }
}

The counter() function helps us turn the numeric value of the --n variable into a content string. Since CSS counters can’t handle decimals (they round the value by default), we treat the numbers before and after the decimal as separate counters and then combine them as strings, adding a dot between them.

  1. calc(round(down, var(--n))) takes the variable --n and rounds it down to get the whole dollar amount (stored as counter(a)).
  2. calc((mod(var(--n), 1)) * 100) uses the modulo mod() function to isolate the fraction, then multiplies it by 100 to get the cents (stored as counter(b)).
  3. The content property inserts a dollar sign before the two counters and then joins them with a dot.
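Worked through with the Netflix row from the markup above (data-price="7.99", data-discount="0.2"): --n is 7.99 * (1 - 0.2) = 6.392, round(down, 6.392) gives 6 for counter a, and mod(6.392, 1) * 100 gives 39.2, which is stored as the integer 39 in counter b, so the generated content reads $6.39.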

We know that calc() has plenty of browser support. And guess what? The mod() function is newly Baseline!

That’s only if you need decimals and all that. If you’re rounding prices, this would be plenty:

counter-set: price calc(var(--n));
content: counter(price);

Here’s the demo once again:

Wrapping up

So, there we have it, a working combination of newer CSS features (the upgraded attr() function), CSS math functions (mod(), round()), and custom counters to nail down something that we see in so many websites, only without scripts. When attr()‘s support for data types becomes a thing in all browsers, this is something you can use in your everyday work.


Computing and Displaying Discounted Prices in CSS originally handwritten and published with love on CSS-Tricks. You should really get the newsletter as well.


From beta to stable: Announcing the Azure SDK for Rust 🎉🦀


Picture a Rust service that signs in with Microsoft Entra ID, pulls a signing key from Key Vault, picks up work items off a Storage Queue, and lands the results in Blob Storage. Every piece of that stack is now stable. 🚀

The Azure SDK for Rust 🦀 is stable. What we shipped as a beta is now a production-ready SDK with stable APIs, semver guarantees, and a surface area you can build on today.

Why Rust on Azure?

A few reasons teams keep telling us they picked Rust:

  • ⚡ Small binaries, low memory, fast cold starts. Great fit for containers and the edge, where every millisecond and megabyte matters.
  • 🛡 Whole categories of bugs (null derefs, data races, use-after-free) caught at compile time instead of at 3:00 AM.
  • 🔌 Native async on top of Tokio, with predictable performance for high-throughput workloads like event processing and streaming.
  • 🌐 The same design patterns you know from the .NET, Java, JavaScript, Python, Go, and C++ SDKs. Different language, familiar shape.

What’s stable today

Six service libraries and the core infrastructure that powers them: Core, Identity 🔐, Key Vault (Secrets, Keys, Certificates), and Storage (Blobs, Queues). All of them are crates you’ve already been using throughout beta. Now they’re stable.

Each crate links to its API reference docs and source on GitHub:

  • Core: azure_core
  • Identity: azure_identity
  • Key Vault Secrets: azure_security_keyvault_secrets
  • Key Vault Keys: azure_security_keyvault_keys
  • Key Vault Certificates: azure_security_keyvault_certificates
  • Storage Blobs: azure_storage_blob
  • Storage Queues: azure_storage_queue

What’s new since beta

Service coverage is the headline. But a lot changed under the hood. We spent the past year hardening the SDK on real-world usage and community feedback:

  • Stabilized API surface. Every public type, trait, and function got a pass against the Azure SDK guidelines. Breaking changes now follow semver.
  • Unified core primitives. A redesigned Pager that yields items by default. A Poller you can just .await for long-running operations. One ManagedIdentityCredential that works across every Azure hosting environment. A new DeveloperToolsCredential that streamlines local development by falling through your installed dev tools (Azure CLI, Azure Developer CLI) until one returns a token.
  • Production-grade resilience. Automatic retries on transient failures. Challenge-based authentication so sovereign and private clouds just work.
  • First-class observability. ⚡ Distributed tracing through azure_core_opentelemetry using #[tracing::*] macros, plus an HTTP logging policy that sanitizes secrets by default.
  • Pluggable async runtime. Tokio out of the box. Bring your own with set_async_runtime().

Get started 🚀

A few lines in your terminal and you’re off:

1. Add dependencies

cargo add azure_identity azure_storage_blob futures tokio --features tokio/full

2. Authenticate and list some blobs

DeveloperToolsCredential is the credential you reach for during local development. It falls through your installed dev tools (Azure CLI, Azure Developer CLI) until one returns a token. For workloads running in Azure, swap it for ManagedIdentityCredential. The rest of the code stays the same.

use azure_identity::DeveloperToolsCredential;
use azure_storage_blob::BlobContainerClient;
use futures::TryStreamExt;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Locally, DeveloperToolsCredential falls through your dev tools (Azure CLI, Azure Developer CLI).
    // In Azure, swap this for ManagedIdentityCredential.
    let credential = DeveloperToolsCredential::new(None)?;

    let container = BlobContainerClient::new(
        "https://<your-storage-account>.blob.core.windows.net/",
        "my-container",
        Some(credential),
        None,
    )?;

    let mut pager = container.list_blobs(None)?;
    while let Some(blob) = pager.try_next().await? {
        println!("📦 {}", blob.name.unwrap_or_default());
    }

    Ok(())
}

A few things worth noticing:

  • Pager yields blob items directly. try_next().await? walks the whole result set without any manual page bookkeeping. Need the raw pages instead? Call .into_pages() on the pager (there’s a sketch after this list).
  • Errors propagate with ?. No surprise panics in your hot path.
  • Same credential type plugs into every other stable crate. No per-service auth boilerplate.
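As a minimal sketch of that page-level variant, continuing from the listing above (and assuming the page iterator streams with try_next() the same way the item pager does; check the azure_storage_blob reference for the exact page type and its fields):

// Iterate whole pages of results instead of individual blob items.
let mut pages = container.list_blobs(None)?.into_pages();
while let Some(page) = pages.try_next().await? {
    // Each iteration yields one raw page/response; its body carries the
    // blob items and continuation details for that page.
    println!("📦 received a page of results");
}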

Want more? Each library has an examples directory in its project folder on GitHub, with more cross-library samples in the root /samples folder.

Documentation 📚

Get productive fast:

What’s coming next

Going stable is a milestone, not a finish line. A few things we’re working on:

  • 📨 Event Hubs. azure_messaging_eventhubs and azure_messaging_eventhubs_checkpointstore_blob are close. They won’t ship in this stable wave, but they’re a top priority for the next one.
  • 🗄 Azure Cosmos DB. azure_data_cosmos is in active development and is another planned stable release, expected in 2026.
  • 📡 More service crates. Coverage keeps growing. The fastest way to nudge the roadmap is to upvote 👍 the services you actually need.
  • 🔭 Continued investments in observability, runtime flexibility, and ergonomics across the existing stable crates.

Don’t see your favorite service? Open an issue. We read every one.

Join the conversation 🤝

Community feedback shaped a huge amount of this SDK. Keep it coming:

Now go build something. We can’t wait to see it. 🚀

The post From beta to stable: Announcing the Azure SDK for Rust 🎉🦀 appeared first on Azure SDK Blog.


Scaling out Maps for EVERYONE!

From: Fritz's Tech Tips and Chatter
Duration: 1:00:56
Views: 25

Let's start scaling StreamerMaps


U.S. Congressman Beyer on AI challenges facing America and the World


U.S. Congressman Don Beyer returns to Practical AI for another far-reaching conversation with Chris about many of the most important AI challenges facing America and the world. Blending political savvy and statesmanship with his unique technical understanding as an active Ph.D. student in AI at George Mason University (making him the coolest member of Congress!), the congressman shares his perspective on the really hard AI concerns you would have asked him about yourself. Together, Congressman Beyer and Chris explore AI regulation, cybersecurity concerns sparked by advanced models like Mythos, bipartisan AI governance efforts, and the growing AI race between the U.S. and China. They fearlessly dived headfirst into AI-driven job displacement, mass surveillance, autonomous weapons, existential risk, and the philosophical questions surrounding consciousness and superintelligence as AI continues to accelerate. This is an unusual and insightful conversation you don't want to miss!

Congressman Beyer was previously on Practical AI episode 271 on May 29, 2024:
AI in the U.S. Congress

Featuring:

Upcoming Events: 





Download audio: https://pscrb.fm/rss/p/dts.podtrac.com/redirect.mp3/media.transistor.fm/9ac73d0a/a5131751.mp3