Una and Bramus dive into the latest advancements in CSS with state-based container queries. Learn how to create responsive and dynamic user experiences by querying the scroll state of UI elements, including 'stuck,' 'snapped,' and 'scrollable' states. Discover practical examples and techniques to replace complex JavaScript with declarative CSS, making your web development more efficient and powerful. Resources: Una Kravets (co-host), Bramus Van Damme (co-host)
When working with distributed databases like Couchbase, performance and efficiency are key considerations, especially when retrieving a large amount of data. Customers coming from different development or database backgrounds often ask whether Couchbase can do “multi-get” or “bulk get” operations, since many databases offer “multi-get” as an out-of-the-box method for retrieving multiple documents based on their keys. Most Couchbase SDKs don’t offer explicit APIs for batching because reactive programming provides the flexibility to implement batching tailored to your specific use case and is often more effective than a one-size-fits-all, generic method.
A bulk get operation allows you to request multiple documents in a single operation, rather than making repeated individual GET calls. In traditional key-value stores, each request usually targets a specific node. However, in a distributed environment like Couchbase, spreading these operations across nodes can introduce overhead if managed manually.
The Couchbase SDKs (including Java, .NET, and Go) offer built-in support for bulk get operations. These SDK methods are designed to accept a list of document keys and automatically manage the parallel execution of the individual GET requests in an efficient way.
Couchbase provides two main ways to achieve bulk get capability: reactive programming and asynchronous programming.
If you’re aiming to optimize bulk get operations in Couchbase, reactive programming provides an efficient and straightforward approach. Couchbase’s binary protocol supports out-of-order execution and has strong support for asynchronous key-value (KV) operations. By efficiently managing asynchronous data flows, reactive programming enables high throughput and low latency, making it ideal for distributed systems. To fully leverage its capabilities, a fully reactive stack, where each layer from the database to the client supports reactive streams, is ideal. Couchbase’s ReactiveCollection integrates seamlessly with Project Reactor, enabling fully non-blocking access to Couchbase KV operations. This integration aligns well with modern reactive architectures, allowing applications to handle high-throughput workloads more efficiently by avoiding unnecessary thread blocking.
That said, migrating an entire existing application to a reactive architecture can involve significant work. If it is a new project, adopting a reactive framework like Spring WebFlux is strongly recommended. However, even in non-reactive applications, introducing a reactive approach at the Couchbase CRUD layer alone can deliver meaningful gains. By doing so, you can minimize thread blocking and reduce CPU throttling, leading to better resource efficiency and improved scalability.
Below is an example of Java code that maximizes Couchbase performance using the Reactive API and works within a non-reactive stack.
/**
 * @param collection The collection to get documents from.
 * @param documentIds The IDs of the documents to return.
 * @param mapSupplier Factory for the returned map. Suggestion:
 *     Pass {@code TreeMap::new} for sorted results,
 *     or {@code HashMap::new} for unsorted.
 * @param concurrency Limits the number of Couchbase requests in flight
 *     at the same time. Each invocation of this method has a separate quota.
 *     Suggestion: Start with 256 and tune as desired.
 * @param mapValueTransformerScheduler The scheduler to use for converting
 *     the result map values. Pass {@link Schedulers#immediate()}
 *     to use the SDK's IO scheduler. Suggestion: If your value converter does IO,
 *     pass {@link Schedulers#boundedElastic()}.
 * @param mapValueTransformer A function that takes a document ID and a result,
 *     and returns the value to associate with that ID in the returned map.
 * @param <V> The return map's value type.
 * @param <M> The type of the map you'd like to store the results in.
 * @return a Map (implementation determined by {@code mapSupplier})
 *     where each given document ID is associated with the result of
 *     getting the corresponding document from Couchbase.
 */
public static <V, M extends Map<String, V>> Map<String, V> bulkGet(
    ReactiveCollection collection,
    Iterable<String> documentIds,
    int concurrency,
    Supplier<M> mapSupplier,
    Scheduler mapValueTransformerScheduler,
    BiFunction<String, SuccessOrFailure<GetResult>, V> mapValueTransformer
) {
  return Flux.fromIterable(documentIds)
      .flatMap(
          documentId -> Mono.zip(
              Mono.just(documentId),
              collection.get(documentId)
                  .map(SuccessOrFailure::success)
                  .onErrorResume(error -> Mono.just(SuccessOrFailure.failure(error)))
          ),
          concurrency
      )
      .publishOn(mapValueTransformerScheduler)
      .collect(
          mapSupplier,
          (map, idAndResult) -> {
            String documentId = idAndResult.getT1();
            SuccessOrFailure<GetResult> successOrFailure = idAndResult.getT2();
            map.put(documentId, mapValueTransformer.apply(documentId, successOrFailure));
          }
      )
      .block();
}
This reactive approach fetches documents by their IDs and returns a Map<String, V> where each key is a document ID and the value is the processed result. While it’s not wrong to collect the results into a List and reprocess them later, a better strategy (both in terms of performance and code clarity) is to collect the results into a ConcurrentHashMap indexed by document ID. This avoids repeated scanning and makes result lookups constant-time operations.

Let’s break down how this works step-by-step: Flux.fromIterable turns the document IDs into a stream; flatMap issues the individual get() calls, with the concurrency argument capping how many requests are in flight at once; onErrorResume wraps failures so one bad key doesn’t abort the whole stream; publishOn moves the value transformation onto the supplied scheduler; collect accumulates each (ID, result) pair into the map produced by mapSupplier; and block() waits for completion so the method can be called from non-reactive code.
While we recommend using the reactive APIs for their performance, flexibility, and built-in backpressure handling, Couchbase also offers a low-level asynchronous API for scenarios where you need even more fine-grained control and performance tuning. However, writing efficient asynchronous code comes with its own challenges: it requires careful management of concurrency and backpressure to prevent resource exhaustion and avoid timeouts.
Below is an example demonstrating how to use the Async API to enhance bulk get performance in Couchbase:
// Async API: call get() for an array of keys
List<CompletableFuture<GetResult>> futures = new LinkedList<>();
for (int i = 0; i < keys.size(); i++) {
  CompletableFuture<GetResult> f = collection.async().get(
      keys.get(i),
      (GetOptions) options
  );
  futures.add(f);
}

// Wait for all Get operations to complete
CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();

// Convert results to JsonObjects
List<JsonObject> results = new LinkedList<>();
for (CompletableFuture<GetResult> future : futures) {
  try {
    JsonObject json = future.join().contentAsObject();
    results.add(json);
  } catch (CompletionException e) {
    e.printStackTrace();
    results.add(null); // or skip / handle differently
  }
}
Let’s break down how this works step-by-step: the first loop kicks off an asynchronous get() for every key and collects the resulting CompletableFutures; CompletableFuture.allOf(...).join() then waits until every request has completed; and the final loop converts each result to a JsonObject, catching CompletionException so a failed key doesn’t stop the processing of the remaining results.
We recommend using this API only if you are either writing integration code for higher level concurrency mechanisms or you really need the last drop of performance. In all other cases, the reactive API (for richness in operators) is likely the better choice.
Reactive programming offers one of the most efficient ways to achieve high performance for bulk get operations with Couchbase. Its true power is unlocked when applied across an entirely reactive stack, where non-blocking behavior and scalability are fully optimized.
That said, you don’t need a fully reactive architecture to start reaping the benefits. A practical and impactful first step is to migrate just the Couchbase CRUD layer to reactive. Doing so can dramatically reduce thread blocking and minimize CPU throttling, leading to better system responsiveness and resource utilization without requiring a complete architectural overhaul.
If performance and scalability are priorities, reactive programming is well worth the investment, even in a partial implementation.
The author thanks the Couchbase SDK team for their excellent explanation of how batching can be achieved efficiently without the need for a generic bulk get function.
The post Bulk Get Documents in Couchbase using Reactive or Asynchronous API appeared first on The Couchbase Blog.
This blog post was created with the help of AI tools. Yes, I used a bit of magic from language models to organize my thoughts and automate the boring parts, but the geeky fun and the code in C# are 100% mine.
Hi!
In my previous post, I showed you how to orchestrate multiple AI agents using the Microsoft Agent Framework, connecting Azure AI Foundry (OpenAI), GitHub Models, and Ollama in a single .NET 9 application. Today, I’m taking this a step further by introducing Azure AI Foundry Persistent Agents—a powerful feature that allows you to create, manage, and track agents directly in the Azure AI Foundry portal.
This enhanced demo showcases a three-agent workflow that not only orchestrates models across different providers but also leverages Azure AI Foundry’s persistent agent capabilities for better lifecycle management, observability, and collaboration.
Note: If you want to skip the full blog post, just go to lesson 6 here >> https://aka.ms/genainet
Building on the original multi-model orchestration concept, this version introduces:
| Agent | Provider | Model | Special Feature |
|---|---|---|---|
| Researcher | Azure AI Foundry Persistent Agent | gpt-4o-mini | Created and managed in Azure AI Foundry portal |
| Writer | Azure OpenAI or GitHub Models | gpt-4o-mini | Flexible authentication (API key, managed identity, or GitHub token) |
| Reviewer | Ollama (local) | llama3.2 | Privacy-focused local inference |
The key difference? The Researcher agent is now a persistent agent that lives in Azure AI Foundry. This means you can:
The workflow remains sequential, but now with enhanced cloud-native agent management:
User Input
↓
Azure AI Foundry Persistent Agent (Researcher)
↓ (research findings)
Azure OpenAI/GitHub Models Agent (Writer)
↓ (article draft)
Ollama Local Agent (Reviewer)
↓
Final Output (reviewed article with feedback)
Each agent reports telemetry to OpenTelemetry, giving you complete visibility across the entire pipeline—from cloud to local models.
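If you want to reproduce the console traces shown later in this post, a minimal OpenTelemetry setup looks roughly like this. This is a sketch, not the demo's exact wiring: the wildcard source subscription and the console exporter are my own choices, and the Agent Framework's ActivitySource names vary by version.

using OpenTelemetry;
using OpenTelemetry.Trace;

// Subscribe to every ActivitySource in the process ("*"), which also picks up
// the activities the Agent Framework emits, and print them to the console.
using TracerProvider tracing = Sdk.CreateTracerProviderBuilder()
    .AddSource("*")
    .AddConsoleExporter()
    .Build();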
Traditional agents are ephemeral—they exist only during your application’s runtime. Persistent agents, on the other hand, are first-class resources in Azure AI Foundry:
To simplify working with persistent agents, I created the AIFoundryAgentsProvider class:
public static AIAgent CreateAIAgent(string name, string instructions)
{
var persistentAgentsClient = CreatePersistentAgentsClient();
AIAgent aiAgent = persistentAgentsClient.CreateAIAgent(
model: _config.DeploymentName,
name: name,
instructions: instructions);
return aiAgent;
}
private static PersistentAgentsClient CreatePersistentAgentsClient()
{
return new PersistentAgentsClient(
_config.AzureFoundryProjectEndpoint!,
new AzureCliCredential());
}
This abstraction:
- Uses AzureCliCredential for secure, credential-free authentication
- Returns an AIAgent that works seamlessly with the Microsoft Agent Framework

The persistent agent uses Azure CLI credentials (AzureCliCredential), which means:
- No API keys in code or config
- Works with az login for local development
- Production-ready with managed identities
- Follows Azure security best practices
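Creating the persistent Researcher agent is then a one-liner using the helper above. The instructions string below is illustrative, not the exact prompt used by the demo:

// Creates the Researcher as a persistent agent in Azure AI Foundry.
AIAgent researcher = AIFoundryAgentsProvider.CreateAIAgent(
    name: "Researcher",
    instructions: "You are a researcher. Gather key facts about the given topic and summarize them.");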
For the Writer agent, the demo supports three authentication options with automatic fallback:
# Option 1: GitHub Models (GITHUB_TOKEN)
dotnet user-secrets set "GITHUB_TOKEN" "your-github-token"
dotnet user-secrets set "deploymentName" "gpt-4o-mini"

# Option 2: Azure OpenAI with an API key
dotnet user-secrets set "endpoint" "https://your-resource.cognitiveservices.azure.com"
dotnet user-secrets set "apikey" "your-azure-openai-api-key"
dotnet user-secrets set "deploymentName" "gpt-4o-mini"

# Option 3: Azure OpenAI with Azure credentials (managed identity / Azure CLI)
dotnet user-secrets set "endpoint" "https://your-resource.cognitiveservices.azure.com"
dotnet user-secrets set "deploymentName" "gpt-4o-mini"
az login # Required for local development
The ChatClientProvider automatically selects the best available option:
public static IChatClient GetChatClient()
{
if (_config.HasValidGitHubToken)
{
return CreateGitHubModelsClient();
}
return CreateAzureOpenAIClient();
}
Let’s walk through a real execution of the demo with OpenTelemetry traces showing each agent’s performance:
=== Microsoft Agent Framework - Multi-Model Orchestration Demo ===
This demo showcases 3 agents working together:
1. Researcher (Azure AI Foundry Agent) - Researches topics
2. Writer (Azure OpenAI or GitHub Models) - Writes content based on research
3. Reviewer (Ollama - llama3.2) - Reviews and provides feedback
Setting up Agent 1: Researcher (Azure AI Foundry Agent)...
Setting up Agent 2: Writer (Azure OpenAI or GitHub Models)...
Setting up Agent 3: Reviewer (Ollama)...
Creating workflow: Researcher -> Writer -> Reviewer
Starting workflow with topic: 'artificial intelligence in healthcare'
The persistent Researcher agent springs into action, creating a conversation thread in Azure AI Foundry:
Activity.DisplayName: invoke_agent Researcher
Activity.Kind: Client
Activity.StartTime: 2025-10-16T15:51:56.1426555Z
Activity.Duration: 00:00:40.0433429
Activity.Tags:
gen_ai.operation.name: chat
gen_ai.usage.output_tokens: 2113
Duration: 40 seconds
Output tokens: 2,113 tokens of comprehensive research
The Writer takes the research and crafts an engaging article:
Activity.DisplayName: invoke_agent Writer
Activity.Kind: Client
Activity.StartTime: 2025-10-16T15:52:36.2113339Z
Activity.Duration: 00:00:31.4054205
Activity.Tags:
gen_ai.operation.name: chat
gen_ai.usage.output_tokens: 1629
Duration: 31 seconds
Output tokens: 1,629 tokens of polished content
Finally, the local Ollama model provides feedback:
Activity.DisplayName: invoke_agent Reviewer
Activity.Kind: Client
Activity.StartTime: 2025-10-16T15:53:07.6191258Z
Activity.Duration: 00:00:07.4031237
Activity.Tags:
gen_ai.operation.name: chat
gen_ai.usage.output_tokens: 531
Duration: 7.4 seconds
Output tokens: 531 tokens of constructive feedback
Notice how the local Ollama model is significantly faster (7.4s vs 31-40s) because it runs on your machine without network latency. This demonstrates the power of hybrid architectures—using cloud models for complex tasks and local models for faster, privacy-sensitive operations.
After all three agents complete their work, you get a comprehensive article about AI in healthcare:
=== Final Output ===
Title: Artificial Intelligence in Healthcare - Key Facts, Applications,
Opportunities, and Risks
Introduction
Artificial intelligence (AI) applies algorithms and statistical models to
perform tasks that normally require human intelligence. In healthcare, AI
ranges from rule-based decision support to deep learning and large language
models (LLMs). It's being used to augment clinical care, accelerate research,
automate administrative tasks, and expand access to services...
[Full article content with research, engaging writing, and editorial feedback]
**Overall Assessment**
The article provides an in-depth analysis of artificial intelligence (AI)
in healthcare, covering its applications, benefits, challenges, and regulatory
landscape. The text is well-structured, with clear sections on different topics
and concise explanations...
**Recommendations for Improvement**
1. Simplify technical language: Use clear, concise definitions...
2. Improve transitions and connections: Add transitional phrases...
3. Balance benefits and challenges: Include more nuanced discussions...
The result is a production-ready article that went through research, creative writing, and editorial review—all automated through agent orchestration.
One of the most powerful features of persistent agents is lifecycle control. At the end of execution, the demo prompts:
=== Clean Up ===
Do you want to delete the Researcher agent in Azure AI Foundry? (yes/no)
yes
Deleting Researcher agent in Azure AI Foundry...
Researcher agent deleted successfully.
This cleanup is done programmatically:
public static void DeleteAIAgentInAIFoundry(AIAgent agent)
{
var persistentAgentsClient = CreatePersistentAgentsClient();
persistentAgentsClient.Administration.DeleteAgent(agent.Id);
}
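Wiring the console prompt to that helper only takes a few lines. This is a sketch of how the cleanup step could be implemented; researcher is the persistent AIAgent created earlier:

Console.WriteLine("Do you want to delete the Researcher agent in Azure AI Foundry? (yes/no)");
string? answer = Console.ReadLine();
if (string.Equals(answer?.Trim(), "yes", StringComparison.OrdinalIgnoreCase))
{
    Console.WriteLine("Deleting Researcher agent in Azure AI Foundry...");
    AIFoundryAgentsProvider.DeleteAIAgentInAIFoundry(researcher); // the AIAgent created earlier
    Console.WriteLine("Researcher agent deleted successfully.");
}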
You can also manage agents directly in the Azure portal:
This gives you operational flexibility—manage agents as code during development, then transition to UI-based management for production monitoring.
Use persistent agents when you need:
Use ephemeral agents (ChatClientAgent) when:
This hybrid approach gives you:
The built-in tracing provides the operation name, start time, duration, and token usage for every agent invocation, as shown in the traces above.
To run the demo yourself, you'll need (among other prerequisites) the Azure CLI signed in with az login and Ollama with the llama3.2 model installed.
git clone https://github.com/microsoft/Generative-AI-for-beginners-dotnet
cd Generative-AI-for-beginners-dotnet/06-AgentFx/src/AgentFx-MultiAgents
# Install from https://ollama.com
ollama pull llama3.2
ollama run llama3.2
# Required: Azure AI Foundry
dotnet user-secrets set "AZURE_FOUNDRY_PROJECT_ENDPOINT" "https://your-project.services.ai.azure.com/"
dotnet user-secrets set "deploymentName" "gpt-5-mini"
# Optional: GitHub Models (recommended for quick start)
dotnet user-secrets set "GITHUB_TOKEN" "your-github-token"
# OR Optional: Azure OpenAI
dotnet user-secrets set "endpoint" "https://your-resource.cognitiveservices.azure.com"
dotnet user-secrets set "apikey" "your-azure-openai-api-key"
az login
dotnet run
For complete setup instructions, troubleshooting, and customization options, check out the detailed README in the repository.
This demo opens up exciting possibilities:
The Microsoft Agent Framework makes all of this possible with a clean, .NET-native API.
Azure AI Foundry Persistent Agents bring enterprise-grade agent management to .NET 9, making it easier than ever to build, monitor, and maintain multi-agent systems. Combined with the flexibility of multi-provider orchestration and the observability of OpenTelemetry, you have everything you need to create production-ready AI applications.
Whether you’re building content creation pipelines, customer support systems, or research automation tools, this pattern provides a solid foundation for scalable, observable, and maintainable AI agent architectures.
Happy coding!
Greetings
El Bruno
More posts in my blog ElBruno.com.
More info in https://beacons.ai/elbruno
Here’s how to add a message to a Storage queue, including the security mechanisms appropriate to testing, development, and the Valet Key pattern.
In my previous post, I covered how to configure and secure a queue in a storage account. In this post, I’m going to cover how to add messages to the queue from a server-side or client-side frontend.
For my server-side frontend, I’m assuming that you’ll authorize access to your queue using a Managed Identity. For the client-side frontend, you’ll authorize using an Application Registration.
At the end of this post, I’ll also look at the code to authorize your application using either:
For my server-side example using a Managed Identity, I’ll cover both the authorization code and the code for adding a message to a Storage queue (I covered configuring a queue to use a Managed Identity in my previous post on configuring the Storage queue).
For my client-side example using an App Registration, I’m going to skip over the code for claiming an Application Registration (I covered that in part 9a of this series). In this post, for the client-side frontend, I’ll just cover the code for adding a message to a Storage queue from client-side code.
To add a message to a Storage queue, in either a client-side or server-side frontend, you’ll need the URL for the queue. You can get that by surfing to your storage account and, from the menu down the left side, selecting Data Storage | Queues. The list of queues on the right displays the URL for each of your queues—you just need to copy the URL for the queue you want your frontend to add messages to.
The message that I’m going to add to my queue includes an array of products required for the transaction being processed, a correlation id associated with the business transaction and a string field (to support basically anything else I need to add to the message). To support adding all of that in one message, I created a Data Transfer Object (DTO) to hold the data to be added to the queue.
In C#, for my server-side code, that DTO looks like this:
public class QueueDTO
{
public IList<Product> Products { get; set; }
public string CorrelationId { get; set; } = string.Empty;
public string msg {get; set; } = string.Empty;
}
In TypeScript, for my client-side code, my DTO looks like this:
type QueueDTO =
{
Products: Product[],
CorrelationId: string,
Msg: string
};
In a server-side ASP.NET Core frontend, you first need to add the Azure.Storage.Queues NuGet package to your project (if you’re using Visual Studio’s Manage NuGet Packages page, search for the package using “azure storage queues”).
My next step was to use the following code to create my QueueDTO object and set its properties. Since only strings can be written to a Storage queue, I then converted my DTO into a string using the .NET JSON serializer:
QueueDTO qdto = new()
{
Products = productList,
CorrelationId = correlationId
};
string qMsg = JsonSerializer.Serialize<QueueDTO>(qdto);
With my message created, the next step is to create a QueueClient to access the queue. The first step in that process is to create a Uri object from the URL for the queue. Here’s the code for the queue in my case study:
Uri qURL = new(
"https://warehousemgmtproducts.queue.core.windows.net/updateproductinventory"
);
The next step is to create the credentials that will authorize adding messages to your queue. You could use (as I have in previous posts) the DefaultAzureCredential object, which will try to authorize the request to add a message to your queue using “whatever means necessary.” However, because I want my application to always use the Managed Identity I’ve assigned to the App Service, I can be more specific and instead use the ManagedIdentityCredential object.
The ManagedIdentityCredential object needs to be passed the client id of the Managed Identity that is a) assigned to the App Service and b) has been assigned a role that supports adding messages to the Storage queue. Once you have that identity’s client id, you can paste it into code like this to create a ManagedIdentityCredential at run time:
ManagedIdentityCredential qMI = new ("<Managed Identity Client Id>");
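To avoid hardcoding that GUID, you could read it from an environment variable (for example, an App Service application setting) instead. MANAGED_IDENTITY_CLIENT_ID below is a hypothetical setting name of my own, not something the SDK expects:

// Hypothetical application setting holding the Managed Identity's client id.
string? miClientId = Environment.GetEnvironmentVariable("MANAGED_IDENTITY_CLIENT_ID");
ManagedIdentityCredential qMI = new(miClientId);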
My next step isn’t necessary, but Microsoft’s documentation tells me it will improve interoperability for my message. I created a QueueClientOptions object and used it to specify that my message was to use Base64 encoding:
QueueClientOptions qOpts = new()
{
MessageEncoding = QueueMessageEncoding.Base64,
};
With all that in place, you can now create a QueueClient object that is tied to your Storage queue and has permission to add messages to the queue. Just instantiate a QueueClient object, passing it the:
- Uri object that holds your queue’s URL
- ManagedIdentityCredential object tied to the Managed Identity that has the necessary permissions

With the QueueClient object created, you can then use its SendMessageAsync method to add a message to the queue, passing the message as a JSON string. Here’s all that code:
QueueClient qc = new ( qURL, qMI, qOpts);
await qc.SendMessageAsync(qMsg);
One warning before you start testing your code: If you’ve only recently assigned your Managed Identity to either your App Service or your Storage queue, don’t rush to publish your frontend to your App Service to try it out. It can take a few minutes for the identity’s permissions to propagate. Go get a coffee.
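If you would rather let the code do the waiting, a crude alternative is to retry the first send while the role assignment propagates. This sketch assumes the failure surfaces as a 403 RequestFailedException (from Azure.Core); the retry count and delay are arbitrary:

for (int attempt = 1; ; attempt++)
{
    try
    {
        await qc.SendMessageAsync(qMsg);
        break;
    }
    catch (Azure.RequestFailedException ex) when (ex.Status == 403 && attempt < 5)
    {
        // Permissions may still be propagating; wait and try again.
        await Task.Delay(TimeSpan.FromSeconds(30));
    }
}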
In your client-side app, in addition to writing the code to add your message to your queue, you must configure your queue’s CORS settings to accept requests from your client-side app.
To configure your storage account’s CORS settings so that your queue will accept a request from your client-side TypeScript/JavaScript frontend’s domain, you’ll need the URL for your client-side frontend. The simplest (and most reliable) way to get that URL is to run your frontend and copy the URL for it out of your browser’s address bar.
Once you have that URL, surf to your storage account and:
Save your changes by clicking the Save icon in the menu bar at the top of the page.
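If you prefer to script the CORS settings rather than click through the portal, the Azure.Storage.Queues SDK can set them at the queue service level. This is a sketch only: it assumes the identity you use is allowed to manage the queue service properties, and the allowed origin is a placeholder for your frontend’s URL.

QueueServiceClient qsc = new(
    new Uri("https://warehousemgmtproducts.queue.core.windows.net"),
    new ManagedIdentityCredential("<Managed Identity Client Id>"));

QueueServiceProperties props = await qsc.GetPropertiesAsync();
props.Cors = new List<QueueCorsRule>
{
    new QueueCorsRule
    {
        AllowedOrigins = "https://<your-frontend-url>", // your client-side frontend's URL
        AllowedMethods = "GET,POST,PUT,OPTIONS",
        AllowedHeaders = "*",
        ExposedHeaders = "*",
        MaxAgeInSeconds = 3600
    }
};
await qsc.SetPropertiesAsync(props);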
With your storage account’s CORS settings configured, you’re almost ready to start writing code. First, add the @azure/storage-queue package to your frontend application with this command:
npm install @azure/storage-queue
My next step was to create my DTO object, set its properties to the values I wanted and convert the DTO into a JSON string:
let qDTO:QueueDTO =
{
Products: productsList,
CorrelationId: correlationid,
Msg: ""
};
const qMsg:string = JSON.stringify(qDTO);
To get the permissions necessary to access your queue, you’ll need an InteractiveBrowserCredential tied to your App Registration. You must pass the InteractiveBrowserCredential the Application (client) ID and Tenant ID from your frontend’s App Registration. That code will look something like this:
const qCred:InteractiveBrowserCredential = new InteractiveBrowserCredential(
{
clientId: "d11…-…-…-…-…040",
tenantId: "e98…-…-…-…-…461"
}
);
You can now create a QueueClient object, passing the full URL for your queue (you can get that from the list of queues in your storage account) and your InteractiveBrowserCredential. Once you’ve created the QueueClient object, you can use its sendMessage method, passing your DTO as a JSON string, to add a message to your queue (the method is asynchronous, so you should use the await keyword with it):
const qc:QueueClient = new QueueClient(
"https://warehousemgmtproducts.queue.core.windows.net/updateproductinventory",
qCred
);
await qc.sendMessage(qMsg);
With that code written, you’re ready to deploy your code to your App Service and test it.
There is a simpler method to give your frontend (client-side or server-side) access to your queue than using an App Registration or Managed Identity: a connection string.
Using a connection string gives your application unfettered access to your queue. In production, you’ll probably prefer the more restricted access that you can create using Managed Identities or App Registrations (in conjunction with user permissions). However, for testing functionality or for proof-of-concept code, you may prefer to use a connection string. Be aware, though, that you can only use a connection string in TypeScript/JavaScript code running in the Node.js environment (i.e., on the server, in an App Service).
You can only use a connection string if you left “Enable storage account key access” turned on when you created your storage account.
If you have turned that option off, you can re-enable it in your storage account: Surf to your storage account, select the Settings | Configuration choice in the menu down the left side. Find the “Allow storage account key access” option, set it to Enabled.
The first step in using a connection string is to retrieve one of the automatically generated connection strings for your storage account from its Security + networking | Access keys page.
To use your connection string in a server-side application in C#, instantiate the QueueClient object, passing that copied connection string and your queue name as strings. (Be sure to prefix the connection string with the @ verbatim-string flag: the Access Key embedded in your connection string may contain characters that C# would otherwise try to interpret as escape sequences.)
Typical code will look like this:
QueueClient qc = new (@"<connection string>",
"<queue name>");
You can also create a QueueClient by passing it a StorageSharedKeyCredential created with just the Access Key portion of your connection string (I’ll show that code in the section on creating a Valet Key).
Again: If you’re going to use a connection string in a production application (and you shouldn’t), you should keep the string in a Key Vault. Even with your connection string in the Key Vault, you should consider retrieving the values from Key Vault through an App Service’s environment variables.
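For example, if the connection string is surfaced to the App Service as an application setting (ideally backed by a Key Vault reference), the code only ever touches an environment variable. The setting name QUEUE_CONNECTION_STRING here is a hypothetical one of my own:

// Hypothetical App Service application setting backed by a Key Vault reference.
string? connectionString = Environment.GetEnvironmentVariable("QUEUE_CONNECTION_STRING");
QueueClient qc = new(connectionString, "updateproductinventory");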
In TypeScript or JavaScript, you can pass your connection string to the QueueServiceClient class’s static fromConnectionString method. That method returns a QueueServiceClient authorized to access the storage account’s queues; calling its getQueueClient method gives you a QueueClient object for your queue. You can then use that QueueClient object’s sendMessage method to add your message to the queue:
const qsc:QueueServiceClient =
QueueServiceClient.fromConnectionString("<connection string>");
const qc:QueueClient = qsc.getQueueClient("updateproductinventory");
await qc.sendMessage(qMsg);
For the Valet Key pattern, I’m going to assume that you’ll use a server-side resource to generate an SAS that you will then pass to a client (server-side or client-side). The client will then pass the SAS to a QueueClient object to access the queue.
You have three options for generating an SAS in server-side code: generate it directly from permissions and an expiry time, generate it from an Access Policy you build in code, or generate it from an Access Policy already assigned to the queue.
Your storage account will, in Settings | Configuration, need to have the “Allow storage account key access” option enabled. If you’re accessing the queue using a Managed Identity, that Managed Identity must have the Storage Queue Delegator role assigned to it.
To directly generate an SAS, first create a QueueSasBuilder object, passing it the permissions you want to grant and the time at which the SAS expires. This code creates a builder object that grants permission to add messages for the next two minutes and ties the SAS to the updateproductinventory queue:
QueueSasBuilder qSASBldr = new( QueueSasPermissions.Add,
DateTimeOffset.UtcNow.AddMinutes(2) )
{
QueueName = "updateproductinventory"
};
You must next create a QueueClient object using a StorageSharedKeyCredential. You can create the key credential by passing it the name of the storage account and one of the keys from your storage account’s Security + networking | Access keys page. Once you have the credential, you can pass it, along with the URL for your queue wrapped in a Uri object, to create your QueueClient:
StorageSharedKeyCredential sskc = new("warehousemgmtproducts", // storage account name
    "<account access key>");
QueueClient qcc = new(new Uri("<url for the queue>"), sskc);
Once you have the QueueClient created, you can generate the SAS (wrapped in a Uri object) by calling the QueueClient’s GenerateSasUri method, passing it the builder object:
Uri sasUri = qcc.GenerateSasUri(qSASBldr);
The client would then use the returned URL to create its own QueueClient object.
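On the client side, consuming the Valet Key is simple because the SAS URL already carries the authorization, so no credential object is needed. A minimal C# sketch for a server-side client:

// sasUri is the Uri handed out by the server-side GenerateSasUri call.
QueueClient clientQc = new(sasUri);
await clientQc.SendMessageAsync(qMsg);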
Once again: The application should be retrieving the Access Key from the Key Vault and, ideally, after being redirected through one of the App Service’s environment variables.
Instead of setting your start/expiry times and permissions in code, you can use an Access Policy to specify time and allowed operations. You can generate an Access Policy in your server-side code and use that in-memory Access Policy to generate an SAS. However, that method requires the same inputs as generating an SAS without a policy (see the previous section) and requires more code.
Using Access Policies makes more sense if you’re leveraging an Access Policy assigned to the queue at design time. A URL created from an Access Policy already assigned to the queue contains no restrictions—the URL just includes the name of the specified Access Policy. This strategy enables you to change the permissions being granted by your application by updating the policy in the Azure Portal rather than rewriting and redeploying your application (including deleting the policy if you want to disable any clients from using it).
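Assigning that design-time Access Policy to the queue can itself be done in code (or in the portal). This sketch, using the QueueSignedIdentifier and QueueAccessPolicy types from Azure.Storage.Queues.Models, creates a named policy that grants add permission for a day; the policy name is a placeholder:

QueueSignedIdentifier policy = new()
{
    Id = "<access policy name>",
    AccessPolicy = new QueueAccessPolicy
    {
        StartsOn = DateTimeOffset.UtcNow,
        ExpiresOn = DateTimeOffset.UtcNow.AddDays(1),
        Permissions = "a" // "a" = add messages
    }
};
await qcc.SetAccessPolicyAsync(new[] { policy });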
To generate an SAS from an Access Policy, just create a QueueSasBuilder object and set its Identifier property to the name of a policy assigned to the queue. After that, pass that builder object to a QueueClient object’s GenerateSasUri method, which will return an SAS/URL (wrapped inside a Uri object) that you can then pass to the client. Again, the QueueClient object will have to have been created using a StorageSharedKeyCredential object (see above).
This code creates an SAS that uses an Access Policy name:
QueueSasBuilder qSASBldr = new()
{
    QueueName = "sasqueue",
    Identifier = "<access policy name>" // the Access Policy assigned to the queue
};
Uri sasUri = qcc.GenerateSasUri(qSASBldr);
The client would then use the returned URL to create its own QueueClient object with the permissions specified in the Access Policy.
While I haven’t taken advantage of it here, when you add a message to a queue you can specify both when the message becomes visible to your backend processor and how long it will remain available. (This allows you to handle requests that must be processed within some time period.) And, while I’ve added my message to the queue as a string, there’s also an overload of the SendMessageAsync method that accepts a BinaryData object.
In my next post, I’m going to look at code for a backend server-side processor for reading messages your frontend has added to the queue.