Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

python-1.0.0rc3

[1.0.0rc3] - 2026-03-04

Added

  • agent-framework-core: Add Shell tool (#4339)
  • agent-framework-core: Add file_ids and data_sources support to get_code_interpreter_tool() (#4201)
  • agent-framework-core: Map file citation annotations from TextDeltaBlock in Assistants API streaming (#4316, #4320)
  • agent-framework-claude: Add OpenTelemetry instrumentation to ClaudeAgent (#4278, #4326)
  • agent-framework-azure-cosmos: Add Azure Cosmos history provider package (#4271)
  • samples: Add auto_retry.py sample for rate limit handling (#4223)
  • tests: Add regression tests for Entry JoinExecutor workflow input initialization (#4335)

Changed

  • samples: Restructure and improve Python samples (#4092)
  • agent-framework-orchestrations: [BREAKING] Tighten HandoffBuilder to require Agent instead of SupportsAgentRun (#4301, #4302)
  • samples: Update workflow orchestration samples to use AzureOpenAIResponsesClient (#4285)

Fixed

  • agent-framework-bedrock: Fix embedding test stub missing meta attribute (#4287)
  • agent-framework-ag-ui: Fix approval payloads being re-processed on subsequent conversation turns (#4232)
  • agent-framework-core: Fix response_format resolution in streaming finalizer (#4291)
  • agent-framework-core: Strip reserved kwargs in AgentExecutor to prevent duplicate-argument TypeError (#4298)
  • agent-framework-core: Preserve workflow run kwargs when continuing with run(responses=...) (#4296)
  • agent-framework-core: Fix WorkflowAgent not persisting response messages to session history (#4319)
  • agent-framework-core: Fix single-tool input handling in OpenAIResponsesClient._prepare_tools_for_openai (#4312)
  • agent-framework-core: Fix agent option merge to support dict-defined tools (#4314)
  • agent-framework-core: Fix executor handler type resolution when using from __future__ import annotations (#4317)
  • agent-framework-core: Fix walrus operator precedence for model_id kwarg in AzureOpenAIResponsesClient (#4310)
  • agent-framework-core: Handle thread.message.completed event in Assistants API streaming (#4333)
  • agent-framework-core: Fix MCP tools duplicated on second turn when runtime tools are present (#4432)
  • agent-framework-core: Fix PowerFx eval crash on non-English system locales by setting CurrentUICulture to en-US (#4408)
  • agent-framework-orchestrations: Fix StandardMagenticManager to propagate session to manager agent (#4409)
  • agent-framework-orchestrations: Fix IndexError when reasoning models produce reasoning-only messages in Magentic-One workflow (#4413)
  • agent-framework-azure-ai: Fix parsing oauth_consent_request events in Azure AI client (#4197)
  • agent-framework-anthropic: Set role="assistant" on message_start streaming update (#4329)
  • samples: Fix samples discovered by auto validation pipeline (#4355)
  • samples: Use AgentResponse.value instead of model_validate_json in HITL sample (#4405)
  • agent-framework-devui: Fix .NET conversation memory handling in DevUI integration (#3484, #4294)
Read the whole story
alvinashcraft
27 seconds ago
reply
Pennsylvania, USA
Share this story
Delete

dotnet-1.0.0-rc3

What's Changed

  • .NET: Support hosted code interpreter for skill script execution by @SergeyMenshykh in #4192
  • .NET: AgentThread serialization alternatives ADR by @westey-m in #3062
  • .NET: Add helpers to more easily access in-memory ChatHistory and make ChatHistoryProvider management more configurable. by @westey-m in #4224
  • .Net: Add additional Hosted Agent Samples by @rogerbarreto in #4325
  • .NET: Revert ".NET: Support hosted code interpreter for skill script execution" by @SergeyMenshykh in #4385
  • .NET: Fixing issue with invalid node Ids when visualizing dotnet workflows. by @alliscode in #4269
  • .NET: Fix FileAgentSkillsProvider custom SkillsInstructionPrompt silently dropping skills by @SergeyMenshykh in #4388
  • .NET: AuthN & AuthZ sample with asp.net service and web client by @westey-m in #4354
  • .NET: Update GroupChat workflow builder to support name and description by @peibekwe in #4334
  • .NET: Skip OffThread observability test by @rogerbarreto in #4399
  • .NET: AzureAI Package - Skip tool validation when UseProvidedChatClientAsIs is true by @rogerbarreto in #4389
  • [BREAKING] Add response filter for store input in *Providers by @westey-m in #4327
  • .NET: [BREAKING] Change *Provider StateKey to list of StateKeys by @westey-m in #4395
  • .NET: Updated Copilot SDK to the latest version by @dmytrostruk in #4406
  • .NET: Disable OpenAIAssistant structured output integration tests by @SergeyMenshykh in #4451
  • .NET: Update Azure.AI.Projects 2.0.0-beta.1 by @rogerbarreto in #4270
  • .NET: Skip flaky UT + (Attempt) Merge Gatekeeper fix by @rogerbarreto in #4456
  • .NET: Discover skill resources from directory instead of markdown links by @SergeyMenshykh in #4401
  • .NET: Update package versions by @dmytrostruk in #4468
  • .NET: Fixed CA1873 warning by @dmytrostruk in #4479

Full Changelog: dotnet-1.0.0-rc2...dotnet-1.0.0-rc3


Spreadsheet Analysis with Telerik Document Processing Libraries Agentic Tools

Agentic workflows provide dynamic ways to interact with and analyze documents. The Progress Telerik Document Processing Libraries now have these AI tools ready to use!

Modern applications rely on automation and intelligent processing to handle large volumes of data, and spreadsheets are no exception. With the 2026 Q1 release, Progress introduced Agentic Tools (in Preview) for Telerik Document Processing Libraries (DPL).

These tools are purpose‑built .NET APIs for agentic document workflows, enabling AI agents to analyze, extract, edit and generate Excel and PDF files; run aggregates; transform content; and convert formats directly inside your app. The new Agentic Tools cover both PdfProcessing and SpreadProcessing libraries and are available with a Subscription license.

In this post, we’ll walk you through how to use the SpreadProcessing Agentic Tools to analyze an existing spreadsheet.

Importing the Workbook

Let’s create a simple .NET console app. Our first step is to import the Workbook. I am going to add the necessary DPL dependencies, in this case:

DPL packages: Telerik.Windows.Documents.Spreadsheet and Telerik.Windows.Documents.Spreadsheet.FormatProviders.OpenXml

and then use this code:

Workbook workbook = null;

using (Stream input = File.OpenRead("Electric_Vehicle_Population_Data.xlsx"))
{
    XlsxFormatProvider formatProvider = new();
    workbook = formatProvider.Import(input, null);
}

In case you are wondering, this file contains 17 columns and 75,331 rows of data.

Screenshot of a spreadsheet with VIN, County, City, State, Postal Code, Model Year, Make, Model, etc.

An agent using the DPL tools can work with multiple files at the same time by importing and exporting them via the SpreadProcessingFileManagementAgentTools and managing them with the InMemoryWorkbookRepository. However, in this example we are going to concentrate on analyzing one workbook, so we are going to use the SingleWorkbookRepository, a class which gives an agent a single file to work with in memory.

IWorkbookRepository repository = new SingleWorkbookRepository(workbook);

For this code, we need to reference the Telerik.Documents.AI.AgentTools.Spreadsheet package.

Initializing the DPL Agentic Tools

Initializing the Agentic Tools is very simple. For this example, we will use an Azure OpenAI deployment, so first we are going to add the Microsoft.Agents.AI.OpenAI and Azure.AI.OpenAI packages. Then we are going to collect the Read and Formula tools in one collection:

List<AITool> tools = new SpreadProcessingReadAgentTools(repository).GetTools().ToList();
tools.AddRange(new SpreadProcessingFormulaAgentTools(repository).GetTools());

And then we can initialize our agent. We are going to need an Azure OpenAI endpoint and key, and we are going to use gpt-4.1-mini.

OpenAI.Chat.ChatClient chatClient = new AzureOpenAIClient(
    new Uri(endpoint),
    new ApiKeyCredential(key))
    .GetChatClient(model);

AIAgent agent = chatClient.AsIChatClient().AsAIAgent(
    instructions: "",
    name: "SpreadsheetAnalyzer",
    tools: tools);

At this point, we can already ask a simple question to check our progress:

AgentResponse response = await agent.RunAsync("What is the value on cell A4?");
Console.WriteLine(response.ToString());

Here is the result:

Window shows text: The value in cell A4 is… Is there anything else you would like to know

If you examine the agent’s response more closely, you will notice that it contains three messages. The first is the function call the agent made to the document processing tool GetCellValues, along with its parameters. The second is the tool’s response. And the third is the answer the LLM reasoned out and decided to give us.

More insight into the DPL AI Agent response

Giving It UI

A console app is good for a proof of concept, but it would be nice to give our functionality a more finished look. One of the ways to do this is by using some Progress Telerik for WPF controls. The RadSpreadsheet control can show the content of our file, and we can leverage RadChat for the conversation with the agent.

We can give the repository the Workbook object of the RadSpreadsheet like this when initializing the chat:

IWorkbookRepository repository = new SingleWorkbookRepository(this.radSpreadsheet.Workbook);

Then we are going to subscribe to the SendMessage event, which is where our logic for running the agent is going to go:

this.radChat.SendMessage += this.RadChat_SendMessage;
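
The handler body itself isn’t shown here. As a rough sketch (the event-args members, the TextMessage type, and the aiAuthor field below are assumptions — check the Telerik Conversational UI documentation and the linked demo for the exact API), it could look something like this:

```csharp
// Hypothetical sketch: forward the user's chat message to the agent
// and post the agent's answer back into RadChat.
private async void RadChat_SendMessage(object sender, SendMessageEventArgs e)
{
    // Assumed: e.Message is a TextMessage carrying the user's prompt.
    string prompt = ((TextMessage)e.Message).Text;

    AgentResponse response = await this.agent.RunAsync(prompt);

    // Assumed: this.aiAuthor is an Author representing the assistant.
    this.radChat.AddMessage(new TextMessage(this.aiAuthor, response.ToString()));
}
```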

After a few more tweaks, this is the result:

Spreadsheet with right panel showing the AI Assistant, with AI agent initialized and ready to help with your spreadsheet!

The full code of the demo can be found at this link: AgentToolsAnalyzeSpreadsheet on GitHub

Conclusion

Agentic workflows introduce a powerful new way to interact with documents, moving beyond static reading and writing to dynamic, intelligent analysis. As you’ve seen, the setup integrates seamlessly with .NET, and, once the agent is initialized, it can leverage the full capabilities of the Document Processing Libraries.

This example is just the beginning! The same approach can be extended to multi-document scenarios, automated reporting, data validation or even generating new spreadsheets from scratch.

The future of document processing is agentic, and it’s already here.


Try out the Telerik Document Processing Libraries, which are included in the Telerik DevCraft bundles to pair perfectly with your favorite component libraries.



Navigating VS Code AI Toolkit and Microsoft Foundry for Agent Development

VS Code's AI Toolkit and Microsoft Foundry can speed up agent development, but real-world success often depends on picking the right runtime and region, keeping tool-driven context under control, and designing around quotas, throttling, and uneven model/tool availability.

Get started with GitHub Copilot CLI: A free, hands-on course

GitHub Copilot has grown well beyond code completions in your editor. It now lives in your terminal, too. GitHub Copilot CLI lets you review code, generate tests, debug issues, and ask questions about your projects without ever leaving the command line.

To help developers get up to speed, we put together a free, open source course: GitHub Copilot CLI for Beginners. It’s 8 chapters, hands-on from the start, and designed so you can go from installation to building real workflows in a few hours. Already have a GitHub account? GitHub Copilot CLI works with GitHub Copilot Free, which is available to all personal GitHub accounts.

In this post, I’ll walk through what the course covers and how to get started.

What GitHub Copilot CLI can do

If you haven’t tried it yet, GitHub Copilot CLI is a conversational AI assistant that runs in your terminal. You point it at files using @ references, and it reads your code and responds with analysis, suggestions, or generated code.

You can use it to:

  • Review a file and get feedback on code quality
  • Generate tests based on existing code
  • Debug issues by pointing it at a file and asking what’s wrong
  • Explain unfamiliar code or confusing logic
  • Generate commit messages, refactor functions, and more
  • Write new app features (front-end, APIs, database interactions, and more)

It remembers context within a conversation, so follow-up questions build on what came before.

What the course covers

The course is structured as 8 progressive chapters. Each one builds on the last, and you work with the same project throughout: a book collection management app. Instead of jumping between isolated snippets, you keep improving one codebase as you go.

Here’s what using GitHub Copilot CLI looks like in practice. Say you want to review a Python file for potential issues. Start up Copilot CLI and ask what you’d like done:

$ copilot
> Review @samples/book-app-project/books.py for potential improvements. Focus on error handling and code quality.

Copilot reads the file, analyzes the code, and gives you specific feedback right in your terminal.

Here are the chapters covered in the course:

  1. Quick Start — Installation and authentication
  2. First Steps — Learn the three interaction modes: interactive, plan, and one-shot (programmatic)
  3. Context and Conversations — Using @ references to point Copilot at files and directories, plus session management with --continue and --resume
  4. Development Workflows — Code review, refactoring, debugging, test generation, and Git integration
  5. Custom Agents — Building specialized AI assistants with .agent.md files (for example, a Python reviewer that always checks for type hints)
  6. Skills — Creating task-specific instructions that auto-trigger based on your prompt
  7. MCP Servers — Connecting Copilot to external services like GitHub repos, file systems, and documentation APIs via the Model Context Protocol
  8. Putting It All Together — Combining agents, skills, and MCP servers into complete development workflows
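
The session flags mentioned in chapter 3 look like this in practice (the exact semantics shown in the comments are an assumption based on the chapter descriptions — run copilot --help to confirm):

```
# Start an interactive session and point Copilot at a file with an @ reference
copilot
# then at the prompt:
#   > Explain the error handling in @samples/book-app-project/books.py

# Later, pick the conversation back up instead of starting from scratch
copilot --continue   # resume the most recent session
copilot --resume     # choose which past session to resume
```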

Every command in the course can be copied and run directly. No AI or machine learning background is required.

Who this is for

The course is built for:

  • Developers using terminal workflows. If you’re already running builds, checking git status, and SSHing into servers from the command line, Copilot CLI fits right into that flow.
  • Teams looking to standardize AI-assisted practices. Custom agents and skills can be shared across a team through a project’s .github/agents and .github/skills directories.
  • Students and early-career developers. The course explains AI terminology as it comes up, and every chapter includes assignments with clear success criteria.

You don’t need prior experience with AI tools. If you can run commands in a terminal, you can learn and apply the concepts in this course.

How the course teaches

Each chapter follows a consistent pattern: a real-world analogy to ground the concept, then the core technical material, then hands-on exercises. For instance, the three interaction modes are compared to ordering food at a restaurant: interactive mode is a back-and-forth conversation with a waiter, plan mode is like mapping your route to the restaurant before you start driving, and one-shot mode (programmatic mode) is like going through the drive-through.

Later chapters use different comparisons: agents are like hiring specialists, skills work like attachments for a power drill, and MCP servers are compared to browser extensions. The goal is to provide you with a visual and mental model before the technical details land.

The course also focuses on a question that’s harder than it looks: when should I use which tool? Knowing the difference between reaching for an agent, a skill, or an MCP server takes practice, and the final chapter walks through that decision-making in a realistic workflow.

Get started

The course is free and open source. You can clone the repo, or open it in GitHub Codespaces for a fully configured environment. Jump right in, get Copilot CLI running, and see if it fits your workflow.

GitHub Copilot CLI for Beginners

For a quick reference, see the CLI command reference.

Subscribe to GitHub Insider for more developer tips and guides.

The post Get started with GitHub Copilot CLI: A free, hands-on course appeared first on Microsoft for Developers.


SQL101: Indexing Strategies for SQL Server Performance

(The original version of this post first appeared on the now-deleted SentryOne blog at the start of 2022.)

One of the easiest ways to increase query performance in SQL Server is to make sure it can access the requested data quickly and as efficiently as possible. Using one or more indexes can be exactly the fix you need. In fact, indexes are so important that SQL Server can even warn you when it figures out that there’s an index missing that would benefit a query. This high-level post will explain what indexes are, why they’re so important, and a bit of both the art and science of various indexing strategies.

What Are Indexes?

An index is simply a way of organizing data. SQL Server supports a variety of index types (see here for details) but this post will consider only the two most common ones, which are useful in a variety of ways and for a wide number of workloads: clustered and nonclustered non-columnstore indexes.

A table without a clustered index is called a heap, where the data rows in the table are unordered. If there are no indexes on the heap, finding a particular data value in the table requires reading all the data rows in the table (called a table scan). That is obviously very inefficient, and becomes more so as the table grows.

A clustered index on a table arranges all the data rows in the table into a sorted order and places a navigational “tree” with the organized data so that it is easily navigated. The table is no longer a heap; it’s a clustered table. The order is defined by the clustered index key, which is comprised of one or more columns from the table. The structure of a clustered index is known as a B-tree, and this basic data structure allows a specific data row to be located (called a “seek”) based on the clustered index key value, without having to scan the whole table.

A good example of a clustered index is a table that stores the details of a company’s employees, where the table has a clustered index using the Employee ID as the key. All the rows in the table are stored in the clustered index in order of Employee ID, so finding the details of a particular employee using their Employee ID is very efficient.

A clustered index only allows efficient location of data rows based on the clustered index key. If it is necessary to be able to find data rows quickly using a different key value, then one or more additional indexes must be created, otherwise a table scan is required. For nonclustered indexes, each index row contains the nonclustered index key value and a locator for the corresponding data row (this is the data row’s physical location for a heap or the data row’s clustered index key for a clustered index).

Continuing the Employee table example, if someone wants to find the details of a particular employee and only knows the employee’s name, a nonclustered index could be created with a composite key of the LastName, FirstName, and MiddleInitial table columns. That would allow the Employee ID for an employee to be found, and then retrieve all the employee’s details from the corresponding data row in the clustered index.
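
In T-SQL, the two indexes from the Employee example could be created like this (the table and column names follow the example; the actual schema is assumed):

```sql
-- Clustered index: orders the table's data rows by Employee ID
CREATE CLUSTERED INDEX CIX_Employee_EmployeeID
    ON dbo.Employee (EmployeeID);

-- Nonclustered index: allows efficient lookup by name; each index row
-- carries the clustered key (EmployeeID) as its data row locator
CREATE NONCLUSTERED INDEX IX_Employee_Name
    ON dbo.Employee (LastName, FirstName, MiddleInitial);
```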

Why Are Indexes So Important?

As you have no doubt gathered, the primary use of indexes is to allow the efficient retrieval of data from a table without having to perform a table scan. By limiting the amount of data that has to be accessed and then processed, there are a lot of benefits to overall workload performance:

  • Minimal amount of data has to be read from disk. This prevents undue pressure on the I/O subsystem from many queries reading inefficiently large amounts of data, and helps to prevent ‘churn’ in the buffer pool (the in-memory cache of data file pages) by not forcing data already in memory to be dropped to make space for data being read from disk. In some cases, no data will have to be read from disk at all, if the required data is already in memory.
  • Minimal amount of data has to take up space in the buffer pool. This means more of the ‘working set’ of the workload can be held in memory, further reducing the need for physical reads.
  • Any reduction in the amount of physical reads that a query must perform will lead to a drop in execution time.
  • Any reduction in the amount of data that flows through the query plan will lead to a drop in execution time.

As well as indexes, there are other things that can help produce the benefits above, including:

  • Using proper join conditions.
  • Using search arguments to further narrow the data required.
  • Avoiding coding practices that force a table scan to be used, such as inadvertently causing implicit conversions.
  • Making sure statistics are maintained correctly, so the query optimizer can choose the best processing strategies and indexes.
  • Taking into account the execution method of a query where a cached plan has been used, resulting in parameter sensitivity problems.

But these are all topics for future posts!

The Art and Science of Indexing

There are two parts to index tuning a workload: an art and a science. The science is that for any query there is always a perfect index. The art is realizing that this index may not be in the best interests of the overall database or server workload, and that figuring out the best overall solution for your server requires analyzing the server’s workload and priorities.

Clustered index key choice is more of a science than an art, and is a whole discussion by itself, but we usually say that a clustered index key should have multiple properties (in no particular order):

  1. Narrow. The clustered index key is the data row locator that is included in every index row in every nonclustered index. This means the narrower it is, the less space it will take up overall, and that will help with data size.
  2. Fixed-width. A clustered index key should be narrow but also use a fixed-width data type. When a variable-width data type is used, then the data row and all nonclustered index rows will incur additional overhead.
  3. Unique. If the clustered index key is not unique, then a special, hidden ‘uniquifier’ column is added to the clustered index key for all non-unique data rows, making the clustered index key up to four bytes longer for those rows.
  4. Static. If a clustered index key value changes, the data row must be deleted and reinserted internally, and all nonclustered index records containing that data row locator must be updated.
  5. Ever-increasing. This property helps to prevent index fragmentation from occurring in the clustered index.
  6. Non-nullable. The clustered index key should be unique by definition (see #3, above) so it implies that it cannot allow NULL values. In some SQL Server versions and in some structures, a nullable column would incur more overhead than a non-nullable column. Ideally, none of the columns that make up the clustered index key would allow NULL values.
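
A key with all of the properties above is the classic surrogate-key pattern. For example (a sketch; the column definitions are illustrative):

```sql
CREATE TABLE dbo.Employee
(
    EmployeeID    int IDENTITY(1,1) NOT NULL,  -- narrow, fixed-width, ever-increasing
    LastName      nvarchar(50)      NOT NULL,
    FirstName     nvarchar(50)      NOT NULL,
    MiddleInitial nchar(1)          NULL,
    -- unique, static, non-nullable clustered key
    CONSTRAINT PK_Employee PRIMARY KEY CLUSTERED (EmployeeID)
);
```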

As a generalization and because you can only have one clustered index, it’s usually nonclustered indexes (and multiple of them) that help queries run more efficiently.

The science of constructing the best nonclustered index for a query involves:

  • Understanding the search arguments being used and the type of query (as there are different indexing strategies, for instance, when search arguments use AND or OR clauses, when aggregates are involved, and for different join types). The search arguments are basically which table columns are necessary to identify the required data rows. These will likely be part of the nonclustered index keys.
  • Understanding the ‘selectivity’ of the data in each of these key columns. This will dictate the order of the columns in the index key, with the most selective predicates leading the key definition.
  • Understanding the SELECT list for the query. Any of these columns may be candidates for being included in the index as non-key columns to avoid the query having to go to the data row to retrieve them (also known as “covering” a query).

And there’s also SQL Server’s missing indexes functionality that will recommend the best index for a query (it focuses on just the science of “query tuning” but not the art of “server tuning”).

The art then becomes taking that index and figuring out whether and how it can be consolidated with other existing or also recommended indexes, so the table doesn’t become over-indexed.

As a simple example, let’s say that a table has ten int columns named col1 through col10.

The first query to index is SELECT col2, col3 FROM table WHERE col6 = value. A nonclustered index on col6 would avoid a table scan, but would require the query to go to the data row to get the values for col2 and col3. A more efficient nonclustered index would have col6 as the key and include col2 and col3 as non-key columns. This is called a covering index, because the index row has all the columns the query needs, removing the need to also use the clustered index to get the additionally requested columns.
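
In T-SQL, that covering index uses an INCLUDE clause for the non-key columns (the table is bracketed here only because “table”, as in the example, is a reserved word):

```sql
CREATE NONCLUSTERED INDEX IX_table_col6
    ON dbo.[table] (col6)
    INCLUDE (col2, col3);  -- non-key columns: stored in the leaf rows only
```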

The second query to index is SELECT col4 FROM table WHERE col6 = value. The science tells us that a nonclustered index on col6 that includes col4 is likely the best index for the query. But then there are two nonclustered indexes keyed on col6, each including different non-key columns. This is where the art comes in, as the best index for the overall workload is likely a single nonclustered index on col6 that includes col2, col3, and col4. Now you have one index with more uses and fewer overall indexes on the table.

And the art can continue through multiple iterations.

Let’s say a third query is created that is SELECT col4, col5 FROM table WHERE col6 = value AND col2 = value. The science may say that the best nonclustered index is on (col6, col2) if col6 is more selective than col2, including col4 and col5 as non-key columns. The art then has us look at consolidation and end up with a single nonclustered index on (col6, col2) that includes col3, col4, and col5. This satisfies all three queries with a single nonclustered index instead of three, so it takes up less space overall, at the expense of being less efficient for each query than the individual “perfect” nonclustered indexes would be. However, there’s an added benefit to this consolidation: the fewer nonclustered indexes there are, the less index maintenance needs to be done when a data row is inserted, deleted, or updated.
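
The consolidated index from that third iteration would look like this; it satisfies all three example queries:

```sql
-- Key: col6 leads (used by all three queries), col2 second
-- INCLUDE: every non-key column that any of the three queries selects
CREATE NONCLUSTERED INDEX IX_table_col6_col2
    ON dbo.[table] (col6, col2)
    INCLUDE (col3, col4, col5);
```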

Obviously, there’s a point where you may over-consolidate as well, and that’s where experience in index design helps hone your art, so you’re not under-indexing, over-indexing, or over-consolidating.

Summary

There’s a lot more to the art and science of designing an indexing strategy than can be covered in a post such as this, but hopefully you now understand why having a good indexing strategy is so important. For a deeper primer on indexing, see Kimberly L. Tripp’s 7-hour Pluralsight course SQL Server: Indexing for Performance.

The post SQL101: Indexing Strategies for SQL Server Performance appeared first on Paul S. Randal.
