Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.
151802 stories · 33 followers

How AI Changes Development with Rob Conery

1 Share
How are LLMs changing software development? Carl and Richard talk to Rob Conery about his experiences as a consultant bringing the new AI tools and techniques into companies. Rob talks about focusing on the most painful problems first to show the team quick results and make their lives better. The conversation digs into how these tools seriously change the way developers work and what it takes to embrace those changes. Lots of good thinking from a very experienced developer on how to do more than ever before!



Download audio: https://dts.podtrac.com/redirect.mp3/api.spreaker.com/download/episode/71146008/dotnetrocks_1998_how_ai_changes_development.mp3
Read the whole story
alvinashcraft
18 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

SE Radio 716: Martin Kleppmann on Local-First Software


Martin Kleppmann, Associate Professor at the University of Cambridge and author of the best-selling O'Reilly book Designing Data-Intensive Applications, talks to host Adi Narayan about local-first collaboration software. They discuss what the term means, how it leads to simpler application architectures compared to the cloud-first model, and the benefits to developers and users of keeping all of their data on their own devices. Martin goes into detail about how applications can synchronize data with and without a server, as well as conflict-resolution techniques, and the open-source library Automerge, which implements CRDTs that developers can use out of the box. He also clarifies what kinds of applications would be suitable for the local-first approach. In the context of AI, they discuss vibe coding, local-first apps, and how the conflict-resolution work that enables data to be synchronized between users can also work with human-AI collaboration.





Download audio: https://traffic.libsyn.com/secure/seradio/716-martin-kleppman-local-first-software.mp3?dest-id=23379

Access your dev environment from any device

From: kayla.cinnamon
Duration: 11:50
Views: 16

In this video, I show off the new /remote feature in GitHub Copilot CLI.

Links:
GitHub Copilot CLI: https://github.com/features/copilot/cli/

Intro: (00:00)
Command palette extension: (00:23)
copilot --resume: (01:52)
/remote desktop development: (02:18)
/remote web development: (06:04)
Session results: (08:43)
Outro: (11:10)

Socials:
πŸ‘©β€πŸ’» GitHub: https://github.com/cinnamon-msft
🐀 X: https://x.com/cinnamon_msft
πŸ“Έ Instagram: https://www.instagram.com/kaylacinnamon/
πŸŽ₯: TikTok: https://www.tiktok.com/@kaylacinnamon
πŸ¦‹ Bluesky: https://bsky.app/profile/kaylacinnamon.bsky.social
🐘 Mastodon: https://hachyderm.io/@cinnamon

Disclaimer: I've created everything on my channel in my free time. Nothing is officially affiliated or endorsed by Microsoft in any way. Opinions and views are my own! 🩷

#github #copilot #cli #ai #terminal #developer #developmentskills


Can AI Save Collaboration Software?

After decades of incremental change, collaboration software is at an inflection point, and AI could have a big influence on where it goes next, as the 'Three Smart Guys' team discusses.



Download audio: https://www.directionsonmicrosoft.com/wp-content/uploads/2026/04/season1ep4TSG.mp3

Use AI to help YOU refactor your code

AI as a thinking partner > AI as a code generator

Creating a Custom AI Agent with Telerik Tools 2: Loading and Accessing Your Agent’s Content


In my previous post, I covered how to set up a Large Language Model (LLM) either in the cloud using Azure AI or on your desktop/server using Ollama. This post walks through the code you need to load your own content into a custom AI agent based on that LLM.

That will let you create a Retrieval-Augmented Generation–enhanced application that provides answers grounded in the content that matters to your users. You could use this code to, for example, create a custom AI assistant for any part of your organization.

There are four steps to creating that application:

  1. Configure your application
  2. Create a chat client to work with your LLM
  3. Convert your content into a format that your chat client will accept using Progress Telerik Document Processing Libraries
  4. Pass the chat client, your content and a prompt to the appropriate Telerik AI processor

Configuring the Application

I’m going to demonstrate the code for loading your content using a Blazor application (Blazor simplifies creating an interactive application that integrates server and client-side processing). However, the code in this post will be very similar in any C# application and, in a later post, I’ll wrap my agent in a web service and access it from client-side JavaScript code.

Once your project is created, your next step is to add the necessary NuGet packages. The best advice I can give you around picking the right NuGet package is to a) include prerelease versions and b) always take the most recent package available.

The packages you’ll need are:

  • For OpenAI LLMs (like the LLMs I picked in my previous post):
    • Azure.AI.OpenAI
    • Microsoft.Extensions.AI.OpenAI
    • OpenAI (probably already installed with the previous packages)
  • For Ollama:
    • OllamaSharp
  • For Telerik document processing:
    • Telerik.Documents.AIConnector
    • Telerik.Documents.Core
    • Telerik.Documents.Flow (to work with DOCX, HTML, and RTF files)
    • Telerik.Documents.Fixed (to work with PDF files)
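If you prefer the command line, the packages above can be added with the dotnet CLI. This is a sketch using the package names from the list above; the --prerelease flag follows the "include prerelease versions" advice, and it assumes the Telerik NuGet feed is already configured for your project:

```shell
# AI client packages for the OpenAI/Azure path
dotnet add package Azure.AI.OpenAI --prerelease
dotnet add package Microsoft.Extensions.AI.OpenAI --prerelease

# Or, for Ollama:
dotnet add package OllamaSharp --prerelease

# Telerik document processing
dotnet add package Telerik.Documents.AIConnector --prerelease
dotnet add package Telerik.Documents.Core --prerelease
dotnet add package Telerik.Documents.Flow --prerelease
dotnet add package Telerik.Documents.Fixed --prerelease
```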

Accessing the LLM

Connecting to your LLM depends on whether you’re using Azure or Ollama to host your LLM. (And, if you just created your LLM deployment, it can take up to 15 minutes before your LLM is ready to be used.)

Azure-hosted LLM

If you’re using an Azure LLM then you need to create an AzureOpenAIClient, passing the URL and key from your deployment’s information page in the ai.azure.com portal (see my previous post).

Once you’ve created your AzureOpenAIClient, you can use its GetChatClient method to retrieve a ChatClient. Rather than access that ChatClient directly, though, you should use the AsIChatClient method to, effectively, cast the ChatClient to the more general-purpose IChatClient interface. All of that requires just these two lines of code:

AzureOpenAIClient aiclt = new(
    new Uri("<deployment URL>"),
    new AzureKeyCredential("<access key>"));
IChatClient chatClt = aiclt.GetChatClient("<Deployment Name>").AsIChatClient();

Ollama-hosted LLM

If you’re working with Ollama, you create a chat client with the IChatClient interface by creating an OllamaApiClient object, passing the URL for your local Ollama server and the LLM that you want to use. That looks like this:

IChatClient chatClt =
    new OllamaApiClient(new Uri("<address for Ollama>"), "<LLM name>");

Do be aware: For a typical development machine, processing documents using Ollama is not going to be as responsive as using an LLM on Azure. For example, the document I used in this case study contains about 1,500 words and took a few seconds to summarize using one of the Azure LLMs. Using Ollama, that process sometimes took over a minute. In some cases, your application may time out waiting for Ollama to respond.

You can deal with that issue in Ollama by creating a custom HttpClient object and passing it to your OllamaApiClient when you create it.

Here’s some sample code that creates an HttpClient that is a) tied to the Ollama client’s URL (probably http://localhost:11434) and b) sets a five-minute timeout. The code then uses that custom HttpClient to create an Ollama client:

HttpClient httpClt = new()
{
    BaseAddress = new Uri("<address for Ollama>"),
    Timeout = new TimeSpan(0, 5, 0)
};

IChatClient chatClt =
    new OllamaApiClient(httpClt, "<model name>");

Loading Content

The Telerik Document Processing Libraries (DPL) provide multiple AI processors for analyzing documents.

In this post, I’m going to focus on the summarization processor (I’ll look at other processors in my next post). Since all the processors expect to be passed a DPL SimpleTextDocument, switching between processors is simple.

As an example, here’s the code to convert a DOCX file into a SimpleTextDocument using the Telerik WordsProcessing library (the Flow library also handles RTF and HTML files; for PDF files, you would use the Telerik PdfProcessing library):

RadFlowDocument dDoc;
DocxFormatProvider dProv = new();
using (Stream str = System.IO.File.OpenRead(@"wwwroot/documents/scrolltoitem.docx"))
{
    dDoc = dProv.Import(str, TimeSpan.FromSeconds(10));
}
SimpleTextDocument std = dDoc.ToSimpleTextDocument(TimeSpan.FromSeconds(10));
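For PDF content, the pattern is analogous. This is a sketch that assumes the Fixed (PdfProcessing) library mirrors the Flow API shown above, including a ToSimpleTextDocument extension; check the Telerik DPL documentation for the exact signatures and the file name is just an example:

```csharp
// Import a PDF with the Fixed library instead of the Flow library
RadFixedDocument pDoc;
PdfFormatProvider pProv = new();
using (Stream str = System.IO.File.OpenRead(@"wwwroot/documents/sample.pdf"))
{
    pDoc = pProv.Import(str, TimeSpan.FromSeconds(10));
}
// Convert to the common format that the Telerik AI processors expect
SimpleTextDocument std = pDoc.ToSimpleTextDocument(TimeSpan.FromSeconds(10));
```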

Now that you have a SimpleTextDocument, your next step is to configure one of the Telerik AI processors to work with it.

In terms of AI processing, this document represents the content (or corpus) to be used by your LLM … but it’s a very small corpus that consists of only a single document. While I’m going to stick with a single document for this case study, you can use the Merge method on both Flow and Fixed documents to load multiple documents into a single document object before creating your SimpleTextDocument from that single document.

This code, for example, loads one DOCX file and then merges a second one into it, before creating a SimpleTextDocument from the result:

RadFlowDocument ddocMaster;
RadFlowDocument ddocTemp;
DocxFormatProvider dprov = new();
using (Stream str = System.IO.File.OpenRead(@"wwwroot/documents/InitialDoc.docx"))
{
    ddocMaster = dprov.Import(str, TimeSpan.FromSeconds(10));
}

using (Stream str = System.IO.File.OpenRead(@"wwwroot/documents/SecondDoc.docx"))
{
    ddocTemp = dprov.Import(str, TimeSpan.FromSeconds(10));
}
ddocMaster.Merge(ddocTemp);

Generating Summaries in Your Agent

The next step is to use one of the Telerik AI Connectors to enable your content for use by the LLM. To support this processing, you’ll need to add the Telerik.Documents.AIConnector NuGet package to your project.

In this example, I’m using the Telerik summarization processor to generate a summary of my custom content (more on the summarization processor and the other two Telerik AI processors in my next post):

SummarizationProcessorSettings spOpts = new(3500, "Summarize in 100 words");
using (SummarizationProcessor sp = new(chatClt, spOpts))
{
    string summary = await sp.Summarize(std);
}

Don’t expect a fast response when testing—it takes some time to absorb a complete document. The Azure-based LLMs I used would pause for three or four seconds to absorb each document, while the Ollama LLM took about 90 seconds on my laptop.

Which raises an important point: The summarization processor passes the whole document to your LLM, which can be both time-consuming and expensive. You might want to catch the SummaryResourceCalculated event that the processor raises. The EventArgs parameter passed to that event includes two properties (EstimatedCallsRequired and EstimatedTokensRequired) that you can check to see if the request is larger than you want to handle. If the request is “too big,” you can set the EventArgs parameter’s ShouldContinueExecution property to false to stop processing.
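As a sketch, a handler for that check might look like the following. The event and property names come from the description above; the thresholds are arbitrary examples, and the exact event-args type and handler signature are assumptions to verify against the Telerik documentation:

```csharp
using (SummarizationProcessor sp = new(chatClt, spOpts))
{
    // Inspect the estimated cost before the LLM calls are actually made
    sp.SummaryResourceCalculated += (sender, e) =>
    {
        // Example thresholds only: tune these to your own cost tolerance
        if (e.EstimatedTokensRequired > 50_000 || e.EstimatedCallsRequired > 10)
        {
            // Too big: cancel before any tokens are spent
            e.ShouldContinueExecution = false;
        }
    };

    string summary = await sp.Summarize(std);
}
```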

And there you have your own custom AI agent, which you can load with whatever content you want. You can do more than just summarize document content, though, as I’ll cover in my next post.

But, looking ahead to providing a frontend for users to access my custom AI agent, I have two UI issues that I should be thinking about to create a genuinely useful agent:

  • Users will probably want to be able to enter an initial prompt and then refine it to get a better version of my custom agent’s response.
  • Telerik has two other AI processors that support other types of AI processing. I want to give my user the chance to pick one of those other processors.

And that’s ignoring the reality that, thanks to the existing AI-enabled UIs my users are already using (looking at you, ChatGPT), my users have expectations about what an AI-enabled UI should look like. All that suggests that my application will need an interactive UI. So, after my next post, I’m going to use the Telerik AI Prompt component to create a UI that provides my user with that interactivity.


Explore Telerik Document Processing Libraries, plus component libraries, reporting and more with a free trial of the Telerik DevCraft bundle:

Try DevCraft
