The rustup team is happy to announce the release of rustup version 1.29.0.
Rustup is the recommended tool to install Rust, a
programming language that empowers everyone to build reliable and efficient
software.
What's new in rustup 1.29.0
Following in the footsteps of many package managers in the pursuit of better
toolchain installation performance, the headline of this release is concurrency: rustup
can now download components concurrently and unpack them while they download in
operations such as rustup update or rustup toolchain install, and it can
concurrently check for updates in rustup check, thanks to a GSoC 2025
project.
This is by no means a trivial change, so a long
tail of issues might surface; please report any that you find!
Furthermore, rustup now officially supports the following host platforms:
sparcv9-sun-solaris
x86_64-pc-solaris
Also, rustup will start automatically inserting the right $PATH entries
during rustup-init for the following shells, in addition to those already
supported:
tcsh
xonsh
This release also comes with other quality-of-life improvements, to name a few:
When running rust-analyzer via a proxy, rustup will consider the
rust-analyzer binary from PATH when the rustup-managed one is not found.
This should be particularly useful if you would like to bring your own
rust-analyzer binary, e.g. if you use Neovim, Helix, etc. or are
developing rust-analyzer itself.
Empty environment variables are now treated as unset. This should help with
resetting configuration values to default when an override is present.
rustup check will use different exit codes based on whether new updates
have been found: it will exit with 100 on any updates or 0 for no
updates.
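The new exit codes make rustup check easy to script against. As a rough sketch (the helper name here is mine, and it assumes rustup 1.29.0 or later is on PATH), a CI job could branch on the code like this:

```python
import subprocess

def interpret_check(returncode: int) -> str:
    """Map rustup check's exit code (rustup 1.29.0+) to a status string."""
    if returncode == 0:
        return "up to date"
    if returncode == 100:
        return "updates available"
    return f"rustup check failed (exit code {returncode})"

if __name__ == "__main__":
    try:
        # capture_output hides the human-readable report; we only want the code.
        result = subprocess.run(["rustup", "check"], capture_output=True)
        print(interpret_check(result.returncode))
    except FileNotFoundError:
        print("rustup is not installed or not on PATH")
```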
Furthermore, @FranciscoTGouveia has joined the team. He has shown his talent,
enthusiasm and commitment to the project since his first interactions with
rustup and has played a significant role in bringing more concurrency to it, so we
are thrilled to have him on board and look forward to what we
can achieve together.
If you have a previous version of rustup installed, getting the new one is as easy as stopping
any programs which may be using rustup (e.g. closing your IDE) and running:
$ rustup self update
Rustup will also automatically update itself at the end of a normal toolchain update:
$ rustup update
If you don't have it already, you can get rustup from the appropriate page on our website.
Rustup's documentation is also available in the rustup book.
Caveats
Rustup releases can come with problems not caused by rustup itself but just due to having a new release.
In particular, anti-malware scanners might block rustup or stop it from creating or copying
files, especially when installing rust-docs which contains many small files.
Issues like this should be automatically resolved in a few weeks when the anti-malware scanners are updated
to be aware of the new rustup release.
Thanks
Thanks again to all the contributors who made this rustup release possible!
TL;DR: AI chatbots focus on conversational interfaces, while AI agents extend LLM capabilities with planning, tool usage, and autonomous task execution. Although both rely on similar AI models, their architectures and application patterns differ significantly. This guide compares chatbots and agents and helps developers decide which approach fits modern applications in 2026.
Conversational interfaces are now standard in modern applications. Whether you’re building support tools, productivity apps, or internal systems, users expect natural language interaction.
Early chatbots relied on rule-based logic and predefined flows. As NLP improved, chatbots became more flexible and could understand intent and generate natural responses.
LLMs advanced this further, enabling chatbots capable of rich, contextual interaction. They also enabled something more powerful: AI agents, which don’t just respond, but reason and act.
While both use LLMs, their architecture and purpose differ. This article explains those differences so you can choose the right approach for your application.
What are AI Chatbots?
An AI chatbot is software designed to simulate human conversation through natural language. Modern chatbots use natural language processing (NLP) and large language models (LLMs) to understand queries and generate relevant responses.
Unlike older rule-based predecessors, today’s AI chatbots:
Understand user intent from natural language (not just keywords).
Maintain context across conversations.
Generate dynamic, contextual responses.
Pull data from backend systems when needed.
Key characteristic
Chatbots are reactive. They respond to user messages but do not independently initiate complex actions or workflows.
Core architecture
Most AI chatbot systems are built around these components:
Natural Language Understanding (NLU)
Processes user input to determine intent and extract entities.
Example: “What’s the status of my order?” → Order-status intent + order ID extraction.
Dialogue management
Controls conversation flow and determines the next appropriate response.
Response generation
Creates responses using templates, structured logic, or LLMs (increasingly common).
Backend integrations
Accesses databases or APIs to retrieve information, such as order status.
A typical conversation flow looks like this:
The dialogue manager determines the appropriate response path.
The system checks whether additional info is needed (email, username).
The chatbot generates a response.
If the user provides details, a backend API may be triggered.
Note: The chatbot responds to user input; it does not proactively inspect account health or take independent action.
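As a toy illustration of that pipeline (the intents, patterns, and function names below are all invented for this sketch; a production system would use an NLU model or an LLM rather than regexes), the reactive flow might look like:

```python
import re

# Toy intent patterns -- real systems use an NLU model or an LLM.
INTENTS = {
    "order_status": re.compile(
        r"\b(status|track)\b.*\border\b|\border\b.*\b(status|track)\b", re.I),
    "greeting": re.compile(r"\b(hi|hello|hey)\b", re.I),
}

# Dialogue management: slots that must be filled before calling the backend.
REQUIRED_SLOTS = {"order_status": ["order_id"]}

def detect_intent(message: str) -> str:
    for intent, pattern in INTENTS.items():
        if pattern.search(message):
            return intent
    return "fallback"

def extract_slots(message: str) -> dict:
    # Entity extraction: pull an order id like "#12345" if present.
    match = re.search(r"#(\d+)", message)
    return {"order_id": match.group(1)} if match else {}

def respond(message: str) -> str:
    intent = detect_intent(message)
    slots = extract_slots(message)
    missing = [s for s in REQUIRED_SLOTS.get(intent, []) if s not in slots]
    if intent == "order_status" and missing:
        return "Sure -- what's your order number?"
    if intent == "order_status":
        # A backend API lookup would go here.
        return f"Looking up order {slots['order_id']}..."
    if intent == "greeting":
        return "Hello! How can I help?"
    return "Sorry, I didn't catch that."
```

Note how every step is triggered by the incoming message; the bot never acts unless spoken to.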
What are AI Agents?
An AI agent autonomously performs tasks, makes decisions, and interacts with external systems to achieve specific goals. Agents extend beyond conversation: they integrate with tools, execute actions, and operate with a degree of independence.
Unlike chatbots that wait for prompts, AI agents:
Plan multi-step tasks.
Decide which tools or APIs to use.
Execute actions across different systems.
Remember previous interactions and outcomes.
Adjust strategies based on results and feedback.
This enables agents to perform tasks requiring reasoning, planning, and iterative execution, not just respond to questions.
Core capabilities
Autonomous decision making
Agents determine the steps required to complete a task without human instructions.
Planning and reasoning
LLMs help agents break down complex problems into actionable subtasks.
Tool usage
Agents interact with APIs, databases, search engines, code interpreters, and more.
Memory systems
Agents store information beyond single conversations across tasks and sessions.
Execution loops
Agents continuously evaluate their progress and adjust actions until the task is completed. If one approach doesn’t work, they try another.
Example scenario
User: “Research the latest trends in AI developer tools and create a summary report.”
Plan the task: Break it into subtasks, search for sources, identify trends, extract insights, compile a report.
Execute searches: Use web search tools to find recent articles, GitHub repos, and discussions.
Analyze the results: Read through sources and extract key trends.
Synthesize findings: Identify patterns and important developments.
Generate the report: Compile everything into a structured summary document.
Deliver results: Present the completed report to you.
This involves planning, tool usage, information synthesis, and content creation, far beyond chatbot capabilities.
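The plan-act-observe cycle behind that scenario can be sketched as follows. Everything here is a hard-coded stub: a real agent would have an LLM generate the plan and choose tools, and the tool implementations would call actual services.

```python
# Minimal agent loop sketch. The planner and tools are stubs standing in
# for LLM calls and real integrations (web search, report generation, ...).

TOOLS = {
    "web_search": lambda query: [f"article about {query}"],
    "summarize": lambda docs: f"Summary of {len(docs)} source(s).",
}

def run_agent(goal: str, max_steps: int = 10) -> str:
    # 1. Plan: break the goal into (tool, argument) steps.
    plan = [("web_search", goal), ("summarize", None)]
    memory = []   # remembers intermediate results across steps
    result = None
    for step, (tool_name, arg) in enumerate(plan):
        if step >= max_steps:   # guard against runaway loops
            break
        tool = TOOLS[tool_name]
        # 2. Act: when a step has no explicit argument, feed it the memory.
        result = tool(arg if arg is not None else memory)
        # 3. Observe: store the outcome so later steps can build on it.
        memory.extend(result if isinstance(result, list) else [result])
    return result
```

The max_steps guard matters in practice: because agents decide their own next actions, an unbounded loop is one of the failure modes the comparison table below calls out.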
Core differences between AI Agents and AI Chatbots
| Feature | AI Chatbots (Reactive) | AI Agents (Autonomous) |
| --- | --- | --- |
| Primary Role | Conversational interaction | Autonomous task execution |
| Autonomy | Low (user-driven) | High (goal-driven) |
| Interaction Model | Respond to user messages | Execute tasks to achieve goals |
| Decision Making | Predefined or limited logic | Dynamic reasoning with LLMs |
| Task Execution | Minimal or none | Multi-step workflows |
| Tool Usage | Limited integrations | Extensive tool orchestration |
| Memory Handling | Session-based context | Persistent or long-term memory |
| Complexity | Moderate | High |
| Example Applications | Support assistants, FAQ bots | Research agents, coding assistants |
| Risks | Hallucinated answers | Tool misuse, runaway loops, cost/latency spikes |
| SLAs | Response quality & deflection | Task completion, cycle-time, correctness |
Architecture comparison
AI Chatbot architecture
A typical chatbot architecture centers on processing user messages and generating appropriate responses. Key components include:
NLP or LLM processing layer.
Intent detection and entity extraction.
Dialogue state management.
Response generation (template-based or LLM-generated).
Backend integrations for data retrieval.
The architecture prioritizes conversation orchestration, managing turns in a dialogue, maintaining context, and providing relevant responses.
AI Agent architecture
Built for autonomy and workflow execution. Key components include:
Reasoning engine to choose next actions.
Planning module to break tasks into steps.
Tool execution system for API/database interactions.
Memory layer for long-term context.
Execution loop to evaluate and adjust.
This enables agents to operate independently across multiple systems.
Real-world use cases
AI Chatbot use cases
Chatbots excel in scenarios focused on user interaction and information delivery:
Customer support automation: Answering common questions, troubleshooting issues, and routing complex queries.
Building agents typically requires more sophisticated usage, careful tool design, and robust error handling compared to chatbots.
The future of conversational and autonomous AI
Modern systems increasingly combine chatbots and agents. The chatbot acts as the conversational layer, while agents perform complex tasks behind the scenes.
This hybrid approach provides:
Natural interaction.
Autonomous execution.
Shared context and continuity.
As frameworks mature, more developers will adopt these blended systems.
Frequently Asked Questions
Can I start with a chatbot and later upgrade it into an AI agent?
Yes. Many teams begin with a chatbot to handle conversation and information retrieval, then extend it with agent capabilities such as planning, tool usage, and workflow execution. This phased approach reduces initial complexity and lets you introduce autonomy only when your application and infrastructure are ready.
Do AI agents always require long-term memory to work correctly?
Not always. Some tasks only need a short‑term task context. Long‑term or persistent memory becomes useful when the agent must recall past actions, preferences, or task histories. Developers should enable memory only when it improves performance or user experience, since it introduces additional storage, privacy, and governance requirements.
How do I decide which tools or APIs my AI agent should be allowed to access?
Define a clear, minimal set of tools that map directly to the agent’s responsibilities. Each tool should have strict input/output schemas, validation rules, rate limits, and safety checks. Limiting tool scope helps contain agent behavior, improves traceability, and reduces risk while still allowing autonomous action where it’s safe and valuable.
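One way to enforce such an allowlist is a small registry that validates every call before dispatch. This is a minimal sketch; the tool names and the schema format are invented for illustration.

```python
# Allowlist of tools the agent may call, each with a declared parameter schema.
# Names and schemas here are hypothetical.
ALLOWED_TOOLS = {
    "get_order_status": {"params": {"order_id": str}},
    "search_docs": {"params": {"query": str}},
}

def call_tool(name: str, **kwargs):
    spec = ALLOWED_TOOLS.get(name)
    if spec is None:
        raise PermissionError(f"tool {name!r} is not on the allowlist")
    # Validate arguments against the declared schema before executing.
    for param, expected_type in spec["params"].items():
        if param not in kwargs:
            raise ValueError(f"missing required parameter {param!r}")
        if not isinstance(kwargs[param], expected_type):
            raise TypeError(f"{param!r} must be {expected_type.__name__}")
    extras = set(kwargs) - set(spec["params"])
    if extras:
        raise ValueError(f"unexpected parameters: {sorted(extras)}")
    # Dispatch to the real implementation here; stubbed for the sketch.
    return {"tool": name, "args": kwargs}
```

Because every call funnels through one validated entry point, rate limits, logging, and safety checks have a single place to live, which is what makes agent behavior traceable.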
Conclusion
Thank you for reading! AI chatbots and AI agents offer two distinct approaches to building intelligent systems.
Chatbots support conversational UI and guided assistance.
Let’s start with a 7-minute demo video – I didn’t edit this down because I want you to be able to see what happens in real time. In this video, I point the desktop version of Claude Code at a Github issue for the First Responder Kit and tell it to do the needful:
That’s certainly not the most complex Github issue in the world, but the idea in that short video was to show you how easy the overall workflow is, and why you and your coworkers might find it attractive.
Now, let’s zoom out and talk big picture.
The rest of this blog post is not for people who already use Claude Code. I don’t wanna hear Code users complaining in the comments about how I didn’t cover X feature or Y scenario. This is a high-level ~2,000-word overview of what it is, why you’d want it, what you’ll need to talk to your team about, and where to go to learn more.
I should also mention that I use a ton of bullet points in my regular writing. As with all of my posts, none of this is written with AI, period, full stop. These words come directly from my booze-addled brain, written as of March 2026, and this stuff will undoubtedly drift out of correctness over time.
What’s Claude Code?
Think of it as an app (either desktop or command line) that can call other apps including:
sqlcmd – Microsoft’s command-line utility for running queries. You’re used to using SSMS because it’s much prettier and more powerful, but sqlcmd is fine if all you need to do is run queries and get results, and that’s all Claude Code needs to get started. As you get more advanced, you can use something called an MCP that gives Claude Code an easier way to chat with the database.
Git / Github – so that it can get the latest versions of your app code (or DBA scripts, or in this case, the First Responder Kit) from source control, make changes, and submit pull requests for you to review. For the purposes of this post, I’m just gonna use the term Github, but if your company uses a different source control method, the same principles apply.
That means it has access to:
Your Github issues and pull requests – which may present confidentiality issues for your company.
Your local file system – in theory, you might be able to lock this down, but in practice you’re probably going to gradually expand Claude Code’s permissions to let it do more stuff over time.
A database server – so think about where you’re pointing this thing, and what login you give it. If it’s going to test code changes, it’s probably going to need to alter procs, create/alter/drop tables, insert/update/delete test data, etc. On harder/longer tasks, it’s also going to be processing in the background while you’re doing other stuff, so you’re probably going to want to give it its own SQL Server service for its development use so it doesn’t hose up yours.
Your code base – and if everything before didn’t raise security and privacy concerns, this one certainly should.
Think of it as an outside contractor.
When your company hires outside contractors, they put a lot of legal protections in place. They’ll set up:
A non-disclosure agreement to make sure the contractor doesn’t share your secrets with the rest of the world
A contract specifying what exactly each side is responsible for and what they’ll deliver to each other
Insurance requirements to make sure the contractor will be able to pay for any egregious mistakes
Human resources standards to make sure the contractor isn’t high and hallucinating while they work
With AI tools, you don’t really get any of that. That means if you choose to hire one of these tools for your company, all of this is on you. Even worse, anybody on your team can endanger your entire company if they don’t make good decisions along the way. I can totally understand why some/most companies are a little gun-shy on this stuff. It’s right to be concerned about these risks.
Here – and most of the time when you see me working with AI on the blog or videos – I’m working with the open source First Responder Kit, or code that I use as part of my training classes. This stuff is all open source, licensed under the MIT License. I’m not concerned about AI companies stealing my code.
When your company brings in an outside contractor…
The security and legal teams are going to care about:
What Claude Code has access to – aka, Github, your local file system, your development database server, etc.
Where Claude Code sends that data for thinking/processing – you should assume that it’s sending all of the accessible data somewhere
If you send that data outside your company walls for thinking/processing, your company is also going to care about how the thinker/processor uses your data – as in, not just to process your requests, but possibly for analysis to help the overall public or paying users
This leads to one of the big decisions when you’re using Claude Code: where does the thinking/processing happen?
The thinking can be done locally or remotely.
Claude Code is an app, but the thinking doesn’t actually happen in the app. Claude Code sends your data, prompt, database schema, etc somewhere.
Most people use Anthropic’s servers. They’re the makers of Claude Code. For around $100/month per person, you get unlimited processing up in their cloud. The advantage of using Anthropic’s servers is that you’ll get the fastest performance, with the biggest large language models (LLMs) that have the best thinking power, most accurate answers, and largest memories (context). The drawback, of course, is that you’re sending your data outside your company’s walls, and you may not be comfortable with that.
If you’re not comfortable with Anthropic, maybe your company is more comfortable with Google Gemini’s models, or OpenAI’s ChatGPT models. At any given time, it’s an arms race between those top companies (and others, such as hosting companies like OpenRouter) as to who produces the best tradeoffs for processing speed, accuracy, and cost.
If you’re not comfortable with any of those, you can do the processing on your own server. When I say “server”, that could be a Docker container running on your laptop, an app installed on your gaming PC with a high-powered video card, or a shared server at your company with a bunch of GPUs stuffed in it.
In that case, it’s up to you to pick the best LLM that you can, that runs as quickly as possible, given your server’s hardware. There are tiny not-so-bright models that run (or perhaps, leisurely stroll) on hardware as small as a Raspberry Pi. There are pretty smart models that require multiple expensive and power-hungry video cards. But even the best local models can’t compete with what you get up in Anthropic’s servers today.
The good news is that you don’t have to make some kind of final decision: you can switch between hosted and local models by just changing Claude Code’s config file.
The contractor and prompt qualities affect the results.
Generally speaking, the better/newer LLM that you use, and the smaller of a problem you’re working with, the more vague prompts you can get away with, like “we’re having deadlock problems – can you fix that?”
On the other hand, the older/smaller/cheaper LLM that you use – especially small locally hosted models – the more specific and directed your prompts have to be to get great results. For example, you may have to say something like, “sp_AddCustomer and sp_AddOrder are deadlocking on the CustomerDetails table when both procs are called simultaneously. Can you reduce the deadlock potential by making code changes to one or both of those procs? You can use hints, query rewrites, retry logic, whatever, as long as the transactions still finish the same way.”
And no matter what kind of LLM you’re using, the more ambitious your code changes become, the more important the prompt becomes. When I’m adding a major new feature or proposing a giant change, I start a chat session with Claude – not Claude Code, but just plain old Claude, the chat UI like ChatGPT – and say something like:
I’m working on the attached sp_Blitz.sql script, which builds a health check report on Microsoft SQL Server. It isn’t currently compatible with Azure SQL DB because it uses sp_MSforeachdb and some of the dynamic SQL uses the USE command. I’d like to use Claude Code to perform the rewrite. Can you review the code, and help me write a good prompt for Claude Code?
I know, it sounds like overkill, using one AI to tell another AI what to do, but I’ve found that in a matter of seconds, it produces a muuuuch better prompt than I would have written, taking more edge cases of the code into account. Then I edit that prompt, clarify some of my design decisions and goals, and then finally take the finished prompt over to Claude Code to start work there.
For now, I use Claude Code on a standalone machine.
I really like to think of AI tools like Claude Code as an outside contractor.
I’m sure the contractor is a nice person, and I have to trust it at least a little – after all, I’m the guy who hired it, and I shouldn’t hire someone that I don’t trust. Still, though, I gotta put safeguards in place.
So I keep Claude Code completely isolated.
I know that sounds a little paranoid, but right now in the wild west of AI, paranoia is a good thing.
For me, it starts with isolated hardware. A few years ago, I got a Windows desktop to use for gaming, streaming, and playing around with local large language models (LLMs). It’s got a fast processor, 128GB RAM, a decently powerful NVidia 4090 GPU, Windows 11, Github, and SQL Server 2025.
I think of that computer as Claude Code’s machine: he works there, he lives there. That way, I can guarantee none of my clients’ code or data is on there, and it doesn’t have things like my email either. When I wanna work, stream, record videos from that Windows machine, I just remote desktop into it from my normal Mac laptop.
When I wanna do client work without sending the data to Anthropic, I’ve got Ollama set up on that machine too. It’s a free, open source platform for running your own local models. It supports a huge number of LLMs, and there is no one right answer for which model to use. I love finding utilities like llmfit which check hardware to see what models can be run on it, and finding posts like which models run best on NVidia RTX 40 series GPUs as of April 2025 or on Apple Silicon processors as of February 2026, because they help me take the guesswork out of experimenting. I copy client data onto that machine temporarily, do that local work, and then delete the client data again before reconfiguring Claude Code to talk to Anthropic’s servers.
How you can get started with Claude Code
Your mission, should you choose to accept it, is to add a new warning to sp_Blitz when a SQL Server has Availability Groups enabled at the server level, but it doesn’t have any databases in an AG. To help, I’ve written a short, terse Github issue for this request, and a longer, more explicit one so you can also see how the quality of the input affects the quality of your chosen LLM’s code.
To accomplish the task, the bare minimum tasks would be:
Install Claude Code (I’d recommend the terminal version first because the documentation is much better – the desktop version looks cool, but it’s much harder to get started with)
Clone the First Responder Kit repo locally
Prompt Claude Code to write the code – tell it about the Github issue and ask it to draft a pull request with the improved code, for your review
Stretch goals:
Set up a SQL Server instance for Claude Code to connect to – could be an existing instance or a new one
Set up sqlcmd or the SQL Server MCP so Claude Code can connect to it – if you use the MCP, you’ll need to edit Claude Code’s config files to include the server, login, password you want it to use
Prompt Claude Code to test its code
You don’t have to submit your actual work as a pull request – I’m not going to accept any of those pull requests anyway. (I’ll just delete them if they come in – and it’s okay if you do one, I won’t be offended.) These Github issues exist solely to help you learn Claude Code.
How I can help
Unfortunately, I can’t do free personalized support for tens of thousands of readers to get their Claude Code setups up and running. At some point, I might build a paid training class for using Claude Code with SQL Server, and at that point, the paid students would be able to get some level of support. For now, though, I wanted to get this blog post, video, and GitHub issues out there for the advanced folks to start getting ahead of the curve.
However, if your company would like to hire me to help get a jump start on using Claude Code to improve your DBA productivity, proactively find database issues before they strike, and finally start making progress on your known issues backlog, email me.
In this video, I delve into the topic of keeping embeddings fresh in SQL Server databases, a critical aspect often overlooked due to its complexity and lack of straightforward solutions from Microsoft. I explore why regenerating all embeddings can be impractical and discuss more efficient methods like using queue tables and triggers to handle incremental updates. By walking through the setup process and providing examples, including how to create a computed column (despite SQL Server’s current limitations), I aim to equip you with practical strategies to avoid embedding drift and ensure your data remains relevant and accurate. Whether you’re dealing with user profiles, content management systems, or any application that relies on embeddings, this video offers valuable insights into maintaining the integrity of your data.
Erik Darling here with Darling Data and more AI embedding goodness for you fine folks out there in the world. We’re going to talk in this video about embedding freshness or keeping embeddings fresh in SQL Server because right now there is not a terribly good story with doing that. So embeddings of course get stale if the source text is edited, new records are added, or records are deleted. Your options are to regenerate all of them, which can be slow and expensive, or you could use some facility inside of SQL Server to track changes and update incrementally, which is probably the smarter thing to do. We are not going to do the stupid thing because the whole point of this course is to help you avoid the stupid thing, right? Don’t do the dumb thing. If you can take anything away from all of this, don’t do the dumb thing. Let Microsoft be your cautionary tale. Don’t do the dumb thing.
So again, coming back to our dear friends at Microsoft, you would think that they would make this easy, but no. Apparently they didn’t want to make this easy, right? And I don’t know. I kind of don’t blame them on this one, but it would be nice if this were a little more convenient. So like if I wanted to create a table and I wanted to say, hey, I want a computed column. You can already see this squiggle.
Now, to be fair here, SSMS puts red squiggles under a lot of stuff that’s completely valid. It just has no idea what’s going on most of the time and is lost. But this is a legitimate red squiggle. It’s there for a reason this time, at least. We cannot create this table, and the error messages are rather amusing.
It will tell you that the database model doesn’t exist, make sure the name is entered correctly. I think I get this one because my SQL instance is case sensitive, but it’s still funny, right?
That might make you freak out. But we’ve also got incorrect syntax near the keyword USE. So we cannot create a computed column that will just generate this stuff for us. SQL Server is like, nah. No.
And who knows? Maybe we’ll get this someday. Maybe this is a preview feature too. We don’t know. Who could foresee the future when there’s no communication between the PMs who manage things and the end users who care about things? Why would we ever want to know these things?
But what you would generally want to do here, something that is very natural to most people who have had to manage databases, is to create a queue table, which we could do here.
And we can set this up in a way that gives us something to work off of, where we can tell what action we need to take on a row and just process through this queue table and do our updates. I’m going to give you an example trigger for inserts. In real life, you would also want to create other triggers to handle updates and deletes, right?
You would not want to put everything into one trigger, because life gets real weird and complicated when you do that. All that checking of the inserted and deleted tables? Make separate triggers. What’s wrong with you?
But you would do something like this, where you do the normal things, the standard, canonical trigger header: you bail out if ROWCOUNT_BIG() is zero, and you make sure that no other sessions have interfered with important SET options at the session level.
And then you would insert into your queue table the information about what got inserted into the post table. So data goes into the post table.
We put that row into the queue table, and then the queue table gets processed. We’d have something else that processes all the other stuff. From there, you would probably want a cursor, or a separate stored procedure that works off the queue table.
And you would want to loop over it, because generating embeddings is not free. It’s an external call.
It’s outside the database. SQL Server does not generate embeddings on its own. You have to call out to a separate service. It could be a local service on your VM, with a mix of Ollama and something that helps SQL Server talk to Ollama, because Ollama only speaks HTTP as a protocol, and it doesn’t have the HTTPS that SQL Server requires.
So you need something that sits between Ollama and SQL Server to deal with the HTTP-to-HTTPS stuff. And again, it’s not free. It’s not happening within the SQL engine.
It’s outside SQL. And if you were to use OpenAI for this and had to make an internet call, God bless. You know, I used to make fun of people. I’ve worked with lots of clients who have had to install software made by people who hate them.
Right? I made a little joke in, I think, the last video about being a software developer who loves databases. You’d have people with a trigger on a table that would use xp_cmdshell to call curl, or call an executable that would go do something on the internet.
And you’re like, oh, of course it sucks. What’s the matter with your face?
Like, why would you do that? But here, if you just had a stored procedure, again, row at a time or batch processing this stuff, you would want to find any pending items. If it’s a delete, then it’s easy enough to just delete a row.
I only had an insert trigger up there, but again, you would want update and delete triggers to handle other modifications to the table. And if MERGE fires, God help you. Then you would want to delete and reinsert any rows that were marked as an update by the trigger.
And then if it was an insert, you would just stick the new row in. You could also use the last activity date to find any posts that were altered. You wouldn’t need triggers for that, but you would have to poll the table constantly for changes.
The trade-offs there: on the pro side, polling is a bit of a simpler setup, because triggers can be tough to get right, especially under concurrency. The cons are that it’s possible you might miss some changes, and if you want to parallelize the process, with multiple workers looking at the queue table for work, that can get complicated.
A lot of locking hints and other things go into making that work as flawlessly as possible, and you also have the overhead of polling the table and then, when you find work to do, going and doing the work. But the basic idea behind either approach is that you find posts without embeddings that need them.
You find modified posts that need embeddings and re-embed them. And if you delete a row, it doesn’t matter, because that doesn’t go into the polling table. There’s nothing to do.
It’s just gone from the table. But those are some ideas for how to keep embeddings fresh and up to date in your database, because one thing that you don’t want is embedding drift, right?
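The trigger-plus-queue-table pattern described above can be made concrete with a small stand-in sketch. This uses Python with SQLite purely for illustration (SQL Server trigger syntax differs, table and column names are invented, and the embed() function is a stub for the external embedding service call, e.g. Ollama behind an HTTPS shim, that the video discusses):

```python
import sqlite3

# Stand-in demo of the queue-table pattern. SQLite plays the role of
# SQL Server here; in real life the trigger would be T-SQL and embed()
# would call an external embedding service.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE posts (id INTEGER PRIMARY KEY, body TEXT, embedding BLOB);
CREATE TABLE embed_queue (post_id INTEGER, action TEXT);

-- One trigger per action, as discussed; this is the insert one.
CREATE TRIGGER posts_ai AFTER INSERT ON posts
BEGIN
    INSERT INTO embed_queue (post_id, action) VALUES (NEW.id, 'insert');
END;
""")

def embed(text: str) -> bytes:
    # Placeholder: call your real embedding service here.
    return len(text).to_bytes(4, "little")

def work_queue(conn):
    # The "stored procedure" part: drain pending items and re-embed them.
    for post_id, action in conn.execute(
            "SELECT post_id, action FROM embed_queue").fetchall():
        if action in ("insert", "update"):
            (body,) = conn.execute(
                "SELECT body FROM posts WHERE id = ?", (post_id,)).fetchone()
            conn.execute("UPDATE posts SET embedding = ? WHERE id = ?",
                         (embed(body), post_id))
    conn.execute("DELETE FROM embed_queue")
    conn.commit()

conn.execute("INSERT INTO posts (id, body) VALUES (1, 'I love databases')")
work_queue(conn)
```

The same worker loop handles updates (delete and reinsert the embedding) and ignores deletes, which matches the trade-off discussion above: the trigger captures every change, and the worker pays the external-call cost in batches.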
Say our example in the last video was the about-me field in the users table. Someone writes, I’m a software developer who loves databases, one day, and then the next day they’re like, I’m a hot yoga instructor, right?
But your embedding would still say, I’m a software developer who loves databases. And so the headhunters from the Stack Overflow job board would be like, let me get you this job. And they’re like, sorry, I’m just all hot yoga.
I’m just all hot yoga now, right? You want some hot yoga? I got you. But the embedding said you were a software developer who loves databases. And everyone just walks away sad and confused, and no one actually does hot yoga.
And well, what can you do? Anyway, that’s enough here.
I hope you enjoyed yourselves, I hope you learned something, and I will see you in tomorrow’s video, the final video of the week, our Friday, our gal Friday. We’ll do that, I guess.
Thank you for watching.
Going Further
If this is the kind of SQL Server stuff you love learning about, you’ll love my training. Blog readers get 25% off the Everything Bundle — over 100 hours of performance tuning content. Need hands-on help? I offer consulting engagements from targeted investigations to ongoing retainers. Want a quick sanity check before committing to a full engagement? Schedule a call — no commitment required.