Modern infrastructure work is increasingly agent driven, but only if your AI actually understands the platform you're deploying to. This guide shows how to turn GitHub Copilot CLI into an Azure Cosmos DB-aware infrastructure expert by loading the Azure Cosmos DB Agent Kit. In under a minute, you'll give Copilot deep, opinionated knowledge of Azure Cosmos DB best practices so it can review, generate, and optimize your Terraform, Bicep, Docker, and CI/CD configurations directly from your terminal.
TL;DR — Get Started in 30 Seconds
```
npx add-skill AzureCosmosDB/cosmosdb-agent-kit
copilot
> Review my Terraform config for Azure Cosmos DB multi-region best practices:
> @infra/main.tf
```
That’s it! Copilot CLI now has expert-level Azure Cosmos DB infrastructure knowledge for your IaC workflows.
What is GitHub Copilot CLI?
GitHub Copilot CLI is a terminal-native AI coding agent that brings the full power of GitHub Copilot to your command line. Unlike browser-based tools, it works directly in your development environment with full context awareness.
Key Capabilities
| Feature | What it does for Infrastructure Engineers |
| --- | --- |
| Agentic coding | Generate, review, and refactor Terraform, Bicep, and Docker configs |
| File system access | Read and modify IaC modules, Dockerfiles, and CI/CD pipelines |
| Terminal execution | Run `terraform plan`, `docker build`, and `az` CLI commands directly |
You can tell whether the skill is active by how specific the answer is:

```
copilot
> What's wrong with enabling multiple_write_locations without proper conflict resolution?
```

Skill loaded: mentions conflict resolution policies, Last Writer Wins, and custom stored procedures.
Skill not loaded: generic advice about multi-region databases.
Example: Review Terraform Configuration
Use Copilot CLI to review your Azure Cosmos DB Terraform modules for infrastructure anti-patterns.
Prompt:

```
@infra/cosmosdb/main.tf
Review this Terraform config for Cosmos DB best practices:
- Multi-region setup
- Consistency policy
- Throughput configuration
- Backup settings
```
Copilot CLI reads the file directly and applies the skill's 45+ rules to identify issues like:

- Missing zone redundancy for the 99.995% SLA
- Incorrect consistency level for the use case
- Fixed throughput instead of autoscale
- Missing automatic failover configuration
- Suboptimal partition key strategy
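For instance, the autoscale finding usually maps to a container block like this hedged sketch (resource and variable names are illustrative; attribute names follow recent azurerm provider versions):

```hcl
# Hypothetical container flagged by the review.
resource "azurerm_cosmosdb_sql_container" "orders" {
  name                = "orders"
  resource_group_name = var.resource_group_name
  account_name        = azurerm_cosmosdb_account.main.name
  database_name       = azurerm_cosmosdb_sql_database.main.name
  partition_key_paths = ["/tenantId"]

  # Flagged: fixed throughput pins the container to a single RU/s value.
  throughput = 1000

  # Suggested fix: autoscale between 10% and 100% of max_throughput.
  # autoscale_settings {
  #   max_throughput = 4000
  # }
}
```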
Example: Generate Terraform Module
Generate production-ready Azure Cosmos DB Terraform configuration.
Prompt:

```
Generate a Terraform module for Cosmos DB with:
- Multi-region deployment (East US primary, West US secondary)
- Zone redundancy enabled
- Autoscale throughput 400-4000 RU/s
- Hierarchical partition key support
- Continuous backup with 7-day retention
```
What Copilot generates:
| Configuration | Best Practice Applied |
| --- | --- |
| `automatic_failover_enabled = true` | High availability |
| `zone_redundant = true` | 99.995% SLA |
| `partition_key_version = 2` | Hierarchical partition keys |
| `backup.type = "Continuous"` | Point-in-time restore |
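Assembled into a single resource, the generated account typically looks something like this sketch (attribute names per recent azurerm provider versions; container-level settings such as `partition_key_version = 2` and autoscale live on the container resource):

```hcl
resource "azurerm_cosmosdb_account" "main" {
  name                = "cosmos-orders-prod" # illustrative name
  location            = "eastus"
  resource_group_name = var.resource_group_name
  offer_type          = "Standard"

  # Fail over automatically if the primary region goes down.
  automatic_failover_enabled = true

  consistency_policy {
    consistency_level = "Session"
  }

  # East US primary, West US secondary, both zone redundant.
  geo_location {
    location          = "eastus"
    failover_priority = 0
    zone_redundant    = true
  }

  geo_location {
    location          = "westus"
    failover_priority = 1
    zone_redundant    = true
  }

  # Continuous backup enables point-in-time restore; the 7-day tier
  # matches the prompt (tier support depends on provider version).
  backup {
    type = "Continuous"
    tier = "Continuous7Days"
  }
}
```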
Save directly to your infra directory:

```
> Write that to infra/modules/cosmosdb/main.tf
```
Example: Docker Compose with Azure Cosmos DB Emulator
Generate local development infrastructure using the Cosmos DB emulator.
Prompt:

```
Generate a docker-compose.yml for local development with:
- Cosmos DB emulator container
- My .NET API container that connects to it
- Proper networking and health checks
- Environment variables for connection strings
```
Copilot generates the complete Docker Compose configuration:
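A minimal sketch of what such a file tends to look like, assuming the public emulator image and its fixed, publicly documented key (service names, the partition count, and the curl-based health check are assumptions):

```yaml
services:
  cosmosdb:
    image: mcr.microsoft.com/cosmosdb/linux/azure-cosmos-emulator:latest
    ports:
      - "8081:8081"
    environment:
      AZURE_COSMOS_EMULATOR_PARTITION_COUNT: "3"
    healthcheck:
      # Assumes curl is available in the image; the emulator serves its
      # self-signed certificate at this endpoint once it's ready.
      test: ["CMD", "curl", "-k", "https://localhost:8081/_explorer/emulator.pem"]
      interval: 15s
      retries: 10

  api:
    build: .
    depends_on:
      cosmosdb:
        condition: service_healthy
    environment:
      COSMOS_ENDPOINT: "https://cosmosdb:8081"
      # The emulator ships with this well-known, non-secret key.
      COSMOS_KEY: "C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw=="
```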
```
> Save that to docker-compose.dev.yml
```
Example: Review Indexing Policy for IaC
Optimize the indexing policy defined in your Terraform or Bicep configuration.
Prompt:

```
@infra/cosmosdb/main.tf
My container queries by tenantId, status, and orders by createdAt.
Review and optimize the indexing_policy block for these query patterns.
```
What Copilot suggests:
| Optimization | Impact |
| --- | --- |
| Add composite index `(tenantId, createdAt DESC)` | Efficient ORDER BY |
| Exclude `/payload/*` from indexing | Reduce storage costs |
| Use `included_path` instead of wildcard | Lower RU for writes |
| Add spatial index if using geo queries | Enable geo-filtering |
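Put together, the optimized block might look like this sketch inside an `azurerm_cosmosdb_sql_container` resource (paths are illustrative):

```hcl
indexing_policy {
  indexing_mode = "consistent"

  # Index only what the queries filter and sort on...
  included_path { path = "/tenantId/?" }
  included_path { path = "/status/?" }
  included_path { path = "/createdAt/?" }

  # ...and exclude everything else (including /payload/*) to lower
  # write RU cost and storage.
  excluded_path { path = "/*" }

  # Serves WHERE tenantId = ... ORDER BY createdAt DESC efficiently.
  composite_index {
    index {
      path  = "/tenantId"
      order = "Ascending"
    }
    index {
      path  = "/createdAt"
      order = "Descending"
    }
  }
}
```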
Example: CI/CD Pipeline Integration
Generate GitHub Actions workflow for Cosmos DB infrastructure deployment.
Prompt:

```
Generate a GitHub Actions workflow that:
- Runs terraform plan on PRs
- Runs terraform apply on merge to main
- Uses OIDC authentication to Azure
- Has separate jobs for dev/staging/prod environments
```
Copilot generates the workflow and can write it directly:
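A condensed sketch of such a workflow, showing the PR plan job and one apply job (the directory layout, secret names, and single prod environment are assumptions; repeat the apply job per environment):

```yaml
name: infra-deploy
on:
  pull_request:
    paths: ["infra/**"]
  push:
    branches: [main]

permissions:
  id-token: write # required for OIDC federation to Azure
  contents: read

jobs:
  plan:
    if: github.event_name == 'pull_request'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: azure/login@v2
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
      - uses: hashicorp/setup-terraform@v3
      - run: terraform -chdir=infra init
      - run: terraform -chdir=infra plan

  apply:
    if: github.event_name == 'push'
    runs-on: ubuntu-latest
    environment: prod # one job per dev/staging/prod, each with its own approvals
    steps:
      - uses: actions/checkout@v4
      - uses: azure/login@v2
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
      - uses: hashicorp/setup-terraform@v3
      - run: terraform -chdir=infra init
      - run: terraform -chdir=infra apply -auto-approve
```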
```
> Write that to .github/workflows/infra-deploy.yml
```
Example: Bicep Module Review
Review Azure Bicep configurations for Azure Cosmos DB best practices.
Prompt:

```
@infra/cosmosdb.bicep
Review this Bicep module for:
- Proper use of parameters vs variables
- Security best practices (private endpoints, managed identity)
- Cost optimization settings
```
Tips for Effective Prompts
| Technique | Example |
| --- | --- |
| Reference IaC files with `@` | `@infra/main.tf Review this for multi-region best practices` |
| Be specific about concerns | "Review for HA and DR settings" vs. "Review this config" |
| Chain actions | "Generate the module, then write it to infra/modules/cosmosdb/" |
| Ask for explanations | Add "…and explain the tradeoffs" for architecture decisions |
| Include constraints | "…with budget under $500/month" or "…for dev environment" |
Troubleshooting
| Problem | Solution |
| --- | --- |
| Skill not loading | Update the CLI: `winget upgrade GitHub.Copilot` |
| Generic responses | Verify the skill exists: `ls ~/.copilot/skills/cosmosdb-best-practices/` |
About Azure Cosmos DB
Azure Cosmos DB is a fully managed, serverless NoSQL and vector database for modern app development, including AI applications. With SLA-backed speed and availability and instant dynamic scalability, it is ideal for real-time NoSQL and MongoDB applications that require high performance and distributed computing over massive volumes of data.
In a previous post, I deployed a model to a database using SQL Compare 16. This used a new feature that connects to Redgate Data Modeler. In this post, I want to update my model and again use SQL Compare to deploy just the changes.
There’s a video of this post at the bottom if you’d rather watch me work.
As with the last article, everything was in sync with SQL Compare. You can see this below.
Let’s alter a few things. First, I’ll add a new table. I wrote about this in another post, but I’ll click new table, click in the diagram and then fill in details. In this case, I’m creating the Organization table.
I'll also alter an existing table. After selecting the table in the diagram, I'll click "Add column" in the lower right of the properties blade.
I fill in some details here.
I’ve made my changes, so let’s now return to SQL Compare and click “Refresh” at the top. This re-runs the comparison and as you can see, I have some changes. My new table is listed at the top, and I’ve clicked on the altered table, UserAuthProvider. At the bottom, I can see the change in the diff view.
I'll click "Deploy" just as I did previously and run the deployment. Once it completes, I can see the changes in my database.
Summary
If there’s one thing I’ve learned in many years of work, it’s that I’ll make mistakes in my design and I need to change things. Hopefully I catch these mistakes in development, but even when I do, I need to update my dev database.
This post showed how I can adjust my model, or someone else can, and I can then pull the new changes into my database with SQL Compare 16.
SQL Compare is an amazing tool that millions of users have enjoyed for 25 years. If you’ve never tried it, give it an eval today and see what you think. Give Redgate Data Modeler a try and see if it helps you and your team get a handle on your database.
In this video, I delve into the often-overlooked world of date and datetime data types in SQL Server. Erik Darling from Darling Data shares his insights on why these data types are frequently mishandled and how to properly manage them for accurate results. We explore the nuances of date formatting across different languages and regions, emphasizing the importance of using unambiguous date formats to avoid sorting issues and other anomalies. I also discuss Microsoft’s recommendation to use `datetime2` over `datetime`, highlighting its advantages in precision and portability, which are crucial for globally deployed applications. The video covers practical tips on using the `convert` function with appropriate styles to ensure your dates are handled correctly, making it a must-watch for anyone working with date data in SQL Server.
Full Transcript
Erik Darling here with Darling Data. Feeling extra optimistic about this video. Why is my phone being? Anyway, in today’s video, we’re going to talk about continuing to love our data types. And we are going to do that today with dates. So we have much to go over here. So get a coffee or something. Down in the video description, all sorts of ways that you can help me pay for stuff. You can hire me for consulting, buy my training, become a supporting member of the channel. And of course, if you just want to harass me with questions, you can do that. There’s a link. Ask me office hours questions. Without you, we have no office hours. I’m not asking myself questions. That would be weird. I don’t want to think about that world. And of course, if you enjoy this content, please do like, subscribe, tell a friend, all that stuff. We are well on our way to having nearly 7,852 subscribers, which is miraculous in this day and age.
Out in the world: Data Saturday Nashville and Data Saturday Chicago, that'll be out in March. I'll have some new stuff to add to this slide shortly. There have been some other stirrings and acceptances. I don't know. I don't know how I'm going to do that. I might need, I might need two slides to talk about all the stuff that I'm going to be doing. Maybe even three. We'll see. If I can only fit two per slide, man, we're in trouble. It's going to be, it's going to be a crazy year for Darling Data getting out in the world. Tell you that much. But for now, let us cherish the spirit of the season and talk about dates.
Now, dates are without question the data type or dates and date times and all that stuff are without question the data type that I see people behave the laziest with. The absolute laziest. And SQL Server is partially to blame for that because SQL Server goes out of its way to make this easy on you. There are all sorts of crazy things SQL Server will do to accommodate whatever nonsense you type into your strings and expect it to infer the date timiness of.
So, first, let's just start by using this function, sys.dm_exec_describe_first_result_set. And we have some candidate dates in here. We think these are very datey. We think SQL Server should be able to date these appropriately. But alas, all of these columns come back as being various takes on varchar. We have one that is an eight. That is this one right here.
And then we have a bunch of tens. Just because you type a date or a date time into a string does not mean that SQL Server automatically says, Oh, look at this wonderful date or date time that this caring user typed in here. I will infer it as such. It does not do that. All right. So, sometimes we might have to tell it or nudge it or give SQL Server a little bit of extra information in order for it to understand what we’re after.
Using the convert function, even without a style here, but I am a big proponent of using styles. We just can’t use one here for reasons that we’ll discuss. If we do this, then all of a sudden SQL Server figures out, yes, these are all dates.
Thank you for pointing that out, right? We have successfully converted our strings to the correct data type with the magic of the convert function. Wonderful stuff. Wonderful.
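A sketch of the kind of demo being described, with illustrative column names:

```sql
-- String literals that look like dates still come back as varchar
-- until you explicitly convert them.
SELECT name, system_type_name
FROM sys.dm_exec_describe_first_result_set(N'
    SELECT ''20251201''                  AS looks_like_a_date, -- varchar(8)
           ''2025-12-01''                AS also_looks_datey,  -- varchar(10)
           CONVERT(date, ''2025-12-01'') AS actually_a_date    -- date
', NULL, 0);
```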
It’s fantastic the things you can do when you type a little bit more. The reason why we can’t use a style on these, though, is because, well, you know, I’m an American and, you know, my country tis of thee and all that. But we are completely alone in our treatment of the date of date formatting.
Let’s do our magic SSMS 22 zoom here. We are the only one. And while the USA is number one, it is quite lonely on top, at least in this DMV.
You'll see that U.S. English is the only one where the date format is treated as month, day, year. There are 29, I think, if I'm remembering correctly, sorry, 24 rows, 24 rows where the date is treated as day, month, year in the format. Some friends of ours, like the British and the Danish and even I have one Czechoslovakian friend, so we'll.
Or Czech Republican friend, what do you call them these days? I don’t know. Those maps keep changing.
It happened to Gorbachev. But there’s a whole bunch of languages in here that do not accommodate the month, day, year format. There’s many things in here that, well, some of these quite suspect, quite suspect typing in there, but that’s not the point.
Anyway, there are even countries that do things without MDY or DMY. There are some that do YMD, like our friends the Croatian-Lovakians and the Lithuanian-Lovakians, right? The Swedish, who are not Swiss, right?
So all of these different countries and languages, I shouldn’t say countries, these are languages, right? I mean, some of them are, you know, specific to a country, but some of them are definitely multi-country languages. But these ones are treated even different.
And like something I didn’t point out in the others, but there is even some disagreement. Even once we have sorted out the date format that we want to use, some people disagree about which day should be first in the week, Monday or Sunday. So we don’t, so there is disagreement about how this gets treated, even when we have agreement on date format.
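The list being browsed here is presumably sys.syslanguages, which exposes both settings per language:

```sql
-- One row per language: its date-part order and first day of the week.
SELECT name, alias, dateformat, datefirst
FROM sys.syslanguages
ORDER BY dateformat, name;
```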
So you have to be very careful out there when you’re building SQL Server applications and queries to make sure that when you pass in dates and date times and all those other things, that you are quite careful to write unambiguous dates that cannot be confused. Because there are many ambiguous dates.
Now, we’re going to go back sort of to our original problem, which is that SQL Server does not automatically infer dates just because a string looks like a date, right? So we have three dates here. We have December 1st, 2025.
And then, well, I guess there is ambiguity in either of these and all of these, right? Because this could be December 1st of 2025. This could also be January 12th of 2025.
And likewise, these could be January 2nd or February 1st of 2025, 2026. We don’t know, right? But the point is that without SQL Server knowing these are dates, SQL Server sorts these as strings, which is completely understandable.
I would not want to figure this out either, right? So 12 gets sorted down here, even though technically that year is earlier. We do not automatically infer any datiness from our strings.
Which gets even weirder. I mean, not weird like bad weird, but just like where you might find weirdness in your applications if maybe someone from outside of America uses your application. You might find some oddities if you are not passing in dates, again, in a consistent, unambiguous format.
So let’s look at this. So we’re going to set the language to English, which, you know, yeah, U.S. English, right? Proper English.
But over in our results pane, right? We get back these three rows in order, one, two, three, which as far as the U.S. dating system goes, this is correct, right? We have December 1st of 2025.
We have January 1st of 2026. And we have February 2nd, sorry, February 1st of 2026. So according to U.S. dating policy, this is correct.
But according to our friends in Britain, this is different. And I assume that British also encompasses Canada, Australia, New Zealand. And I don’t quite know where else.
But let’s just go with those ones. If we set the language to British, right, we look over in the messages tab, we will see we are very British now. We are hip, hip, cheerio British.
And we look in our results. The row ordering has changed, right? Before it was one, two, three. Now it is one, three, two. Right? We still, well, I mean, this changes. I mean, this is no longer December 1st of 2025.
This is now January 12th of 2025. But these two rows here that were ambiguous to us, right? These flipped, right?
Because according to British dating policy, these are different from American dating policies. And so these dates, they are not what we think they are. Actually, none of these dates are what we think they are.
We have been lied to. We’ve been run amok, bamboozled, hornswoggled. I forget how the rest of that thing goes. But all of those things, right? Swindled.
Swindled. So really, what you should always do is put the year first. Four-digit year first. That will at least buy you some forgiveness from me. As if you always put the year first.
Because you reduce ambiguity when you put the year first. Where things may still be ambiguous. And let’s just, let’s quote this out so it doesn’t get in the way. Sorry, English.
We’ll come back to you, I promise. Where things that remain ambiguous is with sort order. Because if we set the language to British, we do not have a reliable sort order for our rows, for our duplicates. Right?
Because, you know, these dates basically repeat. Right? Like row one is 2025, 1201. Row four is 2025, 1201. There’s some dashes in here for some of these.
So you can even be forgiven some dashiness if you, if you, long as you put the four-digit year first. But if we want, if we want unambiguous sorting, then we also need to have a deterministic way to sort this data when we encounter duplicates. So remaining with U.S.-British, now we get the data back in 142536 instead of 415263.
But these dates have all been applied correctly to us. Oh dear, I’m getting some green screen artifacts. We’ll have to stay close to the camera so we don’t mess anything up.
But if we bring, if we set this to U.S. English now, again, the number one English, we will get back the same thing without the ambiguity. SQL Server is no longer switching things around on us. Right?
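A condensed sketch of the language demo, assuming a literal like the ones on screen:

```sql
SET LANGUAGE us_english;
SELECT CONVERT(datetime, '01/12/2025') AS parsed; -- January 12th, 2025 (MDY)

SET LANGUAGE british;
SELECT CONVERT(datetime, '01/12/2025') AS parsed; -- December 1st, 2025 (DMY)
```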
So things came back the way that we expected them to. Where things get, so we’ve just been talking about dates. And what’s crazy, wild to the extreme, is that Microsoft considers date to be a more modern data type than date time. We’re going to talk about that a little bit more in a minute.
But date time is technically a legacy data type. And Microsoft even says you shouldn’t use it anymore. Right? Don’t use date time. Use date time 2 instead with whatever precision you need.
It goes from 0 to 7. Do all sorts of things with it. But date time is no longer good to use. And that’s especially painful for me to talk about because the very, wow, this green screen is going crazy on me here. What’s happening?
There we go. Maybe I should, maybe I’ll stay here. The reason why that’s so painful and crazy to me is because the Stack Overflow database that I use, all of the date columns are date times. And I hate it because I look at them and I have to write procedures that use date time.
Because who has the patience to convert all that stuff over? Do you have any idea how many demos I’d have to change? It would be a nightmare.
Right? And then if I did that and I asked anyone else to look at something and their copy still had date time, who knows? Right? So it would just make no sense. But anyway, we’re stuck with date time in Stack Overflow land. We’re also stuck up going up to, we’re also stuck with the world ending in 2014.
So we got an assortment of problems. But anyway, where things get even weirder is when you get dashes involved with date times. Right?
So what we’re going to do here is we’re going to look way ahead to, this is the final Friday the 13th of 2026. It will be in November. November 13th of 2026.
That’ll be a Friday the 13th. And if we try to do this in English, this will work. SQL Server is like, oh, you’re an American date. Gotcha.
No problem. Right? So US English, this is totally fine. This arrangement works beautifully. But British English, this stops working when we put dashes involved and we try to convert to a date time. This now returns a null because there is no 13th month.
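A sketch of that demo, presumably using TRY_CONVERT, since plain CONVERT would raise an error rather than return NULL:

```sql
SET LANGUAGE us_english;
SELECT TRY_CONVERT(datetime, '11-13-2026') AS friday_the_13th; -- works: MDY

SET LANGUAGE british;
SELECT TRY_CONVERT(datetime, '11-13-2026') AS friday_the_13th; -- NULL: DMY has no month 13
```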
There is no 11th day in the 13th month. I’ve heard various rumors that if we had 13 months in the year, that we would have, all of them would have 30 days. And all of them would start on a Monday and end in a Friday.
And I’m like, wow, we’ve really made things complicated for ourselves. All these 12 months and various other things. So I don’t know.
Maybe, maybe that would be better. I don’t know. Anyway. I’m not even sure if that’s true. Maybe I was lied to. Maybe Wikipedia lied to me. There are some things that I want to talk about before we end this video.
The first one is I hate the cast function. Cast is for lazy dummies. And I say this having been a lazy dummy many times in my own past. Having to be confronted with old code that I’ve written where I see cast use.
And I’m like, man, I screwed up. So learn from me. Learn from the patient zero lazy dummy. Don’t use cast.
Use convert. Convert has styles. And those styles can be very, very useful when trying to figure out if your things are ambiguous. I mean, there’s all sorts of things that you have to get crazy with. And then when you’re dealing with binary and XML and other stuff.
But those are outlandish cases. For the general SQL Server user, you will care very much about styles for having unambiguous date formats that don’t mess things up. For example, 112 is great if you just need dates.
121 is actually the default for time, date, datetime2, and datetimeoffset. Great stuff.
Right? And 127 is ISO 8601 with timezone Z. Right? So this thing at the end. Right? Timezone Z. So very important stuff here.
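A quick sketch of those styles side by side:

```sql
DECLARE @now datetime = GETUTCDATE();
SELECT CONVERT(char(8),     @now, 112) AS style_112, -- yyyymmdd, date only
       CONVERT(varchar(23), @now, 121) AS style_121, -- yyyy-mm-dd hh:mi:ss.mmm
       CONVERT(varchar(30), @now, 127) AS style_127; -- yyyy-mm-ddThh:mi:ss.mmmZ
```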
If you go to the documentation for, well, I mean, we should just not, like, really the documentation that we care about is for convert. Cast has very little documentation. It’s just garbage.
Don’t use cast. Lazy. The documentation for convert has all sorts of helpful things that can help you use convert better. Right?
So we do care about that. Another thing, and this is coming back to something that I said earlier that I didn’t, I said, we’ll talk about this in a minute. And here we are talking about this in several minutes. But, okay.
And that is at the documentation for DateTime. Right? And that is avoid using DateTime for new work. Right? Do not do it.
Right? It is a legacy data type. It is no longer good enough. Instead, use the time, date, DateTime2, and DateTimeOffset data types. These types align with the SQL standard and are more portable.
Time, DateTime2, and DateTimeOffset provide more seconds precision. DateTimeOffset provides timezone support for globally deployed applications. And, gosh darn it, don’t we want our applications to be globally deployed?
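A quick sketch of those modern types in action:

```sql
DECLARE @d   date              = '2026-11-13';
DECLARE @dt2 datetime2(3)      = '2026-11-13 09:30:00.123';
DECLARE @dto datetimeoffset(3) = '2026-11-13 09:30:00.123 +08:00';
SELECT @d AS just_the_date, @dt2 AS more_precision, @dto AS with_time_zone;
```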
Why settle for dominating a single market when you can dominate them all? Right? Why would you want to do that? So, I told you this video would be a little bit more involved than the string one.
I didn’t lie to you. Please, do not use ambiguous dates or date formats in your code. Um, I’ve seen so many dumb things crop up over the years, either because of implicit conversions, or a lack of implicit conversions, or errors, or just, like, things not functioning correctly because people decided to type dates stupidly.
And they paid dearly for it. They paid me dearly for it. Which I appreciate. But, you know, if you want to avoid paying me dearly for these things, use consistent, unambiguous date formats.
And please, use the convert function with a style. Just, I ask very little of you. Please do these things.
Anyway, thank you for watching. Hope you enjoyed yourselves. I hope you learned something. And I’ll see you in tomorrow’s video, where I’m going to go crazy and I’m going to defend merge. Alright.
Adios.
Going Further
If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 25% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that, and need to solve database performance problems quickly. You can also get a quick, low cost health check with no phone time required.
Normally, when SQL Server updates statistics on an object, it invalidates the cached plans that rely on that statistic as well. That’s why you’ll see recompiles happen after stats updates: SQL Server knows the stats have changed, so it’s a good time to build new execution plans based on the changes in the data.
However, updates to system-created stats don’t necessarily cause plan recompiles.
This is a really weird edge case, and you’re probably never gonna hit it, but I hit it during every single training class I teach. I casually mention it each time to the class, and I don’t even take much notice of it anymore. However, a student recently asked me, “Is that documented anywhere?” and I thought, uh, maybe, but I’m not sure, so might as well document it here on the ol’ blog.
To illustrate it, I’ll take any version of the Stack Overflow database (I’ll use the big 2024 one), drop the indexes, free the plan cache. I’m using compat level 170 (2025) because I’m demoing this on SQL Server 2025, and I wanna prove that this still isn’t fixed in 2025. Then run a query against the Users table:
```sql
DropIndexes;
GO
DBCC FREEPROCCACHE;
GO
ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL = 170; /* 2025 */
GO
SELECT TOP 101 *
FROM dbo.Users
WHERE Location = N'Netherlands'
ORDER BY Reputation DESC;
GO
```
To run this query, SQL Server needs to guess how many rows will match our Location = N'Netherlands' predicate, so it automatically creates a statistic on the Location column on the fly. Let's check it out with sp_BlitzIndex, which returns a result set with all of the stats histogram data for that table:
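The call looks like this (the same one used again later in the post):

```sql
sp_BlitzIndex @TableName = 'Users';
GO
```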
I’m going to scroll down to the Netherlands area, and show a few more relevant columns. You’ll wanna click on this to zoom in if you want to follow along with my explanation below – I mean, if you don’t trust my written explanation, which of course you do, because you’re highly invested in my credibility, I’m sure:
Things to note in that screenshot:

- It's an auto-created stat with a name that starts with _WA_Sys (system-created)
- It was sampled: note the fractional numbers for range rows and equal rows, plus notice the "Rows Sampled" near the far right
- It was last updated at 2025-12-24 01:10:55.0733333 – which tells you that I'm writing this post on Christmas Eve day, but the timing is odd because I'm writing this at a hotel in China, and my server is in UTC, so God only knows what time it is where you're at, and no, you don't have to worry about my mental health even though I'm blogging on Christmas Eve Day, because I'm writing this at the hotel's continental breakfast while I wait for Yves to wake up and get ready, because we're going out to Disney Shanghai today, which has the best popcorn varieties of any Disney property worldwide, and you're gonna have to trust me on that, but they're amazing, like seriously, who ever knew they needed lemon popcorn and that it would taste so good
- Note the estimate for Netherlands: Equal Rows is 17029.47
Now let’s say we’re having performance issues, so we decide to update statistics with fullscan. Then, we’ll check the stats again:
```sql
UPDATE STATISTICS dbo.Users WITH FULLSCAN;
GO
/* Check the stats on Location again: */
sp_BlitzIndex @TableName = 'Users';
GO
```
The updated stats for Netherlands have indeed changed:
Stuff to note in that screenshot after you click on it while saying the word “ENHANCE!” loudly:
- Netherlands Equal Rows has changed to 17100
- The numbers are all integers now because Rows Sampled is the same as the table size
- Stats last updated date has changed to 2025-12-24 01:17:54.5133333, so about 6 minutes have passed, which gives you an idea of what it's like writing a blog post – this stuff looks deceivingly easy, but it's not, and I've easily spent an hour on this so far, having written the demo, hit several road blocks along the way, then started writing the blog post and capturing screen shots, but it doesn't really matter how long it takes because I'm sure Yves will be ready "in five minutes", and we both know what that means, and by "we both" I mean you and I, dear reader
So if I run the query again, it gets a new plan, right? Let’s see its actual execution plan:
Focus on the bottom right numbers: SQL Server brought back 17,100 rows of an estimated 17,030 rows. That 17,030 estimate tells you we didn’t get a new query plan – the estimates are from the initial sampled run of stats. Another way to see it is to check the plan cache with sp_BlitzCache:
I’ve rearranged the columns for an easier screenshot – usually these two columns are further out on the right side:
- # Executions – the same query plan has been executed 2 times.
- PlanGenerationNum – 1 because this is still the first variation of this query plan
So, what we’re seeing here is that if a system-generated statistic changes, even if the contents of that stat changed, that still doesn’t trigger an automatic recompilation of related plans. If you want new plans for those objects, you’ll need to do something like sp_recompile with the table name passed in.
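For example:

```sql
/* Marks the table so every plan referencing it recompiles on next use: */
EXEC sp_recompile N'dbo.Users';
```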
In the real world, is this something you need to worry about? Probably not, because in the real world, you’re probably more concerned about stats on indexes, and your plan cache is likely very volatile anyway. Plus, in most cases, I’d rather err on the side of plan cache stability rather than plan cache churn.
Now, some of you are going to have followup questions, or you’re going to want to reproduce this demo on your own machines, with your own version of SQL Server, with your own Stack Overflow database (or your own tables.) You’re inevitably going to hit lots of different gotchas with demos like this because statistics and query plans are complicated issues. For example, if you got a fullscan on your initial stats creation (because you had a really tiny object – no judgment – or because you had an older compatibility level), then you might not even expect to see stats changes. I’m not going to help you troubleshoot demo repros here for this particular blog post just because it’d be a lot of hand-holding, but if you do run into questions, you can leave a comment and perhaps another reader might be willing to take their time to help you.
An open-source AI agent originally called Clawdbot (now renamed Moltbot) is gaining cult popularity among developers for running locally, 24/7, and wiring itself into calendars, messages, and other personal workflows. The hype has gone so far that some users are buying Mac Minis just to host the agent full-time, even as its creator warns that's unnecessary. Business Insider reports: Founded by [creator Peter Steinberger], it's an AI agent that manages "digital life," from emails to home automation. Steinberger previously founded PSPDFKit. In a key distinction from ChatGPT and many other popular AI products, the agent is open source and runs locally on your computer. Users then connect the agent to a messaging app like WhatsApp or Telegram, where they can give it instructions via text.
The AI agent was initially named after the "little monster" that appears when you restart Claude Code, Steinberger said on the "Insecure Agents" podcast. He formed the tool around the question: "Why don't I have an agent that can look over my agents?" [...] It runs locally on your computer 24/7. That's led some people to brush off their old laptops. "Installed it experimentally on my old dusty Intel MacBook Pro," one product designer wrote. "That machine finally has a purpose again."
Others are buying up Mac Minis, Apple's 5"-by-5" computer, to run the AI. Logan Kilpatrick, a product manager for Google DeepMind, posted: "Mac mini ordered." It could give a sales boost to Apple, some X users have pointed out -- and online searches for "Mac Mini" jumped in the last 4 days in the US, per Google Trends. But Steinberger said buying a new computer just to run the AI isn't necessary. "Please don't buy a Mac Mini," he wrote. "You can deploy this on Amazon's Free Tier."