I've been using AI coding agents a lot lately - tools like Claude Code, GitHub Copilot CLI, and OpenCode. And the more I use them, the more new use-cases I keep discovering.
Most people think of AI coding tools as just that - tools that write code. And yes, they do that, and they do it really well - but they also do much more than that.
Writing Code
First, let's start with writing code, before we dig into other use-cases. The speed at which AI generates code can initially seem terrifying. But what I now find genuinely terrifying is that I used to write all of that by hand! The sheer amount of time I spent writing code that just did small, mundane things. Boilerplate, plumbing, wiring things up - all of it. Looking back, it's crazy how much time that took!
I rarely write code by hand any more. I describe what I want, and the agent generates the code. And the code it produces is genuinely good - often much better than what I'd have quickly thrown together myself. And I'm also no longer restricted to just languages I know - I recently created a Garmin app for my watch that used Monkey C!
And when I say "code", I mean all of it - application code, infrastructure-as-code, CI/CD pipelines, Dockerfiles, Kubernetes manifests, Terraform, the lot. From the agent's perspective, there's no distinction between "application code" and "infrastructure code" - it's all just text and tools. No more hand-writing YAML! 🎉
AI also doesn't have the mental resistance that a human has - for example, when faced with various time and business pressures, it's common for a human developer to skip "just a few" of those automated test cases. "Let me just write enough test coverage to get it through the PR review" (sound familiar?). AI doesn't have that constraint - it can generate extensive test coverage without the mental fatigue or resistance, and in a fraction of the time. Why wouldn't you take advantage of that?
The Agentic Loop
There's a concept that underpins a lot of what makes AI coding agents so effective, and it's worth calling out explicitly - the agentic loop.
When an AI agent is working on a task, it doesn't just have one go and hand you the result. It tries something, then verifies its own work. If the build fails, it reads the errors and fixes them. If the tests fail, it looks at the failures and iterates. It keeps going round this loop - try, verify, fix, repeat - until the task is done.
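Conceptually, the loop is simple. Here's a minimal sketch in Python - the `task`, `check`, and `fix` functions are hypothetical stand-ins for whatever generation, verification (builds, tests), and editing steps the agent actually performs:

```python
# Minimal sketch of an agentic loop: try, verify, fix, repeat.
# task/check/fix are placeholders for the agent's real steps.

def run_agentic_loop(task, check, fix, max_iterations=10):
    """Iterate until the work passes verification, or give up."""
    result = task()
    for _ in range(max_iterations):
        ok, errors = check(result)
        if ok:
            return result             # verification passed - task is done
        result = fix(result, errors)  # feed the errors back in and retry
    raise RuntimeError("Gave up after max_iterations")
```

The key point is that the errors flow back into the next attempt automatically - you only see the final, verified result.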
Using AI is no longer "type a prompt into a textbox, then manually copy code snippets out of the response", followed by the typical back and forth complaining about AI hallucinating. Hallucination is still a thing - but we see much less of it, thanks to this hands-off, self-correcting agentic loop.
Calling CLI Tools You Already Have
This one is huge, and I think it's massively underappreciated.
Your machine is already full of powerful command-line tools - kubectl, docker, az, dotnet, terraform, PowerShell, the Aspire CLI, and countless others. You probably use a handful of them regularly, but let's be honest - how many of us actually remember the syntax for anything beyond the basics? And even if you do - it takes time to type - so you're focusing on that, rather than what you're actually trying to achieve.
AI coding agents have access to those commands, and for most of them they'll already know all the syntax and tricks anyway. If they don't - they can quickly find out. Then they can just run these tools directly for you - the tools become an implementation detail, rather than something you have to worry about. You don't need to remember the syntax - you just tell the agent what you want to achieve, and it figures out which commands to run.
A few weeks ago, my Kubernetes cluster was reaching capacity. Rather than manually inspecting deployments and working out what was over-provisioned, I just told Claude Code to use kubectl to explore the cluster and see if anything could be optimised. I also pointed it at the code for the projects I was hosting. It inspected resource requests/limits across all my deployments, identified things that were over-provisioned, and made a bunch of suggestions to clean things up. The whole process would have taken me ages - it took the agent minutes. It also meant I didn't need to increase my cluster's capacity - so immediately saved me money.
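To give a flavour of the kind of analysis the agent performed, here's a rough sketch in Python. The data shape and the 50% threshold are made up for illustration - in practice the agent gathered the real numbers itself via kubectl:

```python
# Illustrative sketch: flag deployments whose CPU usage is well below
# their CPU request. The data shape and threshold are assumptions -
# the agent collected the real figures via kubectl.

def find_overprovisioned(deployments, max_utilisation=0.5):
    """Return names of deployments using less than max_utilisation
    of their requested CPU (millicores)."""
    flagged = []
    for d in deployments:
        if d["cpu_used_m"] < d["cpu_request_m"] * max_utilisation:
            flagged.append(d["name"])
    return flagged

deployments = [
    {"name": "api", "cpu_request_m": 1000, "cpu_used_m": 120},
    {"name": "worker", "cpu_request_m": 250, "cpu_used_m": 200},
]
print(find_overprovisioned(deployments))  # only "api" is flagged
```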
Similarly, the Aspire team have recently ramped up what the Aspire CLI can do - it can now pull logs, traces, and metrics. This means your AI coding agent can just call the Aspire CLI to get all of that observability data. Combine that with the fact that the agent also has access to your code, and it can connect the dots between what's happening at runtime and what's in the codebase. That's incredibly powerful.
Debugging and Investigating Issues
Because the agent has access to logs, traces, and metrics - whether that's via the Aspire CLI, a Grafana MCP, or just reading log files - and it also has access to the code itself, it can investigate issues far faster than I can. It can trace a problem from the symptoms in the logs right through to the offending line of code, all in one flow.
But it's not just code bugs. A few weeks ago, my machine was running slowly. I couldn't figure out why. So I told Claude Code to investigate it. The agent used a whole bunch of command-line tools to look at event logs and system diagnostics. It found that my network driver was outdated and was raising a huge number of error events, which was dragging the whole system down. I would never have found that myself - at least not without a lot of frustration and wasted time.
Same goes for CI pipeline failures - I quite often just give the build ID to the agent and tell it to investigate and fix the issue. It'll grab the build logs - via the gh CLI for GitHub Actions, or the Azure DevOps MCP for Azure DevOps (those are the two I use) - and do the rest itself!
Throwaway Scripts
There are tasks that come up from time to time where you could write a script to automate them, but the effort of writing, testing, and debugging that script just isn't worth it for a one-off job. So you end up doing the task manually, which is tedious and error-prone.
AI agents completely change that. They can knock out a throwaway script in seconds, use it, and then you just delete it.
Here's a real example: I wanted to export my newsletters from Beehiiv and convert them into Markdown files so they could be source-controlled. The Beehiiv export gives you CSVs full of HTML - completely unreadable. I told Claude Code to take the CSV and convert all the newsletter issues into individual Markdown files. I also told it to use the Playwright CLI to verify that the converted output matched the original.
The agent then created a Python script full of regular expressions to bulk-convert the HTML. It ran Playwright to verify the output, found mistakes, went back and modified the Python script, and kept iterating until everything matched up perfectly - all without my involvement. Once it was done, we just deleted the script. Job done.
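A heavily simplified sketch of what such a throwaway conversion script might look like - the real one handled far more tags and edge cases, but this illustrates the shape of it:

```python
import re

# Throwaway sketch: convert a few common HTML tags to Markdown using
# regular expressions. The real script handled many more tags and
# edge cases - this just shows the general approach.

RULES = [
    (re.compile(r"<h1>(.*?)</h1>", re.S), r"# \1"),
    (re.compile(r"<h2>(.*?)</h2>", re.S), r"## \1"),
    (re.compile(r"<(?:strong|b)>(.*?)</(?:strong|b)>", re.S), r"**\1**"),
    (re.compile(r'<a href="(.*?)">(.*?)</a>', re.S), r"[\2](\1)"),
    (re.compile(r"<p>(.*?)</p>", re.S), "\\1\n"),
]

def html_to_markdown(html: str) -> str:
    for pattern, replacement in RULES:
        html = pattern.sub(replacement, html)
    return html.strip()
```

The point isn't that this is good code - it's that it only needs to work once, on one dataset, and then gets deleted.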
In the past, writing that script by hand - with all the edge cases and verification - just wouldn't have been an option. But with an AI agent, I'm now able to do things that I previously couldn't have justified spending the time on.
Custom Skills and Commands
Most AI coding agents support some form of custom skills or commands - reusable prompts that you can trigger with a shortcut. In Claude Code, for example, you can create skills and invoke them with a slash command.
This has been a game-changer for me. Here are a few I use regularly:
- Podcast show notes - I've got a skill that creates new show notes for my podcast, following my exact format and structure.
- Newsletter editions - A skill that creates a new newsletter edition and researches news items for me. (note that I still manually cherry pick what news items I want, and more often than not, hand-write the descriptions - my newsletter is certainly not AI slop!)
- Git commits - When I run /commit, it asks me if I want to include "Closes #123" for a GitHub issue, and whether I want to push afterwards. It's a small thing, but it's exactly how I want my workflow to work.
- Jira tickets - I've got a skill that creates Jira tickets and fills out all the custom fields that my client requires, asking me questions along the way. No more fighting with Jira's UI, or remembering the various custom fields I need to set for that client.
- Plus many others.
Creating these skills is surprisingly easy - it's just a directory with a markdown file called SKILL.md, plus any additional files you want to include (images, PDFs, etc). Claude Code even has a skill creator skill that helps you build, improve, and verify them. And if you don't use Claude Code, you can still use that skill - skills are just portable markdown, following a standard format.
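To give you an idea, a minimal SKILL.md might look something like the sketch below. The skill name, instructions, and the referenced template.md are made up for illustration - the general shape is a short frontmatter block followed by plain-markdown instructions:

```markdown
---
name: podcast-show-notes
description: Creates show notes for my podcast in my standard format
---

When creating show notes:

1. Ask me for the episode number and the guest's name.
2. Follow the structure in template.md (included alongside this file).
3. Keep the summary under 200 words.
```

That's the whole thing - no SDK, no config system, just a markdown file the agent reads when the skill is invoked.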
Tickets, PRs, and Collaboration Tools
Thanks to tools like the GitHub CLI (gh), the Azure DevOps MCP server, and similar - your agent can read and write across all the collaboration tools you use day-to-day. Tickets (JIRA, Azure DevOps, GitHub Issues, etc), PR comments, the lot.
I often get the agent to query a ticket to pull out the description and acceptance criteria before starting work. It's an amazing context primer, and gives the agent a good checklist to work against. As mentioned above, I also use a skill that writes JIRA tickets for me.
On the PR side, the agent can read comments that reviewers have left and either fix them directly, or suggest reasons to push back. Technically, you can also get it to reply in the comments for you - but I probably wouldn't recommend that if the reviewer is expecting a human response. Unless of course it's an accepted practice within the team.
As an aside, GitHub does have native cloud-based AI code reviews too - but those are out of scope for this post.
Standard disclaimer!
Before you let your AI agent loose on all your production environments - obviously ensure you take the necessary precautions. If you're giving it access to a prod Kubernetes cluster, for example, make sure its credentials carry the necessary authorisation restrictions - e.g. read-only.
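On Kubernetes, for instance, one option is to give the agent a ServiceAccount bound to a read-only ClusterRole. A sketch - the names and resource list are placeholders to adapt to your own cluster:

```yaml
# Sketch: a read-only ClusterRole for an AI agent's credentials.
# Names and the resource list are placeholders - tighten as needed.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ai-agent-readonly
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "deployments", "services", "events"]
    verbs: ["get", "list", "watch"]
```

With only get/list/watch verbs, the agent can explore and diagnose to its heart's content, but it can't delete or modify anything.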
It's easy to anthropomorphise these AI agents - and yet, people are happy to only anthropomorphise certain traits, and still expect AI to not make mistakes. AI does make mistakes, just like humans do. The problem is that if AI makes mistakes - it makes them MUCH FASTER. So do be careful.
Personal projects
On a personal note, I'm currently using Claude Code to build lots of stuff that I wouldn't have previously been able to justify...
- An entire fitness product that I'm planning to productise. Before AI, this project would have needed a team of developers doing years of work. Now I'm able to build it in my spare time without having to quit my job, or it impacting my family-time or fitness training. That's how much of a difference this makes.
- Plus so many small utility apps - from a desktop pomodoro timer, to a time blocking app. Things I never would have been able to justify the time creating previously.
Also, lots of non-code stuff - a few examples...
- I coach kids' athletics and have recently been managing a series of competitions that our club was competing in. This is a real pain to do, as there are various activities with limited slots, and I get various requests from parents. There's also a bunch of forms to fill in, etc. Claude Code has made this WAY easier - especially working out the allocations, which can get complicated.
- I wanted a personal-finance long-term forecast solution - which takes into account income, pensions (including future draw-down over time), inflation, future big spends, etc, etc. I just threw all my info into Claude Code, and it generated a system for me, which shows charts and projections, doing all the tax calculations, etc. It's amazing!
Plus many more things that I won't bore you with - but the point is, the use-cases for AI coding agents go way beyond just writing code.
Wrapping Up
I've covered a lot of use cases here, but honestly, this barely scratches the surface. Every week I seem to discover a new use-case that I can use an AI agent for. The combination of being able to write code, run any CLI tool on your machine (or use MCP Servers), generate throwaway scripts, and iterate on their own work through the agentic loop - it all adds up to something that totally changes the way we work.
I should probably briefly touch on cost. Personally, I pay for the Claude Max 5x plan. It's not cheap if you compare it to a subscription like Netflix or Spotify. But if you compare it to having a team of developers at your disposal, it's an absolute bargain. The amount of time it saves me, and the amount of stuff it enables me to build - both personally and professionally - is worth way more than the cost.
If you're using AI coding tools but only for writing code, I'd really encourage you to explore what else they can do. I'd love to hear about the different use-cases you're using AI for! Please let me and the other readers know in the comments below! 👇
