In which Claude and [A]I play at being webdevs.
I used to speak at a lot of conferences and meetups, and published my talks on a site called noti.st.
It’s free to use, but you could pay for bells and whistles including a custom domain, which I duly did: talks.rmoff.net.
My background is databases and SQL; I can spell HTML (see, I just did) and am aware of CSS and can fsck about in the Chrome devtools to fiddle with the odd detail…but basically frontend webdev is completely beyond me.
That meant I was more than happy to pay someone else to host my talks for me on an excellent platform.
This was a few years ago, and the annual renewal of the plan was starting to bite—over £100 for what was basically static content that I barely ever changed (I’ve only done three talks since 2021). So I decided to see what Claude Code/Opus 4.5 could do, and signed up for the £18/month "Pro" plan.
The way Claude Code works is nothing short of amazing.
You use natural language to tell it what to do…and it does it.
I started off by saying (prompting) something like this:
I would like to migrate my noti.st-based site at https://noti.st/rmoff/ to a static site like my blog at rmoff.net which is built on hugo.
What I actually said is kinda irrelevant, because it doesn’t need to be precise. Claude doesn’t care about typos; it captures the intent.
Claude Code then poked around the two sites and probably asked me some questions (did I want to import all content, what kind of style, etc), and then spat out a Python script to do a one-time ingest of all the content from noti.st.
After seeking permission it then ran the Python script, debugging the errors that were thrown, until it was happy it had a verbatim copy of the data.
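For a flavour of the shape of the thing, here’s an illustrative sketch (not Claude’s actual script; the page structure, selectors, and output layout are all guesses):

```python
#!/usr/bin/env python3
"""One-time ingest of a noti.st profile into Hugo content files.

Illustrative sketch only: the HTML structure of noti.st and the
Hugo front-matter fields below are assumptions, not the real script.
"""
import pathlib

import requests
from bs4 import BeautifulSoup

PROFILE_URL = "https://noti.st/rmoff/"   # public profile page
OUT_DIR = pathlib.Path("content/talks")  # Hugo content directory


def main() -> None:
    OUT_DIR.mkdir(parents=True, exist_ok=True)
    html = requests.get(PROFILE_URL, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    # Assume each talk is an <article> containing a link to its own page.
    for article in soup.find_all("article"):
        link = article.find("a")
        if link is None or not link.get("href"):
            continue
        title = link.get_text(strip=True)
        slug = link["href"].rstrip("/").rsplit("/", 1)[-1]
        # Write a minimal Hugo page per talk; the front matter is illustrative.
        (OUT_DIR / f"{slug}.md").write_text(
            f'---\ntitle: "{title}"\nsource: "{link["href"]}"\n---\n',
            encoding="utf-8",
        )


if __name__ == "__main__":
    main()
```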
Along the way it’d report in on what it was doing and I could steer it—much the same way you would a junior developer.
For example, on noti.st a slide deck’s PDF is exploded out into individual images so that a user can browse it online.
This meant a crap-ton of images which I didn’t care about, but Claude Code assumed I would, so it started grabbing them.
Claude then proceeded to build and populate a site to run locally.
There were plenty of mistakes, as well as plenty of yak-shaving ("hmm can you move this bit to there, and change the shade of that link there").
This can be part of the danger with Claude.
It will never roll its eyes and sigh at you when you ask for the hundredth amendment to your original spec, so it’s easy to get sucked into endless fiddling and tweaking.
I found I quickly burnt through my Pro token allowance, which actually served well as a gatekeeper on my time, forcing me to step back until the tokens were refreshed.
After four early morning/late nights around my regular work, I cut over my DNS and you can see the results at https://talks.rmoff.net/.
The key things that Claude Code did that I’d not been able to get ad hoc chat sessions (or even Cursor) to do last year include:
Planning out a full project like this one, from the overview down to every detail
Talking the talk (writing the code) and walking the walk (independently running the code, fixing errors, evaluating logic problems, etc)
Rapidly iterating over design ideas, including discussing them and not just responding one-way to instructions
Discussing deployment options, including working through challenges given the cumulative size of the PDFs
Explaining and building and executing and testing the deployment framework
Before the sceptics jump in with their well, ackchuallyyy, my point is not that I couldn’t theoretically have done this without Claude.
It’s that it took, cumulatively, perhaps eight hours—and half of that will have been learning how to effectively interact with Claude.
It’s that it’s a single terminal into which one types, that’s it.
No explosion of tabs.
No rabbit-holes of threads trying to figure this stuff out.
One place.
That fixes its own errors.
That writes code that you could never have done without a serious investment of time.
Would I apply for a frontend engineering job? Heck no!
Does my new site stand up to scrutiny? Probably not.
Will real frontend devs look at the code and be slightly sick in their mouths? Perhaps.
Does this weaken my point?
Not in the slightest!
£18-worth of Claude Code (less, if you pro-rata it over the month) and I’ve saved myself an ongoing annual bill of £100, built a custom website that looks exactly as I want it, has exactly the functionality that I want—oh, and was a fuck-ton of fun to build too :)
Does it matter that I didn’t write the code and don’t understand it?
Not whilst I have access to Claude ;)
I realise that in reading this the choler will be rising in some seasoned software engineers.
After all, who is this data engineer poncing around pretending to build websites?
And that’s perhaps the crux of it: I’m a data engineer, branching out into something I couldn’t do before, courtesy of Claude.
I would definitely use Claude to help me write SQL queries and generate DDL, but I’d be damned if I’d put my name to a pull request with a single byte of code that I couldn’t explain—because that’s my job.
As the folks at Oxide put it: "However powerful they may be, LLMs are but a tool, ultimately acting at the behest of a human. Oxide employees bear responsibility for the artifacts we create, whatever automation we might employ to create them."
So I can have fun building a website that’s just my personal site, where it’s only on me if it fails.
But if I’m writing code as a professional for my job, it’s on me to make sure that it’s code I can put my name to.
Claude tips
There is a lot written about Claude Code.
Some of it cargo-culting, some of it frothy-hype nonsense.
Below I’ve listed out some of my 'top tips' that I’d be passing on to a colleague getting into Claude Code from scratch tomorrow.
Playwright
If you’re doing any kind of webdev work, follow Kris Jenkins' tip and use Playwright so that Claude can "see" as it develops.
You can manually take screenshots and paste those into Claude too if you want (including ones you’ve annotated with observations and instructions) but in general and particularly for regression testing, Playwright is an excellent addition.
Because this is Claude, you don’t need to actually know how to configure Playwright or run its tests, or anything like that.
You just tell Claude: "Use Playwright to test the changes".
And it does.
Oh, and it’ll install it for you if you don’t have it already.
🛎️ Ding Dong 🔔
Claude will sometimes ask for permission to do something, or tell you it’s finished its current task.
If you’ve got it sat in a terminal window behind your other work you may not realise this, so adding a sound prompt can be useful.
In your ~/.claude/settings.json include:
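Something along these lines does the trick (a minimal sketch: the Notification event fires when Claude wants your attention, and Stop when it finishes responding; afplay and the sound file assume macOS, so substitute your own player elsewhere):

```json
{
  "hooks": {
    "Notification": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "afplay /System/Library/Sounds/Glass.aiff"
          }
        ]
      }
    ],
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "afplay /System/Library/Sounds/Glass.aiff"
          }
        ]
      }
    ]
  }
}
```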
Obviously, you can waste a lot of time customising it to use just the right sound effect from your favourite 1980s arcade game.
You might not want to always do this; see my observation above about context switching and continuous interruptions.
Keep an eye on cost
Depending on how you pay for Claude (fixed plans, or per API calls) you’ll discover sooner or later that it can be quite expensive.
You can include the cost of the current session in the status line by adding this to the same config file as above, ~/.claude/settings.json:
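For example (again a sketch, and assuming you have jq installed): Claude Code pipes the session details to the status line command as JSON on stdin, including a running cost total, which you can pluck out like this:

```json
{
  "statusLine": {
    "type": "command",
    "command": "jq -r '\"Session cost: $\" + (.cost.total_cost_usd | tostring)'"
  }
}
```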
Also check out ccusage, which uses the Claude log data to calculate your usage and break it down in different ways, which can help you optimise your use of Claude.
Different Claude models (Opus, Sonnet, Haiku) cost different amounts, and you can optimise your spend by learning a bit about their relative strengths.
I found that asking Claude itself was useful; using Opus (the most capable model) you can describe what you’re going to want it to do and ask which model it would recommend.
Like all of this LLM malarkey, none of it is absolute, but I found its recommendations useful (i.e. the models it recommended were cheaper and did achieve what I needed them to).
Think of it as having different pairs of running shoes in your closet—different ones are going to be suited to different tasks.
You’re not going to wear your $200 carbon-plate running shoes to kick the ball around the park, are you?
Master the tooling
Go read up on things like:
Context windows—what the LLM knows about what you’re doing
Context rot—the more that’s in the LLM’s context window, the less effective it can sometimes become
CLAUDE.md—where Claude makes a note of what it is you’re building and core principles, toolsets, etc
You can get a lot of value by spending some time on this so that you can restart your session when you need to (e.g. to clear the context window) without having Claude 'forget' too much of the basics of what you’ve told it
Work with Claude on this file—literally say, "look at your CLAUDE.md, I have to keep telling you to do x, how can you remember it better?". If you give it permission, it’ll then go and update its own file (there’s an illustrative sketch of such a file after this list)
Use plan mode and accept-change (shift-tab) judiciously.
If you just YOLO it and accept changes without seeing the plan you’ll often end up with a very busy fool going in the wrong direction.
Claude is your servant (for now) and it’s up to you to boss it around firmly as needs be.
Watch out for Claude spinning its wheels—if you see it trying to repeatedly fix something and getting stuck you might be burning a ton of tokens on something that it’s misunderstood or doesn’t actually matter
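To make the CLAUDE.md point concrete, here’s the kind of thing that might end up in such a file for a project like this talks site (these entries are made up for illustration, not lifted from my actual file):

```markdown
# CLAUDE.md

## Project
Static talks site built with Hugo; content lives under content/talks/.

## Principles
- The one-time ingest from noti.st is done; never re-fetch from there.
- Preview with `hugo server -D`; use Playwright to verify visual changes.
- Slide PDFs are large: keep them out of git and deploy them separately.
```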
Claude Code is not just about churning out code
I’ve been experimenting with a few non-coding examples, pairing Claude with both basic-memory and an Obsidian vault.
Proofreading my blog (here’s the prompt, if you’re interested; PRs welcome 😉).
I also have a Raycast AI Preset to do this, but am finding myself more and more reaching for Claude’s terminal window.
It works well because I write my blog posts in Asciidoc, which Claude can read and edit directly (if I ask it to).
Claude can also help you write the prompt/skill—I gave it verbatim some feedback I got from a real human being on the initial version of this post, and it distilled that into what it ought to check for next time and updated its skill.
Neat.
Planning a holiday.
Iteratively building up with Claude a spec that captures the requirements of the holiday, which it can then use to help with itineraries and checklists, discuss areas, etc etc.
As with the coding project above, it being one window with which to interact is really powerful.
Acting as a running coach.
Plugging in Garmin and Strava data via MCP, I can capture all of my running and health info and discuss planned workouts with Claude, even weaving in notes from past physio appointments.
Obviously I am not following it blindly but as an exercise (geddit?!) in integration and LLMs, it’s pretty fun.
But…but…AI…
This post may well have you spitting coffee into your cornflakes, I realise that.
For some reflections on the bigger picture of AI as a productivity tool for developers, have a look at the companion post to this one.