Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Octopus Easy Mode - Manual Interventions


In the previous post, you added environment-scoped variables to customize the message displayed in each environment. In this post, you’ll add an environment-scoped step to prompt for manual approval before deploying to production.

Prerequisites

  • An Octopus Cloud account. If you don’t have one, you can sign up for a free trial.
  • The Octopus AI Assistant Chrome extension. You can install it from the Chrome Web Store.

The Octopus AI Assistant also works with an on-premises Octopus instance, but that requires more configuration. The cloud-hosted version needs none, which makes it the easiest way to get started.

Creating the project

Paste the following prompt into the Octopus AI Assistant and run it:

Create a Script project called "03. Script App with Manual Intervention", and then:
* Add a manual intervention step as the first step in the deployment process, scoped to the Production environment only, with the instruction "Please approve deployment to Production".

You can now create a release and deploy it to the first environment. Progress the deployment through to the Production environment to see the Manual Intervention step in action.

Before the deployment to Production can proceed, you will need to approve the manual intervention step. This step is often used to enforce a business approval process before deploying to production.
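If you're curious what the assistant creates behind the scenes, a project using Octopus's Config as Code feature would store the step in the deployment process OCL file. The sketch below is illustrative only — the step name and property keys are assumptions, not an actual export:

```
step "manual-intervention" {
    name = "Please approve deployment to Production"

    action {
        action_type = "Octopus.Manual"
        environments = ["production"]
        properties = {
            Octopus.Action.Manual.Instructions = "Please approve deployment to Production"
        }
    }
}
```

The key detail is the `environments` scoping: because the step is scoped to Production, it simply doesn't run when you deploy to earlier environments.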

What just happened?

You created a sample project with a manual intervention step as the first step in its deployment process, scoped to the Production environment only. Deployments to earlier environments run straight through, while deployments to Production pause until someone approves the intervention.

What’s next?

The next step is to add retry logic to the steps.

Read the whole story
alvinashcraft
just a second ago
reply
Pennsylvania, USA
Share this story
Delete

The DOWNTIME Strategy—Eliminating Waste Before Adding Process | Felipe Engineer-Manriquez


Agile in Construction: The DOWNTIME Strategy—Eliminating Waste Before Adding Process With Felipe Engineer-Manriquez

Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes.

 

"My first rule is that I will do no harm. And if something goes wrong, I will take full responsibility with leadership. My neck is literally on the line." - Felipe Engineer-Manriquez

 

Felipe shares his change strategy for introducing Lean and Agile into construction projects, and it starts with an unexpected principle borrowed from Hippocrates: do no harm. He explicitly tells teams this promise, putting his neck on the line to build trust. But the real magic happens in what comes next: instead of adding new processes, Felipe first helps teams stop doing things. Using the DOWNTIME acronym (Defects, Overproduction, Waiting, Transportation, Inventory, Motion, Excess processing), he identifies wasteful activities that don't add value.

In construction, 60-80% of every dollar doesn't add value from the customer's perspective—compared to manufacturing (above 50% value) or agriculture (90% value). Felipe's approach: eliminate waste first to create excess capacity, then introduce new processes.

On a project that was 2 years behind schedule with lawyers already engaged, he spent just 5 minutes with the team defining a visible milestone goal on a whiteboard. Two weeks later, they met their schedule and improved by 4 days—the first time ever. The superintendent said, "Never in the entire time I've worked here have we ever met a schedule commitment." The secret? Free up capacity before adding anything new.

 

In this episode, we refer to the 8 wastes video by Orbus and WIP limits.

 

Self-reflection Question: Before introducing your next process improvement, what wasteful activity could you help your team stop doing to free up the capacity they need to embrace change?

 

[The Scrum Master Toolbox Podcast Recommends]

🔥In the ruthless world of fintech, success isn't just about innovation—it's about coaching!🔥

Angela thought she was just there to coach a team. But now, she's caught in the middle of a corporate espionage drama that could make or break the future of digital banking. Can she help the team regain their mojo and outwit their rivals, or will the competition crush their ambitions? As alliances shift and the pressure builds, one thing becomes clear: this isn't just about the product—it's about the people.

 

🚨 Will Angela's coaching be enough? Find out in Shift: From Product to People—the gripping story of high-stakes innovation and corporate intrigue.

 

Buy Now on Amazon

 

[The Scrum Master Toolbox Podcast Recommends]

 

About Felipe Engineer-Manriquez

 

Felipe Engineer-Manriquez is a best-selling author, international speaker, and host of The EBFC Show. A force in Lean and Agile, he helps teams build faster with less effort. Felipe trains and coaches changemakers worldwide—and wrote Construction Scrum to make work easier, better, and faster for everyone.

 

You can link with Felipe Engineer-Manriquez on LinkedIn.

 

You can also find Felipe at thefelipe.bio.link, check out The EBFC Show podcast, and join the EBFC Scrum Community of Practice.





Download audio: https://traffic.libsyn.com/secure/scrummastertoolbox/20260128_Felipe_Engineer_W.mp3?dest-id=246429

Career Growth Accelerator - Promotion Roadblocks and Knocking it Out of the Park During Performance Review Season


It is review season, and you might be finding yourself confused: you received high ratings and "exceeded expectations," yet the promotion you expected didn't happen. In this episode of the Career Growth Accelerator, I break down exactly why high performance doesn't always lead to promotion, helping you identify the structural roadblocks and strategic shifts necessary to move from senior individual contributor to staff, principal, or leadership roles.

• Understand why your performance review is never conducted in a vacuum and why your manager’s peers—not just your manager—are the "voters" you need to convince with clear evidence.

• Learn why high ratings often fail to translate into a promotion if you haven't demonstrated specific impact on the company's strategic goals rather than just your own deliverables.

• Discover the first major roadblock: Structural limitations where the role you want simply doesn't exist because the business context or organizational pyramid doesn't currently support it.

• Explore the concept of "Outer Layers" of scope—moving from self-focus to team-focus, and finally to business-strategy focus—to unlock the next stage of your career.

• Identify the "indispensable trap" where performing too well at your current inner-layer responsibilities makes you terminal in your role rather than promotable.

🙏 Today's Episode is Brought To you by: Unblocked

There’s a good chance you’ve already tried a few AI code review tools — and you’re probably ignoring most of their comments.

Not because AI can’t review code, but because it’s missing context. Most AI reviewers focus on surface-level issues: style nits, obvious refactors, or restating what’s already clear from the diff. Meanwhile, the things you actually care about, like whether a change violates an earlier architectural decision or quietly duplicates existing logic, go unnoticed.

That’s the problem Unblocked is built to solve.

Unblocked’s AI code review is grounded in decision-grade context: prior PRs, design discussions, documentation, and system-level constraints, which is the same context senior engineers rely on when reviewing code.

Teams using Unblocked report fewer comments, higher signal, and automated reviews they actually trust — enough that many have turned off other AI review tools entirely.

Even if you’ve already written off AI code review, Unblocked is worth a look.

Get a free three-week trial at getunblocked.com/developertea.

🎥 Subscribe to our YouTube Channel here! https://www.youtube.com/@developertea

📮 Ask a Question

If you enjoyed this episode and would like me to discuss a question that you have on the show, drop it over at: developertea.com.

📮 Join the Discord

If you want to be a part of a supportive community of engineers (non-engineers welcome!) working to improve their lives and careers, join us on the Developer Tea Discord community by visiting https://developertea.com/discord today!

🧡 Leave a Review

If you're enjoying the show and want to support the content, head over to iTunes and leave a review! It helps other developers discover the show and keeps us focused on what matters to you.





Download audio: https://dts.podtrac.com/redirect.mp3/cdn.simplecast.com/audio/c44db111-b60d-436e-ab63-38c7c3402406/episodes/ac163d67-e980-4bf8-a7a0-8c240b61f66c/audio/f0f4416a-0c30-4bb7-95fe-37a4f5fe3a0f/default_tc.mp3?aid=rss_feed&feed=dLRotFGk

Cosplaying as a webdev with Claude Code in January 2026


In which Claude and [A]I play at being webdevs. For some reflections on the bigger picture of AI as a productivity tool for developers, have a look at the companion post to this one.

I used to speak at a lot of conferences and meetups, and published my talks on a site called noti.st. It’s free to use, but you could pay for bells and whistles including a custom domain, which I duly did: talks.rmoff.net.

My background is databases and SQL; I can spell HTML (see, I just did) and am aware of CSS and can fsck about in the Chrome devtools to fiddle with the odd detail…but basically frontend webdev is completely beyond me. That meant I was more than happy to pay someone else to host my talks for me on an excellent platform.

This was a few years ago, and the annual renewal of the plan was starting to bite—over £100 for what was basically static content that I barely ever changed (I’ve only done three talks since 2021). So I decided to see what Claude Code/Opus 4.5 could do, and signed up for the £18/month "Pro" plan.

The way Claude Code works is nothing short of amazing. You use natural language to tell it what to do…and it does it.

I started off by prompting it with something like this:

I would like to migrate my noti.st-based site at https://noti.st/rmoff/ to a static site like my blog at rmoff.net which is built on hugo.

What I actually said is kinda irrelevant, because it’s not precise. It doesn’t care about typos; it captures the intent.

Claude Code then poked around the two sites and probably asked me some questions (did I want to import all content, what kind of style, etc), and then spat out a Python script to do a one-time ingest of all the content from noti.st. After seeking permission it then ran the Python script, debugged the errors that were thrown, until it was happy it had a verbatim copy of the data.
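A one-time ingest script of this shape typically ends by rendering each talk into a Hugo content file. As a minimal, purely hypothetical sketch of that step (the field names and layout are assumptions, not taken from the actual generated script):

```python
from pathlib import Path


def build_talk_page(title: str, date: str, pdf_url: str, abstract: str) -> str:
    """Render one talk as a Hugo content file: YAML front matter plus a body."""
    # Quote string values so the YAML stays valid if a title contains a colon.
    front_matter = "\n".join([
        "---",
        f'title: "{title}"',
        f"date: {date}",
        f'pdf: "{pdf_url}"',
        "---",
    ])
    return front_matter + "\n\n" + abstract + "\n"


def write_talk(content_dir: Path, slug: str, page: str) -> Path:
    """Write the rendered page into Hugo's content directory."""
    path = content_dir / f"{slug}.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(page, encoding="utf-8")
    return path
```

The ingest half (fetching and parsing each noti.st page) would feed this, and Hugo takes care of everything from the content directory onwards.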

Along the way it’d report in on what it was doing and I could steer it—much the same way you would a junior developer. For example, on noti.st a slide deck’s PDF is exploded out into individual images so that a user can browse it online. This meant a crap-ton of images which I didn’t care about, but Claude Code assumed I would so started grabbing them.

Claude then proceeded to build and populate a site to run locally. There were plenty of mistakes, as well as plenty of yak-shaving ("hmm can you move this bit to there, and change the shade of that link there"). This can be part of the danger with Claude. It will never roll its eyes and sigh at you when you ask for the hundredth amendment to your original spec, so it’s easy to get sucked into endless fiddling and tweaking.

I found I quickly burnt through my Pro token allowance, which actually served well as a gatekeeper on my time, forcing me to step back until the tokens were refreshed. After four early morning/late nights around my regular work, I cut over my DNS and you can see the results at https://talks.rmoff.net/.

[Screenshot: the migrated site live at talks.rmoff.net]

The key things that Claude Code did that I’d not been able to get ad hoc chat sessions (or even Cursor) to do last year include:

  • Planning out a full project like this one, from the overview down to every detail

  • Talking the talk (writing the code) and walking the walk (independently running the code, fixing errors, evaluating logic problems, etc)

  • Rapidly iterating over design ideas, including discussing them and not just responding one-way to instructions

  • Discussing deployment options, including working through challenges given the cumulative size of the PDFs

  • Explaining and building and executing and testing the deployment framework

Before the sceptics jump in with their well, ackchuallyyy, my point is not that I couldn’t theoretically have done this without Claude. It’s that it took, cumulatively, perhaps eight hours—and half of that will have been learning how to effectively interact with Claude. It’s that it’s a single terminal into which one types, that’s it. No explosion of tabs. No rabbit-holes of threads trying to figure this stuff out. One place. That fixes its own errors. That writes code that you could never have done without a serious investment of time.

Would I apply for a frontend engineering job? Heck no!
Does my new site stand up to scrutiny? Probably not.
Will real frontend devs look at the code and be slightly sick in their mouths? Perhaps.

Does this weaken my point? Not in the slightest!

£18-worth of Claude Code (less, if you pro-rata it over the month) and I’ve saved myself an ongoing annual bill of £100, built a custom website that looks exactly as I want it, has exactly the functionality that I want—oh, and was a fuck-ton of fun to build too :)

Does it matter that I didn’t write the code and don’t understand it?

Not whilst I have access to Claude ;)

I realise that in reading this the choler will be rising in some seasoned software engineers. After all, who is this data engineer poncing around pretending to build websites?

And that’s perhaps the crux of it: I’m a data engineer, branching out into something I couldn’t do before, courtesy of Claude.

I would definitely use Claude to help me write SQL queries and generate DDL, but I’d be damned if I’d put my name to a pull request with a single byte of code that I couldn’t explain—because that’s my job.

I like Oxide’s words here:

However powerful they may be, LLMs are but a tool, ultimately acting at the behest of a human. Oxide employees bear responsibility for the artifacts we create, whatever automation we might employ to create them.

So I can have fun building a website that’s just my personal site and only on me if it fails. But if I’m writing code as a professional for my job, it’s on me to make sure that it’s code I can put my name to.

Claude tips

There is a lot written about Claude Code. Some of it cargo-culting, some of it frothy-hype nonsense. Below I’ve listed out some of my 'top tips' that I’d be passing onto a colleague getting into Claude Code from scratch tomorrow.

Playwright

If you’re doing any kind of webdev work, follow Kris Jenkins' tip and use Playwright so that Claude can "see" as it develops. You can manually take screenshots and paste those into Claude too if you want (including ones you’ve annotated with observations and instructions) but in general and particularly for regression testing, Playwright is an excellent addition.

Because this is Claude, you don’t need to actually know how to configure Playwright or run its tests, or anything like that. You just tell Claude: "Use Playwright to test the changes". And it does. Oh, and it’ll install it for you if you don’t have it already.

🛎️ Ding Dong 🔔

Claude will sometimes ask for permission to do something, or tell you it’s finished its current task. If you’ve got it sat in a terminal window behind your other work you may not realise this, so adding a sound prompt can be useful. In your ~/.claude/settings.json include:

  "hooks": {
    "Notification": [
      { "hooks": [ { "type": "command", "command": "afplay /System/Library/Sounds/Funk.aiff" } ] }
    ],
    "Stop": [
      { "hooks": [ { "type": "command", "command": "afplay /System/Library/Sounds/Ping.aiff" } ] }
    ]
  },

Obviously, you can waste a lot of time customising it to use just the right sound effect from your favourite 1980s arcade game.

You might not want to always do this; see my observation above about context switching and continuous interruptions.

Keep an eye on cost

Depending on how you pay for Claude (fixed plans, or per API calls) you’ll discover sooner or later that it can be quite expensive. You can include the cost of the current session in the status line by adding this to the same config file as above, ~/.claude/settings.json:

  "statusLine": {
    "type": "command",
    "command": "input=$(cat); cwd=$(echo \"$input\" | jq -r '.workspace.current_dir'); tin=$(echo \"$input\" | jq -r '.context_window.total_input_tokens'); tout=$(echo \"$input\" | jq -r '.context_window.total_output_tokens'); mid=$(echo \"$input\" | jq -r '.model.id'); mname=$(echo \"$input\" | jq -r '.model.display_name'); used=$(echo \"$input\" | jq -r '.context_window.used_percentage // \"--\"'); if [[ \"$mid\" == *\"opus\"* ]]; then cost=$(echo \"scale=4; ($tin * 15 + $tout * 75) / 1000000\" | bc); elif [[ \"$mid\" == *\"haiku\"* ]]; then cost=$(echo \"scale=4; ($tin * 0.80 + $tout * 4) / 1000000\" | bc); else cost=$(echo \"scale=4; ($tin * 3 + $tout * 15) / 1000000\" | bc); fi; printf \"\\e[36m◆\\e[0m \\e[1m\\e[96m%s\\e[0m \\e[36m◆\\e[0m \\e[35m%s\\e[0m \\e[36m▸\\e[0m \\e[33mTokens:\\e[0m \\e[32m%'d\\e[0m↓ \\e[34m%'d\\e[0m↑ \\e[36m●\\e[0m \\e[93mCtx Used:\\e[0m \\e[92m%s%%\\e[0m \\e[36m●\\e[0m \\e[1m\\e[31mCost: \\$%s\\e[0m\" \"$mname\" \"$cwd\" \"$tin\" \"$tout\" \"$used\" \"$cost\""
  },

It’ll look something like this:

[Screenshot: the Claude Code status line showing model, working directory, token counts, context usage, and session cost]

Also check out ccusage which uses the Claude log data to calculate usage and break it down in different ways which can help you optimise your use of it.

 ╭───────────────────────────────────────────╮
 │                                           │
 │  Claude Code Token Usage Report - Weekly  │
 │                                           │
 ╰───────────────────────────────────────────╯

┌───────────┬────────────────────┬──────────┬──────────┬───────────┬────────────┬─────────────┬───────────┐
│ Week      │ Models             │    Input │   Output │     Cache │ Cache Read │       Total │      Cost │
│           │                    │          │          │    Create │            │      Tokens │     (USD) │
├───────────┼────────────────────┼──────────┼──────────┼───────────┼────────────┼─────────────┼───────────┤
│ 2026      │ - claude-3-5-haiku │   39,694 │   80,640 │ 4,462,577 │ 28,392,013 │  32,974,924 │    $26.35 │
│ 01-25     │ - sonnet-4-5       │          │          │           │            │             │           │
├───────────┼────────────────────┼──────────┼──────────┼───────────┼────────────┼─────────────┼───────────┤

Learn a bit about the models (ask Claude)

Different Claude models (Opus, Sonnet, Haiku) cost different amounts, and you can optimise your spend by learning a bit about their relative strengths. I found that asking Claude itself was useful: using Opus (the most capable model), you can describe what you're going to want it to do and ask which model it would recommend. Like all of this LLM malarky, none of it is absolute, but I found its recommendations useful (i.e. the models it recommended were cheaper and did achieve what I needed them to).

Think of it as having different pairs of running shoes in your closet—different ones are going to be suited to different tasks. You’re not going to wear your $200 carbon-plate running shoes to kick the ball around the park, are you?

Master the tooling

Go read up on things like:

  • Context windows—what the LLM knows about what you’re doing

  • Context rot—the more that’s in the LLM’s context window, the less effective it can sometimes become

  • CLAUDE.md—where Claude makes a note of what it is you’re building and core principles, toolsets, etc

    • You can get a lot of value by spending some time on this so that you can restart your session when you need to (e.g. to clear the context window) without having Claude 'forget' too much of the basics of what you’ve told it

    • Work with Claude on this file—literally say, look at your CLAUDE.md, I have to keep telling you to do x, how can you remember it better. If you give it permission, it’ll then go and update its own file

  • Use plan mode and accept-change (shift-tab) judiciously. If you just YOLO it and accept changes without seeing the plan you’ll often end up with a very busy fool going in the wrong direction. Claude is your servant (for now) and it’s up to you to boss it around firmly as needs be.

  • Watch out for Claude spinning its wheels—if you see it trying to repeatedly fix something and getting stuck you might be burning a ton of tokens on something that it’s misunderstood or doesn’t actually matter
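To make the CLAUDE.md point concrete, here is a purely illustrative sketch of what one for a project like the talks site might contain (every detail invented for the example):

```markdown
# Talks site

Static Hugo site, migrated from noti.st.

## Ground rules
- The PDFs under static/ are canonical artifacts: never edit or regenerate them.
- Verify every visual change with Playwright before calling it done.
- Keep styling in the single existing stylesheet; no CSS frameworks.
```

The point is less the specific contents than that the file captures the "stuff I keep having to repeat" so a fresh session starts with it already known.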

Claude Code is not just about churning out code

I’ve been experimenting with a few non-coding examples, both pairing Claude with basic-memory and an Obsidian vault.

  • Proofreading my blog (here’s the prompt, if you’re interested; PRs welcome 😉).


    I also have a Raycast AI Preset to do this, but am finding myself more and more reaching for Claude’s terminal window. It works well because I write my blog posts in Asciidoc, which Claude can read and edit directly (if I ask it to). Claude can also help you write the prompt/skill—I gave it verbatim some feedback I got from a real human being on the initial version of this post, and it distilled that into what it ought to check for next time and updated its skill. Neat.

  • Planning a holiday. Iteratively building up with Claude a spec that captures the requirements of the holiday, it can then help with itineraries, checklists, discuss areas, etc etc. As with the coding project above, it being one window with which to interact is really powerful.

  • Acting as a running coach. Plugging in Garmin and Strava data via MCP I can capture all of my running and health info, and discuss with Claude planned workouts, even weaving in notes from past physio appointments. Obviously I am not following it blindly but as an exercise (geddit?!) in integration and LLMs, it’s pretty fun.

But…but…AI

This post may well have you spitting coffee into your cornflakes, I realise that. For some reflections on the bigger picture of AI as a productivity tool for developers, have a look at the companion post to this one.




Vendor Lock-In vs. Strategic Partnership for Your CI/CD


This article was brought to you by Jakkie Koekemoer & Shweta, draft.dev.

Traditional wisdom in the dev world is that once you opt for proprietary APIs or vendor-specific tooling, migrating to an alternative platform later becomes costly or impractical.

That’s why engineering teams have considered vendor lock-in a trap. You end up paying more, limiting technical flexibility, losing bargaining power, and cramming your processes into someone else’s roadmap. And if you still want control and flexibility, you need to be on your toes with proactive planning, regular exports, and negotiating strict contractual exit terms.

But this binary thinking of lock-in vs. freedom is misleading.

Whenever you spend significant time investing in any technology or architecture, you’re locked in. It might not seem like it on paper, but your reliance on that system grows the more you use it. Switching to something else requires a significant amount of work, even if there aren’t any other costs involved.

Instead of trying to escape the unavoidable, consider to what extent any tool or technology meets your needs without enforcing limitations that hold you back.

In the context of CI/CD, that means a tool that accelerates delivery without sacrificing control.

Your dependencies in CI/CD run across integrations, pricing tiers, hosted vs. self-hosted trade-offs, supported SLAs, and ecosystem workflows. A CI/CD vendor that provides open APIs, clear export paths, predictable pricing, and collaborative roadmapping has the potential to become a strategic partner.

This guide highlights the potential technical, financial, operational, and strategic risks CI/CD solutions can pose for your organization. Based on this framework, we’ll walk you through a practical questionnaire for evaluating potential solutions against your organization’s needs.

Understanding your organization’s risk profile

Any CI/CD platform can present you with a list of features, and that’s enough if you only want to choose a product.

However, if you want to make a long-term commitment, you need to stop asking “What features does this solution have?” and instead ask “What risks does this vendor introduce?” By considering risks, you can assess whether a vendor will be a good strategic partner for you in the long term.

Let’s look at the most important risks a CI/CD platform can introduce for an organization.

Technical risks

The foundation of any CI/CD platform is its portability. Consider whether the platform locks you into proprietary languages or relies on custom plugins for critical integrations.

More importantly, can pipelines, configurations, build artifacts, and logs be exported in a reusable format? If not, you run the risk of significant rework whenever there’s a need to reproduce builds in a different environment, be it a new data center, cloud region, or entirely different infrastructure provider.

Reproducing the same build reliability elsewhere requires substantial rewrites or custom tooling, potentially turning what should be a straightforward migration into a months-long engineering effort.

Financial risks

Hidden costs can accumulate quickly if you’re not careful. Consider data-transfer fees that you pay every time your team downloads a large build artifact, storage overages, and unpredictable API pricing when you’re building the financial case for adoption.

Also consider what happens when it’s time to leave. Do contracts lock in long-term minimums? Are there punitive exit clauses? A single large migration with consulting fees, rework, and downtime costs can make switching prohibitively expensive.

Operational risks

Day-to-day dependence on a vendor creates hidden fragility. If SLAs are slow, outages linger, or documentation falls out of sync with the product, you become increasingly reliant on vendor support just to keep critical systems running.

Over time, you may end up locking into a specific UX and workflow, making it painful to change platforms even when better alternatives emerge.

Strategic risks

A platform that serves your current needs may not support your future growth, so consider alignment. Does the vendor’s product vision match your evolving direction? If the vendor deprioritizes features you rely on, undergoes a strategic shift in pricing or focus, or is acquired by a competitor with different priorities, will you be forced to compromise your own product goals?

Without formal channels to influence the vendor’s roadmap, you’re betting that the vendor’s interests will remain aligned with yours, which is a risky assumption over a multiyear relationship.

Evaluating CI/CD platforms

The ideal CI/CD solution reduces risk, accelerates delivery, and empowers you to evolve your pipelines and processes. Which vendor or solution fits best will depend on your organization’s needs and risk factors.

The following questionnaire can help you with your evaluation.

Use the questions and suggested importance (Must, Should, or Nice) as a starting point, but feel free to adapt the questions and importance based on your organization’s needs.

Complete the questionnaire for each platform or vendor you’re assessing, and rate questions from 0–2:

  • 0 = not available/unacceptable gap (indicates a real blocker)
  • 1 = partial/mitigatable (the capability exists but has limits or caveats)
  • 2 = fully available/meets requirements (the vendor delivers the capability as needed)
| Question | Red flag it mitigates | Type | Vendor A (0–2) | Vendor B (0–2) |
| --- | --- | --- | --- | --- |
| Can you export all configs, artifacts, and logs in usable, open formats? | Proprietary formats hold your operational data hostage. | Must | | |
| Are core APIs open, versioned, and comprehensively documented? | Private APIs prevent you from building essential custom automations. | Must | | |
| Is core functionality built-in or reliant on a third-party plugin ecosystem? | A plugin-first model creates architectural fragility and a massive, unmanaged security risk. | Must | | |
| Does the vendor offer a self-hosted or portable runner option? | A single hosting model can lock you into an infrastructure that doesn’t meet your security or cost needs. | Must | | |
| Is pricing published, transparent, and predictable as you scale? | Opaque, usage-based pricing makes budgeting impossible and exposes you to surprise costs. | Must | | |
| Does the contract include a clear support SLA and fair termination terms? | Without a guaranteed SLA, you have no support in a crisis. Punitive exit clauses are a major lock-in tactic. | Must | | |
| Does the platform provide an intuitive UI that empowers the entire team, not just experts? | A clunky, expert-only UI creates a “guru” bottleneck and dramatically increases onboarding costs. | Should | | |
| Can you easily extract the raw data needed to track performance metrics (like DORA) in your own tools? | A “data black box” platform prevents you from measuring what matters and achieving data-driven improvement. | Should | | |
| Is there an active and transparent customer feedback channel that influences the roadmap? | A closed roadmap signals that the vendor’s priorities, not yours, will drive the product’s future. | Should | | |
| Does the vendor have a clear and credible vision for incorporating AI-driven capabilities? | No AI vision signals strategic stagnation. You’re buying into a platform that is already behind the innovation curve. | Nice | | |
| Does the platform have built-in, first-class features for performance (e.g. test parallelization)? | Lacking built-in performance features means you will spend engineering time building workarounds. | Nice | | |

Score yourself as follows:

  1. Add each vendor’s Must scores (max 12 for the six Must questions). Multiply the sum by 2 (to factor in its criticality).
  2. Add the raw sums for Should (max 6) and Nice (max 4).
  3. Total score = (Must sum × 2) + Should sum + Nice sum.
  4. Max possible = 34. Convert to a percentage if you want.
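The scoring procedure is mechanical enough to sketch in code. Here is a hypothetical helper (the function names are ours, and it derives the maximum from however many questions you actually ask rather than hard-coding it):

```python
def weighted_score(must, should, nice):
    """Combine questionnaire ratings (each 0-2) into a weighted total.

    `must`, `should`, and `nice` are lists of per-question ratings.
    Must answers are doubled to reflect their criticality; the result
    is returned as (raw total, percentage of the maximum possible).
    """
    for rating in must + should + nice:
        if rating not in (0, 1, 2):
            raise ValueError(f"ratings must be 0, 1, or 2, got {rating}")
    total = 2 * sum(must) + sum(should) + sum(nice)
    # Each question scores at most 2; Must questions count double.
    max_total = 2 * 2 * len(must) + 2 * len(should) + 2 * len(nice)
    return total, round(100 * total / max_total)


def verdict(must, should, nice):
    """Apply the interpretation thresholds, treating any Must = 0 as a blocker."""
    if 0 in must:
        return "deal-breaker: a Must question scored 0"
    _, pct = weighted_score(must, should, nice)
    if pct > 60:
        return "strong candidate"
    if pct >= 30:
        return "proceed with caution"
    return "high risk, don't commit"
```

For example, a vendor scoring 2 on every Must, mixed on the rest, comes out as a strong candidate, while a single Must scored 0 short-circuits the evaluation regardless of the percentage.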

Interpret the results as follows:

  • >60 percent: Strong candidate (confirm that there are no instances where Must = 0)
  • 30–60 percent: Proceed with caution
  • <30 percent: High risk, don’t commit

If any Must row is scored 0 for a vendor, it should be treated as a major deal-breaker, and you ought to remove the vendor from the shortlist. If you’re considering any option that scores poorly on export, automation, or billing transparency, running a short portability proof of concept (POC) is recommended.

If you’ve identified any red flags, categorize them to decide on your next steps:

  • Blockers: Are there critical gaps that are deal-breakers (for example, no data export or punitive exit clauses)? This should remove the vendor from consideration.
  • Major risks: Are there issues that could be a deal-breaker (for example, API limitations)? Do a POC before making a decision.
  • Concerns: Are there any concerning issues that aren’t critical (for example, uncertainty about the product roadmap)? Contact the vendor to see if they can shed some light.

Lastly, remember to escalate anything affecting compliance, SLAs, or costs to SRE, procurement, and/or legal teams.

DIY vs. integrated CI/CD platforms: Risks, rewards, and decision criteria

Partnering with a vendor isn’t your only option for a CI/CD solution. You can also build your own.

A do-it-yourself (DIY) CI/CD gives you full control to tailor every component to niche needs and stitch together the best available tools. This can be useful for unique architectures or strict compliance demands.

However, that control comes with costs. You must own maintenance, upgrades, security, and scaling, and the glue code between tools often becomes brittle technical debt.

DIY pros:

  • You have full control of every component; optimize for niche requirements.
  • Choose best-of-breed tools for each stage (VCS, runner, artifact store).
  • Lower vendor dependency; avoids many proprietary lock-in vectors.

DIY cons:

  • You own maintenance, upgrades, and security for the entire stack.
  • Integration and cross-tool orchestration add complexity and brittle glue code.
  • Hidden TCO from ops time, custom plugins, and scaling pain.

Choose DIY if you need extreme customization, have seasoned DevOps capacity, and want full control over cost levers.

However, if you value developer productivity, shorter time to market, and vendor-managed reliability, opt for an integrated solution.

Integrated platform pros:

  • Unified UX, fewer integration points, and faster onboarding for engineers.
  • Vendor accountability for reliability, scaling, and parts of security.
  • Built-in productivity features (e.g. parallelization, flaky test detection, and visual pipelines).

Integrated platform cons:

  • Some feature gaps or vendor-specific behaviors can be costly to reproduce.
  • Pricing models and metering can be hard to forecast as usage grows.
  • Partial lock-in remains unless the vendor supports portability and exports.

Other resources

  • Two-thirds of engineers lose 20%+ of their time to inefficiencies. Check out our ROI of Developer Experience blog post to see how TeamCity’s dev-centric CI/CD platform helps reclaim productivity through intuitive UIs, deep VCS integration, and unified monitoring that eliminates context switching.
  • Thinking about the practical implications of migration? Our Migration Planning Kit for DevOps engineers and engineering managers includes a migration readiness checklist and a phased rollout plan.

Amazon confirms 16,000 more job cuts, bringing total layoffs to 30,000 since October


Amazon is laying off another 16,000 corporate employees globally, the company confirmed Wednesday morning, the second phase in a restructuring that now totals 30,000 positions — marking the largest workforce reduction in the company’s history.

The company is “reducing layers, increasing ownership, and removing bureaucracy,” according to a memo to employees from Beth Galetti, Amazon’s senior vice president of people experience and technology.

“While many teams finalized their organizational changes in October, other teams did not complete that work until now,” Galetti wrote.

The latest job cuts come after Amazon laid off about 14,000 workers in October. The company indicated at the time that more layoffs could occur in 2026 while noting it would continue to hire in key strategic areas.

In the new memo, Galetti sought to reassure employees that the company does not plan to make regular rounds of massive cuts. “Some of you might ask if this is the beginning of a new rhythm — where we announce broad reductions every few months,” she wrote. “That’s not our plan.” 

But she added that teams will continue to evaluate their operations and “make adjustments as appropriate,” saying that’s “never been more important than it is today in a world that’s changing faster than ever.”

Amazon on Tuesday announced that it will close all of its Amazon Go and Amazon Fresh grocery store locations. Last night, the company began informing customers that it’s discontinuing its Amazon One biometric palm recognition service, as well. 

This week’s announcement, combined with the cuts in October, tops the 27,000 positions the company eliminated in 2023 across multiple rounds of layoffs.

The workforce reduction comes amid an efficiency push at the company. Amazon CEO Andy Jassy, who replaced founder Jeff Bezos in 2021, has cited a need to reduce bureaucracy and become more efficient in the new era of artificial intelligence.

On the company’s third quarter earnings call, Jassy framed the layoffs in October as a push to stay nimble, and said Amazon’s rapid growth over the past decade led to extra layers of management that slowed decision-making. He has said he wants Amazon to operate like the “world’s largest startup.” 

Jassy also told employees in June that he expected Amazon’s total corporate workforce to shrink over time due to efficiency gains from AI.

Amazon’s corporate workforce numbered around 350,000 people in early 2023, the last time the company provided a public number. Amazon has an overall workforce of 1.57 million people, which includes workers in its warehouses.

The company employs around 50,000 corporate workers in the Seattle region, its primary headquarters. Last October, 2,303 corporate employees were laid off in Washington state.

Amazon implemented a 5-day return-to-office policy at the beginning of last year for corporate employees, drawing pushback from some employees. The company’s workforce helps generate foot traffic for small businesses near its office buildings.

Jon Scholes, president of the Downtown Seattle Association, said that a “workforce change of this scale has ripple effects on the community.”

“The tech ecosystem has been a key driver to our city’s growth and bolstered the tax coffers, which helped fuel our city’s investments in housing, public safety and economic development the last 20 years or so,” he said in a statement. “As companies grapple with emerging trends, we hope this pain is short-term.”

Layoffs have hit various tech companies across the Seattle region over the past few years. Meta cut 331 positions earlier this month. Microsoft laid off more than 3,200 employees in Washington state last year, part of broader cuts that impacted 15,000 people globally.

Amazon reports its latest quarterly earnings on Feb. 5. The company’s stock underperformed relative to the “Magnificent Seven” tech giants last year. Some analysts predict that Amazon’s cloud unit will help boost the stock as AI demand rises. The company, along with other tech giants, is investing heavily in AI-related infrastructure.

Amazon reported about $1.8 billion in estimated severance costs related to the 14,000 corporate layoffs announced in October.
