
A love letter to Raycast ❤️


"What are the must-have apps to install on my new Mac?"… "Which tool makes you the most productive?"… "Do you still use Alfred?"…

All these questions and more—and the answer to all of them is Raycast!

I was a huge fan of Alfred, going back to 2012. I liked Alfred, I enthused about Alfred; I also ditched Alfred for Raycast in early 2023.

Raycast is the ultimate productivity tool. The kind of thing that leaves you bereft on a new laptop until you’ve installed it. You don’t even realise you’re using it until it’s not there, and then you cry. I happily pay for the Pro features; the AI stuff alone is definitely worth it.

As is the tedious vogue, Raycast also did a "Wrapped", from which I learn that I’ve invoked it 17k times this year, with the top four actions being these:

[Image: That’s a lot of clipboard history use]

Let’s explore these—and a few more—to understand quite how useful Raycast is.

But first, Raycast’s secret sauce: A killer UX

Raycast isn’t just an app. It’s a way of life, it’s a mindset, it’s a state of being.

Well, perhaps not quite that. But it’s really not just an app. It’s a framework with a very, very, very good UX design: it ships with its own extensions and has an open API, meaning others can also contribute to a rich ecosystem of third-party plugins/extensions.

The beauty of this is that almost every tool you could want exists within Raycast—and the absolute killer feature is that you probably already know how to use it.

Let me show you.

Raycast replaces the Spotlight launcher in macOS, by default opening when you press Cmd-Space. All dialogues have a standard layout, which is at the heart of the UX.

Filter entries

For example, here’s the Clipboard manager extension:

[Image: Clipboard manager]

I can start typing to filter the entries (notice how it’s also filtering on text that appears within images that were on my clipboard):

[Image: Filtering Clipboard manager entries]

It’s the exact same interface in another extension, such as the file search one here:

[Image: File search]

Context-appropriate Actions

The next bit is the actions, opened with Cmd-K. The actions shown are context-sensitive: they depend on what you’re doing.

So if you’re using the clipboard manager and have a URL, you get the option to open it in a browser, paste it into the current editor, set it as the current clipboard entry, etc:

[Image: Context actions for a URL]

Whereas if you have a file you can open the containing folder, copy the path, the file, etc:

[Image: Context actions for a file]

All the actions have a keyboard shortcut, making them easy to invoke without even needing to open the menu.

Filter

In the top-right of the dialogue is the filter. This isn’t always present—it depends on the context.

For the clipboard manager you can filter on the type of entry, including images, text, URLs—and colours! Notice how it renders the colours too. Just one more thing on the long list of awesome touches that make Raycast :chefkiss:

[Image: Filtering clipboard by type]

Everywhere you look in Raycast the UX is smoooooth. An example of this is that when you open the filter (or actions, or any other menu), you can start typing to filter the entries shown (just like in the main dialogue):

[Image: Refining the filter list by typing]

You could use the arrow keys or mouse to select the option, but if your hands are on the keyboard already it becomes second nature, particularly when it’s the same UX pattern repeated throughout the app.

The Keyboard

Raycast is a keyboard-first app. You can use the mouse, of course, but the real productivity comes from the plethora of keyboard shortcuts available. I don’t just mean customising shortcuts to launch apps, etc—that is just table stakes.

The winning feature—that is so good that I keep trying to use it on other non-Raycast apps—is that an option can be accessed by its number in the list. This might not sound super useful, but let me explain.

You open the clipboard manager, and want to paste the fourth option shown into the current app. You could arrow down to it. You could mouse down to it. You could start typing to filter to it. Or you just press Cmd-4 and select it immediately.

[Image: Quickly access clipboard history with keyboard shortcuts]

Cool Stuff with the Clipboard

Clipboard managers are ridiculously useful, because whenever you copy (or cut) something, it’s stored. That means you can retrieve things you put on the clipboard previously—a minute ago, an hour ago, a week ago.

Raycast’s clipboard manager has some very nice additional features, including:

  • Save an entry as a "snippet" (for permanent re-use - great for things like your email address, website, stock phrases you use in emails, etc)

  • Copy text out of an image (OCR)

  • Quick-look at any image on the clipboard (Cmd-Y)

    [Image: Quick-look at any image in the clipboard history]
  • Drag and drop an image into any application. This one is not so obvious until you use it and then it’s 🤯

    [Image: Drag and drop clipboard history items into other applications]

Window management

The window manager in Raycast works great, particularly when bound to hotkeys.

Focus mode

I am the worst for losing focus when working; a computer with the internet gives you access to infinite distractions. The Focus mode option in Raycast is a good way to take a Pomodoro approach to working, blocking things you don’t want to be able to access for a set amount of time.

[Image: Srsly, just write that damn blog post already]

Emojis 😅

The humble emoji may have been appropriated by the LLMs as a signature of their slop, but I still like to use them (just like the honourable em dash).

Raycast has a nice built-in emoji picker which uses natural language to offer up suitable emojis, as well as having nice filtering and action options.

Some of my favourite extensions

GitHub

Access and search all of your issues:

[Image: GitHub issues search]

It’s the same UX that you use elsewhere in Raycast. This means that you can just start typing to filter issues, select specific repositories (Cmd-P), and invoke actions on the selected issue:

[Image: Actions available on a GitHub issue]

Date conversion

Being able to convert back and forth between different time formats, including Unix time, might not seem like a big deal. Everyone’s got their own favourite CLI command or webpage to do it.
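For the curious, the round-trip itself is simple enough; here’s a minimal sketch in Kotlin using java.time (the timestamp value is just an example):

import java.time.Instant
import java.time.ZoneId

fun main() {
    // Unix timestamp -> human-readable instant
    val instant = Instant.ofEpochSecond(1_735_689_600L)
    println(instant)                                    // 2025-01-01T00:00:00Z
    println(instant.atZone(ZoneId.of("Europe/London"))) // same moment, zoned

    // ...and back again: ISO-8601 string -> Unix timestamp
    println(Instant.parse("2025-01-01T00:00:00Z").epochSecond) // 1735689600
}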

The power of Raycast is that once you’re in its ecosystem, why would you do anything else? Just hit launch (Cmd-Space) and type a couple of characters to find the command:

[Image: Finding the date conversion extension]

and then dive straight in, starting with today’s date by default:

[Image: Date conversion showing today’s date in multiple formats]

or with a Unix timestamp:

[Image: Parsing a Unix timestamp]

You can also type in freeform ("yesterday", "last week", "23 dec") and it parses those just fine.

AI

I’m perhaps burying the lede with this one. Some people are allergic to the AI word (acronym, I guess), and if you’re an AI-denier, you can finish reading the article here :-P

For me, I’m using AI a lot. After all, guess who wrote this blog post?!

jokes. I wrote this myself. I’d not get AI to do that, it’s gross.

But AI will have proofread it, AI helped fix the CSS problems, AI told me how to quickly resize the videos from the CLI without me having to navigate the ffmpeg man pages. AI is a HUGE productivity booster, used correctly (just like any tool).

Raycast offers access to a bunch of LLMs through its AI chat (similar, conceptually, to user-facing Claude Desktop, ChatGPT, etc):

[Image: Raycast AI chat interface]

Raycast gives you access to dozens of LLMs, including all the very latest ones—which as of December 2025 are GPT 5.1, Claude 4.5 Sonnet, Gemini 3 Pro. You can switch between models mid-conversation, as well as re-run a previous prompt with a different model. Here’s the above prompt re-run using GPT and Gemini. The current model is shown in the bottom-left:

[Image: The above prompt re-run using GPT and Gemini]

Raycast includes its own "AI Extensions" (for things like web page browsing, image generation, etc), as well as an MCP client. This means you can do magical things like this:

[Image: Using MCP to read from an API]

The Markdown output that LLMs (particularly Claude) generate is rendered nicely, and includes copy buttons for code snippets:

[Image: Code snippets with copy buttons]

You can upload images and files through the chat, including from elsewhere in the Raycast application. For example, let’s take an image from my clipboard history:

[Image]

and use it in chat:

[Image]

AI Presets

I mentioned above some of the things that I use AI for; some are just ad-hoc, whilst others I’ll use again. For these it makes sense to create a preset, in which you define a system prompt for the LLM that’s stored so you don’t need to enter it each time.

So when I want to get it to proofread this blog post, I invoke Raycast (Cmd-Space) and start typing the name:

[Image]

I hit enter to launch it, and then paste in the contents of the draft, and off it goes.

The preset looks like this:

This is the draft of a blog article I am about to publish. I would like you to concisely list the following:
- any typos. check what I give you five times to make sure you have caught everything.
- any factual errors or inconsistencies

Your primary responsibility is to catch typos and errors. I write in en-gb. Do not challenge any en-gb spellings.
Do not report on the use of `automagically`. This is a good word.

In addition:
- Provide a very brief summary of the readability of the article. My voice is a technical yet informal one, aimed solely at a developer audience. I use colloquialisms and snark.
- Highlight any fallacies, lazy arguments, inconsistencies or illogical statements.

Yes I know, the "check five times" is cargo-culting, but for some versions of the LLMs it made them more attentive ¯\_(ツ)_/¯

[Image]

Being able to switch between models is very useful because as the AI-skeptics will rush to breathlessly tell you about LLMs: ThEy HaLlUcInAtE aNd YoU CaN’T TRusT ThEM!!!111!. For example, Claude 4.5 Sonnet is clearly talking out of its digital exhaust hole above, and I have access to different LLMs at the press of a button (Shift-Cmd-R). So I can easily see what a different model (e.g. Gemini Pro 3) thinks:

[Image]

I can’t resist including this entry from Gemini when it reviewed this post and in total seriousness corrected my spelling:

[Image]

This is some clever stuff.

Like what you see?

You can download Raycast here.

It’s got a free plan for all of the core functionality, and then you can pay for Pro ($8 pcm)/Pro + Advanced AI ($16 pcm) for access to some or all LLMs plus stuff like syncing your settings to the cloud, etc.

I have no affiliation with Raycast—I just love their product (and keep on recommending it to people, hence taking the time to write down what is quite so good about it).



First-Class Docker Support: Building and Deploying Containers With TeamCity


This article was brought to you by Kumar Harsh, draft.dev.

Docker has changed the way we build and ship software. Instead of wrestling with “it works on my machine” issues, developers can now package applications and all their dependencies into a neat, portable container that runs anywhere. No wonder Docker has become the de facto standard for modern DevOps workflows.

However, building and deploying containers at scale isn’t as simple as running docker build. You need a reliable CI/CD system that can consistently build, test, and push images to your registries while keeping the process fast, repeatable, and secure.

In this article, we’ll explore how to set up a complete Docker-based build and deployment pipeline with TeamCity’s first-class Docker support. You’ll see how features like built-in runners, native registry integration, and Kotlin DSL support make container pipelines smoother and more maintainable compared to the plugin-heavy, script-driven approach in Jenkins.

By the end, you’ll know exactly how to create and run a Docker pipeline in TeamCity, from building images to pushing them to your registry and even deploying them to staging.

The Docker pipeline setup experience in Jenkins vs. TeamCity

If you’ve ever tried to set up a Docker pipeline in Jenkins, you know the drill: find and install the right plugins, configure them to match your environment, and then hold your breath hoping they don’t break when Jenkins upgrades.

Even the official Docker plugin, while powerful, requires manual setup, custom scripting, and constant upkeep to stay compatible.

For many teams, this quickly turns into a maintenance burden, especially as pipelines grow more complex.

TeamCity takes a very different approach. Docker support isn’t added on via third-party plugins; it’s baked into the product. Right out of the box, you get dedicated Docker build runners, registry integration, and full support for defining Docker steps in both the UI and the Kotlin DSL. That means no hunting down plugins, no brittle scripts, and far fewer surprises during upgrades.

Another difference lies in configuration. Jenkins pipelines often rely on long Groovy scripts or scattered YAML files, which can be challenging to maintain over time. TeamCity, on the other hand, offers a clean UI-driven configuration for quick setup, with the option to switch over to the Kotlin DSL for version-controlled, production-grade pipelines. This dual approach makes it easy to start simple and then scale your configuration as your projects demand.

How TeamCity handles Docker better

Here’s what TeamCity’s native Docker support looks like in practice:

  • Docker build runners: Instead of writing ad hoc scripts, you can add dedicated Docker build steps directly in your pipeline. Whether you’re building images, running containers, or cleaning up afterward, it’s all handled through first-class runners.
  • Built-in registry support: Authenticating and pushing images to Docker Hub, GitHub Container Registry, or a private registry is straightforward. TeamCity provides registry connections out of the box, so you don’t have to wire up custom credentials every time.
  • Kotlin DSL integration: If you prefer pipelines as code, you can declare Docker build and push steps in the Kotlin DSL with just a few lines. This makes it easy to track changes in version control and keep your pipelines reproducible.
  • Bundled Docker plugin: Perhaps the best part about all this is that there’s no separate plugin to install. The Docker integration is bundled with TeamCity, maintained alongside the product itself. That means fewer moving parts and no surprises during upgrades.

Creating a Docker build and push pipeline in TeamCity

Let’s now see TeamCity’s Docker support in action by setting up a simple build-and-push pipeline. The goal here is to take a standard Dockerfile, build an image from it, and push that image to a container registry like Docker Hub or GitHub Container Registry.

Step 1: Set up your Dockerfile

Start with a project that has a valid Dockerfile at its root. (You can use this one if you don’t have your own. Make sure to fork it to your GitHub account.)

Here’s what the Dockerfile in this project looks like:

# Use official Node.js LTS image
FROM node:24-alpine

# Set working directory
WORKDIR /usr/src/app

# Copy package files and install dependencies
COPY package*.json ./
RUN npm install --production

# Copy app source
COPY index.js ./

# Expose the port the app runs on
EXPOSE 3000

# Start the app
CMD ["node", "index.js"]

It’s a pretty barebones Dockerfile for setting up the environment for a Node.js app, copying source code files, and running the app.

Step 2: Add a Docker build step

In TeamCity, set up a new project for your pipeline.

Note: If you’re creating a new TeamCity project with a Dockerfile, TeamCity will most likely autodetect the right build steps for you to get started quickly. You can select the right steps for your workflow and click Use selected to set up the pipeline right away!

To learn how to add a Docker build step yourself, read on.

In your TeamCity project, create a new build configuration if you don’t have one prepared already. In the build configuration settings page, go to Build Steps and add a new build step to build the Docker image.

Choose Docker as the runner for the build step:

On the next page, you can configure what happens in this new build step. TeamCity’s Docker build runner makes this process straightforward. You don’t have to write ad hoc shell scripts for every operation – just pick the command you want (build, push, or other) and fill in the additional parameters as you need.

For example, in the build step, you need to configure the path to your Dockerfile, the platform your built images should target, and the name and tag for it. You can also supply additional arguments to add to the docker build command as follows, should you need to:

Thanks to TeamCity’s registry connections, you don’t need to embed credentials in scripts. TeamCity logs in before the build and automatically logs out afterward.

💡 Pro tip: You can set environment variables in TeamCity (like commit SHA or build number) and use them in your image tags for traceability.

Here’s the equivalent Kotlin DSL snippet:

steps {
    dockerCommand {
        name = "Build"
        id = "Build"
        commandType = build {
            source = file {
                path = "Dockerfile"
            }
            platform = DockerCommandStep.ImagePlatform.Linux
            namesAndTags = "krharsh17/hello-node:latest"
            commandArgs = "--pull"
        }
    }
}
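Building on the pro tip above, here’s a hedged sketch of what traceable tags could look like in the DSL. It assumes TeamCity’s standard %build.number% and %build.vcs.number% parameters, which the server expands before the step runs; the image name is the same example used throughout this article:

steps {
    dockerCommand {
        name = "Build"
        commandType = build {
            source = file {
                path = "Dockerfile"
            }
            // One image name per line; TeamCity substitutes the %...%
            // parameter references before invoking docker build.
            namesAndTags = """
                krharsh17/hello-node:%build.number%
                krharsh17/hello-node:%build.vcs.number%
            """.trimIndent()
        }
    }
}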

Step 3: Add a Docker push step

Next, add a Docker push build step. Select Docker once again as the build runner, and select push as the Docker command this time. Provide an image name and tag to use when pushing the image to your Docker registry:

Here’s what the build step looks like as a Kotlin DSL snippet:

steps {
    dockerCommand {
        name = "Push"
        id = "Push"
        commandType = push {
            namesAndTags = "krharsh17/hello-node:latest"
        }
    }
}

Save the build step.

Step 4: Configure Docker registry connection

All that’s left now is to provide the TeamCity project with instructions on how to access your container registry account. You’ll need to do two things:

  • Create a new connection in your project.
  • Configure your build-and-push build configuration to use the Docker Registry Connections build feature to access the connection you just created.

To create the new connection, head over to Admin | Your Project | Connections | New Connection.

Choose Docker Registry as the connection type, and provide your registry address and a username and password pair if needed:

Test the connection and save it.

To use this connection through the Docker Registry Connections build feature, head over to your build configuration’s settings page and click the Build Features tab. Click the + Add Build Feature button here. In the dialog that opens, select Docker Registry Connections as the build feature to add.

Next, you need to choose which connection to link here. Click on the + Add registry connection button, and select the new connection you just created:

Click Save to add the feature.

If you prefer Kotlin DSL, here’s what the new build feature would look like:

features {
    dockerRegistryConnections {
        loginToRegistry = on {
            dockerRegistryId = "PROJECT_EXT_3"
        }
    }
}

PROJECT_EXT_3 is the connection ID. You can get this value from the Connections page on your TeamCity project.

Step 5: Testing the pipeline

You’re all set! It’s time to test the pipeline now.

Try triggering a build. You should see a new image tag get pushed to your Docker registry as soon as the build completes:

This means that your Docker-native pipeline is ready.

You can also go further by adding steps to run containerized tests or deploy to a staging environment. For instance, spin up the freshly built container with docker run as part of your CI/CD workflow, then run integration tests against it.
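As a rough sketch of that idea (my own, not from the article): a script build step that starts the image we just built, probes it once, and tears it down. The port and endpoint are assumptions based on the sample app’s EXPOSE 3000:

steps {
    script {
        name = "Smoke test"
        // Start the container, hit it once, then clean up either way.
        scriptContent = """
            docker run -d --rm --name hello-node -p 3000:3000 krharsh17/hello-node:latest
            sleep 5
            curl --fail http://localhost:3000/ || (docker stop hello-node; exit 1)
            docker stop hello-node
        """.trimIndent()
    }
}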

Integrated security and caching features

When building and pushing containers, you need to ensure functionality, security, and efficiency. TeamCity’s native Docker support includes features that help you protect sensitive data and speed up pipelines without extra work:

  • Secure registry authentication: TeamCity’s Docker Registry Connections build feature automatically logs in to container registries (like Docker Hub or private registries) before each build and logs out afterward. You don’t need to embed credentials in scripts. TeamCity manages them securely for you.
  • Image cleanup: When enabled, the Docker Registry Connections feature can automatically clean up pushed images after builds are cleaned up on the server. This keeps registry storage tidy and maintains good hygiene for build artifacts.
  • Layer caching for speed: Rebuilding from scratch every time slows down development. With TeamCity’s Build Cache feature, key files and dependencies (like node_modules/ or .m2/) can be cached and reused across builds, significantly accelerating repeat runs.
  • Optimized for iterative workflows: With secure, credential-managed builds and reusable cache artifacts, teams can iterate quickly on Docker pipelines. Small updates don’t mean starting over from scratch, and the process stays secure by default.

Conclusion

If you’ve ever grappled with Docker pipelines in Jenkins, you know how fragile things can feel: chasing down plugin updates, maintaining brittle scripts, and dealing with configs that never quite stay consistent. It works, but it often feels like you’re spending more time nursing your CI/CD than actually delivering software.

TeamCity treats Docker as a first-class citizen. Native runners, registry integrations, caching, secrets management, and the Kotlin DSL replace Jenkins’s patchwork setup with a workflow you can actually rely on. Instead of simply trying to get builds to pass, you have a system you can trust to scale with you.

If you’re already running Docker pipelines in Jenkins, the migration path is straightforward and liberating. You’ll spend less time firefighting pipeline issues and more time shipping the features your users are waiting for.

If you’re ready to modernize your container pipelines, it’s worth seeing TeamCity in action. Head over to the TeamCity Docker documentation, or try TeamCity yourself and experience how first-class Docker support can simplify your CI/CD pipeline.


AGL 448: MichaelAaron Flicker


About MichaelAaron

MichaelAaron Flicker is founder and president of XenoPsi Ventures, a brand incubator firm providing financial, marketing and intellectual capital to a growing portfolio of companies.

He launched the business as a high school freshman 27 years ago in Ridgewood, NJ. Strategically, he focuses on managing XenoPsi Ventures’ portfolio of businesses, launching new companies and building equity-based partnerships with advertisers via XenoPsi Ventures’ innovative remuneration packages based on equity, not billable hours for services rendered.

He is president of Method1, Function Growth and Z/Axis Strategies, three of XenoPsi Ventures’ professional services portfolio companies. He is also the president and founder of the Wellow compression sock e-commerce brand, launched in November 2021 with no outside investment.

He is a co-founder of the Consumer Behavior Lab in tandem with Richard Shotton. The CBL’s mission is to explore how behavioral science can be applied to improve the effectiveness and efficiency of media and marketing.

In 2022, he was recognized as one of the “40 Under 40” by NJBIZ. He is a Board Advisor to Shady Rays and Frances Prescott. From 2022 to 2024, XenoPsi was named every year to the Inc. 5000 list of the fastest-growing private companies in America.

MichaelAaron has worked with many of the country’s leading brands including Nike, JPMorgan Chase & Co, AstraZeneca Pharmaceuticals, ACE Insurance, Chubb, & Evan Williams Bourbon.

Outside of work, MichaelAaron is Executive Director of Super Science Saturday (a Northern New Jersey science extravaganza for kids), a loving husband, and father of three beautiful children.


Today We Talked About

  • Background
  • Behavioral Science
  • 17 Brands
  • What is the “Why” behind what works
  • Human Mind
  • Not what we say, but how we behave
  • IKEA effect
  • Choices of words matter
  • A/B testing your leadership style
  • Assume everyone has your best interests in mind
  • Respond with open-ended questions

Connect with MichaelAaron


Leave me a tip $
Click here to Donate to the show


I hope you enjoyed this show, please head over to Apple Podcasts and subscribe and leave me a rating and review, even one sentence will help spread the word.  Thanks again!





Download audio: https://media.blubrry.com/a_geek_leader_podcast__/mc.blubrry.com/a_geek_leader_podcast__/AGL_448_MichaelAaron_Flicker.mp3?awCollectionId=300549&awEpisodeId=11821869&aw_0_azn.pgenre=Business&aw_0_1st.ri=blubrry&aw_0_azn.pcountry=US&aw_0_azn.planguage=en&cat_exclude=IAB1-8%2CIAB1-9%2CIAB7-41%2CIAB8-5%2CIAB8-18%2CIAB11-4%2CIAB25%2CIAB26&aw_0_cnt.rss=https%3A%2F%2Fwww.ageekleader.com%2Ffeed%2Fpodcast

How To Measure The Impact Of Features


So we design and ship a shiny new feature. How do we know if it’s working? How do we measure and track its impact? There is no shortage of UX metrics, but what if we wanted to establish a simple, repeatable, meaningful UX metric — specifically for our features? Well, let’s see how to do just that.

I first heard about the TARS framework from Adrian H. Raudaschl’s wonderful article on “How To Measure Impact of Features”. In it, Adrian highlighted how his team tracks and decides which features to focus on — and then maps them against each other in a 2×2 matrix.

It turned out to be a very useful framework to visualize the impact of UX work through the lens of business metrics.

Let’s see how it works.

1. T = Target Audience (%)

We start by quantifying the target audience by exploring what percentage of a product’s users have the specific problem that a feature aims to solve. We can study existing or similar features that try to solve similar problems, and how many users engage with them.

Target audience isn’t the same as feature usage though. As Adrian noted, if we know that an existing Export Button feature is used by 5% of all users, it doesn’t mean that the target audience is 5%. More users might have the problem that the export feature is trying to solve, but they can’t find it.

Question we ask: “What percentage of all our product’s users have that specific problem that a new feature aims to solve?”
2. A = Adoption (%)

Next, we measure how well we are “acquiring” our target audience. For that, we track how many users actually engage successfully with that feature over a specific period of time.

We don’t focus on CTRs or session duration there, but rather if users meaningfully engage with it. For example, if anything signals that they found it valuable, such as sharing the export URL, the number of exported files, or the usage of filters and settings.

High feature adoption (>60%) suggests that the problem was impactful. Low adoption (<20%) might imply that the problem has simple workarounds that people have relied upon. Changing habits takes time, too, and so low adoption in the beginning is expected.

Sometimes, low feature adoption has nothing to do with the feature itself, but rather where it sits in the UI. Users might never discover it if it’s hidden or if it has a confusing label. It must be obvious enough for people to stumble upon it.

Low adoption doesn’t always equal failure. If a problem only affects 10% of users, hitting 50–75% adoption within that specific niche means the feature is a success.

Question we ask: “What percentage of active target users actually use the feature to solve that problem?”
3. R = Retention (%)

Next, we study whether a feature is actually used repeatedly. We measure the frequency of use, or specifically, how many users who engaged with the feature actually keep using it over time. Typically, it’s a strong signal for meaningful impact.

If a feature has >50% retention rate (avg.), we can be quite confident that it has a high strategic importance. A 25–35% retention rate signals medium strategic significance, and retention of 10–20% is then low strategic importance.

Question we ask: “Of all the users who meaningfully adopted a feature, how many came back to use it again?”
4. S = Satisfaction Score (CES)

Finally, we measure the level of satisfaction that users have with that feature that we’ve shipped. We don’t ask everyone — we ask only “retained” users. It helps us spot hidden troubles that might not be reflected in the retention score.

Once users have actually used a feature multiple times, we ask them how easy it was to solve their problem with it — between “much more difficult” and “much easier than expected”. That gives us our satisfaction score.

Using TARS For Feature Strategy

Once we start measuring with TARS, we can calculate an S÷T score — the percentage of Satisfied Users ÷ Target Users. It gives us a sense of how well a feature is performing for our intended target audience. Once we do that for every feature, we can map all features across 4 quadrants in a 2×2 matrix.

Overperforming features are worth paying attention to: they have low retention but high satisfaction. These might simply be features that users don’t need to use frequently, but when they do, they’re extremely effective.

Liability features have high retention but low satisfaction, so perhaps we need to work on them to improve them. And then we can also identify core features and project features — and have a conversation with designers, PMs, and engineers on what we should work on next.
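To make the mapping concrete, here is a minimal sketch in Kotlin. The 50% cut-offs are illustrative (the article doesn’t prescribe exact thresholds), and placing core features at high retention/high satisfaction is my reading of the matrix:

data class Feature(
    val name: String,
    val targetPct: Double,       // T: % of all users who have the problem
    val adoptionPct: Double,     // A: % of target users who adopt the feature
    val retentionPct: Double,    // R: % of adopters who keep using it
    val satisfactionPct: Double, // S: % of retained users who are satisfied
)

// S ÷ T from the article: satisfied users as a share of the target audience.
// Expressed as the funnel A × R × S, since T cancels out of the ratio.
fun sOverT(f: Feature): Double =
    f.adoptionPct / 100 * f.retentionPct / 100 * f.satisfactionPct / 100

// 2×2 matrix placement; the names come from the article, the cut-offs are assumed.
fun quadrant(f: Feature): String {
    val highRetention = f.retentionPct >= 50.0
    val highSatisfaction = f.satisfactionPct >= 50.0
    return when {
        highRetention && highSatisfaction -> "Core feature"
        highRetention -> "Liability"
        highSatisfaction -> "Overperforming"
        else -> "Project feature"
    }
}

fun main() {
    // A niche feature: only 10% of users have the problem, but the funnel is strong.
    val export = Feature("Export", targetPct = 10.0, adoptionPct = 60.0,
                         retentionPct = 40.0, satisfactionPct = 80.0)
    println("${export.name}: S÷T = ${"%.2f".format(sOverT(export))}, quadrant = ${quadrant(export)}")
}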

Conversion Rate Is Not a UX Metric

TARS doesn’t cover conversion rate, and for a good reason. As Fabian Lenz noted, conversion is often considered to be the ultimate indicator of success — yet in practice it’s always very difficult to present a clear connection between smaller design initiatives and big conversion goals.

The truth is that almost everybody on the team is working towards better conversion. An uptick might be connected to many different initiatives — from sales and marketing to web performance boosts to seasonal effects to UX initiatives.

UX can, of course, improve conversion, but it’s not really a UX metric. Often, people simply can’t choose the product they are using. And often a desired business outcome comes out of necessity and struggle, rather than trust and appreciation.

High Conversion Despite Bad UX

As Fabian writes, high conversion rate can happen despite poor UX, because:

  • Strong brand power pulls people in,
  • Urgency tactics are aggressive but effective,
  • Prices are extremely attractive,
  • Marketing performs brilliantly,
  • Customers are historically loyal,
  • Users simply have no alternative.

Low Conversion Despite Great UX

At the same time, a low conversion rate can occur despite great UX, because:

  • Offers aren’t relevant to the audience,
  • Users don’t trust the brand,
  • The business model is poor or the risk of failure is high,
  • Marketing doesn’t reach the right audience,
  • External factors get in the way (price, timing, competition).

An improved conversion is the positive outcome of UX initiatives. But good UX work typically improves task completion, reduces time on task, minimizes errors, and avoids decision paralysis. And there are plenty of actionable design metrics we could use to track UX and drive sustainable success.

Wrapping Up

Product metrics alone don’t always provide an accurate view of how well a product performs. Sales might perform well, but users might be extremely inefficient and frustrated. Yet the churn is low because users can’t choose the tool they are using.

We need UX metrics to understand and improve user experience. What I love most about TARS is that it’s a neat way to connect customers’ usage and customers’ experience with relevant product metrics. Personally, I would extend TARS with UX-focused metrics and KPIs as well — depending on the needs of the project.

Huge thanks to Adrian H. Raudaschl for putting it together. And if you are interested in metrics, I highly recommend you follow him for practical and useful guides all around just that!

Meet “How To Measure UX And Design Impact”

You can find more details on UX Strategy in 🪴 Measure UX & Design Impact (8h), a practical guide for designers and UX leads to measure and show your UX impact on business. Use the code 🎟 IMPACT to save 20% off today. Jump to the details.



Article: Where Architects Sit in the Era of AI


As AI evolves from tool to collaborator, architects must shift from manual design to meta-design. This article introduces the "Three Loops" framework (In, On, Out) to help navigate this transition. It explores how to balance oversight with delegation, mitigate risks like skill atrophy, and design the governance structures that keep AI-augmented systems safe and aligned with human intent.

By Dave Holliday, João Carlos Gonçalves, Manoj Kumar Yadav

Giving OpenAI Codex a try in VSCode


At GitHub Universe, GitHub announced that you can use OpenAI Codex with your existing GitHub Copilot Pro+ subscription.

To do that, we first need to install the OpenAI Codex extension and sign in with GitHub Copilot.

Installation & configuration

You can install the extension directly from the Extensions view or through the Agent sessions view:

After the installation has completed, you need to sign in. You can either use your ChatGPT account or your (existing) GitHub Copilot subscription.

Once signed in, we have an extra chat window available:

There are a few things we can configure here:

  • Environment:
    • Local workspace: The agent will interact with your local machine and VSCode workspace.
    • Connect Codex Web: Send the chat to the ChatGPT web interface.
    • Send to cloud: The agent will operate in a sandboxed cloud environment.

 

  • Chat Mode (called approval modes in OpenAI Codex):
    • Chat: Regular chat, doesn’t do any changes directly.
    • Agent: The Codex agent can read files, make edits, and run commands in the working directory automatically. However, it needs approval to work outside the working directory or to access the network.
    • Agent (Full Access): The Codex agent is allowed to read files, make edits, and run commands with network access, without approval.

 

  • Models:
    • Select any of the available OpenAI models

 

  • Reasoning effort:
    • You can adjust the reasoning effort of Codex to make it think more or less before answering.
    • Remark: In my case this option is disabled, probably because I’m using a GitHub Copilot subscription.

You can further tweak Codex through the config.toml file. To do so, click the gear icon in the top-right corner of the extension and then click Codex Settings > Open config.toml.

 

Our first interaction

The basic interactions are quite similar to those of any other AI agent in your IDE. We can ask it to do a review, for example:

Notice that the Codex agent is using ‘Auto Context’ and limits its review to the active open file in VS Code.

Codex also supports a (limited) set of slash commands to execute common and specific tasks:

 

You can monitor the number of tokens used by hovering over the icon in the right corner of the chat window:

My feedback

I only spent a limited amount of time using the Codex extension, so don’t take this as a full review. Being used to having GitHub Copilot as an integrated part of my development experience, I found the Codex extension quite limited. It felt mostly like a command-line tool with a minimal shell built on top of it. MCP server integration, slash commands, IDE integration, … all felt a bit more cumbersome compared to what I’m used to.

The output itself is quite good, so no complaints there.

One feature that stood out for me is the sandbox mode. In this mode, Codex will work in a restricted environment and do the following:

  • Launches commands inside a restricted token derived from an AppContainer profile.
  • Grants only specifically requested filesystem capabilities by attaching capability SIDs to that profile.
  • Disables outbound network access by overriding proxy-related environment variables and inserting stub executables for common network tools.

Another option is to run Codex inside WSL, which they recommend:

 

Remark: It’s important to note that we are not talking about the OpenAI GPT-5 Codex model, which can be used directly from the list of available models in GitHub Copilot.

More information

Codex IDE extension

Codex – OpenAI’s coding agent - Visual Studio Marketplace
