Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

AGL 448: MichaelAaron Flicker


About MichaelAaron

MichaelAaron Flicker is founder and president of XenoPsi Ventures, a brand incubator firm providing financial, marketing, and intellectual capital to a growing portfolio of companies.

He launched the business as a high school freshman 27 years ago in Ridgewood, NJ. Strategically, he focuses on managing XenoPsi Ventures’ portfolio of businesses, launching new companies, and building equity-based partnerships with advertisers via XenoPsi Ventures’ innovative remuneration packages, which are based on equity rather than billable hours for services rendered.

He is president of Method1, Function Growth, and Z/Axis Strategies, three of XenoPsi Ventures’ professional services portfolio companies. He is also the president and founder of Wellow, a compression sock e-commerce brand launched in November 2021 with no outside investment.

He is a co-founder of the Consumer Behavior Lab in tandem with Richard Shotton. The CBL’s mission is to explore how behavioral science can be applied to improve the effectiveness and efficiency of media and marketing.

In 2022, he was recognized as one of the “40 Under 40” by NJBIZ. He is a Board Advisor to Shady Rays and Frances Prescott. From 2022 to 2024, XenoPsi was named every year to the Inc. 5000 list of the fastest-growing private companies in America.

MichaelAaron has worked with many of the country’s leading brands, including Nike, JPMorgan Chase & Co., AstraZeneca Pharmaceuticals, ACE Insurance, Chubb, and Evan Williams Bourbon.

Outside of work, MichaelAaron is Executive Director of Super Science Saturday (a Northern New Jersey science extravaganza for kids), a loving husband, and father of three beautiful children.


Today We Talked About

  • Background
  • Behavioral Science
  • 17 Brands
  • What is the “Why” behind what works
  • Human Mind
  • Not what we say, but how we behave
  • The IKEA effect
  • Choice of words matters
  • A/B testing your leadership style
  • Assume everyone has the best intentions in mind
  • Respond with open-ended questions

Connect with MichaelAaron


Leave me a tip $
Click here to Donate to the show


I hope you enjoyed this show. Please head over to Apple Podcasts, subscribe, and leave me a rating and review; even one sentence will help spread the word. Thanks again!





Download audio: https://media.blubrry.com/a_geek_leader_podcast__/mc.blubrry.com/a_geek_leader_podcast__/AGL_448_MichaelAaron_Flicker.mp3?awCollectionId=300549&awEpisodeId=11821869&aw_0_azn.pgenre=Business&aw_0_1st.ri=blubrry&aw_0_azn.pcountry=US&aw_0_azn.planguage=en&cat_exclude=IAB1-8%2CIAB1-9%2CIAB7-41%2CIAB8-5%2CIAB8-18%2CIAB11-4%2CIAB25%2CIAB26&aw_0_cnt.rss=https%3A%2F%2Fwww.ageekleader.com%2Ffeed%2Fpodcast

How To Measure The Impact Of Features


So we design and ship a shiny new feature. How do we know if it’s working? How do we measure and track its impact? There is no shortage of UX metrics, but what if we wanted to establish a simple, repeatable, meaningful UX metric — specifically for our features? Well, let’s see how to do just that.

I first heard about the TARS framework from Adrian H. Raudaschl’s wonderful article on “How To Measure Impact of Features”. In it, Adrian highlights how his team tracks and decides which features to focus on — and then maps them against each other in a 2×2 quadrant matrix.

It turned out to be a very useful framework to visualize the impact of UX work through the lens of business metrics.

Let’s see how it works.

1. Target Audience (%)

We start by quantifying the target audience by exploring what percentage of a product’s users have the specific problem that a feature aims to solve. We can study existing or similar features that try to solve similar problems, and how many users engage with them.

Target audience isn’t the same as feature usage though. As Adrian noted, if we know that an existing Export Button feature is used by 5% of all users, it doesn’t mean that the target audience is 5%. More users might have the problem that the export feature is trying to solve, but they can’t find it.

Question we ask: “What percentage of all our product’s users have that specific problem that a new feature aims to solve?”

2. Adoption (%)

Next, we measure how well we are “acquiring” our target audience. For that, we track how many users actually engage successfully with that feature over a specific period of time.

We don’t focus on CTRs or session duration here, but rather on whether users meaningfully engage with it: anything that signals they found it valuable, such as sharing the export URL, the number of exported files, or the usage of filters and settings.

High feature adoption (>60%) suggests that the problem was impactful. Low adoption (<20%) might imply that the problem has simple workarounds that people have relied upon. Changing habits takes time, too, and so low adoption in the beginning is expected.

Sometimes, low feature adoption has nothing to do with the feature itself, but rather where it sits in the UI. Users might never discover it if it’s hidden or if it has a confusing label. It must be obvious enough for people to stumble upon it.

Low adoption doesn’t always equal failure. If a problem only affects 10% of users, hitting 50–75% adoption within that specific niche means the feature is a success.

Question we ask: “What percentage of active target users actually use the feature to solve that problem?”
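
As a rough sketch of the arithmetic, assuming hypothetical analytics counts and the thresholds quoted above, adoption could be computed and interpreted like this (the function names and numbers are illustrative, not part of the TARS framework itself):

```python
# Minimal sketch: adoption rate within the target audience (illustrative numbers only).

def adoption_rate(meaningful_adopters: int, target_users: int) -> float:
    """Share of the target audience that meaningfully engaged with the feature."""
    return meaningful_adopters / target_users if target_users else 0.0

def interpret_adoption(rate: float) -> str:
    # Thresholds quoted in the article: >60% is high, <20% is low.
    if rate > 0.60:
        return "high adoption: the problem was likely impactful"
    if rate < 0.20:
        return "low adoption: workarounds, poor discoverability, or habits not yet changed"
    return "moderate adoption: keep watching the trend"

# Hypothetical numbers: 10% of 40,000 users have the problem (target audience),
# and 2,600 of them meaningfully used the new export feature.
target = int(40_000 * 0.10)
rate = adoption_rate(2_600, target)
print(f"Adoption: {rate:.0%} -> {interpret_adoption(rate)}")  # Adoption: 65% -> high adoption...
```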

3. Retention (%)

Next, we study whether a feature is actually used repeatedly. We measure the frequency of use or, more specifically, how many users who engaged with the feature keep using it over time. Typically, this is a strong signal of meaningful impact.

If a feature has a >50% retention rate (avg.), we can be quite confident that it has high strategic importance. A 25–35% retention rate signals medium strategic significance, and a retention rate of 10–20% signals low strategic importance.

Question we ask: “Of all the users who meaningfully adopted a feature, how many came back to use it again?”
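
A minimal sketch of that measurement, assuming we can identify adopters and later returning users by ID (the user sets and the importance bands below simply mirror the numbers quoted above and are illustrative):

```python
# Minimal sketch: retention among users who adopted the feature (illustrative only).

def retention_rate(adopters: set[str], returning_users: set[str]) -> float:
    """Share of adopters who came back and used the feature again in a later period."""
    return len(adopters & returning_users) / len(adopters) if adopters else 0.0

def strategic_importance(rate: float) -> str:
    # Bands quoted in the article: >50% high, 25-35% medium, 10-20% low.
    if rate > 0.50:
        return "high strategic importance"
    if 0.25 <= rate <= 0.35:
        return "medium strategic significance"
    if 0.10 <= rate <= 0.20:
        return "low strategic importance"
    return "in-between band: judge in context"

adopters = {"u1", "u2", "u3", "u4", "u5", "u6", "u7", "u8"}
returned = {"u2", "u3", "u5", "u7", "u8", "u9"}  # u9 never adopted, so it doesn't count
rate = retention_rate(adopters, returned)
print(f"Retention: {rate:.0%} -> {strategic_importance(rate)}")  # ~62% -> high strategic importance
```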

4. Satisfaction Score (CES)

Finally, we measure the level of satisfaction that users have with the feature we’ve shipped. We don’t ask everyone — we ask only “retained” users. This helps us spot hidden problems that might not be reflected in the retention score.

Once users have actually used a feature multiple times, we ask them how easy it was to solve their problem with it — on a scale from “much more difficult” to “much easier than expected”. That gives us the satisfaction (CES) score for the feature.

Using TARS For Feature Strategy

Once we start measuring with TARS, we can calculate an S÷T score — the percentage of Satisfied Users ÷ Target Users. It gives us a sense of how well a feature is performing for our intended target audience. Once we do that for every feature, we can map all features across 4 quadrants in a 2×2 matrix.

Overperforming features are worth paying attention to: they have low retention but high satisfaction. These might simply be features that users don’t have to use frequently, but when they do, they are extremely effective.

Liability features have high retention but low satisfaction, so we probably need to invest in improving them. We can also identify core features and project features — and have a conversation with designers, PMs, and engineers about what we should work on next.
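
Here is a minimal sketch of how the pieces could be stitched together: an S÷T score per feature plus a naive placement into the 2×2 quadrants, using retention and satisfaction as the two axes. The feature data, the 0.5 cut-off, and the “core”/“project” labels beyond the overperforming and liability quadrants described above are assumptions for illustration, not Adrian’s implementation.

```python
# Minimal sketch: S/T score and 2x2 quadrant placement per feature (illustrative only).
from dataclasses import dataclass

@dataclass
class FeatureStats:
    name: str
    target_users: int      # users who have the problem (T)
    satisfied_users: int   # retained users who reported a positive satisfaction score (S)
    retention: float       # 0..1, share of adopters who came back
    satisfaction: float    # 0..1, share of retained users who are satisfied

def s_over_t(f: FeatureStats) -> float:
    """Satisfied users as a share of the target audience."""
    return f.satisfied_users / f.target_users if f.target_users else 0.0

def quadrant(f: FeatureStats, cutoff: float = 0.5) -> str:
    # Axes assumed: retention (x) vs satisfaction (y); 0.5 is an arbitrary cut-off.
    high_r, high_s = f.retention >= cutoff, f.satisfaction >= cutoff
    if high_r and high_s:
        return "core"            # heavily used and well liked
    if not high_r and high_s:
        return "overperforming"  # used rarely, but very effective when used
    if high_r and not high_s:
        return "liability"       # used a lot, but frustrating: invest here
    return "project"             # neither: candidate to rethink or retire

features = [
    FeatureStats("Export", target_users=4_000, satisfied_users=1_900, retention=0.62, satisfaction=0.80),
    FeatureStats("Filters", target_users=12_000, satisfied_users=2_400, retention=0.70, satisfaction=0.35),
]
for f in features:
    print(f"{f.name}: S/T = {s_over_t(f):.0%}, quadrant = {quadrant(f)}")
```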

Conversion Rate Is Not a UX Metric

TARS doesn’t cover conversion rate, and for a good reason. As Fabian Lenz noted, conversion is often considered to be the ultimate indicator of success — yet in practice it is very difficult to draw a clear connection between smaller design initiatives and big conversion goals.

The truth is that almost everybody on the team is working towards better conversion. An uptick might be connected to many different efforts — from sales and marketing to web performance improvements to seasonal effects to UX initiatives.

UX can, of course, improve conversion, but it’s not really a UX metric. Often, people simply can’t choose the product they are using. And often a desired business outcome comes out of necessity and struggle, rather than trust and appreciation.

High Conversion Despite Bad UX

As Fabian writes, high conversion rate can happen despite poor UX, because:

  • Strong brand power pulls people in,
  • Urgency tactics are aggressive but effective,
  • Prices are extremely attractive,
  • Marketing performs brilliantly,
  • Customers are historically loyal,
  • Users simply have no alternative.

Low Conversion Despite Great UX

At the same time, a low conversion rate can occur despite great UX, because:

  • Offers aren’t relevant to the audience,
  • Users don’t trust the brand,
  • The business model is poor or the risk of failure is high,
  • Marketing doesn’t reach the right audience,
  • External factors get in the way (price, timing, competition).

An improved conversion is the positive outcome of UX initiatives. But good UX work typically improves task completion, reduces time on task, minimizes errors, and avoids decision paralysis. And there are plenty of actionable design metrics we could use to track UX and drive sustainable success.

Wrapping Up

Product metrics alone don’t always provide an accurate view of how well a product performs. Sales might perform well, but users might be extremely inefficient and frustrated. Yet the churn is low because users can’t choose the tool they are using.

We need UX metrics to understand and improve user experience. What I love most about TARS is that it’s a neat way to connect customers’ usage and customers’ experience with relevant product metrics. Personally, I would extend TARS with UX-focused metrics and KPIs as well — depending on the needs of the project.

Huge thanks to Adrian H. Raudaschl for putting it together. And if you are interested in metrics, I highly recommend you follow him for practical and useful guides all around just that!

Meet “How To Measure UX And Design Impact”

You can find more details on UX Strategy in 🪴 Measure UX & Design Impact (8h), a practical guide for designers and UX leads to measure and show your UX impact on business. Use the code 🎟 IMPACT to save 20% off today. Jump to the details.

Video + UX Training

$ 495.00 $ 799.00 Get Video + UX Training

25 video lessons (8h) + Live UX Training.
100 days money-back-guarantee.

Video only

$ 250.00 $ 395.00
Get the video course

25 video lessons (8h). Updated yearly.
Also available as a UX Bundle with 2 video courses.

Useful Resources

Further Reading




Article: Where Architects Sit in the Era of AI


As AI evolves from tool to collaborator, architects must shift from manual design to meta-design. This article introduces the "Three Loops" framework (In, On, Out) to help navigate this transition. It explores how to balance oversight with delegation, mitigate risks like skill atrophy, and design the governance structures that keep AI-augmented systems safe and aligned with human intent.

By Dave Holliday, João Carlos Gonçalves, Manoj Kumar Yadav

Giving OpenAI Codex a try in VSCode


At GitHub Universe, GitHub announced that you can use OpenAI Codex with your existing GitHub Copilot Pro+ subscription.

To get started, we first need to install the OpenAI Codex extension and sign in with GitHub Copilot.

Installation & configuration

You can install the extension directly from the Extensions view or through the Agent Sessions view:

After the installation has completed, you need to sign in. You can either use your ChatGPT account or your (existing) GitHub Copilot subscription.

Once signed in, we have an extra chat window available:

There are a few things we can configure here:

  • Environment:
    • Local workspace: The agent will interact with your local machine and VSCode workspace.
    • Connect Codex Web: Send the chat to the ChatGPT web interface.
    • Send to cloud: The agent will operate in a sandboxed cloud environment.

 

  • Chat Mode (called approval modes in OpenAI Codex):
    • Chat: Regular chat; it doesn’t make any changes directly.
    • Agent: The Codex agent can read files, make edits, and run commands in the working directory automatically. However, it needs approval to work outside the working directory or to access the network.
    • Agent (Full Access): The Codex agent is allowed to read files, make edits, and run commands with network access, without approval.

 

  • Models:
    • Select any of the available OpenAI models

 

  • Reasoning effort:
    • You can adjust the reasoning effort of Codex to make it think more or less before answering.
    • Remark: In my case this option is disabled, probably because I’m using a GitHub Copilot subscription.

You can further tweak Codex through the config.toml file. To do so, click the gear icon in the top right corner of the extension and then click Codex Settings > Open config.toml.

 

Our first interaction

The basic interactions are quite similar to any other AI agent in your IDE. We can ask it to do a review, for example:

Notice that the Codex agent is using ‘Auto Context’ and limits its review to the active open file in VS Code.

Codex also supports a (limited) set of slash commands to execute common and specific tasks:

 

You can monitor the number of tokens used by hovering over the icon in the right corner of the chat window:

My feedback

I only spent a limited amount of time using the Codex extension, so don’t take this as a full review. Being used to having GitHub Copilot as an integrated part of my development experience, I found the Codex extension quite limited. It felt mostly like a command-line tool with a minimal shell built on top of it. MCP server integration, slash commands, IDE integration, … all felt a bit more cumbersome compared to what I’m used to.

The output itself is quite good, so no complaints there.

One feature that stood out for me is the sandbox mode. In this mode, Codex will work in a restricted environment and do the following:

  • Launches commands inside a restricted token derived from an AppContainer profile.
  • Grants only specifically requested filesystem capabilities by attaching capability SIDs to that profile.
  • Disables outbound network access by overriding proxy-related environment variables and inserting stub executables for common network tools.

Another option is to run Codex inside WSL, which they recommend:

 

Remark: It is important to note that this is not the OpenAI GPT-5-Codex model, which can be used directly from the list of available models in GitHub Copilot.

More information

Codex IDE extension

Codex – OpenAI’s coding agent - Visual Studio Marketplace


Last week in AWS re:Invent with Corey Quinn

Ryan sits down with Corey Quinn, Chief Cloud Economist at Duckbill, at AWS re:Invent to get Corey’s patented snarky take on all the happenings from the conference.

JSON Web Token (JWT) Validation in Azure Application Gateway: Secure Your APIs at the Gate


Hello Folks!

In a Zero Trust world, identity becomes the control plane and tokens become the gatekeepers.

Recently, in an E2E conversation with my colleague Vyshnavi Namani, we dug into a topic every IT pro supporting modern apps should understand: JSON Web Token (JWT) validation, specifically using Azure Application Gateway.

 

In this post we’ll distill that conversation into a technical guide for infrastructure pros who want to secure APIs and backend workloads without rewriting applications.

Why IT Pros Should Care About JWT Validation

JSON Web Token (JWT) is an open standard token format (RFC 7519) used to represent claims or identity information between two parties.

JWTs are issued by an identity provider (such as Microsoft Entra ID) and attached to API requests in an HTTP Authorization: Bearer <token> header. They are tamper-evident and include a digital signature, so they can be validated cryptographically.

JWT validation in Azure Application Gateway means the gateway will check every incoming HTTPS request for a valid JWT before it forwards the traffic to your backend service.

Think of it like a bouncer or security guard at the club entrance: if the client doesn’t present a valid “ID” (token), they don’t get in. This first-hop authentication happens at the gateway itself. No extra custom auth code is needed in your APIs. The gateway uses Microsoft Entra ID (Azure AD) as the authority to verify the token’s signature and claims (issuer/tenant, audience, expiry, etc.).

By performing token checks at the edge, Application Gateway ensures that only authenticated requests reach your application. If the JWT is missing or invalid, the gateway can deny the request, depending on your configuration (e.g., returning HTTP 401 Unauthorized), without disturbing your backend. If the JWT is valid, the gateway can even inject an identity header (x-msft-entra-identity) with the user’s tenant and object ID before passing the call along. This offloads authentication from your app and provides a consistent security gate in front of all your APIs.
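
To make that tangible, here is a minimal Python sketch of the kind of checks involved: verifying the signature against the issuer’s published keys, validating the issuer, audience, and expiry, and then pulling out the tenant and object ID. It uses the PyJWT library; the tenant ID, audience, and endpoint values are placeholders, and this illustrates the validation steps conceptually rather than how Application Gateway implements them internally.

```python
# Illustrative sketch of the checks a gateway performs on a bearer token.
# Uses PyJWT (pip install pyjwt[crypto]); tenant/audience values are placeholders.
import jwt
from jwt import PyJWKClient

TENANT_ID = "00000000-0000-0000-0000-000000000000"  # placeholder tenant
AUDIENCE = "api://my-protected-api"                 # placeholder audience (app ID URI)
ISSUER = f"https://login.microsoftonline.com/{TENANT_ID}/v2.0"
JWKS_URL = f"https://login.microsoftonline.com/{TENANT_ID}/discovery/v2.0/keys"

def validate_bearer_token(authorization_header: str) -> dict:
    """Validate 'Authorization: Bearer <token>' and return selected claims."""
    scheme, _, token = authorization_header.partition(" ")
    if scheme.lower() != "bearer" or not token:
        raise ValueError("Missing or malformed bearer token")  # the gateway would reject this

    # 1. Fetch the signing key matching the token's 'kid' from the tenant's JWKS endpoint.
    signing_key = PyJWKClient(JWKS_URL).get_signing_key_from_jwt(token)

    # 2. Verify signature, issuer (tenant), audience, and expiry in one call;
    #    PyJWT raises an exception if any check fails.
    claims = jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience=AUDIENCE,
        issuer=ISSUER,
    )

    # 3. The gateway forwards tenant and object ID downstream (x-msft-entra-identity).
    return {"tid": claims.get("tid"), "oid": claims.get("oid")}
```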

Key benefits of JWT validation at the gateway:

  • Stronger security at the edge: The gateway checks each token’s signature and key claims, blocking bad tokens before they reach your app.
  • No backend work needed: Since the gateway handles JWT validation, your services don’t need token-parsing code, which means less maintenance and lower CPU use.
  • Stateless and scalable: Every request brings its own token, so there’s no session management. Any gateway instance can validate tokens independently, and Azure handles key rotation for you.
  • Simplified compliance: Centralized JWT policies make it easier to prove only authorized traffic gets through, without each app team building their own checks.
  • Defense in depth: Combine JWT validation with WAF rules to block malicious payloads and unauthorized access.

In short, JWT validation gives your Application Gateway the smarts to know who’s knocking at the door, and to only let the right people in.

How JWT Validation Works

At its core, JWT validation relies on a trusted authority (currently Microsoft Entra ID) to issue a token. That token is presented to the Application Gateway, which then validates that:

  • The token is legitimate
  • The token was issued by the expected tenant
  • The audience matches the resource you intend to protect

If all checks pass, the gateway returns a 200 OK and the request continues to your backend. If anything fails, the gateway returns 403 Forbidden, and your backend never sees the call. You can check the response codes and errors here:
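
Separately, as a rough client-side illustration of that behavior, the sketch below calls a hypothetical API behind the gateway with a bearer token and handles both the success case and the rejection cases mentioned in this post (the gateway hostname, route, and token value are placeholder assumptions).

```python
# Illustrative client call to an API fronted by Application Gateway (placeholder URL/token).
import requests

def call_protected_api(access_token: str) -> None:
    response = requests.get(
        "https://contoso-gateway.example.com/api/orders",  # hypothetical gateway endpoint
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=10,
    )
    if response.ok:                            # 200 OK: token passed validation, backend answered
        print(response.json())
    elif response.status_code in (401, 403):   # rejected at the gateway, backend never saw the call
        print("Rejected at the gateway: missing, expired, or invalid JWT")
    else:
        response.raise_for_status()
```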

Setting Up JWT Validation in Azure Application Gateway

The steps to configure JWT validation in Azure Application Gateway are documented here:

Use Cases That Matter to IT Pros

  • Zero Trust
  • Multi-Tenant Workloads
  • Geolocation-Based Access
  • AI Workloads

Next Steps

  1. Identify APIs or workloads exposed through your gateways.
  2. Audit whether they already enforce token validation.
  3. Test JWT validation in a dev environment.
  4. Integrate the policy into your Zero Trust architecture.
  5. Collaborate with your dev teams on standardizing audiences.

Resources

Final Thoughts

JWT validation in Azure Application Gateway is a powerful addition to your skills for securing cloud applications.

It brings identity awareness right into your networking layer, which is a huge win for security and simplicity. If you manage infrastructure and worry about unauthorized access to your APIs, give it a try. It can drastically reduce the “attack surface” by catching invalid requests early.

As always, I’d love to hear about your experiences. Have you implemented JWT validation on App Gateway, or do you plan to? Let me know how it goes! Feel free to drop a comment or question.

Cheers!

Pierre Roman
