
On inclusive personas and inclusive user research


I’m inclined to take a few notes on Eric Bailey’s grand post about the use of inclusive personas in user research. As someone who has been in roles that have both used and created user personas, there’s so much in here.

What’s the big deal, right? We’re often taught and encouraged to think about users early in the design process. It’s user-centric design, so let’s personify 3-4 of the people we think represent our target audiences so our work is aligned with their objectives and needs. My master’s program was big on that and went deep into different approaches, strategies, and templates for documenting that research.

And, yes, it is research. The idea, in theory, is that by understanding the motivations and needs of specific users (gosh, isn’t “users” an awkward term?), we can “design backwards” so that the end goal is aligned to actions that get them there.

Eric sees holes in that process, particularly when it comes to research centered around inclusiveness. Why is that? Very good reasons that I’m compiling here so I can reference them later. There’s a lot to take in, so you’d do yourself a solid by reading Eric’s post in full. Your takeaways may be different from mine.

Traditional vs. inclusive user research

First off, I love how Eric distinguishes what we typically refer to as the general type of user personas, like the ones I made to generalize an audience, from inclusive user personas that are based on individual experiences.

Inclusive user research practices are different than a lot of traditional user research. While there is some high-level overlap in approach, know the majority of inclusive user research is more focused on the individual experience and less about more general trends of behavior.

So, right off the bat we have to reframe what we’re talking about. There are blanket personas that serve as placeholders for abstracting what we think we know about specific groups of people, versus individual people who represent specific experiences that impact usability and access to content.

A primary goal in inclusive user research is often to identify concrete barriers that prevent someone from accessing the content they want or need. While the techniques people use are varied, these barriers represent insurmountable obstacles that stymie a whole host of navigation techniques and approaches.

If you’re looking for patterns, trends, and customer insights, know that what you want is regular user testing. Here, know that the same motivating factors you’re looking to uncover also exist for disabled people. This is because they’re also, you know, people.

Assistive technology is not exclusive to disabilities

It’s so easy to assume that using assistive tools automatically means accommodating a disability or impairment, but that’s not always the case. Choice points from Eric:

  • First is that assistive technology is a means, and not an end.
  • Some disabled people use more than one form of assistive technology, both concurrently and switching them in and out as needed.
  • Some disabled people don’t use assistive technology at all.
  • Not everyone who uses assistive technology has also mastered it.
  • Disproportionate attention placed on one kind of assistive technology at the expense of others.
  • It’s entirely possible to have a solution that is technically compliant, yet unintuitive or near-impossible to use in the actual. 

I like to keep in mind that assistive technologies are for everyone. I often think about examples in the physical world where everyone benefits from an accessibility enhancement, such as curb cuts in sidewalks (great for skateboarders!), elevators (you don’t have to climb stairs in some cases), and TV subtitles (I often have to keep the volume low for sleeping kids).

That’s the inclusive part of this. Everyone benefits rather than a specific subset of people.

Different personas, different priorities

What happens when inclusive research is documented separately from general user research?

Another folly of inclusive personas is that they’re decoupled from regular personas. This means they’re easily dismissible as considerations.

[…]

Disability is diversity, and the plain and honest truth is that diversity is missing from your personas if disability conditions are not present in at least some of them. This, in turn, means your personas are misrepresentative of the people in the abstract you claim to serve.

In practice, that means:

[…] we also want to hold space for things that need direct accessibility support and remediation when this consideration of accessibility fails to happen. It’s all about approach.

An example of how to consider your approach is when adding drag and drop support to an experience. […] [W]e want to identify if drag and drop is even needed to achieve the outcome the organization needs.

Thinking of a slick new feature that will impress your users? Great! Let’s make sure it doesn’t step on the toes of other experiences in the process, because that’s antithetical to inclusiveness. I recognize this temptation in my own work, particularly if I land on a novel UI pattern that excites me. The excitement and tickle I get from a “clever” idea gives me a blind spot when evaluating its overall effectiveness.

Radical participatory design

Gosh dang, why didn’t my schoolwork ever cover this?! I had to spend a little time reading the Cambridge University Press article explaining radical participatory design (RPD) that Eric linked up.

Therefore, we introduce the term RPD to differentiate and represent a type of PD that is participatory to the root or core: full inclusion as equal and full members of the research and design team. Unlike other uses of the term PD, RPD is not merely interaction, a method, a way of doing a method, nor a methodology. It is a meta-methodology, or a way of doing a methodology. 

Ah, a way of doing a methodology! We’re talking about not only including community members in the internal design process, but making them equal stakeholders as well. They get the power to make decisions, something the article’s author describes as a form of decolonization.

Or, as Eric nicely describes it:

Existing power structures are flattened and more evenly distributed with this approach.

Bonus points for surfacing the model minority theory:

The term “model minority” describes a minority group that society regards as high-performing and successful, especially when compared to other groups. The narrative paints Asian American children as high-achieving prodigies, with fathers who practice medicine, science, or law and fierce mothers who force them to work harder than their classmates and hold them to standards of perfection.

It introduces exclusiveness in the quest to pursue inclusiveness — a stereotype within a stereotype.

Thinking bigger

Eric caps things off with a great compilation of actionable takeaways for avoiding the pitfalls of inclusive user personas:

  • Letting go of control leads to better outcomes.
  • Member checking: letting participants review, comment on, and correct the content you’ve created based on their input.
  • Take time to scrutinize the functions of our roles and how our organizations compel us to undertake them in order to be successful within them.
  • Organizations can turn inwards and consider the artifacts their existing design and research processes produce. They can then identify opportunities for participants to provide additional clarity and corrections along the way.

On inclusive personas and inclusive user research originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.


How developers are using Apple’s local AI models with iOS 26

A list of apps that are using Apple's local models to introduce new features.

How to Build Your First LLM Application Using LangChain and TypeScript


Learn the quick and easy steps to building your first LLM-powered TypeScript application using LangChain and the OpenAI model.

In this article, I will walk you through a step-by-step guide to building your first LLM-powered application using LangChain and the OpenAI model. I will be using TypeScript to build the app, and by the end you’ll have a working translator built with the OpenAI GPT-4 model and LangChain’s messaging package.

Set Up the Project

To start with, create a Node.js application and install the LangChain core library within it:

npm i langchain @langchain/core

We will be using TypeScript, so let’s configure the project to utilize it. To do that, install:

npm install -D typescript @types/node ts-node

Next, add the tsconfig.json file in the project by running this command:

npx tsc --init

Replace the tsconfig.json file with the configuration below.

{
  "compilerOptions": {
    "target": "ES2020",
    "module": "commonjs",
    "lib": ["ES2020"],
    "outDir": "./dist",
    "rootDir": "./src",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "forceConsistentCasingInFileNames": true,
    "resolveJsonModule": true,
    "declaration": true,
    "declarationMap": true,
    "sourceMap": true
  },
  "include": ["src/**/*"],
  "exclude": ["node_modules", "dist"]
}

Next, update the package.json file to use TypeScript for building and running the application.

"scripts": {
    "build": "tsc",
    "start": "tsc && node dist/index.js"
  },

We are going to place project files in the src folder, so create a folder named src in your project directory and, inside it, add a file called index.ts.

Set Up the Environment

To use LangChain with various language models, you’ll need API keys. We’ll store these keys in an environment file. To work with the .env file, you’ll need to have installed the dotenv dependency:

npm install dotenv

Next, add the .env file to the project root and paste the information below inside the file.

OPENAI_API_KEY="..."
LANGSMITH_TRACING="true"
LANGSMITH_API_KEY="..."

# Other common environment variables
NODE_ENV=development
PORT=3000

Lastly, we need to install the type for the environment variables, which can be done by running the following command:

npm install -D @types/dotenv

Now, in the index file, read environment variables as below:

import dotenv from 'dotenv';
dotenv.config();

const openaiApiKey: string | undefined = process.env.OPENAI_API_KEY;
const langChainKey: string | undefined = process.env.LANGSMITH_API_KEY;
const port: number = parseInt(process.env.PORT || '3000', 10);
console.log(`OpenAI API Key: ${openaiApiKey}`);
console.log(`LangChain API Key: ${langChainKey}`);

You should be able to print both your OpenAI and LangSmith keys. Replace the placeholder values in .env with the keys from your own subscriptions.

Working with the Model

We are going to use the OpenAI model. To use it, install the @langchain/openai package in the project:

npm i @langchain/openai

After installing the OpenAI library, let’s use it to translate text. First, import the following packages:

import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage, SystemMessage } from "@langchain/core/messages";

After installation, create a model object and set the system and human messages using the LangChain messages package to translate the text.

const model = new ChatOpenAI({ model: "gpt-4" });
const messages = [
  new SystemMessage("Translate the following from English into Italian"),
  new HumanMessage("hi!"),
];

async function translate() {
  const result = await model.invoke(messages);
  console.log(result);
}

translate();

To use the OpenAI model and translate the text:

  • Set up the model
  • Structure the message
  • The system message acts as an instruction to set the context of the model
  • The human message constructs the actual input message to be translated

Putting everything together, a simple LLM application to translate the text should look like this:

import dotenv from 'dotenv';
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage, SystemMessage } from "@langchain/core/messages";

dotenv.config();

const openaiApiKey: string | undefined = process.env.OPENAI_API_KEY;
const langChainKey: string | undefined = process.env.LANGSMITH_API_KEY;

const model = new ChatOpenAI({ model: "gpt-4" });
const messages = [
  new SystemMessage("Translate the following from English into Italian"),
  new HumanMessage("What rooms are available in the hotel?"),
];

async function translate() {
  const result = await model.invoke(messages);
  console.log(result);
}

translate();

You should get an output like this:

[Screenshot: the AIMessage object returned by the model, containing the Italian translation]

Since LangChain models are runnable, they also support streaming output. We can convert the above result into a streaming response as shown below:

async function translate() {
  const stream = await model.stream(messages);
  for await (const chunk of stream) {
    console.log(chunk.content);
  }
  console.log("\n");
}

translate();

With streaming, you are basically keeping a live connection to the OpenAI model. It’s super helpful for longer translations or when you want to give users instant feedback. Instead of waiting for the whole thing to load, they can see the translation show up word by word—kind of like it’s being typed out in real time. You will get streaming output as shown below:

[Screenshot: the translation streaming to the console chunk by chunk]

Now you know how easy it is to build your first LLM app using LangChain, OpenAI GPT-4 and TypeScript. I hope you found this article helpful. Thanks for reading.


Python 3.14: The NEW T-strings are Awesome


Python has come a long way since its inception, and with each iteration it has refined tools and features aimed at improving developer efficiency and code readability. One such evolving feature in Python is template strings. Today, let’s delve into the world of template strings in Python—from the traditional methods to the exciting new syntax of T-strings introduced in Python 3.14. It’s like watching the evolution of a car from a simple engine model to an advanced electric vehicle with regenerative braking systems—the core function remains, but the efficiency and capabilities dramatically increase.

This video is from Indently.

The Classical Era: `f-string` and `format`

Firstly, let’s talk about `f-strings`—a feature that became a favorite for many due to its clarity and succinctness. They allow us to embed expressions inside string literals directly, making it straightforward to construct dynamic messages. However, like a classic car, it’s beautiful and fast but doesn’t always provide the best safety features. There’s no built-in mechanism for escaping or sanitization, making `f-strings` somewhat risky, especially when dealing with user-generated input that might be intended for SQL or HTML.
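As a quick refresher (a minimal sketch of my own, not from the video), here is the convenience of `f-strings` alongside the risk of interpolating untrusted input directly:

name = "Ada"
items = 3
print(f"{name} has {items} item{'s' if items != 1 else ''} in the cart")

# The same convenience becomes a hazard with untrusted input:
# nothing escapes or sanitizes the value before it lands in the query string.
user_input = "x'; DROP TABLE users; --"
query = f"SELECT * FROM users WHERE name = '{user_input}'"
print(query)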

Then comes the `format` method, the versatile yet slightly more cumbersome predecessor to `f-strings`. With `format`, placeholders are defined by curly braces, and they are replaced by the arguments provided in the function call, allowing for positional or named substitutions. While `format` exhibits more flexibility than `f-strings`, it tends to be verbose and retains no memory of placeholders after its execution—once the string is constructed, its template structure is lost.
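A small sketch of my own to illustrate: `format` supports both named and positional placeholders, but once it runs you are left with a plain `str` and the template structure is gone.

# Named placeholders
print("Hello, {name}! You have {count} new messages.".format(name="Ada", count=3))

# Positional placeholders
print("{0} + {1} = {2}".format(2, 3, 5))

# After formatting, only a plain string remains; the placeholders are lost.
greeting = "Hello, {name}!".format(name="Ada")
print(type(greeting))  # <class 'str'>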

Stepping Back to Templates

The third approach revisits the basics—the `string.Template` class, which probably feels more like driving a manual transmission car after getting used to an automatic. It offers a robust way of substituting placeholders marked by a dollar sign, which makes it less prone to security risks associated with direct execution of expressions. However, it lacks the elegance and simplicity of `f-strings`, and like its brethren, it too converts everything to a simple string at the end of the day.
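For comparison, a minimal `string.Template` example (mine, not from the video):

from string import Template

tmpl = Template("Hello, $name! You have $count new messages.")
print(tmpl.substitute(name="Ada", count=3))

# substitute() raises KeyError if a placeholder is missing;
# safe_substitute() leaves unknown placeholders in place instead.
print(tmpl.safe_substitute(name="Ada"))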

The New Kid on the Block: `T-strings`

Now, enter `T-strings`, the latest introduction to Python’s string formatting arsenal with the release of Python 3.14. Picture this as shifting from driving a gas-powered car to an EV with smart, connected features. `T-strings` use syntax identical to `f-strings` but are prefixed with a ‘T’ instead of an ‘F’. Instead of resulting in a plain string, they produce a template object that retains both the literal part of the string and the interpolated expressions along with their metadata.
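Here’s a minimal sketch of what that looks like, based on the PEP 750 API as I understand it (it requires Python 3.14 or newer):

# Requires Python 3.14+
from string.templatelib import Template

name = "Ada"
tmpl: Template = t"Hello, {name}!"

print(type(tmpl))                          # <class 'string.templatelib.Template'>
print(tmpl.strings)                        # ('Hello, ', '!')
print(tmpl.interpolations[0].value)        # Ada
print(tmpl.interpolations[0].expression)   # name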

What sets `T-strings` apart is their ability to remember the structure of the template. This feature is revolutionary because you can perform operations like locale-aware translations or context-specific escapes around placeholders without losing the template’s integrity. You can defer rendering, inspect, and manipulate parts of the template string before constructing the final string.

To reconstruct the string from a `T-string`, Python provides us with an iterator that combines the literal parts and interpolations. This feature adds a layer of flexibility that allows developers to control how and when the final string is built—very much like choosing when to shift gears in a manual car for optimal performance.
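A rough sketch of that reconstruction, again assuming the PEP 750 API (conversions like `!r` are ignored here to keep it short):

from string.templatelib import Interpolation

name = "Ada"
tmpl = t"Hello, {name}! You have {2 + 1} new messages."

def render(template) -> str:
    parts = []
    for part in template:  # iteration yields literal strings and Interpolation objects
        if isinstance(part, Interpolation):
            parts.append(format(part.value, part.format_spec))
        else:
            parts.append(part)
    return "".join(parts)

print(render(tmpl))  # Hello, Ada! You have 3 new messages.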

Beyond Simple Reconstruction

Taking it further, `T-strings` allow for additional processing such as applying conversions and formatting explicitly, hence offering a robust mechanism for working with user inputs safely and effectively. This method ensures that even if a user input includes potentially harmful data intended for SQL or HTML, it gets sanitized before being rendered.
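As an illustration of that idea, here’s a hypothetical `render_html` helper of my own that escapes only the interpolated values while trusting the literal markup:

import html
from string.templatelib import Interpolation

def render_html(template) -> str:
    out = []
    for part in template:
        if isinstance(part, Interpolation):
            out.append(html.escape(str(part.value)))  # sanitize user-supplied values
        else:
            out.append(part)                          # literal markup written by the developer
    return "".join(out)

user_comment = "<script>alert('hi')</script>"
print(render_html(t"<p>{user_comment}</p>"))
# <p>&lt;script&gt;alert(&#x27;hi&#x27;)&lt;/script&gt;</p>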

Philosophical Underpinning

The evolution from `f-strings` and `format` to `T-strings` not only represents a technical enhancement but also a philosophical shift towards making Python more secure and adaptable for modern development challenges. It fosters a coding environment where safety and efficiency coexist seamlessly, illustrating Python’s ongoing commitment to growth and developers’ needs.

With `T-strings`, Python isn’t just adding another feature; it’s equipping its users with a powerful tool that enhances both productivity and security, ensuring that the language remains relevant and robust amidst evolving tech landscapes.


Dangers of Automatically Converting a REST API to MCP


When converting an existing REST API to the Model Context Protocol, what should you consider? What anti-patterns should you avoid to keep an AI agent’s context clean? This week on the show, Kyle Stratis returns to discuss his upcoming book, “AI Agents with MCP”.

Kyle has been busy since he last appeared on the show in 2020. He’s taken his experience working in machine learning startups and started his own consultancy, Stratis Data Labs. He’s been documenting his explorations working with LLMs and MCP on his blog, The Signal Path.

Kyle is also writing a book about building MCP clients, services, and end-to-end agents. We discuss a recent article he wrote about the hazards of using an automated tool to convert a REST API into an MCP server. He shares his personal experiences with building MCP tools and provides additional resources for you to learn more about the topic.

This episode is sponsored by InfluxData.

Spotlight: Python for Beginners: Code With Confidence – Real Python

Learn Programming Fundamentals and Pythonic Coding in Eight Weeks—With a Structured Course

Topics:

  • 00:00:00 – Introduction
  • 00:02:41 – Updates on career
  • 00:04:36 – The Signal Path - newsletter
  • 00:07:15 – Moving into consulting
  • 00:12:35 – Recent projects
  • 00:14:51 – Need for data skills with MCP
  • 00:16:49 – Describing the differences between REST APIs and MCP
  • 00:19:59 – Interaction model differences
  • 00:27:29 – Sponsor: InfluxData
  • 00:28:21 – Agent stories
  • 00:32:58 – Going through a simple example of MCP server
  • 00:37:50 – Defining client and server
  • 00:40:19 – Examples of servers currently
  • 00:51:44 – Announcement: Python for Beginners: Code with Confidence
  • 01:02:07 – Resources for further study
  • 01:05:07 – Breaking down advice on moving an API to MCP
  • 01:08:04 – What are you excited about in the world of Python?
  • 01:18:20 – What do you want to learn next?
  • 01:21:35 – How can people follow your work online?
  • 01:22:46 – Thanks and goodbye

Show Links:

Level up your Python skills with our expert-led courses:

Support the podcast & join our community of Pythonistas
Download audio: https://dts.podtrac.com/redirect.mp3/files.realpython.com/podcasts/RPP_E266_03_Kyle.9d97c7238336.mp3

What a fortnight! VS2026 and iOS26


In the past short while, we’ve seen the release of Visual Studio 2026 and iOS 26. They both have great improvements, though if I had to choose, the one that is making the biggest impact on me is, of course, Visual Studio. And there, the biggest improvement is performance.

You know how real estate people say that the three most important things in selling a house are location, location, location? Well, for me, the three most important things in an IDE are performance, performance and, oh yeah, performance.

A couple of years ago I was frustrated by the slowness of the code-build-debug-repeat cycle. Loading Visual Studio, and especially building the project, had terrible bursts of slowness. I went out and bought a powerhouse desktop with 64GB of memory and 24 cores. At the time, that was about as big a box as I could afford. The impact on Rebuild All… nada. I couldn’t perceive much difference at all.

But now, Visual Studio appears to be taking advantage of the memory, and that, combined with other optimizations, makes it scream. I talked with Mads Kristensen (podcast / video) about 2026 and this was one of the things we focused on.

Mads actually focuses on three big improvements in VS2026: performance, appearance (did you know there are about 4,000 icons in VS!?), and deep CoPilot integration.

Deep CoPilot integration is one of those features whose true benefit emerges over time. One key feature is that CoPilot can now know a lot more about your entire project, allowing it to do a lot of work for you (under your supervision, of course). Add MCP (which, essentially, allows you to add expert AI for a given context) and boom! CoPilot really does become a smart assistant.

For more on CoPilot agents and MCP see my interview of Scott Hunter (podcast / video). Also, check out my podcast with Jeff Fritz.

iOS 26

iOS 26 is much prettier than it was, and has a number of cool new features. My favorite is Hold Assist. It lets your iPhone wait on hold for you and alerts you when a person picks up. No more listening to elevator music…

To use it, make your call as you usually do. If you are put on hold (or get an automated message), wait a few seconds and your iPhone will prompt you: “Hold this call?” If you tap Hold, you can leave that screen and use your phone to do other things. When a human picks up on the other end, you are notified and the call is connected. (Note, if you miss the Hold This Call prompt, you can tap More -> Hold Assist from the in-call menu.)

iOS also offers Call Screening. This asks unknown callers who they are and what they want before your phone rings. You can then decide whether or not to take the call. You have a lot of control over this feature and you can integrate it with Focus.

What else? There are a ton of small features; one I particularly like is the ability to take simple polls in Messages. What time should we talk? 6pm, 6:30pm, 7pm.

CoPilot says “This update isn’t just a facelift—it’s a full-on personality upgrade for your iPhone.” I agree.
