Protecting Personally Identifiable Information (PII) is critical for compliance and user trust. In this video, we introduce Microsoft’s PII Redaction service and show you how it helps detect and redact sensitive data across text, documents, and conversational transcripts.
What you’ll learn:
• Why PII protection matters and the cost of data breaches
• How Azure AI Language detects and redacts PII in text, documents, and speech transcripts
• Key customization options for entity types, masking styles, and exclusions
• Real-world use cases for finance, healthcare, and AI-driven workflows
• How to try the service in Azure AI Foundry Playground
Resources & Links:
🔗Identity Theft Resource Center: https://www.idtheftcenter.org/wp-content/uploads/2024/01/ITRC_2023-Annual-Data-Breach-Report.pdf
🔗IBM Security Report: https://www.ibm.com/reports/data-breach
🔍 Explore Azure AI Language: https://aka.ms/azureai-language
🚀 Try PII Redaction in AI Foundry: https://ai.azure.com/
📚 Learn more about PII in our Documentation: https://aka.ms/pii-docs-overview
Chapter Markers:
00:00 - Introduction: Why PII Protection Matters
00:10 - Industry Stats & Compliance Challenges
00:58 - Meet Azure AI Language PII Redaction
01:12 - Demo: Detecting PII in Text
01:42 - Customization: Entity Types & Masking Styles
02:16 - Advanced Features: Exclusions & Synonyms
02:40 - Conversational & Document PII Overview
03:30 - Key Use Cases
04:01 - Wrap-Up & Try It Yourself
#AzureAI #PIIRedaction #DataPrivacy #AICompliance #MicrosoftDeveloper #AzureLanguage #GDPR #HIPAA #AIForDevelopers #CloudSecurity
In this episode, we’ll deep dive into Drasi, a new data processing system that simplifies detecting critical events within complex infrastructures and taking immediate action tuned to business objectives. Developers and software architects can leverage its capabilities across event-driven scenarios, whether working on Internet of Things (IoT) integrations, enhancing security protocols, or managing sophisticated applications.
✅ Resources:
Drasi https://drasi.io/
Source code https://github.com/drasi-project
📌 Let's connect:
Aman Singh | https://www.linkedin.com/in/amansinghoriginal
Daniel Gerlag | https://www.linkedin.com/in/daniel-gerlag
Jorge Arteiro | https://www.linkedin.com/in/jorgearteiro
Subscribe to Open at Microsoft: https://aka.ms/OpenAtMicrosoft
Open at Microsoft Playlist: https://aka.ms/OpenAtMicrosoftPlaylist
📝Submit Your OSS Project for Open at Microsoft https://aka.ms/OpenAtMsCFP
New episode on Tuesdays!
Welcome to Cozy AI Kitchen, where we explore how AI can empower creators! In this episode, actor, producer, and director Hans Obma (known for Better Call Saul, WandaVision, and more) shares how he uses AI tools like ChatGPT to learn languages, prepare for interviews, and promote his projects—including his Welsh-language film.
Hans dives into:
- How AI helped him practice Welsh before a live TV interview
- Using AI to research film festivals and build relationships
- Why empathy and clear communication matter—even with AI
- Practical advice for actors navigating the AI era
If you’re an actor, creator, or just curious about AI in entertainment, this episode is packed with insights you won’t want to miss.
What You’ll Learn
✅ How actors can leverage AI for language learning and interview prep
✅ Practical ways AI can support film promotion and festival submissions
✅ Why context matters when prompting AI for better results
✅ How empathy and Dale Carnegie principles apply to AI-assisted communication
✅ Tips for discovering and amplifying your unique strengths in a competitive industry
Chapters
00:00 - How Actors Can Use AI Today
00:17 - AI’s Role in Creativity: Positive and Negative Perceptions
00:30 - Meet Hans Obma: Actor, Producer, Director
01:04 - The Welsh Language Journey
02:00 - Learning Languages as an Actor: Self-Reliance vs. Entourage
03:19 - Practicing Welsh with ChatGPT Before a Live Interview
05:03 - Overcoming Fear of AI and Using It Strategically
06:02 - Submitting to Film Festivals with AI Research
07:17 - The Power of Context in AI Prompts
08:51 - Empathy, Relationships, and Dale Carnegie Principles with AI
11:03 - Advice for Aspiring and Experienced Actors in the AI Era
12:36 - Closing Thoughts and Where to Follow Hans’ Work
Speakers
Hans Obma – Actor, Producer, Director
Follow Hans: https://instagram.com/hansobma
IMDb: https://www.imdb.com/name/nm3155819/
Host: John – Cozy AI Kitchen
Links & Resources
🚀 Try Azure for free: https://aka.ms/AzureFreeTrialYT
📚 Learn more about AI tools for creators: https://learn.microsoft.com/ai
🎥 Watch all Cozy AI Kitchen episodes: https://aka.ms/CAIK-YTPlaylist
🔔 Subscribe for more episodes on AI, creativity, and tech!
Hashtags
#CozyAIKitchen #AIForActors #ArtificialIntelligence #ChatGPT #FilmIndustry #ActorLife #BetterCallSaul #WandaVision #MicrosoftAI #AzureAI #MachineLearning #EntertainmentTech #FilmFestival #LanguageLearning #CreatorTools
What we learned from low feature usage — and how we turned it into a smarter, faster experience.

by: Meilin Zhu, Manager, Product Management & Jeremiah Jacobson, Manager, Engineering Tech Lead
Quick Bytes:
Our app is more than a menu — it’s a gateway to convenience, personalization, and the kind of seamless experience our customers expect. As we continue to evolve McDonald’s digital experience, even small features can have a big impact.
A recent redesign of Recents & Faves in the app started with a question: Why weren’t more people using Recents and Faves?
Customer insights pointed the way
Every feature we build is guided by customer insights. With Recents & Faves, we saw its low utilization rate as a signal for change. From our data, we saw that customers who used Recents & Faves were significantly more likely to complete their orders and order more frequently. In other words, while overall utilization was limited, the customers who used these features were our most loyal.
Recognizing this opportunity to deepen engagement with our most loyal users, we set out to better understand the barriers preventing broader adoption. To ensure the redesign would be both intuitive and impactful, we conducted in-depth research to uncover user behaviors, pain points, and expectations. This research became the foundation for our design decisions, helping us reimagine the experience in a way that aligns with how customers actually interact with the app.
One example of this feedback — focused on the legacy Recents & Faves experience:

Discoverability and usability were holding users back
Through a combination of user interviews, journey mapping, and behavioral data analysis, our research uncovered two primary pain points: discoverability and usability.
From a discoverability standpoint, the feature was buried behind obscure entry points — fewer than half of participants could find it. Key ordering pathways like the Order Wall or Deals and Rewards lacked any direct access to Recents & Faves, further limiting visibility.
On the usability front, the legacy experience forced users to reorder entire past orders, often triggering frustrating alerts when items were unavailable. Seasonal items left behind blank tiles with no images or names, and favorites failed to retain customizations. As a result, many customers resorted to manually rebuilding their orders each time. These pain points stood in stark contrast to what customers told us they valued most: speed, accuracy, and convenience — especially for those managing complex or highly personalized orders.

Designing for clarity, speed, and scale
With clear insights into the usability challenges, we approached Phase One of the redesign with a focus on delivering immediate, tangible improvements that prioritized convenience and clarity. We expanded product tiles in both Recents and Faves to display key details like product name, customizations, and availability status. The goal was to empower our customers to reorder with confidence by ensuring they get exactly what they expect.
For Recents, we introduced itemized orders, giving users the flexibility to reorder individual items or entire meals. We also eliminated “dead tiles” by filtering out unavailable products, reducing friction and frustration during the reordering process. Additionally, we streamlined navigation to product pages, making it easier for customers to quickly modify and personalize their orders.
Behind the scenes, we reinforced these frontend enhancements with significant backend improvements. By optimizing our microservices architecture, we achieved a 97% reduction in total load time for Recents on subsequent launches. This performance boost ensures that the redesigned experience is not only more intuitive but also faster and more reliable. Together, these usability-focused updates laid the groundwork for a more seamless and satisfying reordering experience — one that better aligns with customer expectations.
What we’re seeing so far
Early results from the redesign show customers responding positively to the new flexibility and transparency. Since the feature went live, we have seen measurable improvements, including a reduction in cart removals and strong adoption of itemized reordering. These early wins validate our user-centered approach and reinforce the value of designing for clarity and convenience.
Looking ahead, we’ll continue to let data and customer insights guide us in evolving Recents & Faves. While they may seem like small features, they play a big role in making the McDonald’s app more convenient, more personalized, and more valuable to our customers. By listening to our customers, aligning with business goals, and engineering for scale, we’ve reimagined Recents & Faves into an experience that truly delivers on the promise of fast, easy, and reliable service.
Next time you open the McDonald’s app, try reordering your favorite meal — you might be surprised at just how seamless it feels.
Get started on an agentic Angular app with Genkit and Gemini. We’ll build out the ability for users to ask about the status of their order.
Chatbots and AI come up in practically every conversation these days. Most websites now ship with a chatbot, backed by an LLM, that answers customer questions from its vast training knowledge.
For example, we previously created a nice chatbot using Gemini that can recommend which Progress Kendo UI for Angular components to use to build a webpage based on an image input. That chatbot works fine because Gemini answers from its training data, but what happens when we want a chatbot that answers something its training data doesn’t cover?
Ask something like “check my order status” and it hits a wall. The LLM simply doesn’t have that knowledge. This is the moment where an agent can help us, and when we need to start talking about agentic apps.
An agentic app is like a smart assistant that not only answers questions but can also utilize tools to complete tasks. It can reason, plan and interact with other systems.
This sounds a bit complex, but don’t worry: this is where Genkit comes into play, helping us create agentic apps with ease.
Today we’ll build a complete, working application from scratch: an agent that can check an order status using the power of Gemini and Genkit.
Let’s get started!
Before we start typing code, let’s take a step back and understand what Genkit is, along with its flows and tools. A real-world example we all know, Amazon customer support, will make everything clear.
Think of Genkit as the “brain” behind an Amazon-style customer support system. Just as Angular lets us build apps, Genkit is the framework that lets a developer build the complete system. It’s not the agent itself; it’s the infrastructure that lets you create and manage all the processes and tools the agent will use.
In short, Genkit is the toolbox that helps us to build powerful AI applications.
Imagine you need help with a lost package. You open the chat, and a new support session starts.
That entire process, from the moment you ask “Where’s my order?” to the final resolution where you get a new tracking number or a refund, is a flow.
A flow is the main task or conversation you want to handle. In code it is a TypeScript function, such as checkOrderStatusFlow or processRefundFlow, that orchestrates the entire experience.
OK, we know Genkit and flows, but what is a tool?
If we go back to thinking about how the Amazon support agent works, it doesn’t just magically know your order details. It uses tools internal to Amazon.
For example, we can provide an action called “Look Up Order in Database” or “Send a New Tracking Number.” Each of these is a tool.
A tool is a specific action the AI can perform. When a user asks about their order, the AI sees that its checkOrderStatus tool is perfect for the job. It understands what the tool does by reading its description, and then it knows when to use it to solve the problem.
Let’s connect the dots. First, the flow is the entire conversation, the big picture. The tools are the small, powerful actions the AI can use to complete that conversation. And Genkit is the framework that lets you build and connect all of this together.
So, when you build a new Genkit app, you first create the flow (the main objective), and then you give it a set of tools (the specific capabilities) it can use to get the job done.
Now that we know about each part, let’s build our first agent using Genkit!
Let’s start by building our backend. This will be a Node.js server that exposes an API.
First, make sure you have Node.js (v20+) installed, then open your terminal and install the Genkit CLI:
npm install -g genkit-cli
Now, let’s create the project folder my-agentic-app for the whole project, then create a backend folder inside it and navigate into it:
mkdir my-agentic-app
cd my-agentic-app
mkdir backend
cd backend
Next, initialize the project by running npm init -y, then install the core Genkit package and the Google AI plugin:
npm init -y
npm install genkit @genkit-ai/google-genai
Let’s give a quick overview of them:
- genkit: The core Genkit framework for building agentic apps.
- @genkit-ai/google-genai: The plugin for connecting to Gemini models.
- zod: The schema library Genkit uses to define tool inputs and outputs; its z helper is re-exported from the genkit package, so there is no separate install.
Last but not least, we use our favorite language, TypeScript, so let’s install TypeScript and tsx as dev dependencies and create the TypeScript configuration:
npm install -D typescript tsx
npx tsc --init
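npx tsc --init generates a default tsconfig.json, and the defaults mostly work with tsx. As a reference, here is a minimal configuration sketch that plays well with Node-style ES modules; the exact options are our choice for this walkthrough, not a Genkit requirement:
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "outDir": "dist"
  },
  "include": ["src"]
}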
Ok, it’s time to write code!
Now with our project ready and the full picture of Genkit, Flows and Tools, let’s bring it to life! Remember our Amazon chatbot? It’s time to build the agentic app.
First things first, let’s create our project structure. Open your terminal and run these two simple commands to create a src folder and an index.ts file inside it.
mkdir src
touch src/index.ts
Now, open src/index.ts. This is where all the magic starts. We will set up our Genkit instance, telling it which AI model to use. It’s like the main engine that will power our entire application.
First, import genkit and z from the genkit package, and import googleAI from @genkit-ai/google-genai.
import { genkit, z } from 'genkit';
import { googleAI } from '@genkit-ai/google-genai';
Next, register the Google AI plugin googleAI() and tell Genkit which specific brain (model) to use. We will use the gemini-2.5-flash model and set a temperature for it.
A higher temperature makes the model more creative; a lower value makes it more direct and predictable. Learn more.
const ai = genkit({
  plugins: [googleAI()],
  model: googleAI.model('gemini-2.5-flash', {
    temperature: 0.8,
  }),
});
The final index.ts looks like:
import { genkit, z } from 'genkit';
import { googleAI } from '@genkit-ai/google-genai';

const ai = genkit({
  plugins: [googleAI()],
  model: googleAI.model('gemini-2.5-flash', {
    temperature: 0.8,
  }),
});
Ok, let’s move to the tool!
What can our smart agent do? This is where a tool comes in! It’s a specific, controlled action. Our agent will read the description of this tool and decide if it’s the right one to use based on the user’s question.
In this case, we’re giving our agent the ability to check the status of an order. We tell it exactly what information it needs (for example the orderId) and what kind of information it will return (like status or estimatedDelivery).
Note: The description is super important! The AI model uses this to know when to call the tool.
Let’s create a tool using the ai.defineTool function, using zod (z) to define what information the tool needs to run and what kind of data it returns.
For example, we’ll create getOrderStatusTool. This is the actual function that runs when the tool is called. For our demo, we just check for a hardcoded ID to keep it simple, and if the ID isn’t found, we return a clear status.
Check out the code:
export const getOrderStatusTool = ai.defineTool(
  {
    name: 'getOrderStatus',
    description: "Get the status of a user's order by their order ID.",
    inputSchema: z.object({
      orderId: z.string().describe("The unique ID of the customer's order"),
    }),
    outputSchema: z.object({
      status: z.string(),
      estimatedDelivery: z.string(),
    }),
  },
  async (input) => {
    if (input.orderId === '123-456') {
      return { status: 'Shipped', estimatedDelivery: 'October 9, 2025' };
    }
    return { status: 'Not Found', estimatedDelivery: 'N/A' };
  }
);
Learn more about defining tools with Genkit.
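In a real application, the tool body would query your order system instead of checking a hardcoded ID. Here is a minimal sketch of that idea, assuming a hypothetical in-memory Map standing in for a database or internal API; the lookup itself is illustrative, not part of Genkit:
// Hypothetical stand-in for a real order database or internal API.
const orders = new Map<string, { status: string; estimatedDelivery: string }>([
  ['123-456', { status: 'Shipped', estimatedDelivery: 'October 9, 2025' }],
  ['789-012', { status: 'Processing', estimatedDelivery: 'October 15, 2025' }],
]);

export const getOrderStatusTool = ai.defineTool(
  {
    name: 'getOrderStatus',
    description: "Get the status of a user's order by their order ID.",
    inputSchema: z.object({
      orderId: z.string().describe("The unique ID of the customer's order"),
    }),
    outputSchema: z.object({
      status: z.string(),
      estimatedDelivery: z.string(),
    }),
  },
  async (input) => {
    // Swap this lookup for a database query or an HTTP call to your order service.
    const order = orders.get(input.orderId);
    return order ?? { status: 'Not Found', estimatedDelivery: 'N/A' };
  }
);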
The flow is the final piece. It is the main process that takes the user’s prompt (the question) and hands it off to the AI model.
But here’s where the magic happens: we pass a list of tools that the model is allowed to use.
The model (Gemini, or whichever one you use) is smart enough to read the user’s prompt and decide whether our getOrderStatusTool is the best way to answer. If it is, Genkit will automatically call that function for us!
This keeps our agent safe and predictable, because we can control exactly what actions it can perform.
Let’s make this work in our project!
export const orderSupportFlow = ai.defineFlow(
  {
    name: 'orderSupportFlow',
    inputSchema: z.string(),
    outputSchema: z.string(),
  },
  async (prompt) => {
    const llmResponse = await ai.generate({
      prompt: prompt,
      tools: [getOrderStatusTool],
    });
    return llmResponse.text;
  }
);
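Because ai.defineFlow returns a callable async function, you can already try the flow from plain TypeScript before wiring up any UI. A quick manual check might look like this (the prompt string is just an example):
// Quick manual check: call the flow directly like a normal async function.
async function testFlow() {
  const answer = await orderSupportFlow('Where is my order 123-456?');
  console.log(answer);
}

testFlow().catch(console.error);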
The final code looks like:
import { genkit, z } from "genkit";
import { googleAI } from "@genkit-ai/google-genai";

const ai = genkit({
  plugins: [googleAI()],
  model: googleAI.model("gemini-2.5-flash", {
    temperature: 0.8,
  }),
});

export const getOrderStatusTool = ai.defineTool(
  {
    name: "getOrderStatus",
    description: "Get the status of a user's order by their order ID.",
    inputSchema: z.object({
      orderId: z.string().describe("The unique ID of the customer's order"),
    }),
    outputSchema: z.object({
      status: z.string(),
      estimatedDelivery: z.string(),
    }),
  },
  async (input) => {
    if (input.orderId === "123-456") {
      return { status: "Shipped", estimatedDelivery: "October 9, 2025" };
    }
    return { status: "Not Found", estimatedDelivery: "N/A" };
  },
);

export const orderSupportFlow = ai.defineFlow(
  {
    name: "orderSupportFlow",
    inputSchema: z.string(),
    outputSchema: z.string(),
  },
  async (prompt) => {
    const llmResponse = await ai.generate({
      prompt: prompt,
      tools: [getOrderStatusTool],
    });
    return llmResponse.text;
  },
);
We have a final step to connect Genkit with our favorite AI tool, Gemini.
Create a file named .env in my-agentic-app/backend, then get a free API key from Google AI Studio and add it to the .env file.
GEMINI_API_KEY="YOUR_API_KEY_HERE"
To give our index.ts access to the .env file, we’ll use the dotenv package. Open the terminal and run npm install dotenv. After it finishes, import dotenv and initialize it:
import * as dotenv from "dotenv";
dotenv.config();
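One detail worth calling out: dotenv.config() needs to run before the genkit() call so the Google AI plugin can find GEMINI_API_KEY in the environment. The top of index.ts then looks roughly like this (a sketch of the ordering, assuming the same setup as above):
import * as dotenv from 'dotenv';
import { genkit, z } from 'genkit';
import { googleAI } from '@genkit-ai/google-genai';

// Load GEMINI_API_KEY from .env before the Google AI plugin is initialized.
dotenv.config();

const ai = genkit({
  plugins: [googleAI()],
  model: googleAI.model('gemini-2.5-flash', {
    temperature: 0.8,
  }),
});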
Because we are using ES module syntax (import statements) rather than CommonJS, add the field "type": "module" to the package.json:
{
  "name": "backend",
  "version": "1.0.0",
  "type": "module",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  ....
Ok, everything is ready! Let’s run our agentic app with the command:
genkit start -- npx tsx --watch src/index.ts
If you want to run Genkit on a custom port, pass the --port flag, for example: genkit start --port 4001 -- npx tsx --watch src/index.ts

Yes, our agent is running … but hold on a second. How can we test the agent if we don’t have an app?
Genkit provides the Developer UI, a local web app that lets us work with models, flows, prompts and other elements in our Genkit projects.
The Developer UI runs at http://localhost:4000 by default (or the port you set with --port) and lets us use models and call our tools and flows to debug our project.
In the browser showing the Genkit Developer UI, we’re going to focus on testing our code. First, click into Models. Here we can provide a system prompt and adjust the model config and tools.

The config panel lets us tune the model, but the key part is the Tools tab. Click on it and we see our getOrderStatus function.

Before activating our tool, we’re going to write the prompt “Where is my order 123-456?” and click the Run button. The model has no idea how to answer it.

Finally, open the Tools section, select getOrderStatus from the available tools and run the same question.

Tada! The model picks up the tool, executes our code and answers with our mock response!
We built our first agentic app with flows and tools easily using Genkit, and gave real power to the model!
We learned how Genkit helps us build agentic apps: we created a flow, saw how it processes requests and uses tools to make an AI agent smart, and used the Genkit Developer UI to debug, test and play with our flows and tools.
Genkit lets models like Gemini (or others) reach into our own data so they can answer questions their training alone can’t cover.
It was so nice to use the Genkit Developer UI to test our flow and tools. However, in the real world, we want to connect its power with a real chatbot. So, in the next chapter, we’re going to connect Genkit with Angular and build a fast chatbot using the power of Kendo UI!