Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Security Update for SQL Server 2025 RTM CU2

1 Share

The Security Update for SQL Server 2025 RTM CU2 is now available for download at the Microsoft Download Center and Microsoft Update Catalog sites. This package cumulatively includes all previous security fixes for SQL Server 2025 RTM CUs, plus it includes the new security fixes detailed in the KB Article.

Read the whole story
alvinashcraft
35 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Boosting Android Performance: Introducing AutoFDO for the Kernel

1 Share

Posted by Yabin Cui, Software Engineer

We are the Android LLVM toolchain team. One of our top priorities is to improve Android performance through optimization techniques in the LLVM ecosystem. We are constantly searching for ways to make Android faster, smoother, and more efficient. While much of our optimization work happens in userspace, the kernel remains the heart of the system. Today, we’re excited to share how we are bringing Automatic Feedback-Directed Optimization (AutoFDO) to the Android kernel to deliver significant performance wins for users.


What is AutoFDO?

During a standard software build, the compiler makes thousands of small decisions, such as whether to inline a function and which branch of a conditional is likely to be taken, based on static code hints. While these heuristics are useful, they don't always accurately predict how code executes during real-world phone usage.

AutoFDO changes this by using real-world execution patterns to guide the compiler. These patterns represent the most common instruction execution paths the code takes during actual use, captured by recording the CPU's branching history. While this data can be collected from fleet devices, for the kernel we synthesize it in a lab environment using representative workloads, such as running the top 100 most popular apps. We use a sampling profiler to capture this data, identifying which parts of the code are 'hot' (frequently used) and which are 'cold'. 
When we rebuild the kernel with these profiles, the compiler can make much smarter optimization decisions tailored to actual Android workloads.
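At a high level, the AutoFDO loop looks like the sketch below. The exact tool names and flags here are illustrative of the general LLVM sample-based PGO flow (on a generic Linux host), not Android's actual kernel build invocation:

```shell
# 1. Record hardware branch samples while running a representative workload
#    (perf with LBR on x86; Android uses simpleperf with ARM ETE/TRBE instead).
perf record -b -- ./representative_workload

# 2. Convert raw samples into an AutoFDO profile the compiler understands,
#    e.g. with create_llvm_prof from the google/autofdo project.
create_llvm_prof --binary=./vmlinux --profile=perf.data --out=kernel.afdo

# 3. Rebuild with the profile so clang can tailor inlining and code layout
#    to observed behavior instead of static heuristics.
clang -O2 -fprofile-sample-use=kernel.afdo ...
```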

To understand the impact of this optimization, consider these key facts:
  • On Android, the kernel accounts for about 40% of CPU time.
  • We are already using AutoFDO to optimize native executables and libraries in the userspace, achieving about 4% cold app launch improvement and a 1% boot time reduction.

Real-World Performance Wins

We have seen impressive improvements across key Android metrics by leveraging profiles from controlled lab environments. These profiles were collected using app crawling and launching, and measured on Pixel devices across the 6.1, 6.6, and 6.12 kernels.

The most noticeable improvements are listed below. Details on the AutoFDO profiles for these kernel versions can be found in the respective Android kernel repositories for android16-6.12 and android15-6.6 kernels.

These aren't just theoretical numbers. They translate to a snappier interface, faster app switching, extended battery life, and an overall more responsive device for the end user.

How It Works: The Pipeline

Our deployment strategy involves a sophisticated pipeline to ensure profiles stay relevant and performance remains stable.


Step 1: Profile Collection

While we rely on our internal test fleet to profile userspace binaries, we shifted to a controlled lab environment for the Generic Kernel Image (GKI). Decoupling profiling from the device release cycle allows for flexible, immediate updates independent of deployed kernel versions. Crucially, tests confirm that this lab-based data delivers performance gains comparable to those from real-world fleets.

  • Tools & Environment: We flash test devices with the latest kernel image and use simpleperf to capture instruction execution streams. This process relies on hardware capabilities to record branching history, specifically utilizing ARM Embedded Trace Extension (ETE) and ARM Trace Buffer Extension (TRBE) on Pixel devices.
  • Workloads: We construct a representative workload using the top 100 most popular apps from the Android App Compatibility Test Suite (C-Suite). To capture the most accurate data, we focus on:
    • App Launching: Optimizing for the most visible user delays
    • AI-Driven App Crawling: Simulating continuous, evolving user interactions
    • System-Wide Monitoring: Capturing not only foreground app activities, but also critical background workloads and inter-process communications
  • Validation: This synthesized workload shows an 85% similarity to execution patterns collected from our internal fleet.
  • Targeted Data: By repeating these tests sufficiently, we capture high-fidelity execution patterns that accurately represent real-world user interaction with the most popular applications. Furthermore, this extensible framework allows us to seamlessly integrate additional workloads and benchmarks to broaden our coverage.

Step 2: Profile Processing

We post-process the raw trace data to ensure it is clean, effective, and ready for the compiler.

  • Aggregation: We consolidate data from multiple test runs and devices into a single system view.
  • Conversion: We convert raw traces into the AutoFDO profile format, filtering out unwanted symbols as needed.
  • Profile Trimming: We trim profiles to remove data for "cold" functions, allowing them to use standard optimization. This prevents regressions in rarely used code and avoids unnecessary increases in binary size.
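
As an illustration of the trimming idea, here is a hypothetical sketch: the real AutoFDO profile format is much richer, but modeling it as per-function sample counts shows how a threshold separates "hot" functions (which keep profile-guided optimization) from "cold" ones (which fall back to standard heuristics):

```python
def trim_profile(samples: dict[str, int], min_samples: int = 100) -> dict[str, int]:
    """Drop 'cold' functions whose sample counts fall below a threshold.

    Functions removed here are compiled with the compiler's standard
    heuristics rather than profile-guided decisions, avoiding regressions
    in rarely used code and unnecessary binary-size growth.
    """
    return {fn: count for fn, count in samples.items() if count >= min_samples}

profile = {
    "schedule": 125_000,    # hot scheduler path: keep profile data
    "kmalloc": 48_000,      # hot allocator path: keep profile data
    "rarely_used_ioctl": 7, # cold: better left to default heuristics
}
trimmed = trim_profile(profile)
```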

Step 3: Profile Testing

Before deployment, profiles undergo rigorous verification to ensure they deliver consistent performance wins without stability risks.

  • Profile & Binary Analysis: We strictly compare the new profile's content (including hot functions, sample counts, and profile size) against previous versions. We also use the profile to build a new kernel image, analyzing binaries to ensure that changes to the text section are consistent with expectations.
  • Performance Verification: We run targeted benchmarks on the new kernel image. This confirms that it maintains the performance improvements established by previous baselines.

Continuous Updates

Code naturally "drifts" over time, so a static profile would eventually lose its effectiveness. To maintain peak performance, we run the pipeline continuously to drive regular updates:

  • Regular Refresh: We refresh profiles in Android kernel LTS branches ahead of each GKI release, ensuring every build includes the latest profile data.
  • Future Expansion: We are currently delivering these updates to the android16-6.12 and android15-6.6 branches and will expand support to newer GKI versions, such as the upcoming android17-6.18.

Ensuring Stability

A common question with profile-guided optimization is whether it introduces stability risks. Because AutoFDO primarily influences compiler heuristics, such as function inlining and code layout, rather than altering the source code's logic, it preserves the functional integrity of the kernel. This technology has already been proven at scale, serving as a standard optimization for Android platform libraries, ChromeOS, and Google’s own server infrastructure for years.

To further guarantee consistent behavior, we apply a "conservative by default" strategy. Functions not captured in our high-fidelity profiles are optimized using standard compiler methods. This ensures that the "cold" or rarely executed parts of the kernel behave exactly as they would in a standard build, preventing performance regressions or unexpected behaviors in corner cases.

Looking Ahead

We are currently deploying AutoFDO across the android16-6.12 and android15-6.6 branches. Beyond this initial rollout, we see several promising avenues to further enhance the technology:

  • Expanded Reach: We look forward to deploying AutoFDO profiles to newer GKI kernel versions and additional build targets beyond the current aarch64 support.

  • GKI Module Optimization: Currently, our optimization is focused on the main kernel binary (vmlinux). Expanding AutoFDO to GKI modules could bring performance benefits to a larger portion of the kernel subsystem.

  • Vendor Module Support: We are also interested in supporting AutoFDO for vendor modules built using the Driver Development Kit (DDK). With support already available in our build system (Kleaf) and profiling tools (simpleperf), this allows vendors to apply these same optimization techniques to their specific hardware drivers.

  • Broader Profile Coverage: There is potential to collect profiles from a wider range of Critical User Journeys (CUJs) to optimize them.

By bringing AutoFDO to the Android kernel, we’re ensuring that the very foundation of the OS is optimized for the way you use your device every day.



Announcing Genkit Dart: Build full-stack AI apps with Dart and Flutter

1 Share

Announcing the preview launch of Genkit Dart, an open-source AI framework for building full-stack, AI-powered apps for any platform.

The Dart and Flutter communities have always pushed the boundaries of what’s possible across screens. You’ve shown that building high-quality, multi-platform applications doesn’t require compromising on developer experience. Now we’re bringing the same “write once, run anywhere” philosophy to AI-powered features and applications.

We are thrilled to announce the preview launch of Genkit Dart, an open-source AI framework for Dart and Flutter developers. Already available for TypeScript, Go, and Python, Genkit now empowers you to build high-quality, full-stack, AI-powered applications for any platform.

Announcing Genkit Dart (Preview)

Why choose Genkit Dart?

Genkit Dart provides you with the following capabilities:

  • Model-agnostic API: Supports Google, Anthropic, OpenAI, and OpenAI API-compatible models out-of-the-box. You’re never locked into a single provider.
  • Type safety: Uses Dart’s strong type system with the schemantic package to generate strongly typed data and create type-safe AI flows.
  • Run code anywhere: Write your AI logic once and run it as a backend service or directly inside your Flutter app.
  • Developer UI: Includes a localhost web UI where you can test prompts, view traces, and debug your flows.
  • Complete AI toolkit: Provides everything you need to build high-quality AI features, including structured output, tools, multi-step flows, observability, and more.

Model-agnostic API

Genkit is designed to support any LLM provider, with out-of-the-box support for Google, Anthropic, OpenAI, and OpenAI API-compatible models in this release. This lets you switch between providers with minimal code changes.

import 'package:genkit/genkit.dart';
import 'package:genkit_google_genai/genkit_google_genai.dart';
import 'package:genkit_anthropic/genkit_anthropic.dart';

void main() async {
  // Initialize Genkit with plugins
  final ai = Genkit(plugins: [
    googleAI(),
    anthropic(),
  ]);

  // Call Google Gemini
  final geminiResponse = await ai.generate(
    model: googleAI.gemini('gemini-3.1-pro-preview'),
    prompt: 'Hello from Gemini',
  );

  // Call Anthropic Claude
  final claudeResponse = await ai.generate(
    model: anthropic.model('claude-opus-4.6'),
    prompt: 'Hello from Claude',
  );
}

Type-safe AI flows

Genkit lets you wrap your AI logic into testable, observable, deployable functions called flows.

Here is an example of a Travel Planner flow using strongly-typed input and output schemas, with tool calling:

import 'package:genkit/genkit.dart';
import 'package:genkit_google_genai/genkit_google_genai.dart';
import 'package:schemantic/schemantic.dart';

part 'travel_flow.g.dart';

// Define flow input schema with schemantic
@Schema()
abstract class $TripRequest {
  String get destination;
  int get days;
}

// Define tool input schema
@Schema()
abstract class $WeatherRequest {
  @Field(description: 'The city name')
  String get city;
}

void main() async {
  // Initialize Genkit and register the Google AI plugin
  final ai = Genkit(plugins: [googleAI()]);

  // Define a tool the model can invoke to fetch live data
  ai.defineTool(
    name: 'fetchWeather',
    description: 'Retrieves the current weather forecast for a given city',
    inputSchema: WeatherRequest.$schema,
    fn: (request, _) async => request.city.toLowerCase() == 'seattle' ? 'Rainy' : 'Sunny',
  );

  // Construct a strongly-typed, observable flow
  final tripPlannerFlow = ai.defineFlow(
    name: 'planTrip',
    inputSchema: TripRequest.$schema,
    outputSchema: .string(),
    fn: (request, _) async {
      // Generate content using the model and tool
      final response = await ai.generate(
        model: googleAI.gemini('gemini-3.1-pro-preview'),
        prompt: 'Build a ${request.days}-day travel itinerary for ${request.destination}. '
            'Make sure to check the weather forecast first to suggest appropriate activities.',
        toolNames: ['fetchWeather'],
      );

      return response.text;
    },
  );

  // Run the flow
  final itinerary = await tripPlannerFlow(
    TripRequest(destination: 'Seattle', days: 3),
  );
  print(itinerary);
}

When you’re ready, you can easily expose your flow as an API using the genkit_shelf package and deploy it to any platform that supports Dart.

import 'package:genkit_shelf/genkit_shelf.dart';
import 'package:shelf_router/shelf_router.dart';
import 'package:shelf/shelf_io.dart' as io;

void main() async {
  // ... initialize Genkit and define tripPlannerFlow ...

  final router = Router()
    ..post(
      '/api/planTrip',
      shelfHandler(tripPlannerFlow),
    );

  await io.serve(router.call, 'localhost', 8080);
}

Run anywhere Dart runs

Most complex AI logic runs on a server. However, because Dart works on both the frontend and backend, Genkit lets you easily move your AI code between your server and your Flutter app.

Here are a few ways you can build with Genkit Dart:

1. Entirely in Flutter for prototyping

You can write all of your Genkit logic, including model calls, directly in your Flutter app. This is great for prototypes or apps where users provide their own API keys and prompts aren’t private.

Warning: You should never publish an app with your API key embedded in the source code as it can be extracted and used by others.

2. Call backend flows from Flutter

When your prompts are sensitive or your AI logic is complex, you can move the entire flow to your backend. Your Flutter app can then call this flow by defining it as a “remote action”. Since the backend and frontend are both in Dart, they can share the same schemas for end-to-end type safety.

Here is an example showing how to call the Trip Planner backend flow we defined earlier from your Flutter app:

import 'package:genkit/client.dart';
import 'package:my_shared_models/models.dart'; // Shared schema

final tripPlannerFlow = defineRemoteAction(
  url: 'https://your-server.com/api/planTrip',
  inputSchema: TripRequest.$schema,
  outputSchema: .string(),
);

final itinerary = await tripPlannerFlow(
  input: TripRequest(destination: 'Tokyo', days: 5),
);

3. In Flutter with remote models

To secure your API keys while keeping the core AI logic in your Flutter app, you can create a small Genkit backend that proxies requests to the model provider with custom authorization logic. The models exposed through this backend are remote models.

import 'package:genkit_google_genai/genkit_google_genai.dart';
import 'package:genkit_shelf/genkit_shelf.dart';
import 'package:shelf_router/shelf_router.dart';
import 'package:shelf/shelf_io.dart' as io;

// Backend securely proxies requests to the model
void main() async {
  final geminiApi = googleAI();
  final targetModel = geminiApi.model('gemini-3.1-flash-lite-preview');
  final router = Router()
    ..post(
      '/api/gemini-model',
      shelfHandler(
        targetModel,
        // Insert custom authorization logic here
        contextProvider: (req) async => {'customAuth': true},
      ),
    );
  await io.serve(router.call, 'localhost', 8080);
}

In your Flutter app, use the remote model instead of a direct model plugin, passing any headers your server needs. This saves you from exposing your API keys and gives you more control over request authorization.

import 'package:genkit/genkit.dart';

// Flutter app communicates with the proxy server
final ai = Genkit();
final secureModel = ai.defineRemoteModel(
  name: 'secureModel',
  url: 'https://api.yourdomain.com/api/gemini-model',
  headers: (context) => {'Authorization': 'Bearer ${fetchSessionToken()}'},
);
final response = await ai.generate(model: secureModel, prompt: 'Write me a poem.');

Powerful tools for AI development

Building high-quality AI applications requires thorough testing and continuous iteration to achieve reliable results. To help with this, Genkit provides a powerful local Developer UI.

You can start the Developer UI alongside your code by running your app with the Genkit CLI:

genkit start -- dart run bin/server.dart

Here is a look at testing a more advanced version of our Trip Planner flow in the Developer UI:

Showing the Genkit Developer UI running a flow

AI coding assistance

For the best experience using Genkit Dart with AI coding tools like Antigravity, Gemini CLI, or Claude Code, install the Genkit Dart agent skill. This gives your AI assistant the knowledge to accurately write and debug your AI features.

Add the skill to your project:

npx skills add genkit-ai/skills

Learn more

This release is an early preview. We want to work with Dart and Flutter developers to improve the framework. You can find the core packages and provider plugins on pub.dev today.

We can’t wait to see what you build with Genkit Dart!


Announcing Genkit Dart: Build full-stack AI apps with Dart and Flutter was originally published in Dart on Medium, where people are continuing the conversation by highlighting and responding to this story.


PPP 501 | Hope Is Not a Strategy… Or Is It?, with author Jen Fisher

1 Share

Summary

In this episode, Andy welcomes Jen Fisher, author of Hope Is the Strategy: The Underrated Skill That Transforms Work, Leadership, and Wellbeing. In project management circles, we often hear the phrase "hope is not a strategy." Jen challenges that assumption, arguing that real hope is not wishful thinking at all. Instead, it's a practical cognitive process that can help leaders navigate uncertainty, pressure, and change.

In the discussion, Jen explains how hope requires three elements: clear goals, multiple pathways to reach them, and the agency to believe we can influence outcomes. You'll also hear her personal story of realizing she was languishing under constant performance pressure, and how a candid conversation with her boss sparked the beginning of a healthier and more hopeful way of working. Along the way, Jen shares practical tools such as possibility journaling, energy ledgers, and hope spotting. She also explains why vulnerability can be a leadership superpower and how simple language shifts can turn hope killers into hope builders.

If you're leading teams and projects under constant pressure and looking for practical ways to sustain both performance and wellbeing, this episode is for you!

Sound Bites

  • "How would I describe myself? I'm a hope dealer."
  • "Hope is not flimsy. It's not whimsical."
  • "Real hope actually requires action."
  • "What drives hopelessness is feeling like there's nothing you can do."
  • "Hope is the belief that tomorrow can be better than today."
  • "67% of managers said that they've never been trained in how to manage other people. We put humans in charge of other humans, but we give them very little skill and training in how to lead."
  • "You can perform when you're languishing, but the question is really why should we or why would we want to."
  • "For the first time in my professional life, I actually felt seen and heard and valued."
  • "Toxic positivity only makes people feel worse."
  • "Possibility journaling is really thinking about what might be possible here."
  • "Vulnerability is proof that you're human."
  • "When people are feeling uncertain, they want to connect to somebody that feels human."

Chapters

  • 00:00 Introduction
  • 01:45 Start of Interview
  • 02:00 What Hope Is Not: Clearing Up the Misconceptions
  • 03:45 What Real Hope Actually Requires
  • 05:42 Agency and the Feeling of Hopelessness
  • 06:24 Burnout vs. Hopelessness: Is There a Difference?
  • 07:55 Wellbeing Intelligence: The Leadership Skill We're Missing
  • 11:44 Languishing: That Gray Space Between Fine and Flourishing
  • 14:15 The Hidden Cost of Time Pressure on Creativity
  • 17:00 Breaking Through the High-Functioning Facade
  • 20:15 Setting Boundaries as a Recovering People Pleaser
  • 24:03 Practical Tools: Possibility Journal, Energy Ledger, and Hope Spotting
  • 29:15 Vulnerability as a Leadership Superpower
  • 33:46 Hope Killers and Hope Builders: The Language of Hope
  • 38:00 The Hope Audit and the Hope Strategist Toolkit
  • 39:33 Applying Hope at Home and as a Caregiver
  • 41:30 Where to Learn More About Jen
  • 41:26 End of Interview
  • 41:54 Andy Comments After the Interview
  • 45:18 Outtakes

Learn More

You can learn more about Jen and her work at Jen-Fisher.com.

For more learning on this topic, check out:

  • Episode 462 with Margie Warrell. Part of Jen's message in the book is the importance of agency—of believing that you're not a victim and that you have options. Margie is a fierce advocate for how to take action when you're feeling hopeless. I highly recommend her work.
  • Episode 448 with Marie-Hélène Pelletier. It's an engaging discussion about burnout and resilience, and a fantastic follow-up to this discussion with Jen.
  • Episode 396 with Thomas Curran. It's an episode on perfectionism, and I think you'll find it an excellent follow-up to this discussion as well.

Chat with PMeLa

You can chat directly with PMeLa, the podcast's AI persona, to get episode recommendations and answers to your project management and leadership questions. Visit PeopleAndProjectsPodcast.com/PMeLa to chat with her.

Pass the PMP Exam

If you or someone you know is thinking about getting PMP certified, we've put together a helpful guide called The 5 Best Resources to Help You Pass the PMP Exam on Your First Try. We've helped thousands of people earn their certification, and we'd love to help you too. It's totally free, and it's a great way to get a head start.

Just go to 5BestResources.PeopleAndProjectsPodcast.com to grab your copy. I'd love to help you get your PMP this year!

Join Us for LEAD52

I know you want to be a more confident leader–that's why you listen to this podcast. LEAD52 is a global community of people like you who are committed to transforming their ability to lead and deliver. It's 52 weeks of leadership learning, delivered right to your inbox, taking less than 5 minutes a week. And it's all for free. Learn more and sign up at GetLEAD52.com. Thanks!

Thank you for joining me for this episode of The People and Projects Podcast!

Talent Triangle: Power Skills

Topics: Leadership, Wellbeing, Burnout, Hope, Resilience, Vulnerability, Boundaries, Team Culture, Employee Engagement, Languishing, Psychological Safety, Workplace Performance

The following music was used for this episode:

Music: Imagefilm 034 by Sascha Ende
License (CC BY 4.0): https://filmmusic.io/standard-license

Music: Tuesday by Sascha Ende
License (CC BY 4.0): https://filmmusic.io/standard-license





Download audio: https://traffic.libsyn.com/secure/peopleandprojectspodcast/501-JenFisher.mp3?dest-id=107017

Helen Hou-Sandí

1 Share

Helen Hou-Sandí is a Staff Software Engineering Manager for Accessibility at GitHub and a WordPress Lead Developer. As a technologist, she is a leader in open source software and management, and cares deeply about building great user experiences. She is also a classically-trained pianist who has performed extensively worldwide. You can find Helen on the following sites:

PLEASE SUBSCRIBE TO THE PODCAST

You can check out more episodes of Coffee and Open Source on https://www.coffeeandopensource.com

Coffee and Open Source is hosted by Isaac Levin





Download audio: https://anchor.fm/s/63982f70/podcast/play/116715149/https%3A%2F%2Fd3ctxlq1ktw2nl.cloudfront.net%2Fstaging%2F2026-2-10%2F419735328-44100-2-985d1693b4935.mp3

Introduction to Agentic Migration & Modernization Tools

1 Share
From: ITOpsTalk
Duration: 7:24
Views: 10

The Azure Migrate Agent and GitHub Copilot App Modernization are two tools you can use to simplify and automate the challenging processes of migrating workloads to Azure and updating existing code bases to current framework and security standards.

▶️ https://learn.microsoft.com/en-us/training/modules/introduction-azure-copilot-agents/
▶️ https://learn.microsoft.com/en-us/training/modules/intro-github-copilot-app-modernization/
▶️ https://azure.github.io/MigrateModernizeSkilling/
▶️ https://learn.microsoft.com/en-us/azure/developer/github-copilot-app-modernization/

Content is for educational purposes and is not monetized.

▶️ Script and vocal performance by Orin
▶️ Clockwork Orin Avatar by D-ID
▶️ Voice enhancement by 11labs
