Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Migrating Your App to Flutter: Step-by-Step Guide


Complete guide to migrating your mobile app to Flutter. Learn planning, implementation strategies, code conversion, and best practices for successful cross-platform migration.

Migrating an existing app to Flutter is a big decision. You’re not just changing frameworks — you’re potentially unifying codebases, improving performance, and expanding your reach. But the migration process can seem daunting, especially when dealing with production apps serving real users.

I’ve migrated three apps to Flutter, ranging from small startup products to enterprise applications serving millions of users. Each migration taught me valuable lessons about planning, execution, and avoiding costly mistakes.

This guide will walk you through the entire migration process, from initial planning to successful deployment, so you can make the transition smoothly and confidently.

Why Migrate to Flutter?

Before diving into the “how,” let’s ensure you understand the “why.”

Compelling reasons to migrate:

Single codebase — Write once, deploy to iOS, Android, Web, and Desktop. Reduce development and maintenance costs by up to 50%.

Performance — Flutter compiles to native ARM code, delivering 60fps animations and smooth experiences comparable to native apps.

Hot reload — See changes instantly without losing app state. Development speed increases dramatically.

Rich UI — Beautiful, customizable widgets make creating stunning interfaces easier than ever.

Growing ecosystem — 30,000+ packages on pub.dev cover almost every use case imaginable.

Google backing — Long-term support and continuous improvement guaranteed.

When NOT to migrate:

  • Your app relies heavily on platform-specific features not available in Flutter
  • You have a tiny team with deep native expertise but no Dart/Flutter knowledge
  • You’re building a very simple app that doesn’t benefit from code sharing
  • Your business is doing well and the migration risk outweighs benefits

Pre-Migration Assessment

Audit Your Current App

Before writing any code, thoroughly analyze your existing app:

Feature inventory:

  • List all features and screens
  • Identify platform-specific implementations
  • Note third-party SDK dependencies
  • Document API integrations

Technical assessment:

  • Current architecture (MVC, MVVM, Clean Architecture)
  • State management approach
  • Database and storage solutions
  • Push notifications implementation
  • Authentication flow
  • Payment processing
  • Analytics and crash reporting

Performance baseline:

  • Current app size
  • Launch time
  • Memory usage
  • Battery consumption

This becomes your migration checklist and success metrics.

Choose Your Migration Strategy

Strategy 1: Big Bang (Complete Rewrite)

Rebuild the entire app from scratch in Flutter.

Pros:

  • Clean slate, no legacy code
  • Modern architecture from day one
  • Fastest time to full Flutter adoption

Cons:

  • Highest risk
  • Longer time to market
  • Maintaining two codebases during development

Best for: Small to medium apps, apps needing major refactoring anyway

Strategy 2: Gradual Migration (Add-to-App)

Integrate Flutter modules into existing native apps incrementally.

Pros:

  • Lower risk
  • Continuous deployment
  • Test Flutter in production gradually

Cons:

  • Increased complexity
  • Longer overall timeline
  • Integration challenges

Best for: Large enterprise apps, apps with complex native integrations
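
If you go this route, the entry point is Flutter’s add-to-app tooling: you create a Flutter module and embed it in the existing native project. A minimal starting point (the module name is just an example):

# Create a Flutter module that can be embedded in an existing iOS/Android app
flutter create --template=module my_flutter_module

# Then wire the module into the host app (Gradle on Android, CocoaPods on iOS)
# following the add-to-app docs: https://docs.flutter.dev/add-to-app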

Strategy 3: Hybrid Approach

Rewrite new features in Flutter while maintaining existing native code.

Pros:

  • Balance of speed and safety
  • Innovation continues during migration
  • Natural deprecation of old code

Cons:

  • Mixed codebase complexity
  • Requires both native and Flutter expertise

Best for: Apps under active development, teams transitioning skills

I recommend Strategy 3 for most teams — it provides the best risk-reward balance.

Setting Up Your Flutter Project

Installation and Setup

# Install Flutter SDK
# Download from flutter.dev or use package manager

# Verify installation
flutter doctor

# Create new Flutter project
flutter create my_app_flutter
cd my_app_flutter

# Run on device
flutter run

Project Structure

Organize for scalability from day one:

lib/
├── main.dart
├── app/
│   ├── app.dart
│   ├── routes.dart
│   └── theme.dart
├── core/
│   ├── constants/
│   ├── utils/
│   └── services/
├── data/
│   ├── models/
│   ├── repositories/
│   └── datasources/
├── domain/
│   ├── entities/
│   └── usecases/
└── presentation/
    ├── screens/
    ├── widgets/
    └── state/

This Clean Architecture approach separates concerns and makes testing easier.

Essential Dependencies

dependencies:
  flutter:
    sdk: flutter

  # State Management
  flutter_bloc: ^8.1.3
  # or provider: ^6.1.1
  # or riverpod: ^2.4.9

  # Networking
  dio: ^5.4.0

  # Local Storage
  shared_preferences: ^2.2.2
  sqflite: ^2.3.0

  # Dependency Injection
  get_it: ^7.6.4

  # Navigation
  go_router: ^13.0.0

  # JSON Serialization
  json_annotation: ^4.8.1

dev_dependencies:
  flutter_test:
    sdk: flutter
  build_runner: ^2.4.6
  json_serializable: ^6.7.1
  mockito: ^5.4.3
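
The list above includes get_it for dependency injection but never wires it up, so here is a minimal sketch of a service-locator setup. The ApiClient, LocalStorage, UserRepository, and UserRepositoryImpl types are the ones defined in the phases below; everything else comes straight from the packages listed above.

import 'package:dio/dio.dart';
import 'package:get_it/get_it.dart';
import 'package:shared_preferences/shared_preferences.dart';

final getIt = GetIt.instance;

// Call once from main() before runApp().
Future<void> setupDependencies() async {
  final prefs = await SharedPreferences.getInstance();

  getIt.registerLazySingleton<Dio>(() => Dio());
  getIt.registerLazySingleton<ApiClient>(() => ApiClient(getIt<Dio>()));
  getIt.registerLazySingleton<LocalStorage>(() => LocalStorage(prefs));
  getIt.registerLazySingleton<UserRepository>(
    () => UserRepositoryImpl(getIt<ApiClient>(), getIt<LocalStorage>()),
  );
}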

Migration Process Step-by-Step

Phase 1: Core Infrastructure (Week 1–2)

1. Setup theme and styling:

class AppTheme {
  static ThemeData lightTheme = ThemeData(
    colorScheme: ColorScheme.fromSeed(
      seedColor: Colors.blue,
      brightness: Brightness.light,
    ),
    useMaterial3: true,
    textTheme: TextTheme(
      displayLarge: TextStyle(
        fontSize: 32,
        fontWeight: FontWeight.bold,
      ),
      bodyLarge: TextStyle(fontSize: 16),
    ),
  );

  static ThemeData darkTheme = ThemeData(
    colorScheme: ColorScheme.fromSeed(
      seedColor: Colors.blue,
      brightness: Brightness.dark,
    ),
    useMaterial3: true,
  );
}

2. Implement navigation:

final router = GoRouter(
  routes: [
    GoRoute(
      path: '/',
      builder: (context, state) => HomeScreen(),
    ),
    GoRoute(
      path: '/details/:id',
      builder: (context, state) {
        final id = state.pathParameters['id']!;
        return DetailsScreen(id: id);
      },
    ),
  ],
);

3. Setup networking layer:

class ApiClient {
  final Dio _dio;

  ApiClient(this._dio) {
    _dio.options.baseUrl = 'https://api.example.com';
    _dio.options.connectTimeout = Duration(seconds: 5);
    _dio.options.receiveTimeout = Duration(seconds: 3);

    _dio.interceptors.add(LogInterceptor());
    _dio.interceptors.add(AuthInterceptor());
  }

  Future<Response> get(String path) async {
    try {
      return await _dio.get(path);
    } on DioException catch (e) {
      throw _handleError(e);
    }
  }

  Exception _handleError(DioException error) {
    switch (error.type) {
      case DioExceptionType.connectionTimeout:
        return NetworkException('Connection timeout');
      case DioExceptionType.receiveTimeout:
        return NetworkException('Receive timeout');
      default:
        return NetworkException('Network error');
    }
  }
}

Phase 2: Data Layer (Week 2–3)

1. Convert data models:

import 'package:json_annotation/json_annotation.dart';

part 'user.g.dart';

@JsonSerializable()
class User {
  final String id;
  final String name;
  final String email;
  @JsonKey(name: 'profile_image')
  final String? profileImage;

  User({
    required this.id,
    required this.name,
    required this.email,
    this.profileImage,
  });

  factory User.fromJson(Map<String, dynamic> json) =>
      _$UserFromJson(json);

  Map<String, dynamic> toJson() => _$UserToJson(this);
}

// Generate code with:
// flutter pub run build_runner build

2. Implement repositories:

abstract class UserRepository {
  Future<User> getCurrentUser();
  Future<void> updateUser(User user);
}

class UserRepositoryImpl implements UserRepository {
  final ApiClient _apiClient;
  final LocalStorage _localStorage;

  UserRepositoryImpl(this._apiClient, this._localStorage);

  @override
  Future<User> getCurrentUser() async {
    try {
      // Try cache first
      final cached = await _localStorage.getUser();
      if (cached != null) return cached;

      // Fetch from API
      final response = await _apiClient.get('/user/me');
      final user = User.fromJson(response.data);

      // Cache for offline
      await _localStorage.saveUser(user);

      return user;
    } catch (e) {
      throw RepositoryException('Failed to fetch user');
    }
  }

  @override
  Future<void> updateUser(User user) async {
    await _apiClient.put('/user/${user.id}', data: user.toJson());
    await _localStorage.saveUser(user);
  }
}

3. Setup local storage:

class LocalStorage {
  final SharedPreferences _prefs;

  LocalStorage(this._prefs);

  Future<void> saveUser(User user) async {
    await _prefs.setString('user', jsonEncode(user.toJson()));
  }

  Future<User?> getUser() async {
    final userJson = _prefs.getString('user');
    if (userJson == null) return null;
    return User.fromJson(jsonDecode(userJson));
  }
}

Phase 3: State Management (Week 3–4)

Using BLoC pattern as example:

// Events
abstract class UserEvent {}

class LoadUser extends UserEvent {}

class UpdateUser extends UserEvent {
  final User user;
  UpdateUser(this.user);
}

// States
abstract class UserState {}

class UserInitial extends UserState {}

class UserLoading extends UserState {}

class UserLoaded extends UserState {
  final User user;
  UserLoaded(this.user);
}

class UserError extends UserState {
  final String message;
  UserError(this.message);
}

// BLoC
class UserBloc extends Bloc<UserEvent, UserState> {
  final UserRepository _repository;

  UserBloc(this._repository) : super(UserInitial()) {
    on<LoadUser>(_onLoadUser);
    on<UpdateUser>(_onUpdateUser);
  }

  Future<void> _onLoadUser(
    LoadUser event,
    Emitter<UserState> emit,
  ) async {
    emit(UserLoading());
    try {
      final user = await _repository.getCurrentUser();
      emit(UserLoaded(user));
    } catch (e) {
      emit(UserError(e.toString()));
    }
  }

  Future<void> _onUpdateUser(
    UpdateUser event,
    Emitter<UserState> emit,
  ) async {
    try {
      await _repository.updateUser(event.user);
      emit(UserLoaded(event.user));
    } catch (e) {
      emit(UserError(e.toString()));
    }
  }
}

Phase 4: UI Migration (Week 4–8)

1. Start with simple screens:

class HomeScreen extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: Text('Home'),
        actions: [
          IconButton(
            icon: Icon(Icons.settings),
            onPressed: () => context.go('/settings'),
          ),
        ],
      ),
      body: BlocBuilder<UserBloc, UserState>(
        builder: (context, state) {
          if (state is UserLoading) {
            return Center(child: CircularProgressIndicator());
          }

          if (state is UserError) {
            return Center(child: Text(state.message));
          }

          if (state is UserLoaded) {
            return UserProfile(user: state.user);
          }

          return SizedBox();
        },
      ),
    );
  }
}

2. Create reusable widgets:

class CustomButton extends StatelessWidget {
  final String text;
  final VoidCallback onPressed;
  final bool isLoading;

  const CustomButton({
    required this.text,
    required this.onPressed,
    this.isLoading = false,
  });

  @override
  Widget build(BuildContext context) {
    return ElevatedButton(
      onPressed: isLoading ? null : onPressed,
      style: ElevatedButton.styleFrom(
        padding: EdgeInsets.symmetric(vertical: 16, horizontal: 32),
        shape: RoundedRectangleBorder(
          borderRadius: BorderRadius.circular(8),
        ),
      ),
      child: isLoading
          ? SizedBox(
              height: 20,
              width: 20,
              child: CircularProgressIndicator(strokeWidth: 2),
            )
          : Text(text),
    );
  }
}

3. Handle complex layouts:

class ProductCard extends StatelessWidget {
  final Product product;
  final VoidCallback onTap;

  const ProductCard({
    required this.product,
    required this.onTap,
  });

  @override
  Widget build(BuildContext context) {
    return Card(
      clipBehavior: Clip.antiAlias,
      child: InkWell(
        onTap: onTap,
        child: Column(
          crossAxisAlignment: CrossAxisAlignment.start,
          children: [
            AspectRatio(
              aspectRatio: 16 / 9,
              child: Image.network(
                product.imageUrl,
                fit: BoxFit.cover,
                errorBuilder: (context, error, stackTrace) {
                  return Container(
                    color: Colors.grey[300],
                    child: Icon(Icons.broken_image),
                  );
                },
              ),
            ),
            Padding(
              padding: EdgeInsets.all(12),
              child: Column(
                crossAxisAlignment: CrossAxisAlignment.start,
                children: [
                  Text(
                    product.name,
                    style: Theme.of(context).textTheme.titleMedium,
                    maxLines: 2,
                    overflow: TextOverflow.ellipsis,
                  ),
                  SizedBox(height: 4),
                  Text(
                    // Literal dollar sign followed by the interpolated price
                    '\$${product.price.toStringAsFixed(2)}',
                    style: Theme.of(context).textTheme.titleLarge?.copyWith(
                          color: Theme.of(context).colorScheme.primary,
                          fontWeight: FontWeight.bold,
                        ),
                  ),
                ],
              ),
            ),
          ],
        ),
      ),
    );
  }
}

Phase 5: Platform Integration (Week 8–10)

1. Add platform channels for native features:

class NativeChannel {
  static const platform = MethodChannel('com.example.app/native');

  Future<String?> getBiometricAuth() async {
    try {
      final result = await platform.invokeMethod('authenticate');
      return result;
    } on PlatformException catch (e) {
      print("Failed to authenticate: ${e.message}");
      return null;
    }
  }
}

2. Implement push notifications:

class PushNotificationService {
  final FirebaseMessaging _fcm = FirebaseMessaging.instance;

  Future<void> initialize() async {
    // Request permission
    await _fcm.requestPermission(
      alert: true,
      badge: true,
      sound: true,
    );

    // Get token
    final token = await _fcm.getToken();
    print('FCM Token: $token');

    // Handle foreground messages
    FirebaseMessaging.onMessage.listen((RemoteMessage message) {
      _showNotification(message);
    });

    // Handle background messages
    FirebaseMessaging.onBackgroundMessage(_backgroundHandler);
  }

  static Future<void> _backgroundHandler(RemoteMessage message) async {
    print('Background message: ${message.messageId}');
  }

  void _showNotification(RemoteMessage message) {
    // Show local notification
  }
}

3. Setup analytics:

class AnalyticsService {
  final FirebaseAnalytics _analytics = FirebaseAnalytics.instance;

  Future<void> logScreenView(String screenName) async {
    await _analytics.logScreenView(screenName: screenName);
  }

  Future<void> logEvent(String name, Map<String, dynamic> parameters) async {
    await _analytics.logEvent(name: name, parameters: parameters);
  }
}

Testing Your Migration

Unit Tests

void main() {
  group('UserRepository', () {
    late UserRepository repository;
    late MockApiClient mockApiClient;
    late MockLocalStorage mockLocalStorage;

    setUp(() {
      mockApiClient = MockApiClient();
      mockLocalStorage = MockLocalStorage();
      repository = UserRepositoryImpl(mockApiClient, mockLocalStorage);
    });

    test('should return user from cache if available', () async {
      // Arrange
      final user = User(id: '1', name: 'Test', email: 'test@example.com');
      when(mockLocalStorage.getUser()).thenAnswer((_) async => user);

      // Act
      final result = await repository.getCurrentUser();

      // Assert
      expect(result, equals(user));
      verifyNever(mockApiClient.get(any));
    });
  });
}

Widget Tests

void main() {
  testWidgets('CustomButton shows loading indicator', (tester) async {
    await tester.pumpWidget(
      MaterialApp(
        home: Scaffold(
          body: CustomButton(
            text: 'Submit',
            onPressed: () {},
            isLoading: true,
          ),
        ),
      ),
    );

    expect(find.byType(CircularProgressIndicator), findsOneWidget);
    expect(find.text('Submit'), findsNothing);
  });
}

Integration Tests

void main() {
  IntegrationTestWidgetsFlutterBinding.ensureInitialized();

  testWidgets('complete user flow', (tester) async {
    app.main();
    await tester.pumpAndSettle();

    // Navigate to login
    await tester.tap(find.text('Login'));
    await tester.pumpAndSettle();

    // Enter credentials
    await tester.enterText(find.byType(TextField).first, 'test@example.com');
    await tester.enterText(find.byType(TextField).last, 'password');

    // Submit
    await tester.tap(find.text('Submit'));
    await tester.pumpAndSettle();

    // Verify navigation to home
    expect(find.text('Home'), findsOneWidget);
  });
}

Deployment Strategy

Gradual Rollout

  • Week 1: 5% of users
  • Week 2: 10% of users
  • Week 3: 25% of users
  • Week 4: 50% of users
  • Week 5: 100% of users

Monitor crash rates, performance metrics, and user feedback at each stage.

App Store Optimization

Update your store listing:

  • New screenshots showcasing Flutter UI
  • Updated description highlighting improvements
  • Video preview of key features
  • A/B test different creatives

Performance Monitoring

void main() {
  // Setup Firebase Performance
  WidgetsFlutterBinding.ensureInitialized();

  // Custom trace
  final trace = FirebasePerformance.instance.newTrace('app_start');
  trace.start();

  runApp(MyApp());

  trace.stop();
}

Common Migration Pitfalls

Pitfall 1: Underestimating complexity. Solution: Add a 30% buffer to all estimates.

Pitfall 2: Not testing enough on real devices. Solution: Test on 10+ device/OS combinations.

Pitfall 3: Ignoring platform differences. Solution: Use Platform.isIOS and Platform.isAndroid for platform-specific code (see the sketch after this list).

Pitfall 4: Poor state management architecture. Solution: Choose one pattern and stick with it.

Pitfall 5: Skipping performance profiling. Solution: Profile regularly with Flutter DevTools.
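
A minimal sketch of that platform check (assuming dart:io is available, i.e., you’re not targeting the web):

import 'dart:io' show Platform;

import 'package:flutter/material.dart';

// Pick a platform-appropriate share button at build time.
Widget buildShareButton(VoidCallback onShare) {
  if (Platform.isIOS) {
    // iOS users expect the share-sheet icon used across Apple apps.
    return IconButton(icon: Icon(Icons.ios_share), onPressed: onShare);
  }
  if (Platform.isAndroid) {
    return IconButton(icon: Icon(Icons.share), onPressed: onShare);
  }
  // Desktop fallback.
  return TextButton(onPressed: onShare, child: Text('Share'));
}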

Post-Migration Optimization

Reduce App Size

// android/app/build.gradle
android {
    buildTypes {
        release {
            shrinkResources true
            minifyEnabled true
        }
    }
}
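
To verify the effect, build release artifacts and compare sizes. A few standard commands (nothing here is specific to this app):

# Android App Bundle: the Play Store serves device-specific APKs from this
flutter build appbundle --release

# Or split APKs per ABI so devices don't download unused native code
flutter build apk --release --split-per-abi

# Break down what contributes to the size
flutter build appbundle --analyze-size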

Optimize Images

Use cached_network_image for efficient image loading:

CachedNetworkImage(
imageUrl: product.imageUrl,
placeholder: (context, url) => CircularProgressIndicator(),
errorWidget: (context, url, error) => Icon(Icons.error),
memCacheWidth: 600, // Resize for memory efficiency
)

Implement Code Splitting

Use deferred loading for large features:

import 'package:flutter/widgets.dart' deferred as widgets;

void loadWidget() async {
await widgets.loadLibrary();
// Use widgets
}

Measuring Success

Track these metrics pre and post-migration:

Performance:

  • App launch time
  • Screen load time
  • Frame rate (should be 60fps)
  • Memory usage
  • App size

Quality:

  • Crash-free rate (target: 99.5%+)
  • Bug reports
  • Performance complaints

Business:

  • User retention
  • App store ratings
  • Conversion rates
  • Development velocity

Conclusion

Migrating to Flutter is a significant undertaking, but with proper planning and execution, it delivers tremendous value. A single codebase, faster development, and better performance make it worthwhile for most teams.

Start small, test thoroughly, and roll out gradually. Your users shouldn’t notice the migration except for improved performance and more frequent updates.

The Flutter ecosystem is mature, the tooling is excellent, and the community is vibrant. You’re making the right choice.

Your successful Flutter migration starts with the first screen. Begin today.

Enjoyed this migration guide? Show some love with those claps — it helps others discover this content!

Continue learning: Follow for more Flutter deep-dives, plus content on Node.js, Blockchain, and AI/ML. Explore my articles for more insights.

Exclusive content: Back me on Patreon at $5/month for early tutorials, migration templates, and advanced Flutter guides.


Migrating Your App to Flutter: Step-by-Step Guide was originally published in Flutter Community on Medium, where people are continuing the conversation by highlighting and responding to this story.


Trump's One Rule AI Order

From: AIDailyBrief
Duration: 7:29
Views: 170

Trump's executive order seeks federal preemption of state AI laws, empowers DOJ litigation, and ties Commerce funding to compliance, triggering legal challenges and GOP infighting. Approval of Nvidia H200 exports reshapes global chip strategy as China weighs massive subsidies and semiconductor independence. GPT-5.2 ranks among top models on independent benchmarks and tops GDP‑Val, underscoring intensifying competition among major AI labs.

Brought to you by:
KPMG – Go to www.kpmg.us/ai to learn more about how KPMG can help you drive value with our AI solutions.
Vanta - Simplify compliance - https://vanta.com/nlw

The AI Daily Brief helps you understand the most important news and discussions in AI.
Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Get it ad free at
Join our Discord: https://bit.ly/aibreakdown


Announcing Windows 11 Insider Preview Build 28020.1362 (Canary Channel)

Hello Windows Insiders, today we are releasing Windows 11 Insider Preview Build 28020.1362 to the Canary Channel. (KB 5073095)

What’s new in Build 28020.1362

Changes and Improvements Gradually Rolling Out

[Gaming]

  • The full screen experience (FSE) is now available on more Windows 11 handheld devices after its initial launch on ASUS ROG Ally and ROG Ally X. FSE gives you a console-style interface with the Xbox app, making handheld gaming clean and distraction-free. It improves performance by minimizing background tasks, so gameplay stays smooth and responsive. To turn it on, go to Settings > Gaming > Full screen experience, and set Xbox as your home app. You can open FSE from Task View or Game Bar, or configure your handheld to enter the full screen experience on startup.
Image: UI showing the Task View in Xbox full screen experience.
  • Feedback: Share your thoughts in Feedback Hub (WIN + F) under Gaming and Xbox > Gaming Handhelds.

[Click to Do]

The following changes and improvements are rolling out for Copilot+ PCs:
  • The Click to Do context menu is being updated with a streamlined design, making it simpler to locate what you need. Frequently used actions like Copy, Save, Share, and Open will now be easier to access directly from the context menu.
  • Whenever a large image or table appears on your screen, the context menu will automatically pop up, making it quicker and easier to access the actions and results you need.
Image: UI showing the context menu automatically appearing to provide quick access to common actions.

[Agent in Settings]

The following changes and improvements are rolling out for agent in Settings on Copilot+ PCs:
  • We’re introducing new experiences to make it easier to modify settings in both search and recommended settings.
  • Recommend Settings: Now allows for faster changes by showing an inline agent action for recently modified settings.
Image: Settings homepage with options to adjust recently modified settings under “Recommended settings”.
  • Search: We now show more available results in the search fly out to discover what you’re looking for and to allow you to quickly modify those settings. In cases where the settings can’t be adjusted further, a dialog lets you know why and provides an option to modify the settings as needed.
Image: More available results are displayed within search to take action and quickly modify those settings.
Image: When searching for “increase volume” within Settings, a dialog indicates that volume is already at the maximum setting and provides a slider to modify the value.

[Studio Effects] 

Making Windows Studio Effects available on additional cameras
  • We are working to bring the Windows Studio Effects experience from integrated laptop cameras to a broader range of camera hardware, helping you stay professional and look your best across more setups. On supported Copilot+ PCs, we are rolling out the ability to use Studio Effect’s AI-powered camera enhancements with an additional, alternative camera – such as a USB webcam or your laptop’s built-in rear camera.
Image: New option inside Settings to turn on the ability to use Windows Studio Effects on an additional camera, highlighted in a red box.
  • To get started, navigate to Settings > Bluetooth & devices > Cameras and select your preferred camera from the connected cameras list. Then, open the advanced camera options menu to find the new “Use Windows Studio Effects” toggle. Once enabled, you can now access and adjust Studio Effects directly from the camera settings page or via the quick settings menu in the taskbar.
  • For more information on Windows Studio Effects and device prerequisites, check out Windows Studio Effects – Microsoft Support.

[Drag Tray]

For Insiders with the Drag Tray feature:
  • Drag Tray now supports multi-file sharing, intelligently surfaces more relevant apps, and enables seamless file movement to a chosen folder.
Image: Drag Tray UI showing options to share to apps like WhatsApp, Paint, and Snapchat, and to move files to other folders.
  • We’ve added the ability to turn Drag Tray on/off from Settings > System > Nearby sharing.

[File Explorer]

  • Dark Mode: We’ve made improvements to the dark mode experience in File Explorer starting with key actions like copy, move, and delete dialogs. You’ll now see a consistent dark mode experience in:
    • The default and expanded state for copy, move, and delete dialogs
    • Progress bars and chart views
    • Dialogs for confirming states like skip, override, and file selection
    • Multiple confirmation and error dialogs
Image: The new discover dialog in dark mode.
Image: The new recycle bin dialog in dark mode.
Image: The new copy dialog in dark mode.
Image: The new replace or skip files dialog in dark mode.

Other File Explorer improvements include:
  • For Insiders with Copilot+ PCs, we’re updating the File Explorer Search Box placeholder text to raise awareness of the improved Windows Search experience introduced earlier this year. Learn more in our January blog post.
Image: Search for documents and images in File Explorer.
  • When you hover over a file in File Explorer Home, commands such as Open file location and Ask Copilot appear as quick actions. This experience is now supported for work and school accounts (Entra ID). This feature isn’t available in the European Economic Area.

[Mobile Device Settings]

  • You can now directly add and manage your mobile devices from Settings on your Windows PC by navigating to “Mobile Devices” under the Bluetooth & Devices section. This page allows you to view your mobile devices, add new mobile devices, and manage features such as using your device as a connected camera or accessing your device’s files in File Explorer.
Image: New settings page under Bluetooth & Devices to manage and add connected mobile devices.

[Desktop Spotlight]

  • We are trying out a change that adds “Learn more about this background” and “Next desktop background” to the context menu when you click on your desktop if you have Windows Spotlight chosen as your desktop background under Settings > Personalization > Background.

[Input] 

  • We’re moving more keyboard settings from Control Panel to Settings: the character repeat delay/rate setting is now available under Settings > Bluetooth & Devices > Keyboard, and the cursor blink rate setting is now available under Settings > Accessibility > Text cursor.
  • Keyboard backlight performance has improved on supported HID-compliant keyboards. Compatible keyboards display keys clearly in low-light environments, and the backlight adjusts to help conserve power.
  • The AltGr layer is now enabled for the Arabic 101 keyboard layout. The left Alt key continues to function as before, while the right Alt key acts as a modifier to access additional symbols. The first new symbol mapped to AltGr on the Arabic 101 layout is the Saudi Riyal currency symbol (AltGr+S). The Saudi Riyal currency symbol is also available on the touch keyboard’s symbols page and the expressive input panel’s currency tab. Users who switch languages with Alt+Shift can continue to use the left Alt+Shift or the general shortcut Windows logo key + Spacebar. The Arabic 102 and Arabic 102 AZERTY layouts are updated similarly.
  • Pens that support haptic feedback will now deliver tactile responses during certain interactions with the system UI. For example, you may feel vibrations when hovering over the close button or when snapping and resizing windows.

[Game Pass]

  • In this build we have modified references to Game Pass plans in Settings to reflect updated branding and benefits.
Image: Settings Home page reflecting new Game Pass branding and benefits.

[OneDrive]

  • We are rolling out the new OneDrive icon in Accounts and Homepages in Settings.

[Recovery] 

  • Quick Machine Recovery (QMR) now runs a one-time scan on PCs where the settings “Quick machine recovery” and “Automatically check for solutions” are both turned on. If a fix isn’t available immediately, QMR directs you to the best recovery options to get your PC running again.
Image: In Settings > System > Recovery, “Quick machine recovery” and “Automatically check for solutions” are enabled to run a one-time QMR scan by default.

[Advanced Settings] 

  • You can now turn on Virtual Workspaces in Advanced Settings. Virtual Workspaces allow you to enable or disable virtual environments such as Hyper-V and Windows Sandbox. To access Virtual Workspaces, go to Settings > System > Advanced.

Fixes gradually being rolled out

[File Explorer]

  • Fixed an issue where File Explorer might unexpectedly not show thumbnails for video files containing certain EXIF metadata.
  • Fixed an issue where an old white toolbar might sometimes appear unexpectedly in File Explorer.
  • Fixed an issue where when you right-clicked a file, the app icon next to the Open option might appear generic instead of showing the default app for that file type.
  • Fixed an issue where when opening a folder from another app (for example, opening the Downloads folder from a browser), your custom view — including sorting files by name, changing the icon size, or removing grouping — unexpectedly might reset back to default.
  • Fixed an issue where the body of the File explorer window might no longer respond to mouse clicks after invoking the context menu.
  • Fixed an issue where extracting very large archive folders (1.5 GB+) might fail with a “Catastrophic Error” (error code 0x8000FFFF).
  • Fixed an issue which could cause File Explorer to become unresponsive when opening Home.

[Settings]

  • Fixed an issue where Settings might become unresponsive when attempting to navigate to the Network & Internet section.
  • Fixed an issue where the search bar in Settings might become overlapped with the minimum and maximum buttons in the title bar.
  • Fixed an issue where the processor name in Settings > System > About might be truncated.

[Taskbar]

  • Fixed an issue where the automatically hide the taskbar setting might unexpectedly turn off, after seeing a message saying a toolbar is already hidden on this side of your screen.
  • Fixed an issue where Voice access wasn’t working correctly when attempting to interact with the taskbar (calling out a number might not invoke that item).
  • Fixed an issue where the taskbar icons might automatically scale to be smaller, although there was enough room left without scaling changes.
  • Fixed an issue where if you hovered over an app icon on the taskbar, and then selected the window preview, the preview might dismiss and not bring the window to the foreground.

[Start menu]

  • For Insiders with the new Start menu, the Windows Search panel now matches the new Start menu in size. This update aims to create a smoother transition when searching.

[Internet]

  • Made some underlying improvements to help address an issue which could lead to not having internet after resuming from disconnected standby. Please don’t hesitate to file feedback under Network and Internet in the Feedback Hub if you continue experiencing issues.

[Display and graphics]

  • Improved performance when apps query monitors for their full list of supported modes. Previously, this could lead to a momentary stutter on very high-resolution monitors. This work should help prevent and reduce stuttering in these scenarios.
  • Fixed an issue where All-in-one PCs might experience issues with their brightness slider, where it unexpectedly would revert to the original brightness when interacting with it.
  • Fixed an issue where recently certain games might display the message Unsupported graphics card detected, although a supported graphics card was being used.
  • Fixed an issue where apps and browsers might display partially unresponsive onscreen content when other maximized or full-screen apps were updating in the background. This issue was particularly noticeable when scrolling, as only parts of the window content might update.
  • Fixed an issue where text might not render correctly when editing content within a multiline text box in certain apps.

[Login and Lock screens]

  • Made some underlying changes to improve the performance of loading the taskbar after unlocking your PC from sleep. This also should help in cases where the password field and other sign-in screen contents didn’t render when transitioning from the lock screen to the sign-in screen after sleep.
  • Fixed an issue where it might be very slow the first time when signing into a new account.
  • Fixed an issue where when your lock screen was set to slide show, there might be a memory leak. Memory leaks can lead to performance or reliability issues over time.

[Narrator]

  • Fixed an issue where Narrator might take abrupt random pauses during continuous reading in Word docs.

[Windows Update]

  • Fixed an underlying issue which could cause "Update and shutdown" to not actually shut down your PC after updating.
  • Fixed an underlying issue which could lead to some Insiders seeing error 0x8007042B when attempting to upgrade to Canary from recent Windows 11 24H2 or 25H2-based builds.

[Task Manager]

  • Fixed an issue where Task Manager might open as expected, but if you tried to close it, it would remain running in the background, with the number of processes growing each time you opened Task Manager. This could also lead to Task Manager unexpectedly appearing on boot.

[Other]

  • Fixed an issue where certain apps might become unresponsive when launching the Open or Save Dialog.
  • Fixed an issue where interacting with the desktop might unexpectedly invoke Task View.

[Paint App update rolling out to Canary & Dev Channels] 

  • With Paint version 11.2511.281.0, we’re introducing the collapse toolbar feature in Paint. To get started, open Paint and click the chevron icon at the bottom-right of the ribbon to enable Automatically hide toolbar. Once the toolbar collapses, use the Show toolbar button to bring it back and switch tools. To hide it again, click the Hide toolbar button or anywhere outside the toolbar. When you’re ready to return to the default view, click the chevron icon and select Always show toolbar.
Image: Paint app GIF showing the collapsible toolbar.

Reminders for Windows Insiders in the Canary Channel

  • The builds we release to the Canary Channel represent the latest platform changes early in the development cycle and should not be seen as matched to any specific release of Windows. Features and experiences included in these builds may never get released as we try out different concepts and get feedback. Features may change over time, be removed, or replaced and never get released beyond Windows Insiders. Some of these features and experiences could show up in future Windows releases when they’re ready.
  • Many features in the Canary Channel are rolled out using Control Feature Rollout technology, starting with a subset of Insiders and ramping up over time as we monitor feedback to see how they land before pushing them out to everyone in this channel.
  • Some features may show up in the Dev and Beta Channels first before showing up in the Canary Channel.
  • Some features in active development we preview with Windows Insiders may not be fully localized and localization will happen over time as features are finalized. As you see issues with localization in your language, please report those issues to us via Feedback Hub.
  • To get off the Canary Channel, a clean install of Windows 11 will be required. As a reminder - Insiders can’t switch to a channel that is receiving builds with lower build numbers without doing a clean installation of Windows 11 due to technical setup requirements.
  • Check out Flight Hub for a complete look at what build is in which Insider channel.
Thanks,
Windows Insider Program Team

*Functionality will vary by device and market; text actions will be available across markets in select character sets. See aka.ms/copilotpluspcs.

Fixing Aspire's image problem: a look at container registry support in 13.1


The release of Aspire 13.1 is right around the corner (yes, it happens that fast), so I figured I’d dump my thoughts on what I spent a bulk of the time working on this release: improving custom image registry support in Aspire. One of the core primitives in the Aspire app model is the ability to define services and their resource dependencies. Another core primitive is to be able to project that representation of your service that you have in code to a cloud deployment. In turn, generating container images and pushing them to registries is a key aspect of materializing the app structure you model in Aspire into an actual cloud deployment.

As it turns out, a bunch of this work boiled down to one major learning: explicit is better than implicit. Let’s dig into why. Say you have an AppHost structure that looks like this:

var builder = DistributedApplication.CreateBuilder(args);

builder.AddAzureContainerAppEnvironment("env");

var database = builder.AddPostgres("myapp-db");

var api = builder.AddCSharpApp("api", "./api.cs")
    .WithHttpEndpoint()
    .WithReference(database);

builder.AddViteApp("frontend", "./frontend")
    .WithReference(api);

builder.Build().Run();

The AddAzureContainerAppEnvironment line here is doing a ton of heavy lifting. Behind the scenes, it registers a set of hooks that will inspect the app model for any compute resources and project them to their deployment target representations. In the case of a compute resource deployed to Azure Container Apps, its deployment target representation will consist of an Azure Bicep resource modeled in code that describes the actual configuration of the container app, including:

  • The container image associated with the container app instance that is running
  • Any ingress routing policies that need to be configured on the container app
  • Any environment variables that need to be injected into the application

In addition to creating these deployment target projections, the AddAzureContainerAppEnvironment API also injects an AzureContainerAppEnvironmentResource into the app model, which behind the scenes encapsulates the Bicep representation of the Azure Container App Environment. The environment consists of the ACA environment itself, the log analytics workspace associated with it, and the Azure Container Registry that images will be pulled and pushed from.

The problem with implicit registries

Here’s where things got tricky. The ACR was provisioned implicitly as part of the ACA environment, which created a few problems. First, it was hard to discover the implicit registry in the app model. ACR is provisioned as part of the ACA environment and we don’t get access to its outputs until the deployment of the entire environment completes. Second, since the registry was bundled with the environment, we couldn’t start pushing container images until the entire environment finished provisioning. That includes the ACA environment itself, the log analytics workspace, and even the Aspire dashboard container. Finally, if any part of the environment provisioning failed (say, the dashboard container hit an error or the log analytics workspace was misconfigured), the entire registry was unavailable. Image pushes would fail even though the ACR itself might have provisioned successfully.

Explicit is better than implicit

The fix? Model the registry explicitly and separately from the ACA environment. By splitting the registry out as its own resource:

  • We can start pushing container images as soon as the registry is provisioned, without waiting for the rest of the environment
  • Image pushes are no longer affected by errors in other parts of the environment provisioning
  • The registry is a first-class citizen in the app model, making it easier to reference and customize

Leaning into the theme of granularity, splitting the registry from the ACA environment all-up means that we can parallelize more of the deployment process. The more we can break down the deployment into independent steps, the faster and more resilient the overall process becomes. If you’ve been following my posts on Aspire Pipelines, you’ll recognize this pattern: granularity enables concurrency.

It’s worth noting that while I’ve mentioned Azure Container Apps here, this change applies to App Service Environments as well, which also need an ACR provisioned in order to support image pushes. The same benefits around explicit modeling and more granular provisioning apply there.

Modeling push as a pipeline step

OK, the explicit modeling of the registry is nice. Since explicit modeling is the name of the game, what else can we explicitly model? The action associated with pushing the container images.

As mentioned in previous posts, we now model the deployment process that an Aspire app is associated with into a set of pipeline steps. In previous releases, we explicitly modeled steps associated with provisioning Azure resources and building container images. Naturally, we can do the same for the action of pushing images. In this case, individual compute resources register their push behavior in pipeline steps on the resource. The registries that are modeled in the Aspire app model are responsible for discovering all these push steps and wiring them up to a top-level entrypoint. This means that when you run:

aspire do push

On the following AppHost:

var builder = DistributedApplication.CreateBuilder(args);

builder.AddAzureContainerAppEnvironment("env");

var api = builder.AddCSharpApp("api", "./api.cs")
    .WithHttpEndpoint();

var worker = builder.AddCSharpApp("worker", "./worker.cs");

builder.Build().Run();

Aspire will:

  • Provision your ACR
  • Build the container images associated with the compute resources mentioned
  • Push the images to the ACR that has been provisioned

This decoupling of registration and discovery means we can push images for individual resources without pushing others (aspire do push vs aspire do push-api), register multiple registries in the app model and associate them with different compute resources, and run push operations in parallel with other deployment steps that don’t depend on them.

Supporting non-Azure registries

OK! Last piece of the puzzle. Although Aspire has a first-class integration for Azure Container Registry, the same can’t be said for other registries like GitHub Container Registry and DockerHub. To close this gap, there’s a new ContainerRegistryResource that can be used to parameterize the registry endpoint and repository to support pushing to a variety of registries.

var builder = DistributedApplication.CreateBuilder(args);

builder.AddContainerRegistry("docker", "docker.io", "captainsafia");

var api = builder.AddCSharpApp("api", "./api.cs")
    .WithHttpEndpoint();

builder.Build().Run();

In the scenario above, images will be pushed to the registry on DockerHub. It’s also possible to use this model to push to GitHub Container Registries. In this sample repo, you’ll observe that the AppHost declares a parameterized Container Registry and we use some GitHub Actions-foo to push built images to the container registry associated with that GitHub repo.

- name: Push images with Aspire
  env:
    Parameters__registry_endpoint: ghcr.io
    Parameters__registry_repository: $
  run: aspire do push

Note: in the example above, the Docker registry is the assumed target for all resources because it’s the only registry declared in the app model. When multiple registries are declared, you’ll need to specify the target registry using WithContainerRegistry.
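
As a rough sketch of what that could look like (the WithContainerRegistry overload shown here is an assumption based on the API name above; AddContainerRegistry matches the shape used earlier):

var builder = DistributedApplication.CreateBuilder(args);

// Two registries in the same app model (hypothetical names and repositories).
var docker = builder.AddContainerRegistry("docker", "docker.io", "captainsafia");
var ghcr = builder.AddContainerRegistry("ghcr", "ghcr.io", "captainsafia");

// With multiple registries declared, each compute resource picks its target.
var api = builder.AddCSharpApp("api", "./api.cs")
    .WithHttpEndpoint()
    .WithContainerRegistry(docker);   // assumed overload taking the registry builder

var worker = builder.AddCSharpApp("worker", "./worker.cs")
    .WithContainerRegistry(ghcr);

builder.Build().Run();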

Fin

That’s the gist of it. Separate the registry from the environment, model push as a pipeline step, and introduce a ContainerRegistryResource for non-Azure registries. The theme here is the same as it’s been across the deployment story: more granularity means more control. To leave this on a cliff-hanger though: while the story around image pushes has gotten some love, the story for image pulls hasn’t gotten the same treatment yet. More on that in a future post… ;)


Managing Content Security in Telerik ASP.NET Core Applications


The new Telerik UI for ASP.NET Core project template gives you better protection—it’s how to make CSP work for you.

Let’s say, for example, that you’ve been happily building ASP.NET Core applications using the Progress Telerik Visual Studio extensions to generate the start point for your projects. All is going well until you start to build a new application, confident that you know what you’re doing and how all of this stuff works … except, this time, when you start your application, it doesn’t actually work.

If the symptoms are that images aren’t displayed or stylesheets aren’t applied or video doesn’t play, or web service requests aren’t issued, then the problem may be a Content Security Policy (CSP).

To confirm if the problem is CSP, just press F12 in your browser, check the messages in the console window and see if you find a message like this one (the asterisks represent message content that will vary, depending on the problem):

Refused to load the *** https://*** because it violates the following Content Security Policy directive: ***.

If you’re using the default Telerik template as a starting point for your project, the culprit is a <meta> tag that’s automatically included in the project’s Layout.cshtml file (the file is the default base for all your views). Here’s the current version of that tag, formatted to make it easier to read:

<meta http-equiv="Content-Security-Policy"
  content="default-src 'self';
  
    img-src 'self'
      data:;

    script-src 'self'
      https://kendo.cdn.telerik.com
      https://code.jquery.com/
      https://cdn.kendostatic.com
      https://unpkg.com
      https://cdn.jsdelivr.net 'nonce-Telerik-Examples';
 
    style-src 'self'
      https://kendo.cdn.telerik.com
      https://unpkg.com
      https://cdn.jsdelivr.net;

    font-src 'self'  
      https://unpkg.com;

    connect-src 'self'
      ws:
      http:;" />

You could make your error go away by just deleting the tag, but that’s probably a mistake. This Content Security Policy (CSP) <meta> tag helps protect against code injection and cross-site-scripting (XSS) attacks.

Fundamentally, this CSP helps protect your project by limiting the sources where your page can retrieve scripts, stylesheets and images (or any other resource) to the sources listed in the <meta> tag.

However, it also means that if you’re downloading resources from something that Progress Telerik can’t consider (your organization’s stylesheet site, for example), well, then your application won’t work.

To fix it, you first need to know how to read the CSP added in the <meta> tag.

Reading the Default <meta> Tag

A CSP like the one added in the <meta> tag is divided into multiple sections (called directives), separated by semicolons. A directive begins with the directive name and is followed by a series of space-delimited sources.

The key directive is default-src which, unless overridden in other directives, specifies where the page can download resources from (in CSP, default-src is considered the fallback when other directives aren’t provided). Here’s the default-src directive from the <meta> tag, listing one source—the keyword 'self':

default-src 'self';

The 'self' keyword limits downloads to resources from the same domain that your page was downloaded from. The other directives override that default list for specific types of content to add additional sources.

The img-src directive, for example, lets you specify sites that you want to enable for downloading images. The <meta> tag in your Telerik-generated project uses 'self' to download images from the same domain as the page but also adds any image using a URI that begins with data. (The data: URI lets you embed base64 encoded files directly into your page rather than having to download them separately.)

img-src 'self'
  data:;

The other directives specify sources for:

  • Downloading JavaScript files: Your page’s domain, the Telerik site, plus some public and content delivery network sites like unpkg.com (script-src). The nonce source allows scripts whose nonce attribute matches the random value the server generates and includes in the policy, so the browser only runs scripts the server intended:
script-src 'self'
  https://kendo.cdn.telerik.com
  …
  https://cdn.jsdelivr.net 'nonce-Telerik-Examples';
  • Downloading stylesheets: Same kind of sites as with script files (style-src)
  • Downloading fonts: Both your page’s domain and unpkg.com (font-src)
  • Calling WebSockets and web services: Requests to your page’s domain plus all WebSocket requests and HTTP/HTTPS requests (connect-src)

The result of this CSP is that every source not in those lists is blocked—that includes stylesheets, images, audio/video files and so on that aren’t coming from your application’s domain. Getting your application running, then, comes down to extending those lists to include those other sources.

Side note: When reading or modifying a CSP, allowing HTTP also allows HTTPS (and vice versa). An img-src directive that includes http://phvis.com, for example, would still allow this image tag to download its image over HTTPS:

<img src="https://phvis.com/SomePictureOnPetersSite.jpg" />

Tailoring the Telerik <meta> Tag

Typically, you’ll run into a CSP problem because you’re accessing a resource from somewhere other than your page’s domain (for example, some shared image/stylesheet site or a video from a streaming site). When you do run into this problem, you have two ways of fixing it:

  • Extend the default-src directive to include the other site. This makes sense if you’re downloading multiple resources from that site and have confidence in its security (a central corporate site with resources to be used on all of your organization’s pages).
  • Extend the directive for the specific resource that’s being blocked (e.g., adding the URL where your organization’s stylesheets/images are kept).

That last option may include adding a new directive if the resource isn’t one of the types already listed in the CSP (e.g., if the blocked resource is a video or audio file on another site). For a video or audio file, for example, you need to add the media-src directive to the CSP (see the directives list for other types of resources).

On the other hand, if your organization requires something more restrictive (the “Block All, Allow Some” strategy) or you’d prefer something more flexible (the “Allow All, Control Some” strategy), you may want to swap in a different CSP altogether.

Defining a Content Security Policy

If you want to go beyond setting CSP pages for individual pages, you can configure your web server to return a CSP as one of the response headers for any request. If you do, you’ll also have more options than are available using the <meta> tag. I’ll continue with the <meta> tag, however.
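
For reference, a minimal sketch of the header-based approach in ASP.NET Core middleware (the policy string is just the 'self'-only example; tighten it to match whichever strategy you pick below):

// Program.cs: emit a CSP header on every response instead of (or alongside) the <meta> tag.
app.Use(async (context, next) =>
{
    context.Response.Headers.Append(
        "Content-Security-Policy",
        "default-src 'self';");
    await next();
});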

The starting point for your CSP is a policy that has no content: A policy that specifies nothing allows everything. Continuing to use the <meta> tag as my example, a policy that allows everything would look like this:

<meta http-equiv="Content-Security-Policy" />

Or, more explicitly:

<meta http-equiv="Content-Security-Policy"
  content="" />

Building from that, you can implement one of three strategies: “Allow All, Control Some,” “Block All, Allow Some” or “Allow Some, Control Some.”

Allow All, Control Some

Implementing an “Allow All” strategy begins with omitting the default-src directive which provides the fallback for any omitted directive. Without a default-src, any directive you don’t provide allows everything.

Starting from there, you control just the resources you’re interested in by including directives for those resources. This example only blocks scripts (they must come from the page’s domain or my site) while allowing every other kind of resource:

<meta http-equiv="Content-Security-Policy"
  content="script-src 'self'
    http://phvis.com" />

This strategy makes sense when you feel there are only a small number of resources you need to control. This is the riskiest strategy (have you controlled all the right resources?) but will have the least impact on your applications.

Block All, Allow Some

Another strategy is to block everything and then allow specific resources. With this strategy, your starting point is to provide a CSP with a default-src with no value—that will block all resources:

<meta http-equiv="Content-Security-Policy"
  content="default-src;" />

If you want to be more explicit, you can use 'none' to make it clear that you’re blocking everything:

<meta http-equiv="Content-Security-Policy"
  content="default-src 'none';" />

You can now selectively allow some resources to be downloaded. This example lets in images from the page’s domain and my site but still blocks stylesheets, WebSocket access, web service requests and everything else not mentioned in this CSP:

<meta http-equiv="Content-Security-Policy"
  content="default-src 'none';
    img-src 'self'
      http://phvis.com" />

This is probably the safest strategy but will have the biggest impact on your applications (to be more exact: more applications will stop working). It will also lead to the longest CSP list and maintenance effort as you’ll keep having to add more directives as your sites access a wider variety of resources from a longer list of sources.

Allow Some, Control Some

The third strategy is to use default-src to specify any common source for all the resources you will allow (typically that’s just 'self'—your page’s domain). You then add directives for specific resources that require something more. This is the strategy that Telerik has used in its CSP.

As you add those other directives, remember that those new directives don’t extend default-src but, instead, override it. Essentially, that means that most/all of your additional directives will begin with whatever you have in the default-src directive.

This example allows any resource from my page’s domain and my site to be downloaded, except for images. For images, this policy allows images from the page’s domain, my site and the Telerik site:

<meta http-equiv="Content-Security-Policy"
  content="default-src 'self'
    http://phvis.com;

    img-src 'self'
      http://phvis.com
      http://Telerik.com" />

This strategy is, essentially, a trade-off between the other two: By providing a fallback in default-src that allows resources from your “safe” sites, it reduces the number of directives that you’ll have to add. You’ll now only need to add directives where you need to specify a different or longer set of resources than your default list of “safe” sites.

So: Yes, CSP can be annoying. But CSP helps protect you from being attacked. And, considering the embarrassment that comes from finding your site defaced or hacked, it’s probably worth implementing the strategy that saves you from explaining how that happened.


Learn more about Progress Telerik UI for ASP.NET Core and try it yourself, free for 30 days.


Blazor Basics: Blazor WebAssembly Using Local Storage in Offline Scenarios


We will learn how to leverage the Local Storage API to implement an offline mode for Blazor WebAssembly applications.

Even though connectivity is almost always available in 2025, there are still use cases where offline support is crucial. For example, consider a factory worker ticking off a checklist two floors underground, a metro employee inspecting cars or a wildlife rescuer off-grid.

Blazor WebAssembly is an ideal technology for such scenarios. The client application runs entirely in the web browser on the client’s device without requiring a permanent client-server connection.

In this article, we will learn how to use Local Storage to save data while the application is offline.

Challenges with Offline Mode

There are a few challenges when implementing an offline mode for any application. It’s especially true for web-based applications. When a user refreshes or closes the browser, any unsaved state will be lost.

In offline mode, we store data locally, but at some point, we need to transfer it to the server. What if there is already a more recent version available on the server?

There are many more challenges when implementing a fully-fledged offline solution for any (web) application.

In this article, we want to focus on the basics of getting started with Blazor WebAssembly and how to leverage Local Storage to temporarily store data client-side before synchronizing the state with the server when the application is back online.

Progressive Web Application (PWA) with Blazor WebAssembly

The application in this article is a progressive web application (PWA). Key features include: Offline support, installability, push notifications and improved performance due to client-side caching.

However, implementing a PWA is not the focus of this article. You can learn all about how to use a progressive web application with Blazor WebAssembly in a previous article.

What Is Local Storage and Why Use It?

Local Storage is a standard web API implemented by all modern web browsers to store key-value data on the client. There is an upper limit of around 5 MB per web application.

Other storage options include Session Storage, which is cleared when the user closes the browser tab, and IndexedDB, which allows for larger and more structured data storage.

We use Local Storage in this example for its simplicity and widespread browser support.

Implementing Local Storage Access

Now that we know what offline support is and how using the Local Storage API helps us temporarily save state on the client, we want to implement a simple solution.

In this article, we will work with the default Blazor web application Standalone template and implement a feature that lets the user store the counter value while being offline.

Screenshot: a browser running the Blazor WebAssembly application, with a Counter page showing the current counter value and two buttons to increment and save the value.

You can access the code used in this example on GitHub.

The project used in this example has an ASP.NET Core Web API project serving as a backend for the Blazor WebAssembly application. We won’t cover the details of that implementation.

Diagram: an ASP.NET Core backend application and a Blazor WebAssembly web application. In online mode, the Blazor app uses the server API; in offline mode, it uses the browser’s Local Storage API.

All we need to know is that, besides utilizing the Local Storage to temporarily save the client’s state in offline mode, we use the web API application to persist the data on the server when the application is online.

First, we add a Services folder and create an ILocalStorageService interface.

namespace BlazorWasmOffline.Services;

public interface ILocalStorageService
{
    Task SetItemAsync(string key, string value);
    Task<string?> GetItemAsync(string key);
}

This interface abstracts the implementation of the JavaScript interop code to access the Local Storage using its native JavaScript API.

The implementation in the LocalStorageService class looks like this:

using Microsoft.JSInterop;

namespace BlazorWasmOffline.Services;

public class LocalStorageService(IJSRuntime _js) : ILocalStorageService
{
    public async Task<string?> GetItemAsync(string key)
    {
        return await _js.InvokeAsync<string?>("localStorage.getItem", key);
    }

    public async Task SetItemAsync(string key, string value)
    {
        await _js.InvokeVoidAsync("localStorage.setItem", key, value);
    }
}

We inject an instance of the IJSRuntime type, which allows us to call JavaScript from .NET.

In the GetItemAsync method, we accept a key of type string and use that key to look for an item stored in the Local Storage using the localStorage.getItem JavaScript function.

In the SetItemAsync method, we accept a key and a value of type string and use the localStorage.setItem JavaScript function to write data to the Local Storage.
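
Local Storage only stores strings, so anything more complex than a single value needs to be serialized first. Here is a minimal sketch of how that could look with System.Text.Json (it assumes that CounterData, the DTO used later in this article, has a settable Counter property, and that localStorage is an injected ILocalStorageService):

using System.Text.Json;

// Sketch only: serialize a more complex object to JSON before storing it,
// because Local Storage values are always strings.
var data = new CounterData { Counter = 42 };
await localStorage.SetItemAsync("counter-data", JsonSerializer.Serialize(data));

// Read it back and deserialize, guarding against the key not existing yet.
var json = await localStorage.GetItemAsync("counter-data");
var restored = json is null ? null : JsonSerializer.Deserialize<CounterData>(json);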

Note: If you are completely new to JavaScript interop, I highly recommend learning more about it in the Blazor JavaScript Interop—Calling JavaScript from .NET article of this Blazor Basics series.

Let’s not forget to register the service with the dependency injection system in the Program.cs file:

builder.Services.AddScoped<ILocalStorageService, LocalStorageService>();

Monitoring the Online/Offline Status

Next, we need to be able to monitor the status of whether the application is online or offline. We also need a mechanism to watch for changes, or, in other words, to get notified about the status change.

We create a new connectionStatus.js file in the js folder of the wwwroot folder with the following code:

window.connectionStatus = {
    isOnline: () => navigator.onLine,
    registerOnlineOfflineEvents: (dotNetObjRef) => {
        window.addEventListener('online', () => dotNetObjRef.invokeMethodAsync('SetOnlineStatus', true));
        window.addEventListener('offline', () => dotNetObjRef.invokeMethodAsync('SetOnlineStatus', false));
    }
};

We create a connectionStatus object and add an isOnline function and a registerOnlineOfflineEvents function.

We can call the isOnline method in the .NET code to check if the application is currently online or offline.

The registerOnlineOfflineEvents function lets us know when the status changes from online to offline or vice versa.

Hint: Here, we use the opposite direction of the JavaScript interop and call .NET code from JavaScript. If you want to learn more about it, you can read the Blazor JavaScript Interop—Calling .NET from JavaScript article of the Blazor Basics series.

In the index.html file, we add a reference to load the connectionStatus.js file below the blazor.webassembly.js file reference:

<script src="js/connectionStatus.js"></script>

Implementing the OfflineComponentBase Class

With the connectionStatus script and the LocalStorageService in place, we are ready to implement the Counter component.

First of all, we create an OfflineComponentBase class, which abstracts the handling of the online/offline status.

using Microsoft.AspNetCore.Components;
using Microsoft.JSInterop;

public abstract class OfflineComponentBase(IJSRuntime JS) : ComponentBase
{
    protected bool IsOnline { get; set; }

    protected abstract void OnlineStatusChanged(bool status);

    protected override async Task OnAfterRenderAsync(bool firstRender)
    {
        if (firstRender)
        {
            await JS.InvokeVoidAsync("connectionStatus.registerOnlineOfflineEvents", 
                DotNetObjectReference.Create(this));
            IsOnline = await JS.InvokeAsync<bool>("connectionStatus.isOnline");
        }
    }

    [JSInvokable]
    public void SetOnlineStatus(bool status)
    {
        IsOnline = status;
        OnlineStatusChanged(status);
        StateHasChanged();
    }
}

The class contains an IsOnline property that we can conveniently access from the Counter component implementation when inheriting from this base class.

The abstract OnlineStatusChanged method allows us to implement code that will execute whenever the application’s online status changes.

In the OnAfterRenderAsync method, we use JavaScript interop to call the registerOnlineOfflineEvents function on the connectionStatus object and provide a reference to the current instance of the OfflineComponentBase class as its argument. We also check the online status and assign it to the IsOnline property.

The SetOnlineStatus method will be called from the JavaScript we previously implemented in the connectionStatus.js file. It’s important that we add the JSInvokable attribute to let Blazor know that we intend this method to be called from JavaScript.

In the implementation, we update the value of the IsOnline property, call the abstract OnlineStatusChanged method, and call the StateHasChanged method.
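
One detail the base class above doesn’t cover is cleanup: the DotNetObjectReference created in OnAfterRenderAsync is never released. A hedged variation (my own refinement, not part of the article’s sample) keeps the reference in a field and disposes it with the component:

using Microsoft.AspNetCore.Components;
using Microsoft.JSInterop;

// Variation of the base class that holds on to the DotNetObjectReference
// so it can be released when the component is disposed.
public abstract class DisposableOfflineComponentBase(IJSRuntime JS) : ComponentBase, IDisposable
{
    private DotNetObjectReference<DisposableOfflineComponentBase>? _selfReference;

    protected bool IsOnline { get; set; }

    protected abstract void OnlineStatusChanged(bool status);

    protected override async Task OnAfterRenderAsync(bool firstRender)
    {
        if (firstRender)
        {
            _selfReference = DotNetObjectReference.Create(this);
            await JS.InvokeVoidAsync("connectionStatus.registerOnlineOfflineEvents", _selfReference);
            IsOnline = await JS.InvokeAsync<bool>("connectionStatus.isOnline");
        }
    }

    [JSInvokable]
    public void SetOnlineStatus(bool status)
    {
        IsOnline = status;
        OnlineStatusChanged(status);
        StateHasChanged();
    }

    public void Dispose() => _selfReference?.Dispose();
}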

Implementing the Counter Component

Now, we’re finally ready to implement the Counter component.

First, we inject an instance of the ILocalStorageService type and inherit from the OfflineComponentBase class using Razor directives.

@inject ILocalStorageService LocalStorage
@inherits OfflineComponentBase

The component’s template looks almost like the default project, except for an additional button to save the counter value.

<PageTitle>Counter</PageTitle>

<h1>Counter</h1>

<p role="status">Current count: @_currentCount</p>

<button class="btn btn-primary" @onclick="IncrementCount">Increment Count</button>
<button class="btn btn-secondary" @onclick="SaveCount">Save Count</button>

In the code section, we add two fields and a constructor implementation:

@code {
    private int _currentCount = 0;
    private HttpClient _httpClient;

    public Counter(IJSRuntime _js) : base(_js)
    {
        _httpClient = new HttpClient();
        _httpClient.BaseAddress = new Uri("https://localhost:7071/api/");
    }
}

The _currentCount field holds the component state, and the _httpClient field has a reference to an HttpClient configured to call the API running on the server.

Hint: For simplicity, I put the Uri in the code. In production, you want to extract that into a configuration setting.
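
If you go that route, a minimal sketch for Program.cs could register a preconfigured HttpClient and read the address from wwwroot/appsettings.json (the ApiBaseAddress key is an assumption, not part of the sample project):

// Sketch only: read the API base address from configuration instead of
// hard-coding it, and register a ready-to-use HttpClient for injection.
builder.Services.AddScoped(sp => new HttpClient
{
    BaseAddress = new Uri(builder.Configuration["ApiBaseAddress"]
        ?? throw new InvalidOperationException("ApiBaseAddress is not configured."))
});

The component could then inject that HttpClient instead of creating its own instance in the constructor.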

Next, we implement the OnAfterRenderAsync lifecycle method:

protected async override Task OnAfterRenderAsync(bool firstRender)
{
    await base.OnAfterRenderAsync(firstRender);

    if (firstRender)
    {
        if (IsOnline)
        {
            var counterData = await _httpClient.GetFromJsonAsync<CounterData>("counter");
            if (counterData != null)
            {
                _currentCount = counterData.Counter;
            }
        }
        else
        {
            var counter = (await LocalStorage.GetItemAsync("counter")) ?? "0";
            _currentCount = int.Parse(counter);
        }
        StateHasChanged();
    }
}

We call the parent’s OnAfterRenderAsync method and handle the case when the firstRender argument is true.

In case we are online, we use the HttpClient object, load the value from the server and assign it to the internal component state.

If we are offline, we access LocalStorage and look for a value using the counter key.

In the SaveCount method, which gets called when the user presses the Save button, we also handle both the online and offline cases:

public async Task SaveCount()
{
    if (IsOnline)
    {
        await _httpClient.PostAsJsonAsync<int>("counter", _currentCount);
    }
    else
    {
        await LocalStorage.SetItemAsync("counter", $"{_currentCount}");
    }
}

Again, we use the HttpClient to send the count to the server if we are online. And if we are offline, we use the LocalStorage service to save the value on the client-side.

Last but not least, we implement the abstract OnlineStatusChanged method:

protected async override void OnlineStatusChanged(bool isOnline)
{
    if (isOnline)
    {
        await _httpClient.PostAsJsonAsync<int>("counter", _currentCount);
    }
}

With the current limited functionality, we only handle cases when the application gets back online after running offline. Again, we use the HttpClient and send the current value to the server.
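
A small refinement (my own suggestion, not part of the article’s sample) is to only push to the server when a value was actually saved while offline, for example by tracking a simple flag in the Counter component’s SaveCount and OnlineStatusChanged methods:

// Sketch only: remember whether the value was saved while offline,
// and only resend it to the server when the connection comes back.
private bool _hasOfflineChanges;

public async Task SaveCount()
{
    if (IsOnline)
    {
        await _httpClient.PostAsJsonAsync<int>("counter", _currentCount);
    }
    else
    {
        await LocalStorage.SetItemAsync("counter", $"{_currentCount}");
        _hasOfflineChanges = true;
    }
}

protected async override void OnlineStatusChanged(bool isOnline)
{
    if (isOnline && _hasOfflineChanges)
    {
        await _httpClient.PostAsJsonAsync<int>("counter", _currentCount);
        _hasOfflineChanges = false;
    }
}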

Testing the Offline Mode

With all parts in place, we now want to build and run the application and test our implementation.

Hint: I configured the solution to run both the WebAssembly client project and the ASP.NET Core Web API server project when pressing F5.

When the application starts in the browser, open the developer tools immediately using the F12 shortcut and switch to the Network tab.

Screenshot: the running Blazor web application with the developer tools open and the Network tab selected; the network connectivity is set to Offline.

Select the Offline mode and navigate to the Counter page. We see zero as the current count. You can increase the count as much as you like, then press the Save button to store the value.

Since we’re offline, the counter value will be stored in Local Storage. You can validate it by navigating to the Home page and back to the Counter page. The state is now loaded from the Local Storage.

Screenshot: the running Blazor web application with the developer tools open and the Application tab selected; the Local Storage contains a counter key with the current count as its value.

You can also open the developer tools and navigate to the Application tab, where you can access the data stored in the Local Storage. There should be an item with the key counter and the saved value.

Once the application goes back online, the counter value will be sent to the server.

In the developer tools, we can put the application back online, and you should see the network call that sends the counter value to the server.

Screenshot: the running Blazor web application with the developer tools open and the Network tab selected; an HTTP request with the current counter value as its payload is visible.

When you navigate to the Home page and back to the Counter page, you’ll be able to see that the value is loaded from the server.

Of course, debugging the application gives you even more insights into how everything works together.

Further Improvement with the Command Pattern

Instead of handling the two cases (online/offline) in the Counter page component and handling the event fired when the application is back online, you could implement the command pattern.

Using the command pattern, we create a command whenever the user presses the button to save the counter value.

The code deciding whether to store the data in the Local Storage or send it to the server is within the command handler and decoupled from the Blazor component.
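
A rough sketch of that idea could look like this (all of the names below, including IConnectionStatus, are assumptions for illustration rather than code from the sample project):

using System.Net.Http.Json;

// Sketch only: the component raises a command, and the handler decides
// whether the value goes to the server or into Local Storage.
public record SaveCounterCommand(int Counter);

public class SaveCounterCommandHandler(
    HttpClient httpClient,
    ILocalStorageService localStorage,
    IConnectionStatus connectionStatus)
{
    public async Task HandleAsync(SaveCounterCommand command)
    {
        if (await connectionStatus.IsOnlineAsync())
        {
            await httpClient.PostAsJsonAsync("counter", command.Counter);
        }
        else
        {
            await localStorage.SetItemAsync("counter", command.Counter.ToString());
        }
    }
}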

Additional Challenges with Offline Mode

When implementing offline mode in a real-world Blazor web application, we must consider a few additional things.

For example, let’s say we have a form with three fields, and at some point the server implementation changes and adds a fourth field to the data structure.

The data temporarily saved in the Local Storage will not work with the new server API. Therefore, we must implement measures to deal with such a situation gracefully.

A defensive option is to reject any request that does not fit the data structure and provide a generic message to the user stating that the API has potentially changed. The user has to re-enter the data.

A more advanced solution would be properly implementing API versioning and dealing with each API change in more detail, such as providing a message migration mechanism.
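
One simple defensive measure is to make the locally stored data self-describing by persisting a schema version next to the payload. A hedged sketch (the names and the envelope idea are mine, not from the article):

using System.Text.Json;

// Sketch only: wrap locally stored payloads in an envelope carrying a schema
// version, so stale data can be detected after the server API changes.
public sealed record StoredEnvelope(int SchemaVersion, string PayloadJson);

public static class OfflineEnvelope
{
    public const int CurrentSchemaVersion = 2; // bump whenever the server contract changes

    public static string Wrap<T>(T payload) =>
        JsonSerializer.Serialize(
            new StoredEnvelope(CurrentSchemaVersion, JsonSerializer.Serialize(payload)));

    public static T? Unwrap<T>(string stored)
    {
        var envelope = JsonSerializer.Deserialize<StoredEnvelope>(stored);

        // Reject anything written against an older schema; the caller can then
        // ask the user to re-enter the data or run a migration instead.
        return envelope is { SchemaVersion: CurrentSchemaVersion }
            ? JsonSerializer.Deserialize<T>(envelope.PayloadJson)
            : default;
    }
}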

Whatever solution you choose, keep in mind that by adding offline support to your application, you detach the moment the user inputs data from the moment the data is transmitted to the server.

Another challenge is temporal decoupling. Imagine two different users working with the same application. What if there is a shared shopping list, User A marks an item as completed, and User B does the same before User A’s update reaches the server?

Depending on the features of your application, it can be a lot more work to properly implement an offline mode for your application. Keep that in mind before lightheartedly announcing offline support to your users.

Conclusion

We learned how to access the browser’s Local Storage from a Blazor WebAssembly application and how to monitor the application’s online/offline status.

We also learned that implementing an offline mode in a real-world web application requires solving more challenges than temporarily storing state in Local Storage.

If you want to learn more about Blazor development, watch my free Blazor Crash Course on YouTube. And stay tuned to the Telerik blog for more Blazor Basics.
