
AWS growth climbs to 28% as Amazon’s big AI bets start to pay off


Amazon Web Services growth accelerated to 28% in the first quarter — its fastest pace in nearly four years — pushing Amazon’s results past Wall Street’s expectations and validating, at least for now, the company’s controversial $200 billion bet on artificial intelligence infrastructure.

Overall, Amazon posted sales of $181.5 billion, up 17%, and operating income of $23.9 billion, up 30%. Both topped guidance and exceeded Wall Street’s expectations of about $177 billion in revenue.

Profits were $30.3 billion, or $2.78 per diluted share. However, that included a $16.8 billion pre-tax gain on Amazon’s investment in Anthropic, which inflated the bottom-line numbers. Excluding that one-time gain, adjusted earnings per share would have been $1.61, just shy of analyst expectations of $1.62.

Amazon’s advertising business grew 24% to $17.2 billion in the quarter, and the company said advertising revenue topped $70 billion over the past 12 months.

In the core e-commerce business, unit sales grew 15%. Amazon CEO Andy Jassy called it the strongest growth rate since the waning days of the COVID-19 lockdowns. It was boosted in part by faster delivery, with more than 1 billion items shipped same-day or overnight in the U.S. so far this year.

Amazon is in “the middle of some of the biggest inflections of our lifetime,” Jassy said in a release.

Shares were down about 2% in initial after-hours trading.


Microsoft tops Wall Street expectations, reports accelerating Azure growth and $37B AI run rate


Microsoft’s Azure cloud business accelerated in the March quarter, growing 40% and topping the company’s own forecast, giving the tech giant a new answer to questions about its ability to translate record capital spending on AI infrastructure into stronger financial results.

The company’s revenue rose 18% to $82.9 billion, beating the $81.4 billion analyst consensus, and earnings per share jumped 23% to $4.27, above the $4.06 expected by Wall Street. 

AI run rate: In its earnings news release, Microsoft also disclosed that its AI business has reached an annual revenue run rate of $37 billion, up 123% from a year ago. It’s the first time the company has updated the figure since it reported a $13 billion run rate in January 2025.

Capex trends: Capital spending came down to $31.9 billion from $37.5 billion the previous quarter. Microsoft had said the decline was coming, reflecting the timing of data center construction and hardware deliveries rather than a slowdown in demand for cloud and AI services.

Copilot: Microsoft 365 Copilot now exceeds 20 million paid seats, up from 15 million in January. That puts about 4.4% of the company's commercial base on its paid enterprise AI plan.

Cloud overall: Microsoft Cloud revenue, which includes Azure, commercial Microsoft 365, LinkedIn, and Dynamics 365, rose 29% to $54.5 billion. The company's remaining performance obligations, a measure of contracted future revenue, stood at $627 billion, with a significant part of that backlog tied to OpenAI.

Elsewhere in Microsoft’s business:

  • Revenue in the More Personal Computing segment fell 1% to $13.2 billion, with Xbox content and services revenue down 5% and Windows OEM and devices revenue down 2%. Search advertising revenue grew 12%.
  • The Productivity and Business Processes segment, which includes Microsoft 365, LinkedIn, and Dynamics 365, grew 17% to $35 billion. LinkedIn revenue rose 12%, and Dynamics 365 revenue increased 22%.
  • The Intelligent Cloud segment, home to Azure, grew 30% to $34.7 billion, making it nearly equal in size to the productivity segment for the first time.

The results come three months after Microsoft’s stock dropped 10%, wiping out $357 billion in market value, despite the company beating expectations on revenue and earnings.

Investors focused on the record capital spending, a Copilot product that had reached just 3.3% of Microsoft 365’s commercial base at that time, and a revenue backlog heavily dependent on OpenAI.

The OpenAI relationship has shifted significantly since then.

This week, the two companies restructured their partnership, with OpenAI ending its exclusive commitment to Microsoft’s Azure cloud and gaining the ability to run its products on other platforms, notably Amazon Web Services. Microsoft, in turn, locked in its revenue-sharing arrangement and removed a clause that could have ended it if OpenAI had declared artificial general intelligence.


AI Lab Power Rankings

From: AIDailyBrief

Microsoft and OpenAI amended the partnership to remove cloud exclusivity, allowing OpenAI models on AWS while preserving Microsoft's equity stake and a long-term revenue share. Power rankings evaluate labs across compute, enterprise positioning, platform control, consumer reach, model strength, momentum, branded narrative, and X factor, with Google, OpenAI, and Microsoft leading the list. Key themes include the shift to an agentic era, looming token and compute shortages, the rise of desktop agents like Amazon Quick, and rapidly changing competitive strategies across Anthropic, Meta, Apple, Amazon, and X.

The AI Daily Brief helps you understand the most important news and discussions in AI.
Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Get it ad free at http://patreon.com/aidailybrief
Learn more about the show https://aidailybrief.ai/


Giving UI Reviews to Coding Agents - Playwright CLI

From: Playwrightdev
Duration: 1:40

🔗 Get started: https://github.com/microsoft/playwright-cli
📚 Docs: https://playwright.dev/agent-cli/introduction

New Playwright features make it easier to collaborate with coding agents like Copilot CLI and Claude Code. In this video, I show how to give visual feedback on your agent's work using the Playwright Dashboard — sketch annotations right on a screenshot, and your agent picks them up as both image and structured data.

Install Playwright CLI:
npm install -g @playwright/cli@latest
playwright-cli install --skills

⏱️ Chapters:
00:00 Intro
00:12 Kicking off the agent with a plan
00:25 How the agent uses Playwright CLI
00:36 Performing a UI review
01:13 Viewing all browser sessions at a glance
01:28 Wrap-up

🌐 Find Playwright elsewhere:
- Twitter/X: https://x.com/playwrightweb
- GitHub: https://github.com/microsoft/playwright
- LinkedIn: https://www.linkedin.com/company/playwrightweb

👍 Like & subscribe for more Playwright content.
💬 Questions? Drop them in the comments.

#Playwright #CopilotCLI #ClaudeCode #AIAgents #WebDev #Testing


The Microsoft 365 Copilot Frontier Program: What Executives and IT Leaders Actually Need to Know


If your business leaders are asking why they don't have the latest Copilot features they saw at Microsoft Ignite, someone has probably already said, "Have you looked at the Frontier program?"

That's where things get interesting.

Frontier is one of those programs that can be a real strategic advantage when you understand it well. But if you walk into it without the right governance in place, you're going to create headaches for your security and compliance teams fast.

Here's an honest, practical breakdown of what Frontier is, why organizations choose it, when you should wait, and what your governance teams need to know before you flip the switch.


What Is the Microsoft 365 Copilot Frontier Program?

Frontier is Microsoft's early access program that gives organizations access to the newest AI-powered Copilot capabilities before they reach general availability (GA).

Think of it as an on-ramp to the leading edge of Microsoft's AI roadmap.

When Microsoft's engineering teams build a new Copilot feature, it moves through a lifecycle:

  1. Private Preview — Invitation only. A small group of design partners tests the feature under close partnership. Not self-service, not for everyone.
  2. Frontier — Broader early access. Thousands of tenants can participate by opting in through the admin center. Still pre-GA, but real and working in your production environment.
  3. General Availability (GA) — Full Microsoft SLA coverage, support agreements, and regulatory compliance standards your organization depends on.

Here's the part worth repeating: Features in Frontier are real working capabilities inside your production Microsoft 365 environment, but they are in active development. Microsoft is still refining them based on customer feedback.

A few things to understand right away:

  • Frontier is opt-in. An M365 administrator has to enable it. It doesn't happen automatically.
  • When you enable Frontier, you're not getting a separate test environment. These features run inside your existing tenant alongside your GA Copilot capabilities.
  • The feature set changes. Features get added as they mature and graduate out of Frontier into GA over time.

Why Organizations Choose to Enable Frontier

There are four strategic reasons I see most often.

  1. Competitive velocity. In industries like financial services, healthcare, and professional services, staying ahead matters. Frontier lets your team start learning and building workflows around capabilities before your competition even knows they're coming. By the time a feature hits GA, your users are already fluent in it.
  2. Direct influence on the product. This one is underappreciated. Microsoft actively collects feedback from Frontier participants. When your users encounter something that doesn't work the way your workflows require, that feedback goes directly to the engineering team. Your organization gets a seat at the table in shaping how these features evolve.
  3. Organizational AI readiness. Participating in Frontier responsibly forces a healthy discipline. You need to mature your AI governance, adoption playbooks, and change management approach faster than you otherwise would. Many IT leaders I've talked to say that preparing for Frontier accelerated their overall Copilot adoption maturity because it forced them to get governance and IT strategy in place first.
  4. Access to differentiated capabilities. Some features that debut in Frontier are genuinely transformational. Copilot Cowork. New reasoning models. Deeper cross-application intelligence. If those capabilities tie directly to your business outcomes, waiting for GA means leaving real value on the table.

One more thing worth saying directly: the choice to enable Frontier is a leadership decision, not an IT decision. It's about balancing how fast you want to move with how mature your governance actually is. This is a joint venture between IT and the business.

Frontier vs. Private Preview vs. GA: Feature Lifecycle Explained

Here's a quick reference so you and your leadership team are using the right language:

Stage | Access | SLA | How to Join
Private Preview | Invitation only | None | Microsoft selects you
Frontier | Opt-in via admin center | Preview feature expectations (not GA SLA) | Self-service
General Availability | All licensed users | Full Microsoft SLA | Automatic

The key callout: Frontier features do not carry the SLA commitments that apply to GA services. That matters a lot in regulated environments.

When Frontier Makes Sense for Your Organization

Frontier is a strong fit if:

  • Leadership actively values being first to adopt, with the discipline to do it responsibly
  • You already have a mature M365 Copilot deployment and power users who are hungry for more
  • You have clear IT governance and change management processes in place
  • Your compliance posture allows for preview feature participation

When You Should Wait

I want to be equally honest about when Frontier is not the right move yet.

Wait if you're in a heavily regulated environment and haven't completed a compliance assessment. Preview features may not have completed all compliance certifications. Talk to your Microsoft account team before you enable anything.

Wait if your M365 baseline deployment is still maturing. Get the foundations right first.

Wait if you don't have a clear feedback path from end users. Without a channel for users to report back to IT and business leaders, Frontier creates frustration instead of value.

Wait if your IT team is already stretched. Frontier requires active engagement with release notes, user communication, and feedback loops. If capacity is already thin, this will add to the bottleneck.

With the right framing, Frontier isn't a risk. It's a governance responsibility.

Five Governance Checkpoints Before You Enable Frontier

This section will save your compliance team the most headaches. Work through all five before you flip the switch.

  1. Conduct a compliance assessment. Preview features may not have completed all compliance certifications. Work with your Microsoft account team to understand the compliance posture of specific Frontier features relevant to your industry.
  2. Define your governance scope. Frontier doesn't have to be all-or-nothing. You can enable it for a defined set of users using Microsoft 365 security groups while keeping the broader organization on GA capabilities. More on that in the admin center walkthrough below.
  3. Establish user communication protocols. Features can change quickly. Your users need to know what they're participating in, why their experience may differ from others, and how to submit feedback. ("Why does my UI look different than yours?" is a real conversation that happens all the time.)
  4. Set up a feedback and monitoring cadence. Review Frontier release notes regularly. Track what's live in your tenant and synthesize user feedback back to Microsoft.
  5. Plan for feature lifecycle transitions. Features can be updated, temporarily suspended, or graduated to GA. Your governance plan should address how you'll communicate changes and adjust workflows when that happens.

Think of governance here as a maturity accelerator, not a barrier.

How to Enable Frontier in the Microsoft 365 Admin Center

Here's exactly where to go and what to do.

Step 1: Navigate to Copilot Settings

Go to the Microsoft 365 admin center, navigate to the Copilot section, and select Settings. Click "View all" to see all settings on a single unified page. Use Ctrl+F to search for "Copilot Frontier."

Step 2: Scope Your Access

You'll have three options: enable for no one, for everyone, or for specific users. For most organizations, specific users is the right call. Set up a dedicated security group for your Frontier champions and assign access to that group only.

Step 3: Assign Frontier Agents

Enabling Frontier at the tenant level is just step one. You also need to assign specific Frontier agents to users. In the admin center, go to Agents > All Agents and search for "Frontier." From there, you can select individual agents (like Copilot Cowork) and assign them to your champion group or a subset of it.

This is the most common point of confusion: you can have Frontier enabled but still not have access to a specific agent like Cowork because you never assigned it. Both steps are required.

Step 4: Pull a Baseline Usage Report

Before your pilot starts, capture a baseline snapshot of your current Copilot usage. In the admin center, go to Reports > Usage > Microsoft 365 and look at the Copilot, Copilot Chat, and Agents tabs. Screenshot or export these. In four to eight weeks, you'll use this baseline to measure the impact of Frontier adoption across your pilot cohort and overall.

A Phased Rollout Model That Actually Works

Don't just turn Frontier on and hope for the best. Here's a five-phase model that turns it into a structured capability program.

Phase 1: Identify your Frontier champions. Target 50 to 200 users who are already Copilot power users, have a growth mindset toward AI, and can articulate business value from feature changes. These are your early adopters who will carry the signal back to the rest of the org.

Phase 2: Enable Frontier for your champion cohort. Follow the admin center steps above. Brief that group on what to expect, what's different, and how to submit feedback.

Phase 3: Evaluate and document. After four to eight weeks, pull up your Copilot usage dashboard and compare it to your baseline. Which features are driving measurable productivity gains? Document your findings.

Phase 4: Expand or adjust scope. Based on your champion cohort data, either expand to a broader user population or adjust scope if a specific feature is causing friction.

Phase 5: Establish steady-state governance. Formalize the feedback loop and user communication as a standard operating procedure within your Copilot governance framework. Start building documentation now so you're ready when features graduate to GA.

This approach turns Frontier from a feature toggle into a strategic capability program. That's where the real value shows up.

A Quick Decision Framework for Leadership

Before you bring this to your leadership team, run through these five questions:

  1. Do we have a clear AI governance framework in place?
  2. Are our Microsoft 365 GA deployments stable and delivering measurable value?
  3. Have our compliance and legal teams assessed preview feature participation?
  4. Do we have an identified Frontier champion cohort or IT bandwidth for a structured pilot?
  5. Is there a specific business outcome we're trying to accelerate?

If you answered yes to four or five of those, you're in a strong position to move forward.

If you have two or more no's, invest in those foundations first. Getting governance, bandwidth, and a clear use case in place before enabling Frontier isn't about slowing down; it's about setting yourself up to actually get value from it.

Bottom Line

The Microsoft 365 Copilot Frontier program is a strategic option for enterprise organizations that want to shape the future of AI productivity tools, not just consume them. But it's not for everyone, and it's not designed to be.

It's built for organizations that have the governance maturity, leadership alignment, and operational capacity to engage with early access AI responsibly.

When you do it right, Frontier can accelerate your AI program, sharpen your competitive edge, and give your organization a direct voice in how Microsoft AI evolves.

The tools are all right there in the admin center. It's just a matter of knowing where to look and using them intentionally.

Have questions about Frontier readiness or want to talk through your organization's Copilot governance strategy? Drop them in the comments or reach out directly.

Read the whole story
alvinashcraft
33 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

How to Integrate APIs Seamlessly in Flutter

1 Share

A battle-tested guide to HTTP clients, REST patterns, GraphQL, WebSockets, caching, and pagination — with production Dart code from 90+ client projects across four continents.


It was a Friday evening in October 2022, and I was sitting in a co-working space in Dubai Internet City, watching my client’s Flutter app crash on every third API call. The app was a luxury concierge service — restaurant reservations, yacht bookings, and private jet charters for high-net-worth individuals in the UAE. The backend was solid. The endpoints returned clean JSON. But the Flutter app was a disaster. Every network request was a raw http.get() call scattered across 47 widget files. No centralized error handling. No retry logic. No token refresh mechanism. When the access token expired, the app showed a white screen. When the server returned a 500, the app crashed.

I inherited that codebase from another developer. It took me three weeks to untangle it — ripping out every raw HTTP call, introducing a proper networking layer with Dio, building a repository pattern for every API domain, adding interceptors for authentication and error handling, and implementing offline caching with Hive.

That project changed how I approach API integration in Flutter permanently. Over 7+ years of freelancing — building mobile apps for clients in India, Dubai, Singapore, Berlin, Toronto, and about a dozen other cities — I have learned that the networking layer is the single most important architectural decision in a Flutter app. Get it wrong, and every feature inherits the fragility. Get it right, and everything else becomes straightforward.

This guide is the exact system I use on every Flutter project in 2026. Production code from real client projects, with the lessons that only come from watching things break. Let us get into it.

Choosing the Right HTTP Client

The first decision you face in any Flutter project is which HTTP client to use. There are three serious options, and I have shipped production apps with all of them. Each one is right for a different kind of project.

The http package is the Dart team's official, minimal HTTP client. It is a first-party package with a tiny dependency footprint, and it does exactly what it says: sends HTTP requests and returns responses. I used it exclusively for my first two years of Flutter development. It is fine for simple apps with a handful of endpoints — a portfolio app, a weather widget, a news reader that hits one API. But the moment you need interceptors, request cancellation, file uploads with progress tracking, or any form of middleware, you outgrow it fast.

Dio is where most professional Flutter developers land, and it is where I have stayed for the past four years. It is a powerful HTTP client built specifically for Dart that gives you interceptors (critical for auth token management), request cancellation, timeout configuration per request, FormData for file uploads, download progress callbacks, and a clean API for configuring base URLs, headers, and query parameters globally. Every production Flutter app I have built since 2021 uses Dio as its networking foundation.

Retrofit is a code generation layer that sits on top of Dio. You define your API endpoints as abstract Dart methods with annotations, run the build runner, and it generates the implementation. If you come from Android development with Java or Kotlin, this will feel instantly familiar — it is modeled directly after Square’s Retrofit for Android. I use it on larger projects where the API surface is big enough that the code generation saves meaningful time, typically anything with more than 30 endpoints.

Here is how I set up Dio as the foundation for a production app. This is the exact configuration I used for a fintech dashboard I built for a client in Singapore last year:

import 'package:dio/dio.dart';
// Assumes the dio_smart_retry package for RetryInterceptor.
import 'package:dio_smart_retry/dio_smart_retry.dart';

class ApiClient {
  late final Dio _dio;

  ApiClient({required String baseUrl, required TokenStorage tokenStorage}) {
    _dio = Dio(
      BaseOptions(
        baseUrl: baseUrl,
        connectTimeout: const Duration(seconds: 15),
        receiveTimeout: const Duration(seconds: 15),
        sendTimeout: const Duration(seconds: 30),
        headers: {
          'Content-Type': 'application/json',
          'Accept': 'application/json',
          'X-Client-Platform': 'flutter',
        },
        // Treat 4xx as data, not exceptions; only 5xx and transport
        // failures throw.
        validateStatus: (status) => status != null && status < 500,
      ),
    );

    _dio.interceptors.addAll([
      AuthInterceptor(dio: _dio, tokenStorage: tokenStorage),
      LogInterceptor(
        requestBody: true,
        responseBody: true,
        logPrint: (obj) => print('[API] $obj'),
      ),
      RetryInterceptor(dio: _dio, retries: 3, retryDelays: const [
        Duration(seconds: 1),
        Duration(seconds: 3),
        Duration(seconds: 5),
      ]),
    ]);
  }

  Dio get dio => _dio;

  Future<Response<T>> get<T>(
    String path, {
    Map<String, dynamic>? queryParameters,
    CancelToken? cancelToken,
  }) {
    return _dio.get<T>(
      path,
      queryParameters: queryParameters,
      cancelToken: cancelToken,
    );
  }

  Future<Response<T>> post<T>(
    String path, {
    dynamic data,
    CancelToken? cancelToken,
  }) {
    return _dio.post<T>(path, data: data, cancelToken: cancelToken);
  }

  Future<Response<T>> put<T>(
    String path, {
    dynamic data,
    CancelToken? cancelToken,
  }) {
    return _dio.put<T>(path, data: data, cancelToken: cancelToken);
  }

  Future<Response<T>> delete<T>(
    String path, {
    CancelToken? cancelToken,
  }) {
    return _dio.delete<T>(path, cancelToken: cancelToken);
  }
}

A few things to notice about this setup. The connectTimeout and receiveTimeout are both set to 15 seconds. I arrived at that number after extensive testing on real networks in Dubai, where mobile connectivity can be excellent in the city center and terrible two kilometers away. Fifteen seconds is long enough to survive a brief connectivity hiccup, short enough that the user is not staring at a spinner for an unreasonable amount of time. The sendTimeout is 30 seconds because file uploads for that Singapore project involved financial documents that could be several megabytes.

The validateStatus callback tells Dio not to throw exceptions for 4xx responses. I handle client errors (400, 401, 403, 404, 422) in my application logic, not in catch blocks. Only 5xx server errors and network failures become exceptions. This gives me much finer control over how different error states are presented to the user.

The interceptor stack is where the real power lives, and we will dig into that next.

Authentication Interceptors and Token Refresh

If your app requires user authentication — and almost every client app I have built does — then token management is the most critical piece of your networking layer. Get it wrong, and users get randomly logged out, requests fail silently, or worse, expired tokens are sent to the server and the user sees error screens they cannot recover from.

The pattern I use on every project is an interceptor that handles three things automatically: attaching the access token to every outgoing request, detecting 401 responses that indicate an expired token, and refreshing the token transparently before retrying the original request. The user never sees any of this. From their perspective, the app just works.

Here is the interceptor I built for a healthcare platform for a client in Berlin. The app handled sensitive patient data, so the token lifecycle was strict: 10-minute access tokens, 7-day refresh tokens, and immediate revocation on any suspicious activity.

import 'dart:async';
import 'package:dio/dio.dart';

class AuthInterceptor extends QueuedInterceptor {
  final Dio _dio;
  final TokenStorage _tokenStorage;
  bool _isRefreshing = false;
  final List<_PendingRequest> _pendingRequests = [];

  AuthInterceptor({
    required Dio dio,
    required TokenStorage tokenStorage,
  })  : _dio = dio,
        _tokenStorage = tokenStorage;

  @override
  void onRequest(
    RequestOptions options,
    RequestInterceptorHandler handler,
  ) async {
    final token = await _tokenStorage.getAccessToken();
    if (token != null && !options.path.contains('/auth/refresh')) {
      options.headers['Authorization'] = 'Bearer $token';
    }
    handler.next(options);
  }

  @override
  void onError(DioException err, ErrorInterceptorHandler handler) async {
    if (err.response?.statusCode != 401) {
      handler.next(err);
      return;
    }

    if (err.requestOptions.path.contains('/auth/refresh')) {
      await _tokenStorage.clearTokens();
      handler.next(err);
      return;
    }

    if (_isRefreshing) {
      final completer = Completer<Response>();
      _pendingRequests.add(
        _PendingRequest(
          requestOptions: err.requestOptions,
          completer: completer,
          handler: handler,
        ),
      );
      try {
        final response = await completer.future;
        handler.resolve(response);
      } catch (e) {
        handler.next(err);
      }
      return;
    }

    _isRefreshing = true;

    try {
      final refreshToken = await _tokenStorage.getRefreshToken();
      if (refreshToken == null) {
        await _tokenStorage.clearTokens();
        handler.next(err);
        return;
      }

      final freshDio = Dio(BaseOptions(
        baseUrl: _dio.options.baseUrl,
        headers: {'Content-Type': 'application/json'},
      ));

      final response = await freshDio.post(
        '/auth/refresh',
        data: {'refresh_token': refreshToken},
      );

      final newAccessToken = response.data['access_token'] as String;
      final newRefreshToken = response.data['refresh_token'] as String;

      await _tokenStorage.saveTokens(
        accessToken: newAccessToken,
        refreshToken: newRefreshToken,
      );

      // Retry the original request
      final retryOptions = err.requestOptions;
      retryOptions.headers['Authorization'] = 'Bearer $newAccessToken';
      final retryResponse = await _dio.fetch(retryOptions);
      handler.resolve(retryResponse);

      // Retry all pending requests
      for (final pending in _pendingRequests) {
        pending.requestOptions.headers['Authorization'] =
            'Bearer $newAccessToken';
        try {
          final pendingResponse = await _dio.fetch(pending.requestOptions);
          pending.completer.complete(pendingResponse);
        } catch (e) {
          pending.completer.completeError(e);
        }
      }
    } catch (refreshError) {
      await _tokenStorage.clearTokens();
      handler.next(err);
      for (final pending in _pendingRequests) {
        pending.completer.completeError(refreshError);
      }
    } finally {
      _isRefreshing = false;
      _pendingRequests.clear();
    }
  }
}

class _PendingRequest {
  final RequestOptions requestOptions;
  final Completer<Response> completer;
  final ErrorInterceptorHandler handler;

  _PendingRequest({
    required this.requestOptions,
    required this.completer,
    required this.handler,
  });
}

abstract class TokenStorage {
  Future<String?> getAccessToken();
  Future<String?> getRefreshToken();
  Future<void> saveTokens({
    required String accessToken,
    required String refreshToken,
  });
  Future<void> clearTokens();
}

The critical detail here is that AuthInterceptor extends QueuedInterceptor, not the regular Interceptor. This is a distinction that cost me two days of debugging on that Berlin project. When multiple requests fire simultaneously and all get 401 responses, a regular Interceptor will try to refresh the token for each one concurrently. You end up with three or four refresh token calls hitting the server at the same time, and if the backend invalidates the refresh token on use (which it should for security), only the first call succeeds. The rest fail, and the user gets logged out.

QueuedInterceptor serializes the error handling. The first 401 triggers a refresh. All subsequent 401s during that refresh are queued as pending requests. Once the refresh succeeds, every pending request is retried with the new token. From the user's perspective, there is a slight delay and then everything loads. No logout. No error screen. No lost state.

I also create a fresh Dio instance for the refresh call itself. If you use the same Dio instance that has the AuthInterceptor attached, and the refresh endpoint itself returns a 401 (because the refresh token has also expired), you end up in an infinite loop. The fresh instance has no interceptors, so a failed refresh just fails cleanly, and we clear the tokens and let the app redirect to the login screen.

The Repository Pattern with Comprehensive Error Handling

Raw Dio calls should never appear in your widgets, your state management, or your business logic. Every API domain in your app should be wrapped in a repository class that translates HTTP responses into typed Dart objects or meaningful error states. This is the single most impactful architectural pattern I have adopted in Flutter, and it came directly from my years of Android development where the repository pattern is practically mandatory.

The repository does three things. First, it abstracts the HTTP client so your business logic does not know or care whether the data comes from a REST API, a local cache, or a mock. Second, it handles every possible failure mode — network errors, timeout errors, server errors, validation errors, deserialization errors — and translates them into a sealed result type that your UI can pattern-match on cleanly. Third, it manages the mapping between raw JSON and your domain models, keeping serialization concerns out of your widgets entirely.

Here is the repository pattern I use on every project. This specific version is from a real estate listing app I built for a client in Toronto:

import 'package:dio/dio.dart';
import 'package:fpdart/fpdart.dart';

sealed class ApiFailure {
  final String message;
  final int? statusCode;
  const ApiFailure({required this.message, this.statusCode});
}

class NetworkFailure extends ApiFailure {
  const NetworkFailure({super.message = 'No internet connection'});
}

class TimeoutFailure extends ApiFailure {
  const TimeoutFailure({super.message = 'Request timed out'});
}

class ServerFailure extends ApiFailure {
  const ServerFailure({required super.message, super.statusCode});
}

class ValidationFailure extends ApiFailure {
  final Map<String, List<String>> fieldErrors;
  const ValidationFailure({
    required super.message,
    required this.fieldErrors,
    super.statusCode = 422,
  });
}

class UnauthorizedFailure extends ApiFailure {
  const UnauthorizedFailure({super.message = 'Session expired'});
}

class NotFoundFailure extends ApiFailure {
  const NotFoundFailure({super.message = 'Resource not found'});
}

class UnknownFailure extends ApiFailure {
  final Object? error;
  const UnknownFailure({super.message = 'Something went wrong', this.error});
}

typedef ApiResult<T> = Either<ApiFailure, T>;

abstract class BaseRepository {
  final ApiClient _client;

  BaseRepository(this._client);

  Future<ApiResult<T>> safeApiCall<T>(
    Future<Response> Function() call,
    T Function(dynamic data) mapper,
  ) async {
    try {
      final response = await call();

      switch (response.statusCode) {
        case 200:
        case 201:
          return Right(mapper(response.data));
        case 204:
          return Right(mapper(null));
        case 401:
          return const Left(UnauthorizedFailure());
        case 404:
          return const Left(NotFoundFailure());
        case 422:
          final errors = _parseValidationErrors(response.data);
          return Left(ValidationFailure(
            message: 'Validation failed',
            fieldErrors: errors,
          ));
        default:
          final message = _extractErrorMessage(response.data);
          return Left(ServerFailure(
            message: message,
            statusCode: response.statusCode,
          ));
      }
    } on DioException catch (e) {
      return Left(_mapDioException(e));
    } catch (e) {
      return Left(UnknownFailure(error: e));
    }
  }

  ApiFailure _mapDioException(DioException e) {
    return switch (e.type) {
      DioExceptionType.connectionTimeout ||
      DioExceptionType.sendTimeout ||
      DioExceptionType.receiveTimeout =>
        const TimeoutFailure(),
      DioExceptionType.connectionError => const NetworkFailure(),
      DioExceptionType.cancel =>
        const UnknownFailure(message: 'Request cancelled'),
      _ => ServerFailure(
          message: e.message ?? 'Server error',
          statusCode: e.response?.statusCode,
        ),
    };
  }

  Map<String, List<String>> _parseValidationErrors(dynamic data) {
    if (data is Map<String, dynamic> && data.containsKey('errors')) {
      final errors = data['errors'] as Map<String, dynamic>;
      return errors.map(
        (key, value) => MapEntry(key, List<String>.from(value as List)),
      );
    }
    return {};
  }

  String _extractErrorMessage(dynamic data) {
    if (data is Map<String, dynamic>) {
      return data['message'] as String? ??
          data['error'] as String? ??
          'Unknown error';
    }
    return 'Unknown error';
  }
}

class PropertyRepository extends BaseRepository {
  PropertyRepository(super.client);

  Future<ApiResult<List<Property>>> getListings({
    required int page,
    int perPage = 20,
    String? city,
    double? minPrice,
    double? maxPrice,
  }) {
    return safeApiCall(
      () => _client.get('/properties', queryParameters: {
        'page': page,
        'per_page': perPage,
        if (city != null) 'city': city,
        if (minPrice != null) 'min_price': minPrice,
        if (maxPrice != null) 'max_price': maxPrice,
      }),
      (data) {
        final list = data['data'] as List;
        return list.map((json) => Property.fromJson(json)).toList();
      },
    );
  }

  Future<ApiResult<Property>> getProperty(String id) {
    return safeApiCall(
      () => _client.get('/properties/$id'),
      (data) => Property.fromJson(data['data']),
    );
  }

  Future<ApiResult<Property>> createListing(CreatePropertyDto dto) {
    return safeApiCall(
      () => _client.post('/properties', data: dto.toJson()),
      (data) => Property.fromJson(data['data']),
    );
  }
}

The safeApiCall method is the heart of this pattern. Every API call in every repository goes through it. It catches every possible exception, maps every HTTP status code to a typed failure, and returns an Either type that forces the caller to handle both success and failure cases at the type level. You literally cannot forget to handle errors, because the compiler will not let you access the success value without first checking for failure.

I use the fpdart package for the Either type. Some developers prefer writing their own sealed Result class, and that works fine too. The important thing is that you never return nullable types from repositories and never throw exceptions that your UI layer has to catch. The failure cases are data, not exceptions.
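
If you do roll your own, a minimal version built on Dart 3 sealed classes looks something like the sketch below. The names (Result, Ok, Err) are illustrative; fpdart's Either gives you the same shape plus combinators like fold and map.

sealed class Result<T> {
  const Result();
}

class Ok<T> extends Result<T> {
  final T value;
  const Ok(this.value);
}

class Err<T> extends Result<T> {
  final ApiFailure failure;
  const Err(this.failure);
}

// Exhaustiveness checking forces callers to handle both cases:
String describe(Result<int> r) => switch (r) {
      Ok(:final value) => 'got $value',
      Err(:final failure) => 'failed: ${failure.message}',
    };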

The ValidationFailure class deserves special attention. Many APIs, particularly Rails and Laravel backends that I frequently integrate with, return 422 responses with field-specific error messages. A generic "validation failed" error is useless to the user. They need to know that the email field is invalid or the price is out of range. The fieldErrors map preserves that granularity so the UI can display inline errors next to the correct form fields.
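
Here is roughly what consuming that looks like at the call site. The show/redirect helpers are hypothetical placeholders for your own UI wiring; the point is that the sealed failure hierarchy lets a switch expression route every failure to a distinct response.

Future<void> submit(PropertyRepository repository, CreatePropertyDto dto) async {
  final result = await repository.createListing(dto);
  result.fold(
    (failure) => switch (failure) {
      // Surface field-level messages next to the offending inputs.
      ValidationFailure(:final fieldErrors) => showInlineErrors(fieldErrors),
      UnauthorizedFailure() => redirectToLogin(),
      NetworkFailure() || TimeoutFailure() => showRetryBanner(),
      _ => showErrorSnackbar(failure.message),
    },
    (property) => showSuccess(property),
  );
}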

JSON Serialization with Freezed and JsonSerializable

Manual JSON parsing is one of the most common sources of runtime crashes in Flutter apps. You write json['name'] as String and everything works until the server returns null for that field, or returns an integer where you expected a string, or changes the field name from created_at to createdAt in a backend refactor. Every one of these scenarios has caused a production crash in an app I was responsible for.

The solution is code generation with json_serializable and freezed. You define your model once, run the build runner, and get type-safe JSON parsing, immutable data classes, equality checks, copy-with methods, and union types for free. I resisted code generation for my first year of Flutter development because I thought it added unnecessary complexity. I was wrong. The 30 seconds it takes to run build_runner saves hours of debugging serialization bugs.

Here is a model from the real estate app in Toronto. The Property class handles nested objects, nullable fields, default values, date parsing, and custom enum serialization, all generated automatically:

import 'package:freezed_annotation/freezed_annotation.dart';

part 'property.freezed.dart';
part 'property.g.dart';

@freezed
class Property with _$Property {
  const factory Property({
    required String id,
    required String title,
    required String description,
    required double price,
    @JsonKey(name: 'property_type') required PropertyType propertyType,
    required Address address,
    @JsonKey(name: 'bedroom_count') required int bedroomCount,
    @JsonKey(name: 'bathroom_count') required int bathroomCount,
    @JsonKey(name: 'area_sqft') required double areaSqft,
    @Default([]) List<String> images,
    @Default([]) List<Amenity> amenities,
    @JsonKey(name: 'is_featured') @Default(false) bool isFeatured,
    @JsonKey(name: 'listed_at') required DateTime listedAt,
    @JsonKey(name: 'updated_at') DateTime? updatedAt,
    PropertyAgent? agent,
  }) = _Property;

  factory Property.fromJson(Map<String, dynamic> json) =>
      _$PropertyFromJson(json);
}

@freezed
class Address with _$Address {
  const factory Address({
    required String street,
    required String city,
    required String province,
    @JsonKey(name: 'postal_code') required String postalCode,
    required String country,
    required double latitude,
    required double longitude,
  }) = _Address;

  factory Address.fromJson(Map<String, dynamic> json) =>
      _$AddressFromJson(json);
}

@freezed
class PropertyAgent with _$PropertyAgent {
  const factory PropertyAgent({
    required String id,
    required String name,
    required String email,
    @JsonKey(name: 'phone_number') String? phoneNumber,
    @JsonKey(name: 'avatar_url') String? avatarUrl,
    @JsonKey(name: 'license_number') required String licenseNumber,
  }) = _PropertyAgent;

  factory PropertyAgent.fromJson(Map<String, dynamic> json) =>
      _$PropertyAgentFromJson(json);
}

@freezed
class Amenity with _$Amenity {
  const factory Amenity({
    required String id,
    required String name,
    required String icon,
    @JsonKey(name: 'category') required String category,
  }) = _Amenity;

  factory Amenity.fromJson(Map<String, dynamic> json) =>
      _$AmenityFromJson(json);
}

@JsonEnum(valueField: 'value')
enum PropertyType {
  @JsonValue('house')
  house('house'),
  @JsonValue('condo')
  condo('condo'),
  @JsonValue('townhouse')
  townhouse('townhouse'),
  @JsonValue('land')
  land('land'),
  @JsonValue('commercial')
  commercial('commercial');

  final String value;
  const PropertyType(this.value);
}

After running dart run build_runner build, this generates two files: property.freezed.dart and property.g.dart. The freezed file gives you immutable instances, == operator overrides, hashCode implementations, toString, and copyWith methods. The .g.dart file gives you fromJson and toJson methods that handle all the field mapping, type casting, and null checking.

The @JsonKey(name: 'property_type') annotation is something I use constantly. Dart convention is camelCase. Most REST APIs, especially those backed by Ruby, Python, or PHP, use snake_case. The annotation bridges that gap without requiring you to change either convention.

The @Default annotation handles optional fields with sensible defaults. When the server omits the images field, the generated code gives you an empty list instead of null. When is_featured is missing, you get false instead of a null-pointer exception. This eliminates an entire category of runtime errors.

A build.yaml at the project root can configure the generator globally: field_rename: snake eliminates the need for @JsonKey(name: ...) on every field, include_if_null: false omits null fields from toJson() for cleaner request bodies, and explicit_to_json: true ensures nested objects call their own toJson() methods instead of serializing as Instance of 'Address'.
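
For reference, this is the standard json_serializable options block (YAML, at the project root):

targets:
  $default:
    builders:
      json_serializable:
        options:
          field_rename: snake
          include_if_null: false
          explicit_to_json: true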

Pagination That Actually Works

Pagination is one of those features that sounds trivial and turns out to be surprisingly complex in practice. The basic concept is simple: fetch page 1, display it, when the user scrolls to the bottom fetch page 2. But real-world pagination has edge cases that will bite you if you do not handle them. What happens when the user pulls to refresh while page 3 is loading? What happens when an item is deleted and the indices shift? What happens when the server returns fewer items than requested — is that the last page, or did some items get filtered out server-side?

I have settled on a pagination controller that I carry from project to project. This version was originally built for an e-commerce app for a client in Singapore that had a catalog of over 40,000 products. Smooth, reliable pagination was not optional.

import 'package:flutter/foundation.dart';
import 'package:fpdart/fpdart.dart';

enum PaginationStatus { initial, loading, loaded, loadingMore, error, noMore }

class PaginationController<T> extends ChangeNotifier {
  final Future<ApiResult<PaginatedResponse<T>>> Function(int page, int perPage)
      _fetchPage;
  final int _perPage;

  PaginationController({
    required Future<ApiResult<PaginatedResponse<T>>> Function(
            int page, int perPage)
        fetchPage,
    int perPage = 20,
  })  : _fetchPage = fetchPage,
        _perPage = perPage;

  final List<T> _items = [];
  int _currentPage = 0;
  int _totalPages = 1;
  PaginationStatus _status = PaginationStatus.initial;
  ApiFailure? _lastError;

  List<T> get items => List.unmodifiable(_items);
  PaginationStatus get status => _status;
  ApiFailure? get lastError => _lastError;
  bool get hasMore => _currentPage < _totalPages;
  bool get isEmpty => _items.isEmpty && _status == PaginationStatus.loaded;
  bool get isInitial => _status == PaginationStatus.initial;

  Future<void> loadInitial() async {
    if (_status == PaginationStatus.loading) return;

    _status = PaginationStatus.loading;
    _lastError = null;
    notifyListeners();

    final result = await _fetchPage(1, _perPage);

    result.fold(
      (failure) {
        _status = PaginationStatus.error;
        _lastError = failure;
      },
      (response) {
        _items.clear();
        _items.addAll(response.items);
        _currentPage = response.currentPage;
        _totalPages = response.totalPages;
        _status = _currentPage >= _totalPages
            ? PaginationStatus.noMore
            : PaginationStatus.loaded;
      },
    );
    notifyListeners();
  }

  Future<void> loadMore() async {
    if (_status == PaginationStatus.loadingMore || !hasMore) return;

    _status = PaginationStatus.loadingMore;
    _lastError = null;
    notifyListeners();

    final nextPage = _currentPage + 1;
    final result = await _fetchPage(nextPage, _perPage);

    result.fold(
      (failure) {
        _status = PaginationStatus.error;
        _lastError = failure;
      },
      (response) {
        _items.addAll(response.items);
        _currentPage = response.currentPage;
        _totalPages = response.totalPages;
        _status = _currentPage >= _totalPages
            ? PaginationStatus.noMore
            : PaginationStatus.loaded;
      },
    );
    notifyListeners();
  }

  Future<void> refresh() async {
    _currentPage = 0;
    _totalPages = 1;
    await loadInitial();
  }

  void removeWhere(bool Function(T item) test) {
    _items.removeWhere(test);
    notifyListeners();
  }

  void updateItem(int index, T newItem) {
    if (index >= 0 && index < _items.length) {
      _items[index] = newItem;
      notifyListeners();
    }
  }
}

class PaginatedResponse<T> {
  final List<T> items;
  final int currentPage;
  final int totalPages;
  final int totalItems;

  const PaginatedResponse({
    required this.items,
    required this.currentPage,
    required this.totalPages,
    required this.totalItems,
  });
}

The PaginationStatus enum is the key to making the UI predictable. initial means the controller was just created and no data has been loaded. loading means the first page is being fetched. loaded means data is available and more pages exist. loadingMore means a subsequent page is being fetched. error means the last fetch failed. noMore means all pages have been loaded. Every state maps to a specific UI: skeleton loader for loading, regular list for loaded, bottom spinner for loadingMore, error banner with retry for error, "no more items" footer for noMore.

The loadMore method has a guard at the top: if we are already loading more or there are no more pages, it returns immediately. This prevents the duplicate-request bug that happens when ScrollController fires the end-of-list callback multiple times during a single frame. I learned this the hard way on the Singapore project — without that guard, scrolling quickly would fire three or four identical requests for the same page, and the product list would show duplicate items.

The refresh method resets the state and loads from page 1. This is wired to pull-to-refresh. It replaces the entire list with fresh data rather than prepending, which avoids the complexity of reconciling new items with an existing paginated list.

The removeWhere and updateItem methods handle optimistic UI updates. When the user deletes a product from their favorites, you remove it from the list immediately and fire the API call in the background. If the API call fails, you re-insert the item. This makes the UI feel instant even on slow networks.
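
For completeness, here is a condensed sketch of wiring the controller to a Flutter list. PropertyTile is a placeholder for your own row widget; everything else comes from the controller above.

import 'package:flutter/material.dart';

class PropertyList extends StatefulWidget {
  final PaginationController<Property> controller;
  const PropertyList({super.key, required this.controller});

  @override
  State<PropertyList> createState() => _PropertyListState();
}

class _PropertyListState extends State<PropertyList> {
  final _scroll = ScrollController();

  @override
  void initState() {
    super.initState();
    widget.controller.loadInitial();
    // Fire loadMore near the end of the list; the guard inside loadMore
    // makes duplicate scroll callbacks harmless.
    _scroll.addListener(() {
      if (_scroll.position.pixels > _scroll.position.maxScrollExtent - 300) {
        widget.controller.loadMore();
      }
    });
  }

  @override
  void dispose() {
    _scroll.dispose();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    return AnimatedBuilder(
      animation: widget.controller,
      builder: (context, _) {
        final c = widget.controller;
        if (c.isInitial || c.status == PaginationStatus.loading) {
          return const Center(child: CircularProgressIndicator());
        }
        return RefreshIndicator(
          onRefresh: c.refresh,
          child: ListView.builder(
            controller: _scroll,
            // One extra row hosts the loading-more footer.
            itemCount: c.items.length + (c.hasMore ? 1 : 0),
            itemBuilder: (context, index) {
              if (index >= c.items.length) {
                return const Padding(
                  padding: EdgeInsets.all(16),
                  child: Center(child: CircularProgressIndicator()),
                );
              }
              return PropertyTile(property: c.items[index]);
            },
          ),
        );
      },
    );
  }
}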

Caching Strategies for Offline-First Apps

API calls are expensive. They consume bandwidth, battery, and time. More importantly, they fail. The user’s phone loses signal in an elevator, the server goes down for maintenance, the CDN has a regional outage. If your app cannot function at all without a live connection to the server, you are going to lose users.

I learned this lesson viscerally on the Dubai concierge app I mentioned at the beginning. High-net-worth users in the UAE expect apps to work flawlessly, and they use them in places where connectivity is not always great — on yachts in the Arabian Gulf, in the basements of luxury malls, on long drives through the desert to Al Ain. The app had to work offline for at least basic browsing.

My standard caching approach uses Hive for local storage (it is fast, lightweight, and requires no native setup) and a cache-then-network strategy for read operations. When the user requests data, return the cached version immediately so the UI populates instantly, then fetch fresh data from the API in the background and update the UI when it arrives. If the API call fails, the user still has the cached data. The implementation lives in the repository layer.
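
A condensed sketch of that strategy with Hive is below. The box layout, the 10-minute TTL, and the stream-based API are illustrative choices for this example, not fixed conventions.

import 'package:hive/hive.dart';

class CachedPropertyRepository {
  final PropertyRepository _remote;
  final Box<Map> _box;
  static const _ttl = Duration(minutes: 10);

  CachedPropertyRepository(this._remote, this._box);

  // Emits cached data immediately (if present), then fresh data from the
  // network when it arrives. If the network call fails, the cached
  // emission is all the UI gets, which is the point.
  Stream<List<Property>> watchListings({required int page}) async* {
    final key = 'listings_p$page';
    final cached = _box.get(key);

    if (cached != null) {
      yield (cached['items'] as List)
          .map((j) => Property.fromJson(Map<String, dynamic>.from(j as Map)))
          .toList();
      final fetchedAt =
          DateTime.tryParse(cached['fetchedAt'] as String? ?? '');
      if (fetchedAt != null && DateTime.now().difference(fetchedAt) < _ttl) {
        return; // Cache is fresh enough; skip the network round trip.
      }
    }

    final result = await _remote.getListings(page: page);
    final fresh = result.fold<List<Property>?>((_) => null, (r) => r);
    if (fresh != null) {
      await _box.put(key, {
        'fetchedAt': DateTime.now().toIso8601String(),
        'items': fresh.map((p) => p.toJson()).toList(),
      });
      yield fresh;
    }
  }
}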

The key insight with caching is that not all data is equal. Static reference data (categories, regions, amenity types) can be cached for days. Volatile data (prices, availability) should be cached for minutes or not at all. User-generated data (favorites, saved searches) should be cached indefinitely and synced when connectivity returns. Your caching strategy should reflect these differences.

One critical rule: never cache authentication tokens in the same store as application data. Tokens belong in platform-secure storage (flutter_secure_storage), which uses the device's keychain (iOS) or keystore (Android) for hardware-level encryption.
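
In practice that means the TokenStorage interface from the auth section gets a flutter_secure_storage implementation along these lines (the key names are arbitrary):

import 'package:flutter_secure_storage/flutter_secure_storage.dart';

class SecureTokenStorage implements TokenStorage {
  final _storage = const FlutterSecureStorage();

  @override
  Future<String?> getAccessToken() => _storage.read(key: 'access_token');

  @override
  Future<String?> getRefreshToken() => _storage.read(key: 'refresh_token');

  @override
  Future<void> saveTokens({
    required String accessToken,
    required String refreshToken,
  }) async {
    await _storage.write(key: 'access_token', value: accessToken);
    await _storage.write(key: 'refresh_token', value: refreshToken);
  }

  @override
  Future<void> clearTokens() async {
    await _storage.delete(key: 'access_token');
    await _storage.delete(key: 'refresh_token');
  }
}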

GraphQL Integration with graphql_flutter

Not every API is REST. Over the past two years, I have worked with an increasing number of clients whose backends use GraphQL, particularly startups that use Hasura on top of PostgreSQL or backends written in Node.js with Apollo Server. The graphql_flutter package provides a client, a cache, and widget-level integration. I have used it on four production projects, including a social media analytics dashboard for a marketing agency in Toronto where GraphQL's ability to fetch deeply nested campaign data in a single request was a perfect fit.

Setting up the client is straightforward: create a GraphQLClient with an HttpLink, add an AuthLink for bearer token authentication, and configure a normalized cache for offline support. The widget integration provides Query and Mutation widgets that handle loading, error, and data states declaratively.
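
A minimal version of that setup looks like this; the endpoint URL and token lookup are placeholders:

import 'package:graphql_flutter/graphql_flutter.dart';

GraphQLClient buildGraphQLClient(Future<String?> Function() getToken) {
  final httpLink = HttpLink('https://api.example.com/graphql');

  final authLink = AuthLink(
    getToken: () async {
      final token = await getToken();
      return token == null ? null : 'Bearer $token';
    },
  );

  return GraphQLClient(
    link: authLink.concat(httpLink),
    // Normalized cache; HiveStore persists it for offline support and
    // requires initHiveForFlutter() at app startup.
    cache: GraphQLCache(store: HiveStore()),
  );
}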

Where GraphQL shines in Flutter is subscriptions. A subscription opens a WebSocket connection and pushes data to the client in real time. For the Toronto analytics dashboard, engagement counts updated live within a second, without any polling. The graphql_flutter package handles the WebSocket lifecycle, reconnection, and cache updates automatically.

The main drawback is code generation coupling. Unlike REST where models are decoupled from the transport layer, GraphQL queries are tightly coupled to the schema. The graphql_codegen package helps by generating typed query classes from your .graphql files. On larger projects, I always set this up.

WebSocket Real-Time Integration

Some features require data to flow from server to client continuously, not on demand. Chat messages, live locations, stock prices, collaborative editing, multiplayer game state — these all need WebSockets or a similar persistent connection.

I have built real-time features into five client projects. The most complex was a fleet management app for a logistics company that had 200 delivery vehicles in Berlin. Each vehicle sent GPS coordinates every 3 seconds, and the dispatch dashboard (a Flutter web app) showed all 200 vehicles moving on a map in real time. That is 200 updates every 3 seconds, or roughly 67 WebSocket messages per second hitting the client continuously.

Here is the WebSocket integration pattern I developed for that project. It handles connection lifecycle, automatic reconnection with exponential backoff, heartbeats to detect stale connections, and typed message parsing:

import 'dart:async';
import 'dart:convert';
import 'package:web_socket_channel/web_socket_channel.dart';

enum ConnectionStatus { disconnected, connecting, connected, reconnecting }

class RealtimeClient {
  final String _url;
  final String Function() _tokenProvider;
  final Duration _heartbeatInterval;
  final int _maxReconnectAttempts;

  WebSocketChannel? _channel;
  Timer? _heartbeatTimer;
  Timer? _reconnectTimer;
  int _reconnectAttempts = 0;
  ConnectionStatus _status = ConnectionStatus.disconnected;

  final _statusController = StreamController<ConnectionStatus>.broadcast();
  final _messageController = StreamController<RealtimeEvent>.broadcast();

  Stream<ConnectionStatus> get statusStream => _statusController.stream;
  Stream<RealtimeEvent> get messages => _messageController.stream;
  ConnectionStatus get status => _status;

  RealtimeClient({
    required String url,
    required String Function() tokenProvider,
    Duration heartbeatInterval = const Duration(seconds: 30),
    int maxReconnectAttempts = 10,
  })  : _url = url,
        _tokenProvider = tokenProvider,
        _heartbeatInterval = heartbeatInterval,
        _maxReconnectAttempts = maxReconnectAttempts;

  Future<void> connect() async {
    if (_status == ConnectionStatus.connected ||
        _status == ConnectionStatus.connecting) {
      return;
    }

    _setStatus(ConnectionStatus.connecting);

    try {
      final token = _tokenProvider();
      final uri = Uri.parse('$_url?token=$token');

      _channel = WebSocketChannel.connect(uri);
      await _channel!.ready;

      _setStatus(ConnectionStatus.connected);
      _reconnectAttempts = 0;
      _startHeartbeat();

      _channel!.stream.listen(
        _onMessage,
        onError: _onError,
        onDone: _onDone,
        cancelOnError: false,
      );
    } catch (e) {
      _setStatus(ConnectionStatus.disconnected);
      _scheduleReconnect();
    }
  }

  void subscribe(String channel) {
    _send({
      'type': 'subscribe',
      'channel': channel,
    });
  }

  void unsubscribe(String channel) {
    _send({
      'type': 'unsubscribe',
      'channel': channel,
    });
  }

  void sendMessage(String channel, Map<String, dynamic> payload) {
    _send({
      'type': 'message',
      'channel': channel,
      'payload': payload,
    });
  }

  Stream<T> on<T>(
    String eventType,
    T Function(Map<String, dynamic> data) mapper,
  ) {
    return messages
        .where((event) => event.type == eventType)
        .map((event) => mapper(event.data));
  }

  void _onMessage(dynamic raw) {
    try {
      final decoded = jsonDecode(raw as String) as Map<String, dynamic>;

      if (decoded['type'] == 'pong') return;

      _messageController.add(RealtimeEvent(
        type: decoded['type'] as String,
        channel: decoded['channel'] as String?,
        data: decoded['data'] as Map<String, dynamic>? ?? {},
        timestamp: DateTime.now(),
      ));
    } catch (e) {
      // Malformed message: log but do not crash.
    }
  }

  void _onError(Object error) {
    _setStatus(ConnectionStatus.disconnected);
    _stopHeartbeat();
    _scheduleReconnect();
  }

  void _onDone() {
    _setStatus(ConnectionStatus.disconnected);
    _stopHeartbeat();
    _scheduleReconnect();
  }

  void _startHeartbeat() {
    _heartbeatTimer?.cancel();
    _heartbeatTimer = Timer.periodic(_heartbeatInterval, (_) {
      _send({'type': 'ping'});
    });
  }

  void _stopHeartbeat() {
    _heartbeatTimer?.cancel();
    _heartbeatTimer = null;
  }

  void _scheduleReconnect() {
    if (_reconnectAttempts >= _maxReconnectAttempts) {
      _setStatus(ConnectionStatus.disconnected);
      return;
    }

    _setStatus(ConnectionStatus.reconnecting);
    _reconnectAttempts++;

    // Exponential backoff: 1s, 2s, 4s, ... capped at 30s.
    final delay = Duration(
      milliseconds: (1000 * _pow(2, _reconnectAttempts - 1))
          .clamp(1000, 30000)
          .toInt(),
    );

    _reconnectTimer?.cancel();
    _reconnectTimer = Timer(delay, connect);
  }

  double _pow(num base, int exponent) {
    double result = 1;
    for (int i = 0; i < exponent; i++) {
      result *= base;
    }
    return result;
  }

  void _send(Map<String, dynamic> message) {
    if (_status == ConnectionStatus.connected && _channel != null) {
      _channel!.sink.add(jsonEncode(message));
    }
  }

  void _setStatus(ConnectionStatus newStatus) {
    if (_status != newStatus) {
      _status = newStatus;
      _statusController.add(newStatus);
    }
  }

  Future<void> disconnect() async {
    _reconnectTimer?.cancel();
    _stopHeartbeat();
    await _channel?.sink.close();
    _channel = null;
    _setStatus(ConnectionStatus.disconnected);
  }

  void dispose() {
    disconnect();
    _statusController.close();
    _messageController.close();
  }
}

class RealtimeEvent {
  final String type;
  final String? channel;
  final Map<String, dynamic> data;
  final DateTime timestamp;

  const RealtimeEvent({
    required this.type,
    this.channel,
    required this.data,
    required this.timestamp,
  });
}

The exponential backoff in _scheduleReconnect is critical. Without it, if the server goes down, every connected client will try to reconnect simultaneously and repeatedly, creating a thundering herd that makes recovery harder. The backoff starts at 1 second, doubles with each attempt, and caps at 30 seconds. After 10 failed attempts, the client gives up and stays disconnected. In the Berlin fleet app, this meant that when the WebSocket server was restarted for a deployment, the 15 dispatcher clients would reconnect over a 30-second window instead of all hitting the server in the same millisecond.

The heartbeat mechanism solves a subtle problem: detecting dead connections. TCP does not always notify you when a connection drops, especially on mobile networks. Without heartbeats, the client can sit there thinking it is connected while the server has long since closed the socket. The heartbeat sends a ping every 30 seconds, and if the server does not respond with a pong, the onDone callback fires and triggers reconnection.

The on method provides a typed stream of events filtered by type. In the fleet app, the dispatch dashboard would do realtimeClient.on('location_update', VehicleLocation.fromJson) to get a stream of typed location objects that the map widget consumed directly. Different channels handled different vehicle fleets, and dispatchers could subscribe and unsubscribe to specific fleets without affecting the underlying WebSocket connection.

Testing API Integrations

Testing networking code is where many Flutter developers cut corners. The answer to “why bother?” became clear after a backend team in Singapore renamed a field from total_price to totalPrice on a Friday afternoon. My app crashed for every user who opened the order details screen.

My testing strategy has three layers. Unit tests for repositories use a mock HTTP client (http_mock_adapter with Dio) to verify that every success and error scenario maps to the expected result type. Integration tests use a local mock server to verify the full round trip: request serialization, transport, response deserialization. Contract tests compare the app's expected JSON structure against the actual API response in CI whenever the backend deploys.
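
As an example of the first layer, here is the shape of a repository test with http_mock_adapter. ApiClient.forTesting is a hypothetical test-only constructor that wraps an injected Dio instance; the rest is the real package API.

import 'package:dio/dio.dart';
import 'package:flutter_test/flutter_test.dart';
import 'package:http_mock_adapter/http_mock_adapter.dart';

void main() {
  test('404 response maps to NotFoundFailure', () async {
    final dio = Dio(
      BaseOptions(validateStatus: (s) => s != null && s < 500),
    );
    final adapter = DioAdapter(dio: dio);

    adapter.onGet(
      '/properties/abc',
      (server) => server.reply(404, {'message': 'Not found'}),
    );

    final repo = PropertyRepository(ApiClient.forTesting(dio));
    final result = await repo.getProperty('abc');

    expect(result.isLeft(), isTrue);
    result.fold(
      (failure) => expect(failure, isA<NotFoundFailure>()),
      (_) => fail('expected a failure'),
    );
  });
}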

Contract tests have saved me from production crashes at least six times. A well-tested networking layer means you can upgrade Dio, refactor repositories, or migrate from REST to GraphQL, and the tests will tell you immediately if anything broke.

Structuring Your API Layer: Putting It All Together

The individual pieces need to be assembled into a coherent architecture. I separate concerns into four layers. The data layer contains API client configuration and interceptors. The model layer contains freezed models and JSON serialization. The repository layer combines API calls with caching, error mapping, and transformation. The presentation layer consumes repositories through dependency injection and never touches HTTP or raw JSON.

This means when a client in Dubai needs a new endpoint, I add a repository method and a model. I do not touch the HTTP client, interceptors, or other endpoints. When a client in Toronto needs caching on an existing feature, I modify only the repository. The UI is untouched.

Dependency injection holds it together. I use Riverpod on most projects, but the principle works with GetIt or Provider. The API client is a singleton. Repositories depend on the client. Controllers depend on repositories. Testing becomes trivial — inject a mock repository and test business logic without any network involvement.
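
The wiring itself is a handful of providers. This sketch assumes the SecureTokenStorage implementation from the caching section and a placeholder base URL:

import 'package:flutter_riverpod/flutter_riverpod.dart';

final tokenStorageProvider = Provider<TokenStorage>(
  (ref) => SecureTokenStorage(),
);

final apiClientProvider = Provider<ApiClient>(
  (ref) => ApiClient(
    baseUrl: 'https://api.example.com',
    tokenStorage: ref.watch(tokenStorageProvider),
  ),
);

final propertyRepositoryProvider = Provider<PropertyRepository>(
  (ref) => PropertyRepository(ref.watch(apiClientProvider)),
);

// In widget tests, override the leaf provider with a mock:
// ProviderScope(
//   overrides: [propertyRepositoryProvider.overrideWithValue(mockRepo)],
//   child: const App(),
// )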

Common Mistakes I See (and Used to Make)

After auditing dozens of codebases, I keep seeing the same patterns cause problems. The most frequent is making API calls directly in widgets, which couples UI to networking, makes testing impossible, and duplicates error handling everywhere.

The second is ignoring cancellation. When the user navigates away mid-request, that call should be cancelled via Dio’s CancelToken to avoid the "setState() called after dispose()" error.
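
The fix is mechanical once you see it. This sketch ties a CancelToken to the widget's lifetime; the screen and endpoint are illustrative:

import 'package:dio/dio.dart';
import 'package:flutter/material.dart';

class PropertyDetailScreen extends StatefulWidget {
  final ApiClient apiClient;
  final String propertyId;
  const PropertyDetailScreen({
    super.key,
    required this.apiClient,
    required this.propertyId,
  });

  @override
  State<PropertyDetailScreen> createState() => _PropertyDetailScreenState();
}

class _PropertyDetailScreenState extends State<PropertyDetailScreen> {
  final _cancelToken = CancelToken();
  Map<String, dynamic>? _data;

  @override
  void initState() {
    super.initState();
    _load();
  }

  Future<void> _load() async {
    try {
      final response = await widget.apiClient.get<Map<String, dynamic>>(
        '/properties/${widget.propertyId}',
        cancelToken: _cancelToken,
      );
      if (!mounted) return; // Belt-and-braces against late completions.
      setState(() => _data = response.data);
    } on DioException catch (e) {
      if (e.type == DioExceptionType.cancel) return; // Expected on dispose.
      rethrow;
    }
  }

  @override
  void dispose() {
    // Aborts any in-flight request; Dio reports it as a cancel error.
    _cancelToken.cancel('screen disposed');
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    return _data == null
        ? const Center(child: CircularProgressIndicator())
        : Text(_data!['title'] as String? ?? '');
  }
}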

The third is not handling the full error spectrum. Developers handle the happy path and the generic error, but miss 401 (re-authentication needed), 403 (insufficient permission), 429 (rate limited), and 503 (maintenance). Each deserves a distinct UI response.

The fourth is trusting the API contract blindly. Backend teams rename fields, change types, and swap optionality. Freezed with @Default values and contract tests in CI are your best defense.

Performance Optimization for API-Heavy Apps

When your app makes dozens of API calls on a single screen — as the Singapore analytics dashboard did, pulling data from seven different endpoints — performance optimization becomes critical.

Request batching: Fire independent requests in parallel with Future.wait. This reduces screen load time from seven round trips to one.
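
For calls that return different types, Dart 3 record destructuring gives you the same parallelism with full type safety. DashboardRepository and its three methods are illustrative:

import 'dart:async';

Future<void> loadDashboard(DashboardRepository repo) async {
  // The record .wait extension runs all three futures in parallel and
  // preserves each one's static type; Future.wait would force a common
  // element type.
  final (revenue, orders, traffic) = await (
    repo.getRevenue(),
    repo.getOrders(),
    repo.getTraffic(),
  ).wait;
  // One parallel round trip instead of three sequential ones; each value
  // is a typed ApiResult to fold on per dashboard section.
}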

Response compression: Ensure your server supports gzip. For large JSON responses, gzip reduces payload size by 70 to 90 percent.

Lazy image loading: Use cached_network_image to load images as they scroll into view with disk caching. For the Toronto real estate app, this cut initial data transfer from 12 megabytes to 400 kilobytes per listing page.

Selective field fetching: Request only the fields you need. GraphQL does this natively; some REST APIs support sparse fieldsets via query parameters. The Singapore dashboard reduced response size by 85 percent by fetching only three fields instead of forty.

Smart polling: Adjust intervals based on app visibility using WidgetsBindingObserver — poll frequently in the foreground, infrequently in the background, and not at all when disposed.
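
A sketch of that pattern, with illustrative intervals and an injected onPoll callback:

import 'dart:async';
import 'package:flutter/widgets.dart';

class SmartPoller with WidgetsBindingObserver {
  final Future<void> Function() onPoll;
  Timer? _timer;

  SmartPoller(this.onPoll) {
    WidgetsBinding.instance.addObserver(this);
    _start(const Duration(seconds: 10));
  }

  void _start(Duration interval) {
    _timer?.cancel();
    _timer = Timer.periodic(interval, (_) => onPoll());
  }

  @override
  void didChangeAppLifecycleState(AppLifecycleState state) {
    switch (state) {
      case AppLifecycleState.resumed:
        _start(const Duration(seconds: 10)); // Foreground: poll often.
      case AppLifecycleState.paused:
        _start(const Duration(minutes: 5)); // Background: poll rarely.
      default:
        break;
    }
  }

  void dispose() {
    _timer?.cancel();
    WidgetsBinding.instance.removeObserver(this);
  }
}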

Looking Ahead: The Evolving Flutter API Landscape

Flutter’s networking ecosystem continues to mature. The Dart team is working on native HTTP/3 support, and the cronet_http and cupertino_http packages already let you use platform-native HTTP stacks that support HTTP/3 today. gRPC is gaining traction for Flutter apps communicating with microservice backends — I used it on a video streaming platform for a client in Berlin and the performance improvement over REST was significant. Server-Sent Events are emerging as a lighter alternative to WebSockets for one-way real-time data, particularly for AI API streaming responses.

Regardless of which transport you choose, the patterns in this guide remain the same. Abstract the transport behind a repository. Handle every error state explicitly. Cache aggressively. Test the contract between your app and the server. And never make raw HTTP calls from your widgets.

If this guide helped you build a better networking layer in your Flutter apps, I would appreciate a clap — or fifty. I write detailed, code-heavy guides like this one every week, drawing from real client projects and hard-won production lessons. Follow me here on Medium to get them in your feed. And if you have a Flutter project with API integration challenges, or if you are dealing with the exact kind of networking spaghetti I described at the beginning of this article, drop a comment below. I read every one, and I have learned as much from reader questions as from my own mistakes. Until next time — keep your interceptors clean and your tokens short-lived.


