Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Breaking down the facts about secure development with Power Platform


Today, organizations are being measured by how quickly they can innovate. Whether it’s launching new digital experiences, streamlining operations, or responding to customer needs in real time, the ability to move fast has always been a competitive differentiator. And it has only grown in importance in the agentic era. But speed alone isn’t enough. Innovation must be scalable, secure, and sustainable.

Microsoft Power Platform is designed to meet that challenge. It empowers teams to build solutions faster, automate more processes, and scale across the business within a framework that puts security and governance first. With AI-ready, enterprise-grade tools, from Copilot-assisted development to intelligent threat detection and posture management, the platform helps organizations move with both agility and control.

Let’s break down the facts about building secure, modern applications.

Fact: Low code does not mean low security

Despite the ever-growing usage and strong ROI, there are still people who think that low-code tools are not built for enterprise-grade applications. Power Platform proves otherwise by delivering a comprehensive, layered security model designed to meet the demands of large organizations. As part of a managed security approach, the platform integrates governance and security controls directly into the development lifecycle, ensuring that policies are consistently applied across environments.

From identity and access management to data protection and network security, Power Platform provides native capabilities that reduce risk without slowing innovation. Features like role-based access control, conditional access for individual apps, and data loss prevention policies are all included. Azure Virtual Network (VNet) helps keep apps and data private by creating a secure connection that blocks public internet access and limits traffic to only trusted sources.

Visibility and access control are central to this approach. Power Platform includes tenant-level analytics and inventory tracking that allow IT teams to monitor what’s being built, which connectors are in use, and whether apps are operating within approved environments. Advanced connector policies complement these tools by enforcing data boundaries and preventing unauthorized connections. With tools like IP filtering, cookie binding, and role-based permissions, IT can ensure that only the right users have access to sensitive data. This helps prevent shadow IT before it starts, giving teams a secure space to innovate while ensuring IT retains oversight.

The platform’s approach to security also extends to AI and agents. Security is enforced across all components of the platform, including apps and AI agents. As organizations adopt tools like M365 Copilot and Copilot Studio, Power Platform provides a secure foundation for building and deploying AI agents. These agents follow existing data loss prevention policies, access controls, and network protections, ensuring AI adoption does not create new exposure.

Power Platform also provides the flexibility to extend Copilot Studio agent protection beyond default safeguards with additional runtime protection. Organizations can choose to integrate additional monitoring systems such as Microsoft Defender, custom tools, or other security platforms for a defense-in-depth approach to agent runtime security.

Centrica, the UK’s largest retailer of zero-carbon electricity, is a good example of secure low-code innovation. With over 800 Power Platform solutions and 15,000 users, Centrica maintains enterprise-grade governance by embedding security, oversight, and controls into every stage of development.

Accenture also demonstrates how Power Platform helps reduce risk at scale. By giving more than 50,000 employees the ability to build within defined guardrails, the company reduced demand for short-term IT projects by 30%. Their approach to low-code governance helped them gain visibility into platform activity while supporting global collaboration. As one Accenture executive put it, “For us, we define shadow IT as things we cannot see or control when we need to. By standing up the platform and inviting our people to create and build—at its very core we have gained visibility into what people are doing and how they are connecting, which starts governance at the platform level.”

Fact: You do not have to outsource to be compliant

There is a perception that distributed development models increase compliance risk. Power Platform addresses this with centralized administration and clear visibility into who is building, what they are building, and how data is being used.

From the Power Platform admin center, IT teams can configure environments, enforce policies, and monitor usage across the entire organization. Tools like Dataverse audit logging, Microsoft Purview integration, and Lockbox support provide deep visibility into sensitive operations and data access.

Purview enhances compliance by enabling data classification, sensitivity labeling, and activity tracking across Power Platform environments. It also helps organizations enforce retention policies and ensure data governance requirements are met, supporting alignment with global regulations like GDPR and HIPAA.

AI capabilities introduce new governance needs, which Power Platform meets with built-in support for risk assessment and proactive recommendations. Copilot capabilities also assist admins in identifying misconfigurations and streamlining compliance reporting.

Power Platform also integrates with Microsoft Sentinel and solution checkers to detect anomalies, surface vulnerabilities, and alert administrators to unusual behavior. Security posture management tools help teams assess and adjust configurations over time, helping organizations scale AI responsibly while maintaining strong governance.

PG&E is a case in point. With more than 4,300 developers and 300 Power Platform solutions, the company has embedded governance and risk management into its development lifecycle. This approach has helped PG&E achieve more than $75 million in annual savings, while ensuring that compliance and oversight remain strong.

Fact: You are not alone in administering the platform. You have guidance and support.

Another misconception is that managing low-code platforms at scale requires external tools or consultants. Power Platform includes everything needed to govern, secure, and scale app development from within your organization.

IT admins can use the Power Platform admin center and advisor to receive AI-driven, real-time recommendations tailored to their environment. These insights help assess environment health, refine governance policies, and proactively manage security posture. Advisor also provides a security score, giving teams a clear view of how well they are securing their environments and a concrete way to demonstrate progress and accountability to leadership.

The platform is designed to adapt to each organization’s structure and needs. Recommendations can be dismissed when covered by other controls, and environment groups allow governance to be tailored to specific business units or departments. This flexibility ensures that security doesn’t get in the way of progress but works alongside it.

Advanced features like test automation, environment isolation, and integrated observability help maintain consistent performance. VNet integration allows organizations to connect securely to on-premises systems without exposing resources to the public internet.

One leading automotive manufacturer highlights these capabilities. The company used VNet support in Power Platform to securely connect AI agents to internal systems without relying on an on-premises data gateway. The result was faster deployment, better compliance with internal security policies, and more than 3,000 hours saved through improved data access.

Start building secure, scalable solutions

Foster innovation while still maintaining security and governance principles. Microsoft Power Platform gives IT leaders and developers the ability to move quickly while maintaining the control their organizations require. With built-in governance, privacy protections, and AI-powered insights, teams can confidently scale low-code development without introducing risk. You no longer have to choose between innovation and security. With Power Platform, you can deliver both.

Explore real-world success stories and best practices. Visit the Power Platform site and follow this blog for the next article in the series breaking down the facts of modern development.

The post Breaking down the facts about secure development with Power Platform appeared first on Microsoft Power Platform Blog.


Author of Systemd Quits Microsoft To Prove Linux Can Be Trusted

Lennart Poettering has left Microsoft to co-found Amutable, a new Berlin-based company aiming to bring cryptographically verifiable integrity and deterministic trust guarantees to Linux systems. He said in a post on Mastodon that his "role in upstream maintenance for the Linux kernel will continue as it always has." Poettering will also continue to remain deeply involved in the systemd ecosystem. The Register reports: Linux celeb Lennart Poettering has left Microsoft and co-founded a new company, Amutable, with Chris Kuhl and Christian Brauner. Poettering is best known for systemd. After a lengthy stint at Red Hat, he joined Microsoft in 2022. Kuhl was a Microsoft employee until last year, and Brauner, who also joined Microsoft in 2022, left this month. [...] It is unclear why Poettering decided to leave Microsoft. We asked the company to comment but have not received a response. Other than the announcement of systemd 259 in December, Poettering's blog has been silent on the matter, aside from the announcement of Amutable this week. In its first post, the Amutable team wrote: "Over the coming months, we'll be pouring foundations for verification and building robust capabilities on top." It will be interesting to see what form this takes. In addition to Poettering, the lead developer of systemd, Amutable's team includes contributors and maintainers for projects such as Linux, Kubernetes, and containerd. Its members are also very familiar with the likes of Debian, Fedora, SUSE, and Ubuntu.

Read more of this story at Slashdot.


What's new in Astro - January 2026

January 2026 - Astro joins Cloudflare, Astro v6 beta is released, and more!

Model Context Protocol (MCP): Building and Debugging Your First MCP Server in .NET


In my earlier Microsoft Agent series of posts, we’ve explored how AI agents can be extended using function tools and plugins.

In this post, we look at the Model Context Protocol (MCP), the emerging standard that lets AI assistants discover and call tools exposed by your applications.

Specifically, we cover the following:

  • What is MCP
  • Why MCP matters for AI applications
  • Setting up an MCP server in ASP.NET Core
  • Transport options: SSE vs STDIO
  • Testing and debugging your MCP server
  • Configuring popular MCP clients

 

A practical walkthrough is also included that shows how to set up a local MCP server in Visual Studio, then reference and call it from VS Code.

Let’s dig in.

~

What Is MCP

The Model Context Protocol (MCP) is an open standard that enables AI assistants to discover and invoke tools exposed by external servers.

You can think of it as a universal adapter that lets any MCP-compatible client communicate with any MCP-compatible server.

Rather than building custom integrations for each AI tool, you build one MCP server and multiple clients can consume it.

In some ways, it reminds me of older web services. ASMX? WCF? Remember those?

The protocol uses JSON-RPC 2.0 over different transport mechanisms, making it both standardised and flexible.
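To make that concrete, here is roughly what a tool invocation looks like on the wire. The tool name and arguments below are illustrative, not prescribed by the protocol; a request uses the standard tools/call method:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search",
    "arguments": { "query": "Getting Started" }
  }
}
```

and the server replies with a result carrying the tool’s content:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "content": [
      { "type": "text", "text": "To authenticate with the API, include your API key..." }
    ],
    "isError": false
  }
}
```

Because every MCP client and server speaks this same envelope, the transport underneath (HTTP/SSE or STDIO) can vary without changing the messages.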

~

Why MCP Matters

Traditional AI integrations require custom code for each platform. If you want your documentation search to work in Claude Desktop, Cursor, or VS Code Continue, you’d typically need three separate integrations.

MCP changes this. Build once, connect everywhere.

Key benefits include:

  • Tool Discovery: Clients automatically discover what tools your server offers
  • Standardised Communication: JSON-RPC 2.0 provides a well-defined message format
  • Multiple Transports: Choose HTTP/SSE for development, STDIO for production
  • Growing Ecosystem: Major AI tools are adopting MCP support

 

Ideal if your organization holds unique IP and you want to extend your solution’s footprint to AI agents or AI-enabled tools, services, and IDEs.

~

Setting Up an MCP Server in ASP.NET Core

Let’s build an MCP server that exposes a documentation search tool. We’ll use Microsoft’s MCP SDK.

First, add the required NuGet packages:

dotnet add package ModelContextProtocol --prerelease
dotnet add package ModelContextProtocol.AspNetCore --prerelease

 

Next, configure the MCP server in Program.cs:

var builder = WebApplication.CreateBuilder(args);

// Configure MCP based on environment
if (builder.Environment.IsDevelopment())
{
    builder.Services.AddMcpServer()
        .WithHttpTransport()  // SSE for dev testing
        .WithToolsFromAssembly();
}
else
{
    builder.Services.AddMcpServer()
        .WithStdioServerTransport()  // STDIO for production
        .WithToolsFromAssembly();
}


var app = builder.Build();


// Map the MCP endpoint
app.MapMcp();
app.Run();

 

The key decision here is the transport mechanism.  This depends on your deployment model.

During development, HTTP/SSE is easier to debug.

HTTP/SSE is ideal when your MCP server runs as a web API that multiple clients can connect to.

STDIO is used when a client like Claude Desktop spawns and manages the MCP server process directly via stdin/stdout – common for simple local-only tools.

For production deployments to Claude Desktop, STDIO is preferred.
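For reference, a STDIO registration for Claude Desktop lives in its claude_desktop_config.json file. A minimal sketch might look like the following; the server name and the path to the published DLL are hypothetical, and pointing at a compiled binary (rather than dotnet run) avoids build output polluting stdout:

```json
{
  "mcpServers": {
    "my-docs": {
      "command": "dotnet",
      "args": ["C:\\path\\to\\publish\\MyApp.MCP.dll"]
    }
  }
}
```

Claude Desktop spawns the process itself and exchanges JSON-RPC messages over stdin/stdout.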

~

Creating Your First MCP Tool

MCP tools are like function tools in the Microsoft Agent Framework.

I say this as just like the Agent Framework, with MCP Tools, you annotate methods with descriptive attributes to help AI clients understand when to invoke them.

For example, here’s a documentation search tool:

using ModelContextProtocol.Server;
using System.ComponentModel;

namespace MyApp.MCP.Tools;

[McpServerToolType]
public class SearchDocs
{
    private readonly IDocumentSearchService _searchService;
    private readonly ILlmService _llmService;

    public SearchDocs(
        IDocumentSearchService searchService,
        ILlmService llmService)
    {
        _searchService = searchService;
        _llmService = llmService;
    }

    [McpServerTool]
    [Description("Searches documentation and returns a synthesised answer")]
    public async Task<string> Search(
        [Description("The natural language search query")]
        string query)
    {
        // 1. Vector search for relevant chunks
        var chunks = await _searchService.SearchAsync(query);
 
        if (!chunks.Any())
        {
            return "No relevant documentation found for your query.";
        }


        // 2. Fetch full documents
        var documents = await _searchService.GetFullDocumentsAsync(chunks);

        // 3. Synthesise answer using LLM
        var context = string.Join("\n\n", documents.Select(d => d.Content));
        var answer = await _llmService.SynthesiseAsync(query, context);

        // 4. Return with source attribution
        var sources = string.Join("\n", documents.Select(d => $"- {d.Url}"));

        return $"{answer}\n\n**Sources:**\n{sources}";
    }
}

 

The [Description] attributes are crucial.   They tell the AI client what the tool does and how to use it.

~

Testing with MCP Explorer

You can test your MCP tool using MCP Explorer on Windows.  Download it from https://mcp-explorer.com and install it.

Key features include:

  • No Node.js Required: A standalone Windows application
  • Auto-Detection: Automatically discovers servers from your Claude Desktop config
  • Visual Tool Execution: Test tools with a clean UI
  • JSON-RPC Monitoring: See the raw protocol messages for debugging

 

To connect manually, enter the SSE endpoint URL http://localhost:5125/sse. Once connected, the Search MCP tool is discovered from the Visual Studio instance.

Click the tool and supply the relevant parameter with the value Getting Started.

Clicking Execute Tool invokes the MCP tool and returns data.

MCP Explorer is useful when you want to inspect the JSON-RPC traffic between client and server.  Ideal for understanding exactly what’s happening under the hood.

~

Configuring VS Code and the Continue Extension

Once your server is running, you can connect various AI tools.  I used VS Code and the Continue extension.

To configure the Continue extension, perform the following steps:

  1. Install the Continue plugin in VS Code.
  2. Next, create a file at ~/.continue/mcpServers/my-docs.yaml

 

Note: You may need to create the .continue and mcpServers folders if they don’t already exist.

Next, add the following content and save:

name: MCP-Demo-Tools
version: 0.0.1
schema: v1
mcpServers:
- name: mcp-docs
  type: sse
  url: http://localhost:5125/sse

 

With the configuration defined, it can now be tested.

~

Running Your MCP Server in Visual Studio and Testing with VS Code

A common development workflow is to run your MCP server in Visual Studio while consuming it from VS Code with the Continue extension.

This gives you the best of both worlds: full debugging capabilities in VS and AI-assisted coding in VS Code.

Step 1: Configure Visual Studio

Open your solution in Visual Studio 2022. Ensure your launchSettings.json has the HTTP profile configured:

{
  "profiles": {
    "http": {
      "commandName": "Project",
      "dotnetRunMessages": true,
      "launchBrowser": false,
      "applicationUrl": "http://localhost:5125",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}

 

Run the application and start debugging. You’ll see console output confirming the server is listening.

Step 2: Configure VS Code Continue

In VS Code, install the Continue extension if you haven’t already.  Ensure you have the MCP server configuration file from earlier, and that it contains the following content:

name: MCP-Demo-Tools
version: 0.0.1
schema: v1
mcpServers:
- name: mcp-docs
  type: sse
  url: http://localhost:5125/sse

 

Restart VS Code or reload the Continue extension for the changes to take effect.

Step 3: Configure Continue’s LLM

Continue needs its own LLM to decide when to use tools. Open Continue’s settings and configure your preferred model.

You can use OpenAI, Anthropic, or a local model via Ollama.

Example config.yaml for OpenAI:

models:
  - title: GPT-4
    provider: openai
    model: gpt-4
    apiKey: your-api-key-here

Step 4: Verify the Connection

Back in Visual Studio, watch the Output window or console. When Continue connects, you should see requests such as Request: GET /sse and Request: POST /message.

The initialize and tools/list JSON-RPC calls confirm Continue has discovered your tools.
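If you are inspecting the raw traffic, a successful tools/list response will include your tool’s name, description, and input schema, roughly along these lines (abbreviated; exact naming depends on the SDK’s conventions):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "tools": [
      {
        "name": "search",
        "description": "Searches documentation and returns a synthesised answer",
        "inputSchema": {
          "type": "object",
          "properties": {
            "query": { "type": "string", "description": "The natural language search query" }
          },
          "required": ["query"]
        }
      }
    ]
  }
}
```

Note how the [Description] attributes from the C# tool class surface directly in this response; this is what the client’s LLM reads when deciding whether to call your tool.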

Step 5: Use Agent Mode

In VS Code, open the Continue panel and ensure you’re in Agent Mode (not Chat mode).

The mode selector is near the input box.

Now type a prompt that should trigger your tool:

Search the documentation for authentication examples

If everything is configured correctly, you’ll see tools/call requests in Visual Studio’s console, and your breakpoints in the tool method will be hit.

The data is returned from your MCP tool in Visual Studio back to VS Code.

We can also inspect the mock data in our Visual Studio MCP server instance:

// Sample documents for demo purposes
// In production, this would query a vector database like Elasticsearch
private static readonly List<Document> _sampleDocs = new()
{
    new Document(
        "1",
        "https://docs.example.com/api/authentication",
        "Authentication API",
        "To authenticate with the API, include your API key in the Authorization header. Use the format: Authorization: Bearer YOUR_API_KEY. API keys can be generated from the dashboard."
    ),
    new Document(
        "2",
        "https://docs.example.com/tutorials/getting-started",
        "Getting Started Tutorial",
        "Welcome to our platform! This tutorial will guide you through the initial setup. First, install the SDK using npm install @example/sdk. Then, initialise the client with your credentials."
    ),
    new Document(
        "3",
        "https://docs.example.com/examples/basic-usage",
        "Basic Usage Examples",
        "Here are some common usage patterns. To create a new resource, use client.create(). To fetch existing resources, use client.get(id). To update, use client.update(id, data)."
    ),
    new Document(
        "4",
        "https://docs.example.com/reference/errors",
        "Error Reference",
        "Common error codes: 400 Bad Request - Invalid input parameters. 401 Unauthorized - Missing or invalid API key. 404 Not Found - Resource does not exist. 429 Too Many Requests - Rate limit exceeded."
    )
};

 

From the above, we can see that the content in the document with ID 2 maps to the response in VS Code.

Troubleshooting

If Continue doesn’t discover your tools:

  • Check that Visual Studio is running and the server is listening on the correct port
  • Verify the YAML file is in the correct location: C:\Users\<username>\.continue\mcpServers\
  • Ensure the URL matches exactly: http://localhost:5125/sse
  • Restart VS Code after making configuration changes
  • Check for port conflicts – only one process can bind to port 5125

~

Debugging Connection Issues

When things go wrong, here are some other things you can try.

Add Request Logging

Add middleware to see what requests are hitting your server:

app.Use(async (context, next) =>
{
    Console.WriteLine($">>> Request: {context.Request.Method} {context.Request.Path}");
    await next();
});

Check for HTTPS Redirect Issues

A common gotcha is HTTPS redirection breaking SSE connections.  This caught me out at first. Disable it for development:

if (!app.Environment.IsDevelopment())
{
    app.UseHttpsRedirection();
}

Verify Port Configuration

Check your launchSettings.json to confirm the correct ports:

{
  "profiles": {
    "http": {
      "applicationUrl": "http://localhost:5125"
    },
    "https": {
      "applicationUrl": "https://localhost:7073;http://localhost:5125"
    }
  }
}

Watch for Multiple Instances

If you’re getting unexpected behaviour, check Task Manager for stray dotnet.exe processes. Multiple instances can cause port conflicts.

~

Tips for Triggering Tool Usage

Sometimes you’ll find your tool doesn’t get called because the user’s intent isn’t detected.  This caught me out initially.

The LLM decides whether to use your tools based on your prompt, so it’s important to use precise language.  To improve tool invocation, here are some tips.

Be Explicit in Prompts

Use prompts like “search the docs for authentication examples” or “use the documentation tool to find API reference info” rather than “how do I authenticate?” (which might be answered from general knowledge).

Use Clear Tool Descriptions

Your [Description] attributes should be specific and action-oriented. The LLM reads these to decide when to invoke your tool.

Check Agent Mode

In tools like Continue, ensure you’re in Agent mode, not Chat mode. Tools only work in Agent mode.

~

Summary

In this post, we’ve covered the fundamentals of building and debugging an MCP server.

In future posts, we’ll explore:

  • Query logging for training data collection
  • Caching with vector similarity matching
  • Authentication and authorisation for MCP endpoints
  • Deploying MCP servers to Azure

 

MCP represents a significant step forward in AI tool interoperability. By adopting this standard, you’re building integrations that will work across an expanding ecosystem of AI assistants.

~

Further Reading and Resources

Some additional resources you might find helpful.

~


How to Add Multi-Language Support in Flutter: Manual and AI-Automated Translations for Flutter Apps


As Flutter applications scale beyond a single market, language support becomes a critical requirement. A well-designed app should feel natural to users regardless of their locale, automatically adapting to their language preferences while still giving them control.

This article provides a comprehensive, production-focused guide to supporting multiple languages in a Flutter application using Flutter’s localization system, the intl package, and Bloc for state management. We’ll support English, French, and Spanish, implement automatic language detection, and allow users to manually switch languages from settings, while also exploring the use of AI to automate text translations.


Prerequisites

Before proceeding, you should be comfortable with the following concepts:

  • Dart programming language: variables, classes, functions, and null safety

  • Flutter fundamentals: widgets, BuildContext, and widget trees

  • State management basics: familiarity with Bloc or similar patterns

  • Terminal usage: running Flutter CLI commands

If you have prior experience working with Flutter widgets and basic app architecture, you are well prepared to follow along.

Why Localization Matters in Flutter Applications

Localization (often abbreviated as l10n) is the process of adapting an application for different languages and regions, going beyond simple text translation to influence accessibility, user trust, and overall usability. From a technical perspective, localization introduces several challenges: text must be dynamically resolved at runtime, the UI must update instantly when the language changes, language preferences must persist across sessions, and device locale detection must gracefully fall back when a language is unsupported.

Flutter’s localization framework, when combined with intl and Bloc, solves these challenges cleanly and predictably.

Flutter Localization Architecture Overview

Flutter localization is built around three key ideas:

  1. ARB files as the source of truth for translated strings

  2. Code generation to provide type-safe access to translations

  3. Locale-driven rebuilds of the widget tree

At runtime, the active Locale determines which translation file is used. When the locale changes, Flutter automatically rebuilds dependent widgets.

How to Set Up Dependencies

Add the required dependencies to your pubspec.yaml:

dependencies:
  flutter:
    sdk: flutter

  flutter_localizations:
    sdk: flutter

  intl: ^0.20.2
  flutter_bloc: ^8.1.3
  arb_translate: ^1.1.0

Enable localization code generation:

flutter:
  generate: true

This instructs Flutter to generate localization classes from ARB files.
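Optionally, an l10n.yaml file at the project root controls where the ARB files live, which file is the template, and what the generated file is called. The values below are the common conventions (and match the file names used later in this article):

```yaml
arb-dir: lib/l10n
template-arb-file: app_en.arb
output-localization-file: app_localizations.dart
```

If no l10n.yaml is present, flutter gen-l10n falls back to its defaults, so this file is only needed when you want to customize the layout.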

How to Define Supported Languages

For this guide, the application will support:

  • English (en)

  • French (fr)

  • Spanish (es)

These locales will be declared centrally and used throughout the app.

How to Add Localized Text with ARB Files

Flutter uses Application Resource Bundle (ARB) files to store localized strings. Each supported language has its own ARB file.

English – app_en.arb

{
  "@@locale": "en",
  "enter_email_address_to_reset": "Enter your email address to reset"
}

French – app_fr.arb

{
  "@@locale": "fr",
  "enter_email_address_to_reset": "Entrez votre adresse e-mail pour réinitialiser"
}

Spanish – app_es.arb

{
  "@@locale": "es",
  "enter_email_address_to_reset": "Ingrese su dirección de correo electrónico para restablecer"
}

Each key must be identical across files. Only the values change per language.

How to Generate Localization Code

Run the following command in your terminal:

flutter gen-l10n

Flutter generates a strongly typed localization class, typically located at:

.dart_tool/flutter_gen/gen_l10n/app_localizations.dart

This file exposes getters such as:

AppLocalizations.of(context)!.enter_email_address_to_reset
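One gotcha worth noting: because the generated file lives under .dart_tool rather than your source tree, with this setup you import it via the synthetic flutter_gen package rather than a relative path:

```dart
import 'package:flutter_gen/gen_l10n/app_localizations.dart';
```

Any widget file that calls AppLocalizations.of(context) needs this import.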

How to Configure MaterialApp for Localization

The MaterialApp widget must be configured with localization delegates and supported locales:

MaterialApp(
  localizationsDelegates: const [
    AppLocalizations.delegate,
    GlobalMaterialLocalizations.delegate,
    GlobalWidgetsLocalizations.delegate,
    GlobalCupertinoLocalizations.delegate,
  ],
  supportedLocales: const [
    Locale('en'),
    Locale('fr'),
    Locale('es'),
  ],
  locale: state.locale,
  home: const MyHomePage(),
)

The locale property is controlled by Bloc, allowing dynamic updates at runtime.

Auto-Detecting the User’s Device Language

Flutter exposes the device locale via PlatformDispatcher. We can use this to automatically select the most appropriate supported language.

void detectLanguageAndSet() {
  Locale deviceLocale = PlatformDispatcher.instance.locale;

  Locale selectedLocale = AppLocalizations.supportedLocales.firstWhere(
    (supported) => supported.languageCode == deviceLocale.languageCode,
    orElse: () => const Locale('en'),
  );

  print('Using Locale: ${selectedLocale.languageCode}');

  GlobalConfig.storageService.setStringValue(
    AppStrings.DETECTED_LANGUAGE,
    selectedLocale.languageCode,
  );

  context.read<AppLocalizationBloc>().add(
    SetLocale(locale: selectedLocale),
  );
}

This approach reads the device language, matches it against supported locales, falls back to English when the language is unsupported, persists the detected language, and updates the UI instantly.

How to Manage Localization with Bloc

Bloc provides a predictable and testable way to manage application-wide locale changes.

Localization State

class AppLocalizationState {
  final Locale locale;
  const AppLocalizationState(this.locale);
}

Localization Event

abstract class AppLocalizationEvent {}

class SetLocale extends AppLocalizationEvent {
  final Locale locale;
  SetLocale({required this.locale});
}

Localization Bloc

class AppLocalizationBloc
    extends Bloc<AppLocalizationEvent, AppLocalizationState> {
  AppLocalizationBloc()
      : super(const AppLocalizationState(Locale('en'))) {
    on<SetLocale>((event, emit) {
      emit(AppLocalizationState(event.locale));
    });
  }
}

The AppLocalizationBloc manages the app’s language state. It starts with English (Locale('en')) as the default, and when it receives a SetLocale event, it updates the state to the new locale provided in the event, causing the app’s UI to switch to that language. Whenever SetLocale is dispatched, the entire app rebuilds using the new locale.

How to Display Localized Text in Widgets

Once localization is configured, using translated text is straightforward:

Text(
  AppLocalizations.of(context)!.enter_email_address_to_reset,
  style: getRegularStyle(
    color: Colors.white,
    fontSize: FontSize.s16,
  ),
)

AppLocalizations.of(context)!.enter_email_address_to_reset retrieves the localized string enter_email_address_to_reset for the current app locale from the generated localization resources. The correct translation is resolved automatically based on the active locale.

Language Switching from Settings

Users should always be able to override automatic language detection.

ListTile(
  title: const Text('French'),
  onTap: () {
    context.read<AppLocalizationBloc>().add(
      SetLocale(locale: const Locale('fr')),
    );
  },
)

This ListTile displays the text "French"; when tapped, it dispatches a SetLocale event so the AppLocalizationBloc switches the app’s locale to French ('fr'). To survive a restart, the selection should also be persisted so it can be restored on the next app launch.
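The snippet above only dispatches the event; persistence has to be added explicitly. A sketch that does both, reusing the storage helpers from the detection step:

```dart
ListTile(
  title: const Text('French'),
  onTap: () {
    const selected = Locale('fr');

    // Persist the choice so startup detection can restore it.
    GlobalConfig.storageService.setStringValue(
      AppStrings.DETECTED_LANGUAGE,
      selected.languageCode,
    );

    // Switch the UI immediately.
    context.read<AppLocalizationBloc>().add(
      SetLocale(locale: selected),
    );
  },
)
```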

How to Add Parameters to Localized Strings

Real-world applications rarely display static text. Messages often include dynamic values such as user names, counts, dates, or prices. Flutter’s localization system, powered by intl, supports parameterized (interpolated) strings in a type-safe way.

Where Parameters Are Defined

Parameters are defined inside ARB files alongside the localized string itself. Each parameterized message consists of two parts: the message string containing placeholders, and a corresponding metadata entry that describes those placeholders.

Example: Parameterized Text

Suppose we want to display a greeting message that includes a user’s name.

English – app_en.arb

{
  "@@locale": "en",
  "greetingMessage": "Hello {username}!",
  "@greetingMessage": {
    "description": "Greeting message shown on the home screen",
    "placeholders": {
      "username": {
        "type": "String"
      }
    }
  }
}

This defines a parameterized localized message for English, as indicated by "@@locale": "en". The "greetingMessage" key contains the string "Hello {username}!", where {username} is a placeholder replaced with the user’s name at runtime. The "@greetingMessage" entry provides metadata: a description explaining where the string appears, and a "placeholders" section declaring that "username" is a String. If the username is "Alice", the rendered message is "Hello Alice!".

French – app_fr.arb

{
  "@@locale": "fr",
  "greetingMessage": "Bonjour {username} !"
}

Spanish – app_es.arb

{
  "@@locale": "es",
  "greetingMessage": "¡Hola {username}!"
}

The placeholder name ({username}) must be identical across all ARB files.

Generated Dart API

After running:

flutter gen-l10n

Flutter generates a strongly typed method instead of a simple getter:

String greetingMessage(String username)

Because the parameter is part of the generated method’s signature, missing or mistyped arguments are caught at compile time rather than at runtime.

How to Use Parameterized Strings in Widgets

Text(
  AppLocalizations.of(context)!.greetingMessage('Tony'),
)

If the locale is set to French, the output becomes:

Bonjour Tony !

Pluralization and Quantities

Another common localization requirement is pluralization. Languages differ significantly in how they express quantities, and hardcoding plural logic in Dart quickly becomes error-prone.

Defining Plural Messages in ARB

{
  "itemsCount": "{count, plural, =0{No items} =1{1 item} other{{count} items}}",
  "@itemsCount": {
    "description": "Displays the number of items",
    "placeholders": {
      "count": {
        "type": "int"
      }
    }
  }
}

This defines a pluralized message for itemsCount. The string {count, plural, =0{No items} =1{1 item} other{{count} items}} dynamically changes based on the value of count: it shows "No items" when count is 0, "1 item" when count is 1, and "{count} items" for all other values. The metadata entry "@itemsCount" provides a description and specifies that the placeholder count is of type int.

Each language can define its own plural rules while sharing the same key.
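For example, a French ARB file can map the same key to French plural forms (the translations below are illustrative):

```json
{
  "@@locale": "fr",
  "itemsCount": "{count, plural, =0{Aucun élément} =1{1 élément} other{{count} éléments}}"
}
```

The key and placeholder stay identical; only the plural categories and wording change per language.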

Using Pluralized Messages

Text(
  AppLocalizations.of(context)!.itemsCount(3),
)

Flutter automatically applies the correct plural form based on the active locale.

How to Format Dates, Numbers, and Currency

The intl package also provides locale-aware formatting utilities. These should be used in combination with localized strings, not as replacements.

Date Formatting Example

final formattedDate = DateFormat.yMMMMd(
  Localizations.localeOf(context).toString(),
).format(DateTime.now());
Text(
  AppLocalizations.of(context)!.lastLoginDate(formattedDate),
)

This ensures that both language and formatting rules align with the user’s locale.
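Numbers and currency follow the same pattern via NumberFormat (the currency code and sample values below are illustrative assumptions):

```dart
final localeName = Localizations.localeOf(context).toString();

// Locale-aware grouping and decimal separators,
// e.g. "€1,234.56" for en vs "1 234,56 €" for fr.
final price = NumberFormat.currency(
  locale: localeName,
  name: 'EUR',
).format(1234.56);

// Plain decimal formatting with locale-specific separators.
final visits = NumberFormat.decimalPattern(localeName).format(1234567);
```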

Localization Data Flow

Localization is handled as an explicit data flow, with locale resolution modeled as application state rather than a static configuration passed into MaterialApp.

The process starts with the device locale, obtained from the platform layer at startup. This value represents the system’s preferred language and region but is not applied directly to the UI.

Instead, it flows through a detectLanguageAndSet step responsible for applying application-specific rules. This layer typically handles locale normalization and fallback logic, such as mapping unsupported locales to supported ones, restoring a user-selected language from persistent storage, or enforcing product constraints around available translations.

The resolved locale is then emitted into a Localization Bloc, which acts as the single source of truth for localization state. By centralizing locale management, the application can support runtime language changes, ensure predictable rebuilds, and keep localization logic decoupled from both the widget tree and platform APIs.

The Bloc feeds into the locale property of MaterialApp, which is the integration point with Flutter’s localization system. Updating this value triggers a rebuild of the Localizations scope and causes all dependent widgets to resolve strings for the active locale.

At the edge of the system, localized widgets consume the generated localization classes produced by flutter gen-l10n. These widgets remain agnostic to how the locale was selected or updated. They simply react to the localization context provided by the framework.

This architecture cleanly separates:

  • Locale detection

  • Business logic and state management

  • Framework-level localization

  • UI rendering

As a result, localization behavior remains explicit, maintainable, and compatible with automated translation workflows and CI-driven localization updates.


Common Pitfalls and How to Avoid Them

  1. Avoid manual string concatenation. For example, do not use 'Hello ' + name. You should rely on localized templates instead.

  2. Never hardcode plural logic in Dart. Always use intl’s pluralization features to handle different languages correctly.

  3. Avoid locale-specific formatting outside intl utilities. Dates, numbers, and currencies should be formatted using the proper localization tools.

  4. Always regenerate localization files after updating ARB files. This ensures the app reflects all the latest translations.
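The first two pitfalls can be contrasted directly. The second snippet below relies on the parameterized and plural messages defined earlier in this article:

```dart
// Avoid: manual concatenation hardcodes English word order
// and cannot express plural rules for other languages.
final bad = 'Hello ' + username + '! You have $count items';

// Prefer: localized templates resolve word order and plurals per locale.
final good =
    '${AppLocalizations.of(context)!.greetingMessage(username)} '
    '${AppLocalizations.of(context)!.itemsCount(count)}';
```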

How to Automate Translations with AI

In Flutter applications that rely on ARB files for localization, translation maintenance becomes increasingly costly as the application grows. Each new message must be manually propagated across locale files, often resulting in missing keys, inconsistent phrasing, or delayed updates. This problem is amplified in projects that do not use a Translation Management System (TMS) and instead keep ARB files directly in the repository.

While many TMS platforms have begun adding AI-assisted translation features, not all projects use a TMS at all, particularly small teams, internal tools, or personal projects. In these cases, developers frequently resort to copying strings into AI chat tools and pasting results back into ARB files, which is inefficient and difficult to scale.

To address this workflow gap, LeanCode published the arb_translate package, a Dart-based CLI tool that automates missing ARB translations using large language models.

Design Approach

The model behind arb_translate aligns with Flutter’s existing localization pipeline rather than replacing it:

  • English ARB files remain the source of truth

  • Only missing keys are translated

  • Output is written back as standard ARB files

  • flutter gen-l10n is still responsible for code generation

This design makes the tool suitable for both local development and CI usage, without introducing new runtime dependencies or localization abstractions.

At a high level, the flow is:

  1. Parse the base (typically English) ARB file

  2. Identify missing keys in target locale ARB files

  3. Send key–value pairs to an LLM via API

  4. Receive translated strings

  5. Update or generate locale-specific ARB files

  6. Run flutter gen-l10n to regenerate localized resources

Gemini-Based Setup

To use Gemini for ARB translation:

  1. Generate a Gemini API key:
    https://ai.google.dev/tutorials/setup

  2. Install the CLI:

dart pub global activate arb_translate

  3. Export the API key:

export ARB_TRANSLATE_API_KEY=your-api-key

  4. Run the tool from the Flutter project root:

arb_translate

The tool scans existing ARB files, generates missing translations, and writes them back to disk.

OpenAI/ChatGPT Support

As of version 1.0.0, arb_translate also supports OpenAI ChatGPT models. This allows teams to standardize on OpenAI infrastructure or switch providers without changing their localization workflow.

  1. Generate an OpenAI API key:
    https://platform.openai.com/api-keys

  2. Install the tool:

dart pub global activate arb_translate

  3. Export the API key:

export ARB_TRANSLATE_API_KEY=your-api-key

  4. Select OpenAI as the provider, either via l10n.yaml:

arb-translate-model-provider: open-ai

    or via the CLI flag:

arb_translate --model-provider open-ai

  5. Execute:

arb_translate

Practical Use Cases

This approach is not intended to replace professional translation or review workflows. Instead, it serves as a deterministic automation layer that:

  • Eliminates manual copy-paste workflows

  • Keeps ARB files structurally consistent

  • Enables translation generation in CI

  • Allows downstream review in a TMS if required

For content-heavy Flutter applications or teams without a dedicated localization platform, this provides a pragmatic and maintainable solution.

Best Practices and Considerations

  1. Always define a fallback locale to ensure the app remains usable.

  2. Avoid hardcoding user-facing strings; rely on localized resources.

  3. Use semantic and stable ARB keys for maintainability.

  4. Persist user language preferences to provide a consistent experience.

  5. Test your app with long translations and multiple locales to catch layout or UI issues.
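Best practice 5 can be automated with a widget test that pumps the app under each supported locale. This is a sketch: the HomePage widget is an assumption, and the check relies on Flutter surfacing layout overflows as exceptions:

```dart
testWidgets('renders under every supported locale', (tester) async {
  for (final locale in AppLocalizations.supportedLocales) {
    await tester.pumpWidget(
      MaterialApp(
        locale: locale,
        supportedLocales: AppLocalizations.supportedLocales,
        localizationsDelegates: AppLocalizations.localizationsDelegates,
        home: const HomePage(),
      ),
    );
    await tester.pumpAndSettle();

    // A RenderFlex overflow caused by a long translation is reported
    // as an exception, which fails this assertion.
    expect(tester.takeException(), isNull);
  }
});
```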

Conclusion

Localization is a foundational requirement for modern Flutter applications. By combining Flutter’s built-in localization framework, the intl package, and Bloc for state management, you gain a robust and scalable solution.

With automatic device language detection, runtime switching, and clean architecture, your application becomes globally accessible without sacrificing maintainability.

Microsoft Ships OData .NET (ODL) 9.0.0 Preview 3: Safety, Modern APIs, and Spec Compliance

Microsoft released OData .NET (ODL) 9.0.0 Preview 3, the latest preview iteration of the OData .NET client and core libraries, continuing the modernisation effort of the library. This preview focuses on safer default behaviours, runtime API cleanup, and closer conformance with the OData specification as the team works toward a stable 9.x release.

By Edin Kapić