Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

C# Extension Members


Overview: What are extension members?

Extension members allow you to define additional members for existing types without modifying their definitions. With them, you can add functionality to existing types you don’t have access to or don’t control, for example, built-in types or types from an API or commercial library. 

Extension methods have been a feature of C# since version 3.0 in 2007, so the concept has been around for some time in .NET. However, traditional extension methods were just that – methods only. Extensions could not be created for properties, fields, or operators. You couldn’t create static extensions and they couldn’t easily participate in interfaces. However, new syntax in C# 14 allows both instance and static properties and methods, as well as operators.

Classic extension methods

Let’s quickly review what a classic extension method looks like. We’ll extend the DateTime structure to find the first Monday of any quarter. You might see code like this in manufacturing scenarios where production runs need to start on a specific day, such as the first Monday of a quarter. The code looks something like this:

    public static class DateTimeExtensions
    {
        public static DateTime FirstMondayOfQuarter(this DateTime dateTime, int quarter)
        {
            if (quarter is < 1 or > 4)
                throw new ArgumentOutOfRangeException(nameof(quarter),
                    "Quarter must be between 1 and 4.");

            var year = dateTime.Year;
            var firstMonth = (quarter - 1) * 3 + 1;

            var date = new DateTime(year, firstMonth, 1);

            // Days to add to reach the next Monday (0 if the 1st is already a Monday)
            var offset = ((int)DayOfWeek.Monday - (int)date.DayOfWeek + 7) % 7;
            return date.AddDays(offset);
        }
    }

Notice that to create an extension method, both the containing class and the method must be static, and the this modifier on the first parameter indicates which type to extend. Although the method is declared with the static keyword, it is invoked as if it were an instance member.

Code to use this extension method looks like the following:

DateTime myDate = DateTime.Now;

for (var i = 1; i <= 4; i++)
{
    Console.WriteLine(myDate.FirstMondayOfQuarter(i).ToShortDateString());
}

Because it behaves as an instance member, you can’t simply call DateTime.FirstMondayOfQuarter(2); you need a DateTime value to call it on, such as the one returned by DateTime.Now.
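The offset arithmetic inside FirstMondayOfQuarter is worth a closer look. Here is the same calculation expressed in JavaScript for illustration (the article’s code is C#, but the arithmetic is identical):

```javascript
// (target - current + 7) % 7 gives the number of days to add to reach the
// next occurrence of the target weekday, or 0 if we're already on it.
// Day numbering matches .NET's DayOfWeek enum: Sunday = 0, Monday = 1, ...
function daysUntil(targetDow, currentDow) {
  return (targetDow - currentDow + 7) % 7;
}

console.log(daysUntil(1, 0)); // Sunday -> Monday: 1
console.log(daysUntil(1, 1)); // already Monday: 0
console.log(daysUntil(1, 3)); // Wednesday -> next Monday: 5
```

Adding 7 before taking the modulo keeps the result non-negative even when the target day falls earlier in the week than the current day.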

Extension members in C# 14

Use the new extension block inside a static class to define extensions. The extension block accepts the receiver type (the type you want to make an extension for), and optionally, a receiver parameter name for instance members. Adding the parameter name is recommended for clarity. Here’s the syntax:

extension(Type) { … }    // plain extension block

extension(Type parameterName) { … }   // extension block with a parameter name

If you want to convert a classic extension method to the new extension member syntax, Rider has a handy intention action for it: just press Alt + Enter and choose Move to extension block:

[Animated GIF: Rider’s Move to extension block intention action converting the classic FirstMondayOfQuarter extension method on DateTime into an extension member]

The calling code doesn’t change, so you won’t need to update existing callers unless you want to:

Console.WriteLine(DateTime.Now.FirstMondayOfQuarter(2).ToShortDateString());

Extension blocks also let you define static members that you call on the type itself, something classic extension methods couldn’t do.

To create an extension property, use an extension block just as you would for any extension member. The rest of the code reads like regular C#.

public static class DateTimeExtensions
{
    extension(DateTime date)
    {
        public bool IsWeekend =>
            date.DayOfWeek is DayOfWeek.Saturday or DayOfWeek.Sunday;
    }
}

// To use it:

if (DateTime.Today.IsWeekend)
{
    // No work today, yay!
}

Notice that in the extension block you define methods, properties, and other members without repeating the this parameter syntax on each member.

A goal of the C# team was to ensure that existing code doesn’t break, so the syntax you use becomes a matter of style. There’s no need to change your existing extension methods, but Rider’s handy intention action makes it fast and easy to do so if you want.

In Summary

Extension members are beneficial for several scenarios, including transforming helper methods into properties, organizing related extensions, incorporating static constants or factories into existing types, defining operators on external types, and making third-party APIs feel more integrated or native.

Read the whole story
alvinashcraft
just a second ago
reply
Pennsylvania, USA
Share this story
Delete

Microsoft Agent Framework is a Release Candidate! Let’s Go 🔥🤖


⚠ This blog post was created with the help of AI tools. Yes, I used a bit of magic from language models to organize my thoughts and automate the boring parts, but the geeky fun and the 🤖 in C# are 100% mine.

Hi!

Big milestone these days: The Microsoft Agent Framework (MAF) just reached Release Candidate status 🎉

Official announcement here:
👉 https://devblogs.microsoft.com/foundry/microsoft-agent-framework-reaches-release-candidate/

As someone who has been building apps, samples, demos, orchestration experiments and livestream content around MAF for months… this one feels GOOD.

Let’s talk about this.


🤖 What is Microsoft Agent Framework?

The Microsoft Agent Framework is a .NET-first (and Python) framework to:

  • Build AI Agents
  • Orchestrate multi-agent systems
  • Connect tools, memory, skills
  • Integrate with Azure AI and local models
  • Control execution, planning, routing

Think:

Structured AI orchestration for real-world production systems.

Not “just chat”. Not “just prompts”. Not “just LLM calls”.

This is agent architecture in C#. And that’s why I like it a lot 😎

Built by Microsoft. Designed for real applications. Native .NET developer experience.


🧠 A Minimal C# Agent Example

Let’s start simple.

A minimal agent setup:

using System;
using Azure.AI.OpenAI;
using Azure.Identity;
using Microsoft.Agents.AI;

AIAgent agent = new AzureOpenAIClient(
        new Uri(Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT")!),
        new AzureCliCredential())
    .GetChatClient("gpt-5")
    .AsAIAgent(instructions: "You are a friendly assistant. Keep your answers brief.");

Console.WriteLine(await agent.RunAsync("What is the largest city in France?"));

That’s it:

  1. Create the Azure OpenAI client
  2. Get a chat client for your model deployment
  3. Wrap it as an agent with instructions
  4. Run a prompt

Super easy.


🔧 Adding a Tool (Because Agents Need Superpowers)

Let’s give our agent a tool.

using System;
using Azure.AI.OpenAI;
using Azure.Identity;
using Microsoft.Agents.AI;
using Microsoft.Extensions.AI;

var endpoint = Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT")
    ?? throw new InvalidOperationException("Set AZURE_OPENAI_ENDPOINT");
var deploymentName = Environment.GetEnvironmentVariable("AZURE_OPENAI_DEPLOYMENT_NAME") ?? "gpt-5";

AIAgent agent = new AzureOpenAIClient(new Uri(endpoint), new AzureCliCredential())
    .GetChatClient(deploymentName)
    .AsAIAgent(instructions: "You are a helpful assistant.", tools: [AIFunctionFactory.Create(GetWeather)]);

Example tool:

using System.ComponentModel;

[Description("Get the weather for a given location.")]
static string GetWeather([Description("The location to get the weather for.")] string location)
    => $"The weather in {location} is cloudy with a high of 15°C.";

Now your agent:

  • Decides when to call tools
  • Uses structured tool invocation
  • Returns enriched results

This is controlled autonomy.


🧩 Multi-Agent Orchestration

Now we move from “chatbot” to architecture.

using System;
using Azure.AI.OpenAI;
using Azure.Identity;
using Microsoft.Agents.AI;

var client = new AzureOpenAIClient(
        new Uri(Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT")!),
        new AzureCliCredential())
    .GetChatClient("gpt-5");

var planner = client.AsAIAgent(
    instructions: "Break down the task into steps."
);

var executor = client.AsAIAgent(
    instructions: "Execute each step carefully."
);

Workflow workflow = AgentWorkflowBuilder.BuildSequential(planner, executor);

Then orchestrate:

var result = await workflow.RunAsync(
    "Create a blog post about AI agents."
);

Console.WriteLine(result);

Now you’re orchestrating.


🧱 Where MAF Fits in the Stack

MAF works beautifully with:

  • Azure AI
  • Local models
  • Tool calling
  • Structured execution
  • Observability
  • Enterprise-grade patterns

It’s not a demo framework. It’s a foundation.


🎥 All My Microsoft Agent Framework Content

Over the last months I’ve built a LOT around MAF.

Here’s a curated list of my content:


📝 Blog Posts

  • Deep dives into agent orchestration
  • Tool integration patterns
  • Multi-agent execution samples
  • Performance comparisons
  • Real C# implementation breakdowns

👉 You can find all posts tagged with Agent Framework on my blog: https://elbruno.com


🎬 YouTube Videos

On my YouTube channel I’ve covered:

  • Intro to Microsoft Agent Framework
  • Multi-agent demos in C#
  • Agent orchestration patterns
  • Live coding sessions
  • Performance experiments
  • Comparison with other orchestration approaches

Channel: https://youtube.com/elbruno


🔴 Livestreams

I’ve done:

  • .NET Channels
  • Microsoft Reactor sessions
  • Community livestream demos
  • GitHub repo walkthroughs
  • Live coding of multi-agent apps

All focused on:

Real .NET developer experience.


📦 My GitHub Samples

Some of the repos I built around MAF include:

  • Multi-agent orchestration samples
  • Performance comparison experiments
  • Tool-based execution demos
  • Local AI integration experiments
  • Hybrid Azure + local agent setups

Check them here: https://github.com/elbruno


🚀 Why the Release Candidate Matters

Release Candidate means:

  • API stability
  • Production readiness direction
  • Ecosystem alignment
  • Documentation maturity
  • Clear forward path

This is no longer experimental territory. This is “start building real stuff”. And as a .NET developer? This is AMAZING!


🧠 Final Thoughts

I’ve been saying this for a while. AI apps are not just about LLM calls. They’re about:

  • Control
  • Orchestration
  • Tools
  • Deterministic flow
  • Observability

Microsoft Agent Framework gives us that 😀


If you’ve been experimenting with MAF too, tell me what you’re building 👇

And if you haven’t started yet…

Now is the perfect time 🔥

Happy coding!

Greetings

El Bruno

More posts in my blog ElBruno.com.

More info in https://beacons.ai/elbruno





Cracking the Code of Serverless Design: Patterns that Scale and Patterns that Fail

Chad Green explores how intentional design patterns determine whether serverless architectures deliver on their promises of scalability, resilience, and cost efficiency.

MariaDB innovation: vector index performance


Last year I shared many posts documenting MariaDB performance for vector search using ann-benchmarks. Performance was great in MariaDB 11 and this blog post explains that it is even better in MariaDB 12. This work was done by Small Datum LLC and sponsored by the MariaDB Foundation. My previous posts were published in January and February 2025.

tl;dr

  • Vector search recall vs precision in MariaDB 12.3 is better than in MariaDB 11.8
  • Vector search recall vs precision in MariaDB 11.8 is better than in Postgres 18.2 with pgvector 0.8.1
  • The improvements in MariaDB 12.3 are more significant for larger datasets
  • MariaDB 12.3 has the best results because it uses less CPU per query

Benchmark

This post has much more detail about my approach. I ran the benchmark for 1 session. I use ann-benchmarks via my fork of a fork of a fork at this commit.  The ann-benchmarks config files are here for MariaDB and for Postgres.

This time I used the dbpedia-openai-X-angular tests for X in 100k, 500k and 1000k.

For hardware I used a larger server (Hetzner ax162-s) with 48 cores, 128G of RAM, Ubuntu 22.04 and HW RAID 10 using 2 NVMe devices. 

For databases I used:
  • MariaDB versions 11.8.5 and 12.3.0 with this config file. Both were compiled from source.
  • Postgres 18.2 with pgvector 0.8.1 with this config file. These were compiled from source. For Postgres, tests were run with and without halfvec (float16).

I had ps and vmstat running during the benchmark and confirmed there weren't storage reads, as the table and index were cached by MariaDB and Postgres.

The command lines to run the benchmark using my helper scripts are:
    bash rall.batch.sh v1 dbpedia-openai-100k-angular c32r128
    bash rall.batch.sh v1 dbpedia-openai-500k-angular c32r128
    bash rall.batch.sh v1 dbpedia-openai-1000k-angular c32r128

Results: dbpedia-openai-100k-angular

Summary

  • MariaDB 12.3 has the best results
  • the difference between MariaDB 12.3 and 11.8 is smaller here than it is below for 500k and 1000k

Results: dbpedia-openai-500k-angular

Summary

  • MariaDB 12.3 has the best results
  • the difference between MariaDB 12.3 and 11.8 is larger here than it is above for 100k

Results: dbpedia-openai-1000k-angular

Summary

  • MariaDB 12.3 has the best results
  • the difference between MariaDB 12.3 and 11.8 is larger here than it is above for 100k and 500k



CIO Podcast – Episode 109: Fractional CIO Work with Ryan Thousand


For the 109th episode of the CIO podcast hosted by Healthcare IT Today, we are joined by Ryan Thousand, CIO at Dahl Memorial Healthcare Association, to talk about fractional CIO work! We kick this episode off with Thousand sharing some examples of where he has had to right-size his approach to IT in rural health. Rural health is often discussed in terms of the disadvantages it faces, so to flip that, we also cover some of the advantages rural health IT leaders have over larger health systems.

Next, we dive into Thousand’s experience as a fractional CIO leader for multiple organizations. Despite needing them, many critical access hospitals unfortunately don’t have a CIO. Thousand and I talk about what the fractional CIO work looks like for these hospitals that can’t get a full CIO. 

Then we talk about the biggest challenges Thousand faces as an IT leader. Next, we take a look at the places where IT doesn’t scale while also having workforce shortages. We then debate how to deal with those challenges. Lastly, we conclude this episode with Thousand sharing his advice to other CIOs and aspiring CIOs out there.

Here’s a look at the questions and topics we discuss in this episode:

  • What are some examples of where you have to kind of right-size your approach to IT in rural health?
  • What are some advantages that rural health IT leaders have over larger health systems?
  • I’m sure many listeners are intrigued by fractional CIO work. What’s your experience like as a fractional CIO leader for multiple organizations?
  • Many critical access hospitals don’t have a CIO, but really need one. How’s the fractional CIO work for a hospital that can’t get a full CIO?
  • What’s one of your biggest challenges as an IT leader?
  • Where are places where IT doesn’t scale, but you also may have workforce shortages? How do you deal with that challenge?
  • What advice would you give to other CIOs or aspiring CIOs out there?

Now, without further ado, we’re excited to share with you the next episode of the CIO Podcast by Healthcare IT Today.

We release a new CIO Podcast every ~2 weeks. You can also subscribe to the Healthcare IT Today podcast on any of the following platforms:

NOTE: We’ll be updating the links below as the various podcasting platforms approve the new podcast.  Check back soon to be able to subscribe on your favorite podcast application.

Thanks for listening to the CIO Podcast on Healthcare IT Today and if you enjoy the content we’re sharing, please rate the podcast on your favorite podcasting platform.

Along with the popular podcasting platforms above, you can Subscribe to Healthcare IT Today on YouTube.  Plus, all of the audio and video versions will be made available to stream on HealthcareITToday.com.

We’d love to hear what you think of the podcast and if there are other healthcare CIOs you’d like to see us have on the program. Feel free to share your thoughts and perspectives in the comments of this post, with @techguy on Twitter, or privately on our Contact Us page.

We appreciate you listening!



How to Build a CLI Tool to Auto-Translate OpenAPI Specifications


APIs defined with the OpenAPI Specification make integration easier, but language can still be a barrier. Many API specs, descriptions, and examples are written in a single language, which limits accessibility for global developers.

In this article, you’ll learn how to build a CLI tool that automatically translates an OpenAPI specification into multiple languages, making your API documentation more accessible without maintaining separate versions manually.

What You Will Build

By the end of this tutorial, you'll have built a complete CLI application that:

  • Parses and manipulates OpenAPI/Swagger specs programmatically
  • Integrates with Lingo.dev for context-aware translations
  • Generates a dynamic React viewer with language switching

Prerequisites

  • Intermediate JavaScript/Node.js knowledge
  • Familiarity with React (for the frontend viewer portion)
  • Basic understanding of OpenAPI/Swagger specifications
  • A Lingo.dev account

How it Works

Trans-Spec operates as a three-stage pipeline that takes your OpenAPI specification and produces multilingual documentation with a browsable view.

[Diagram: an overview of how the CLI tool works]

The Process

1. Parse & Extract

  • Reads your OpenAPI spec
  • Identifies translatable content (descriptions, summaries)
  • Preserves technical terms (paths, parameter names, enums)

2. Set up Structure and Translate

  • Creates .trans-spec/i18n/ folder structure
  • Generates i18n.json configuration
  • Calls Lingo.dev to translate content
  • Saves translated specs per language

3. Serve

  • Copies the translated specs to the frontend public directory
  • Generates a dynamic Vite config with your languages
  • Translates the frontend UI using Lingo.dev Compiler
  • Starts the dev server at localhost:3000

Key Feature: Language switching updates both the API docs content AND the viewer UI simultaneously. When you select Spanish, you see Spanish API descriptions with Spanish UI labels.

Incremental Updates: Re-running generate only re-translates changed content and preserves existing translations.
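
One way to picture that incremental check is a content-hash comparison. This is a hypothetical sketch of the idea only, not Lingo.dev’s actual mechanism:

```javascript
import { createHash } from "crypto";

// Hypothetical: hash each translatable string and compare against the hash
// stored after the last run; only entries whose hash changed need to be
// sent for re-translation.
function hashOf(text) {
  return createHash("sha256").update(text).digest("hex");
}

function needsRetranslation(sourceText, storedHash) {
  return hashOf(sourceText) !== storedHash;
}

const stored = hashOf("Returns the weather for a location.");
console.log(needsRetranslation("Returns the weather for a location.", stored)); // false
console.log(needsRetranslation("Returns the current weather.", stored));        // true
```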

Project Initialization and Setup

This project will be a monorepo that contains the CLI and the frontend viewer in a single parent folder.

  1. Create a new folder for your project and initialize a NodeJS project:

    # create a new folder (called trans-spec) and navigate into it
    mkdir trans-spec && cd trans-spec
    
    # Initialize root package.json
    npm init -y
    
  2. Create folders for your CLI logic and frontend viewer:

    mkdir cli viewer
    
  3. Set up your CLI:

    # Setup CLI
    cd cli && npm init -y
    
    # install dependencies
    npm install commander chalk ora
    
    # create necessary directories and files
    mkdir src && touch index.js src/auth.js src/setup.js src/config.js src/translate.js && cd ..
    

    The above initializes a NodeJS application and installs:

    • commander: to handle CLI commands and flags.
    • chalk: to add colors in the terminal (makes output look nice).
    • ora: for spinner/progress indicators while translations are running.

    The command also creates an entry file called index.js and a folder called src. This folder is where most of your CLI logic will live.

  4. Set up your viewer:

    # Setup Viewer
    cd viewer && npm install -g create-vite && npm create vite@latest . -- --template react
    
    # install dependencies
    npm install js-yaml && cd ..
    

    The commands above initialize a React application using Vite and install js-yaml to read and write YAML structures in JavaScript for display.
    If the first command starts a React dev server, you can simply end the process with Ctrl + C.

  5. Finally, open your project in VSCode:

    code .
    

Building a CLI Tool to Translate OpenAPI Specification Files

If you have completed the section above, your cli folder should look like this:

cli/
├── index.js
├── package.json
└── src/
    ├── auth.js
    ├── config.js
    └── setup.js
    └── translate.js

Now, you are ready to start this section.

Step 1. Create the Authentication Flow

  1. In your src/auth.js file, start by importing necessary dependencies:

    import { execSync, spawn } from "child_process";
    import ora from "ora";
    import chalk from "chalk";
    

    The code above imports:

    • execSync, which you can use to run a command and wait for it to finish before moving to the next process,
    • spawn, which you can use to run a command in the background and react to events as they happen,
    • ora for a loading spinner,
    • chalk for colored terminal output.
  2. Next, add a small helper that allows you to pause execution for a given number of milliseconds. This is useful when you need to wait for a response:

    const MAX_RETRIES = 2;
    
    function wait(ms) {
      return new Promise((resolve) => setTimeout(resolve, ms));
    }
    
  3. Now write the function that checks whether the user is already logged in to their Lingo.dev account, since this project uses Lingo.dev for translation:

    function isAuthenticated() {
      try {
        const output = execSync("npx lingo.dev@latest auth 2>&1", {
          encoding: "utf-8",
          stdio: "pipe",
        });
        return output.includes("Authenticated as");
      } catch (err) {
        return false;
      }
    }
    

    The code above runs the Lingo.dev CLI auth command and checks the output for "Authenticated as". This check is because Lingo.dev does not return output like true or false in the auth command; instead, it returns a message like:

    Authenticated as <your-email>
    

    If anything goes wrong while running the command, the code simply returns false.
    NOTE: The isAuthenticated() function uses 2>&1 because Lingo.dev writes the "Authenticated as" output to stderr instead of stdout. execSync only captures stdout by default.

  4. Write a function to trigger login if the user authentication fails:

    async function triggerLogin() {
      return new Promise((resolve, reject) => {
        const login = spawn("npx", ["lingo.dev@latest", "login"], {
          stdio: "inherit",
          shell: true,
        });

        login.on("close", (code) => {
          if (code === 0) {
            console.log(chalk.green("✔ Login complete"));
            resolve();
          } else {
            console.log(chalk.red("✖ Login failed"));
            reject(new Error("Login process exited with code " + code));
          }
        });

        login.on("error", (err) => {
          console.log(chalk.red("✖ Login failed"));
          reject(err);
        });
      });
    }
    

    NOTE: using stdio: 'inherit' lets the login process use the same terminal as your CLI tool, so the user can see what's happening.

  5. Now, create and export a function that will run the full authentication process, using the code we already have:

    export async function checkAuth() {
      const spinner = ora("Checking authentication").start();
    
      // Already authenticated, no need to login
      if (isAuthenticated()) {
        spinner.succeed(chalk.green("Authenticated"));
        return;
      }
    
      spinner.stop();
    
      // Not authenticated, trigger login once
      try {
        await triggerLogin();
      } catch (err) {
        console.log(chalk.red("✖ Login failed: " + err.message));
        process.exit(1);
      }
    
      // After login, wait and retry auth check up to MAX_RETRIES times
      for (let attempt = 1; attempt <= MAX_RETRIES; attempt++) {
        await wait(2000); // wait 2 seconds before checking
    
        if (isAuthenticated()) {
          console.log(chalk.green("✔ Successfully authenticated"));
          return;
        }
    
        if (attempt < MAX_RETRIES) {
          console.log(
            chalk.yellow(
              `Auth check failed. Retrying (${attempt}/${MAX_RETRIES})...`,
            ),
          );
        }
      }
    
      console.log(chalk.red("✖ Authentication failed"));
      console.log(
        chalk.white("Please run: npx lingo.dev@latest login manually and try again."
        ),
      );
      process.exit(1);
    }
    

    The spinner stops before triggerLogin() is called. If it kept running while the login process took over the terminal, the output would be garbled. After login, the function retries the auth check up to two times with a 2-second gap, giving credentials time to be written to disk before giving up.
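
The stderr behavior called out in the note in step 3 is easy to verify in isolation. A minimal sketch, using node -e as a stand-in for the Lingo.dev CLI:

```javascript
import { execSync } from "child_process";

// execSync captures only stdout by default, so anything a CLI writes to
// stderr is invisible to the caller unless it is redirected with 2>&1 first.
const withRedirect = execSync('node -e "console.error(\'to-stderr\')" 2>&1', {
  encoding: "utf-8",
});
const withoutRedirect = execSync('node -e "console.error(\'to-stderr\')"', {
  encoding: "utf-8",
  stdio: "pipe", // keep the child's stderr from leaking to the terminal
});

console.log(JSON.stringify(withRedirect.trim()));    // "to-stderr"
console.log(JSON.stringify(withoutRedirect.trim())); // "" (stderr was not captured)
```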

Step 2: Prepare Your OpenAPI File for Translation

Before your CLI tool can translate anything, it needs to know where your OpenAPI spec file is and what language it's written in.

setup.js handles that initialization step. It takes your existing spec file, creates the .trans-spec folder structure, and copies the file into the right place so the rest of the tool knows where to find it.

If the spec file is in English, it creates a .trans-spec/i18n/en/your-file.yaml and copies your file there. If it is in French, it uses the fr folder instead of en. Later, you will get the language source from the user so your code can know which language code to use.

Copy this code into your src/setup.js file:

import fs from "fs";
import path from "path";
import chalk from "chalk";
import ora from "ora";

const TRANSSPEC_DIR = ".trans-spec";
const I18N_DIR = path.join(TRANSSPEC_DIR, "i18n");

export async function setup(specPath, sourceLanguage) {
  const spinner = ora("Setting up project...").start();

  try {
    const resolvedSpecPath = path.resolve(specPath);

    if (!fs.existsSync(resolvedSpecPath)) {
      spinner.fail(chalk.red(`Spec file not found: ${specPath}`));
      process.exit(1);
    }

    // Use source language folder
    const sourceDir = path.join(I18N_DIR, sourceLanguage);

    if (!fs.existsSync(TRANSSPEC_DIR)) {
      fs.mkdirSync(sourceDir, { recursive: true });
      spinner.text = "Created .trans-spec folder structure";
    }

    // Preserve the original filename
    const originalFilename = path.basename(resolvedSpecPath);
    const destPath = path.join(sourceDir, originalFilename);

    fs.copyFileSync(resolvedSpecPath, destPath);

    spinner.succeed(chalk.green("Project setup complete"));
  } catch (err) {
    spinner.fail(chalk.red("Setup failed: " + err.message));
    process.exit(1);
  }
}

Step 3: Generate the Lingo.dev Configuration

Lingo.dev needs a configuration file to know which languages to translate from and to. src/config.js generates that file and saves it at .trans-spec/i18n.json.

  1. Open your src/config.js file and add your imports:

    import fs from "fs";
    import path from "path";
    import chalk from "chalk";
    import ora from "ora";
    
    const TRANSSPEC_DIR = ".trans-spec";
    const CONFIG_PATH = path.join(TRANSSPEC_DIR, "i18n.json");
    
  2. Write a generateConfig function to generate the configuration Lingo.dev expects:

    export async function generateConfig(languages, source = "en") {
      const spinner = ora("Generating config...").start();
    
      try {
        // Handles comma separated values: "es,fr,de"
        // Parse new languages if provided
        let newTargets = [];
        if (languages) {
          newTargets = languages
            .split(/[\s,]+/)
            .map((lang) => lang.trim())
            .filter(Boolean);
        }
    
        // Check if config already exists
        let existingTargets = [];
        if (fs.existsSync(CONFIG_PATH)) {
          const existingConfig = JSON.parse(fs.readFileSync(CONFIG_PATH, "utf-8"));
          existingTargets = existingConfig.locale?.targets || [];
        }
    
        // Merge: existing + new, remove duplicates
        const allTargets = [...new Set([...existingTargets, ...newTargets])];
    
        if (allTargets.length === 0) {
          spinner.fail(chalk.red("No target languages provided"));
          process.exit(1);
        }
    
        const config = {
          $schema: "https://lingo.dev/schema/i18n.json",
          version: "1.12",
          locale: {
            source: source,
            targets: allTargets,
          },
          buckets: {
            yaml: {
              include: ["i18n/[locale]/*.yaml"],
            },
          },
        };
    
        fs.writeFileSync(CONFIG_PATH, JSON.stringify(config, null, 2));
    
        if (newTargets.length > 0) {
          spinner.succeed(
            chalk.green(`Config updated: ${source} → ${allTargets.join(", ")}`),
          );
        } else {
          spinner.succeed(
            chalk.green(`Using existing config: ${source} → ${allTargets.join(", ")}`),
          );
        }
        return allTargets;
        return allTargets;
      } catch (err) {
        spinner.fail(chalk.red("Config generation failed: " + err.message));
        process.exit(1);
      }
    }
    

    The function accepts a languages string in any reasonable format, such as es,fr. If a configuration file already exists, the new languages are merged with the existing ones rather than overwriting them.
    Wrapping in new Set handles duplicates, so running the command twice with the same languages won't bloat the file.

The buckets field is what Lingo.dev uses to find your files. The [locale] placeholder in i18n/[locale]/*.yaml gets replaced with each target language code at translation time, mapping directly to the folder structure you created in the previous step.
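
To make the mapping concrete, here is the expansion in plain JavaScript. This is illustration only; Lingo.dev performs the expansion itself when it runs:

```javascript
// How the [locale] placeholder in the bucket pattern maps to concrete
// per-language paths inside .trans-spec/.
const pattern = "i18n/[locale]/*.yaml";
const targets = ["es", "fr", "de"];

const expanded = targets.map((locale) => pattern.replace("[locale]", locale));
console.log(expanded);
// [ 'i18n/es/*.yaml', 'i18n/fr/*.yaml', 'i18n/de/*.yaml' ]
```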

You can read the official Lingo.dev documentation for further understanding.

Finally, the function returns the targets array because the index.js file will need it later to tell the user where their translated files are.

Step 4: Call Lingo.dev for Translation

Now that you have created the configuration that Lingo.dev expects, you can perform the actual translation by programmatically calling lingo.dev run.

  1. Open your src/translate.js and add your imports:

    import { spawn } from "child_process";
    import chalk from "chalk";
    import ora from "ora";
    import path from "path";
    
    const TRANSSPEC_DIR = ".trans-spec";
    const MAX_RETRIES = 2;
    
  2. Now add the runTranslation function, which spawns the Lingo.dev CLI and watches the output:

    async function runTranslation() {
      return new Promise((resolve, reject) => {
        let output = "";
        let hasErrors = false;

        // Named "child" to avoid shadowing the global process object
        const child = spawn("npx", ["lingo.dev@latest", "run"], {
          cwd: path.resolve(TRANSSPEC_DIR),
          shell: true,
          stdio: "pipe",
        });

        // Capture stdout
        child.stdout.on("data", (data) => {
          const text = data.toString();
          output += text;
          console.log(text);
          // Check for failure indicators
          if (text.includes("❌") || text.includes("[Failed Files]")) {
            hasErrors = true;
          }
        });

        child.on("close", (code) => {
          if (code !== 0 || hasErrors) {
            reject(new Error("Translation had errors or failures"));
          } else {
            resolve();
          }
        });

        child.on("error", (err) => {
          reject(err);
        });
      });
    }
    

    Notice the cwd option:

    cwd: path.resolve(TRANSSPEC_DIR)
    

    This tells the spawn process to run from inside the .trans-spec/ folder. This is critical because lingo.dev run looks for i18n.json in the current working directory. Without this, it would look in the wrong place and fail.

    The output is piped so the function can scan it in real time. A zero exit code alone isn't enough to confirm success. Lingo.dev can exit cleanly, but still report failed files in the output. So the function also watches for "❌" and "[Failed Files]", and treats those as errors.

  3. Now, add the translate() function that ties it together:

    export async function translate(targets) {
      const spinner = ora("Translating...").start();
      // Stop right away so the spinner doesn't garble Lingo.dev's streamed output
      spinner.stop();

      for (let attempt = 1; attempt <= MAX_RETRIES; attempt++) {
        try {
          await runTranslation();
          console.log(chalk.green("\n✔ Translation complete"));
          return;
        } catch (err) {
          if (attempt < MAX_RETRIES) {
            console.log(
              chalk.yellow(
                `Translation failed. Retrying (${attempt}/${MAX_RETRIES})...`,
              ),
            );
          }
        }
      }

      console.log(chalk.red("Translation failed after 2 attempts."));
      console.log(
        chalk.white("Please try running manually: npx lingo.dev@latest run"),
      );
      process.exit(1);
    }
    

    If the first attempt fails, it retries once more before giving up. On final failure, it exits with a message pointing the user to run the command manually — the same pattern you used in auth.js.
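
The retry loop has the same shape in both files. Factored into a generic helper, it might look like this (a sketch, not part of the tutorial’s code):

```javascript
// Generic retry helper mirroring the pattern in auth.js and translate.js:
// try fn up to maxRetries times, rethrowing the last error on final failure.
async function withRetries(fn, maxRetries = 2) {
  let lastErr;
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
    }
  }
  throw lastErr;
}

// Usage: a task that fails once, then succeeds on the retry.
let calls = 0;
const flaky = async () => {
  calls++;
  if (calls < 2) throw new Error("transient failure");
  return "ok";
};

withRetries(flaky).then((result) => console.log(result, calls)); // ok 2
```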

Step 5
