How to Build a CLI Tool to Auto-Translate OpenAPI Specifications


APIs defined with the OpenAPI Specification make integration easier, but language can still be a barrier. Many API specs, descriptions, and examples are written in a single language, which limits accessibility for global developers.

In this article, you’ll learn how to build a CLI tool that automatically translates an OpenAPI specification into multiple languages, making your API documentation more accessible without maintaining separate versions manually.

What You Will Build

By the end of this tutorial, you'll have built a complete CLI application that:

  • Parses and manipulates OpenAPI/Swagger specs programmatically
  • Integrates with Lingo.dev for context-aware translations
  • Generates a dynamic React viewer with language switching

Prerequisites

  • Intermediate JavaScript/Node.js knowledge
  • Familiarity with React (for the frontend viewer portion)
  • Basic understanding of OpenAPI/Swagger specifications
  • A Lingo.dev account

How it Works

Trans-Spec operates as a three-stage pipeline that takes your OpenAPI specification and produces multilingual documentation with a browsable view.

[Diagram: an overview of how the CLI tool works]

The Process

1. Parse & Extract

  • Reads your OpenAPI spec
  • Identifies translatable content (descriptions, summaries)
  • Preserves technical terms (paths, parameter names, enums)
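For example, in a spec fragment like the one below, only the human-readable values would be translated; everything structural stays untouched (the endpoint shown is purely illustrative):

```yaml
paths:
  /pets/{petId}:                        # preserved (path)
    get:
      summary: Get a pet by ID          # translated
      description: Returns a single pet.  # translated
      parameters:
        - name: petId                   # preserved (parameter name)
          in: path
          required: true
          schema:
            type: string                # preserved (schema keyword)
```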

2. Set up Structure and Translate

  • Creates .trans-spec/i18n/ folder structure
  • Generates i18n.json configuration
  • Calls Lingo.dev to translate content
  • Saves translated specs per language

3. Serve

  • Copies the translated specs to the frontend public directory
  • Generates a dynamic Vite config with your languages
  • Translates the frontend UI using Lingo.dev Compiler
  • Starts the dev server at localhost:3000

Key Feature: Language switching updates both the API docs content AND the viewer UI simultaneously. When you select Spanish, you see Spanish API descriptions with Spanish UI labels.

Incremental Updates: Re-running generate only re-translates changed content and preserves existing translations.

Project Initialization and Setup

This project will be a monorepo that contains the CLI and the frontend viewer in a single parent folder.

  1. Create a new folder for your project and initialize a Node.js project:

    # create a new folder (called trans-spec) and navigate into it
    mkdir trans-spec && cd trans-spec
    
    # Initialize root package.json
    npm init -y
    
  2. Create folders for your CLI logic and frontend viewer:

    mkdir cli viewer
    
  3. Set up your CLI:

    # Setup CLI
    cd cli && npm init -y
    
    # install dependencies
    npm install commander chalk ora
    
    # create necessary directories and files
    mkdir src && touch index.js src/auth.js src/setup.js src/config.js src/translate.js && cd ..
    

    The commands above initialize a Node.js application and install:

    • commander: to handle CLI commands and flags.
    • chalk: to add colors in the terminal (makes output look nice).
    • ora: for spinner/progress indicators while translations are running.

    The command also creates an entry file called index.js and a folder called src. This folder is where most of your CLI logic will live.

  4. Set up your viewer:

    # Setup Viewer
    cd viewer && npm create vite@latest . -- --template react
    
    # install dependencies
    npm install js-yaml && cd ..
    

    The commands above initialize a React application using Vite and install js-yaml to read and write YAML structures in JavaScript for display.
    If the first command starts a dev server, you can simply end the process with Ctrl + C.

  5. Finally, open your project in VSCode:

    code .
    

Building a CLI Tool to Translate OpenAPI Specification Files

If you have completed the section above, your cli folder should look like this:

cli/
├── index.js
├── package.json
└── src/
    ├── auth.js
    ├── config.js
    ├── setup.js
    └── translate.js

Now, you are ready to start this section.

Step 1. Create the Authentication Flow

  1. In your src/auth.js file, start by importing necessary dependencies:

    import { execSync, spawn } from "child_process";
    import ora from "ora";
    import chalk from "chalk";
    

    The code above imports:

    • execSync, which you can use to run a command and wait for it to finish before moving to the next process,
    • spawn, which you can use to run a command in the background and react to events as they happen,
    • ora for a loading spinner,
    • chalk for colored terminal output.
  2. Next, add a small helper that allows you to pause execution for a given number of milliseconds. This is useful when you need to wait for a response:

    const MAX_RETRIES = 2;
    
    function wait(ms) {
      return new Promise((resolve) => setTimeout(resolve, ms));
    }
    
  3. Now write the function that checks whether the user is already logged in to their Lingo.dev account, since this project uses Lingo.dev for translation:

    function isAuthenticated() {
      try {
        const output = execSync("npx lingo.dev@latest auth 2>&1", {
          encoding: "utf-8",
          stdio: "pipe",
        });
        return output.includes("Authenticated as");
      } catch (err) {
        return false;
      }
    }
    

    The code above runs the Lingo.dev CLI auth command and checks the output for "Authenticated as". This check is needed because the auth command does not output a simple true or false; instead, it prints a message like:

    Authenticated as <your-email>
    

    If anything goes wrong while running the command, the code simply returns false.
    NOTE: The isAuthenticated() function uses 2>&1 because Lingo.dev writes the "Authenticated as" output to stderr instead of stdout. execSync only captures stdout by default.
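You can see the effect of that redirection with a small standalone check, using `node -e` as a stand-in for the Lingo.dev CLI:

```javascript
import { execSync } from "child_process";

// Stand-in for `npx lingo.dev@latest auth`: a command that prints its
// status line to stderr, which execSync would not capture on its own.
const cmd = `node -e "console.error('Authenticated as dev@example.com')" 2>&1`;

// With 2>&1, the shell merges stderr into stdout, so execSync sees it.
const output = execSync(cmd, { encoding: "utf-8", stdio: "pipe" });
const authed = output.includes("Authenticated as");
```

Drop the `2>&1` from the command and `output` comes back empty, which is exactly the failure mode the note above describes.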

  4. Write a function to trigger login if the user authentication fails:

    async function triggerLogin() {
      return new Promise((resolve, reject) => {
        const login = spawn("npx", ["lingo.dev@latest", "login"], {
          stdio: "inherit",
          shell: true,
        });
    
        login.on("close", (code) => {
          if (code === 0) {
            console.log(chalk.green("✔ Login complete"));
            resolve();
          } else {
            console.log(chalk.red("✖ Login failed"));
            reject(new Error("Login process exited with code " + code));
          }
        });
    
        login.on("error", (err) => {
          console.log(chalk.red("✖ Login failed"));
          reject(err);
        });
      });
    }
    

    NOTE: using stdio: 'inherit' lets the login process use the same terminal as your CLI tool, so the user can see what's happening.

  5. Now, create and export a function that will run the full authentication process, using the code we already have:

    export async function checkAuth() {
      const spinner = ora("Checking authentication").start();
    
      // Already authenticated, no need to login
      if (isAuthenticated()) {
        spinner.succeed(chalk.green("Authenticated"));
        return;
      }
    
      spinner.stop();
    
      // Not authenticated, trigger login once
      try {
        await triggerLogin();
      } catch (err) {
        console.log(chalk.red("✖ Login failed: " + err.message));
        process.exit(1);
      }
    
      // After login, wait and retry auth check up to MAX_RETRIES times
      for (let attempt = 1; attempt <= MAX_RETRIES; attempt++) {
        await wait(2000); // wait 2 seconds before checking
    
        if (isAuthenticated()) {
          console.log(chalk.green("✔ Successfully authenticated"));
          return;
        }
    
        if (attempt < MAX_RETRIES) {
          console.log(
            chalk.yellow(
              `Auth check failed. Retrying (${attempt}/${MAX_RETRIES})...`,
            ),
          );
        }
      }
    
      console.log(chalk.red("✖ Authentication failed"));
      console.log(
        chalk.white("Please run npx lingo.dev@latest login manually and try again."),
      );
      process.exit(1);
    }
    

    The spinner stops before triggerLogin() is called. If it kept running while the login process took over the terminal, the output would be garbled. After login, the function retries the auth check up to two times with a 2-second gap, giving credentials time to be written to disk before giving up.
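The wait-then-retry pattern can be exercised in isolation. In this sketch the auth check is stubbed with a counter so it runs standalone — `flaky` is a hypothetical stand-in for `isAuthenticated()`:

```javascript
function wait(ms) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

// Generic form of the loop in checkAuth(): wait, re-check, and give up
// after maxRetries attempts.
async function retryCheck(check, maxRetries, delayMs) {
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    await wait(delayMs);
    if (check()) return true;
  }
  return false;
}

// Stub: fails on the first call, succeeds on the second.
let calls = 0;
const flaky = () => ++calls >= 2;

const ok = await retryCheck(flaky, 2, 10);
```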

Step 2: Prepare Your OpenAPI File for Translation

Before your CLI tool can translate anything, it needs to know where your OpenAPI spec file is and what language it's written in.

setup.js handles that initialization step. It takes your existing spec file, creates the .trans-spec folder structure, and copies the file into the right place so the rest of the tool knows where to find it.

If the spec file is in English, it creates a .trans-spec/i18n/en/ folder and copies your file there (for example, .trans-spec/i18n/en/your-file.yaml). If it is in French, it uses an fr folder instead of en. Later, you will collect the source language from the user so your code knows which language code to use.

Copy this code into your src/setup.js file:

import fs from "fs";
import path from "path";
import chalk from "chalk";
import ora from "ora";

const TRANSSPEC_DIR = ".trans-spec";
const I18N_DIR = path.join(TRANSSPEC_DIR, "i18n");

export async function setup(specPath, sourceLanguage) {
  const spinner = ora("Setting up project...").start();

  try {
    const resolvedSpecPath = path.resolve(specPath);

    if (!fs.existsSync(resolvedSpecPath)) {
      spinner.fail(chalk.red(`Spec file not found: ${specPath}`));
      process.exit(1);
    }

    // Use source language folder
    const sourceDir = path.join(I18N_DIR, sourceLanguage);

    // Create the language folder if needed (also handles re-runs with a new language)
    if (!fs.existsSync(sourceDir)) {
      fs.mkdirSync(sourceDir, { recursive: true });
      spinner.text = "Created .trans-spec folder structure";
    }

    // Preserve the original filename
    const originalFilename = path.basename(resolvedSpecPath);
    const destPath = path.join(sourceDir, originalFilename);

    fs.copyFileSync(resolvedSpecPath, destPath);

    spinner.succeed(chalk.green("Project setup complete"));
  } catch (err) {
    spinner.fail(chalk.red("Setup failed: " + err.message));
    process.exit(1);
  }
}
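To make the destination logic concrete, here is the same path derivation in isolation (petstore.yaml is just an example filename):

```javascript
import path from "path";

// How setup() maps (specPath, sourceLanguage) to a destination inside
// the .trans-spec folder structure.
function destinationFor(specPath, sourceLanguage) {
  const filename = path.basename(specPath); // preserve the original filename
  return path.join(".trans-spec", "i18n", sourceLanguage, filename);
}

const dest = destinationFor("./api/petstore.yaml", "en");
// On POSIX systems: ".trans-spec/i18n/en/petstore.yaml"
```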

Step 3: Generate the Lingo.dev Configuration

Lingo.dev needs a configuration file to know which languages to translate from and to. src/config.js generates that file and saves it at .trans-spec/i18n.json.

  1. Open your src/config.js file and add your imports:

    import fs from "fs";
    import path from "path";
    import chalk from "chalk";
    import ora from "ora";
    
    const TRANSSPEC_DIR = ".trans-spec";
    const CONFIG_PATH = path.join(TRANSSPEC_DIR, "i18n.json");
    
  2. Write a generateConfig function to generate the configuration Lingo.dev expects:

    export async function generateConfig(languages, source = "en") {
      const spinner = ora("Generating config...").start();
    
      try {
        // Handles comma separated values: "es,fr,de"
        // Parse new languages if provided
        let newTargets = [];
        if (languages) {
          newTargets = languages
            .split(/[\s,]+/)
            .map((lang) => lang.trim())
            .filter(Boolean);
        }
    
        // Check if config already exists
        let existingTargets = [];
        if (fs.existsSync(CONFIG_PATH)) {
          const existingConfig = JSON.parse(fs.readFileSync(CONFIG_PATH, "utf-8"));
          existingTargets = existingConfig.locale?.targets || [];
        }
    
        // Merge: existing + new, remove duplicates
        const allTargets = [...new Set([...existingTargets, ...newTargets])];
    
        if (allTargets.length === 0) {
          spinner.fail(chalk.red("No target languages provided"));
          process.exit(1);
        }
    
        const config = {
          $schema: "https://lingo.dev/schema/i18n.json",
          version: "1.12",
          locale: {
            source: source,
            targets: allTargets,
          },
          buckets: {
            yaml: {
              include: ["i18n/[locale]/*.yaml"],
            },
          },
        };
    
        fs.writeFileSync(CONFIG_PATH, JSON.stringify(config, null, 2));
    
        if (newTargets.length > 0) {
          spinner.succeed(
            chalk.green(`Config updated: ${source} → ${allTargets.join(", ")}`),
          );
        } else {
          spinner.succeed(
            chalk.green(`Using existing config: ${source} → ${allTargets.join(", ")}`),
          );
        }
        return allTargets;
      } catch (err) {
        spinner.fail(chalk.red("Config generation failed: " + err.message));
        process.exit(1);
      }
    }
    

    The function accepts a languages string in any reasonable format, such as es,fr. If a configuration file already exists, the new languages are merged with the existing ones rather than overwriting them.
    Wrapping in new Set handles duplicates, so running the command twice with the same languages won't bloat the file.
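The merge-and-dedupe step is just a Set union over the two arrays — the same expression generateConfig() uses:

```javascript
// Merging existing targets with newly requested ones, dropping duplicates.
const existingTargets = ["es", "fr"];
const newTargets = ["fr", "de"];
const allTargets = [...new Set([...existingTargets, ...newTargets])];
// → ["es", "fr", "de"]
```

Because Set preserves insertion order, existing languages keep their position and only genuinely new codes are appended.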

The buckets field is what Lingo.dev uses to find your files. The [locale] placeholder in i18n/[locale]/*.yaml gets replaced with each target language code at translation time, mapping directly to the folder structure you created in the previous step.
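For example, given an English source and the targets es and fr, the generated .trans-spec/i18n.json would look like this:

```json
{
  "$schema": "https://lingo.dev/schema/i18n.json",
  "version": "1.12",
  "locale": {
    "source": "en",
    "targets": ["es", "fr"]
  },
  "buckets": {
    "yaml": {
      "include": ["i18n/[locale]/*.yaml"]
    }
  }
}
```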

You can read the official Lingo.dev documentation for more details.

Finally, the function returns the targets array because the index.js file will need it later to tell the user where their translated files are.

Step 4: Call Lingo.dev for Translation

Now that you have created the configuration that Lingo.dev expects, you can perform the actual translation by programmatically calling lingo.dev run.

  1. Open your src/translate.js and add your imports:

    import { spawn } from "child_process";
    import chalk from "chalk";
    import ora from "ora";
    import path from "path";
    
    const TRANSSPEC_DIR = ".trans-spec";
    const MAX_RETRIES = 2;
    
  2. Now add the runTranslation function, which spawns the Lingo.dev CLI and watches the output:

    async function runTranslation() {
      return new Promise((resolve, reject) => {
        let output = "";
        let hasErrors = false;
    
        // Named `child` to avoid shadowing the global `process` object
        const child = spawn("npx", ["lingo.dev@latest", "run"], {
          cwd: path.resolve(TRANSSPEC_DIR),
          shell: true,
          stdio: "pipe",
        });
    
        // Capture stdout
        child.stdout.on("data", (data) => {
          const text = data.toString();
          output += text;
          console.log(text);
          // Check for failure indicators
          if (text.includes("❌") || text.includes("[Failed Files]")) {
            hasErrors = true;
          }
        });
    
        child.on("close", (code) => {
          if (code !== 0 || hasErrors) {
            reject(new Error("Translation had errors or failures"));
          } else {
            resolve();
          }
        });
    
        child.on("error", (err) => {
          reject(err);
        });
      });
    }
    

    Notice the cwd option:

    cwd: path.resolve(TRANSSPEC_DIR)
    

    This tells the spawn process to run from inside the .trans-spec/ folder. This is critical because lingo.dev run looks for i18n.json in the current working directory. Without this, it would look in the wrong place and fail.

    The output is piped so the function can scan it in real time. A zero exit code alone isn't enough to confirm success. Lingo.dev can exit cleanly, but still report failed files in the output. So the function also watches for "❌" and "[Failed Files]", and treats those as errors.
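The failure check itself is a plain substring scan, which you can verify against sample chunks of output (the file names here are illustrative):

```javascript
// The same failure test runTranslation() applies to each streamed chunk.
function looksFailed(text) {
  return text.includes("❌") || text.includes("[Failed Files]");
}

const cleanChunk = "✔ i18n/es/petstore.yaml translated"; // sample success line
const failedChunk = "[Failed Files]\n- i18n/de/petstore.yaml"; // sample failure report
```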

  3. Now, add the translate() function that ties it together:

    export async function translate(targets) {
      const spinner = ora("Translating...").start();
      // Stop immediately: lingo.dev streams its own output to the terminal
      spinner.stop();
    
      for (let attempt = 1; attempt <= MAX_RETRIES; attempt++) {
        try {
          await runTranslation();
          console.log(chalk.green("\n✔ Translation complete"));
          return;
        } catch (err) {
          if (attempt < MAX_RETRIES) {
            console.log(
              chalk.yellow(
                `Translation failed. Retrying (${attempt}/${MAX_RETRIES})...`,
              ),
            );
          }
        }
      }
    
      console.log(chalk.red(`Translation failed after ${MAX_RETRIES} attempts.`));
      console.log(
        chalk.white("Please try running manually: npx lingo.dev@latest run"),
      );
      process.exit(1);
    }
    

    If the first attempt fails, it retries once more before giving up. On final failure, it exits with a message pointing the user to run the command manually — the same pattern you used in auth.js.

Step 5

Read the whole story
alvinashcraft
18 seconds ago
reply
Pennsylvania, USA
Share this story
Delete

AI Agent and Copilot Podcast: Cisco Engineering Leader on AI’s Impact in Product Support


Welcome to this AI Agent & Copilot Podcast, where we analyze the opportunities, impact, and outcomes that are possible with AI.

In this episode, I speak with Nik Kale, a principal engineer with Cisco and the architect of an AI-powered system — called AI Support Fabric — that’s used in product support inside Cisco and by customers of the tech giant.

Highlights

AI Support Fabric Background (02:24)

AI Support Fabric powers in-product guidance, AI-assisted support, and human escalation workflows within Cisco products. Customers often navigate multiple tabs and portals to find answers, leading to a disconnected experience. AI Support Fabric aims to provide a more connected experience across the entire product ecosystem, moving from reactive to proactive support.

Implementation and Results of AI Support Fabric (04:50)

AI Support Fabric runs in Cisco security and enterprise environments. It has over 200,000 unique users and 15,000 unique customers engaging weekly. The system brings AI and human intelligence directly into the product, providing personalized and predictive support. The in-product layer ensures customers receive targeted remediation content and guidance, reducing noise. Examples include handling zero-day vulnerabilities by pushing targeted remediation content directly to affected customers.

Unified Data Foundation (09:49)

AI Support Fabric is built on a unified data foundation called Tron, which acts as a single source of truth; Tron ingests millions of customer interactions, categorizing them into actionable outcomes like defects or documentation issues. The Digital Intellectual Capital Ecosystem (DICE) distills knowledge from years of support operations into reusable content, enabling omni-channel delivery. The principle of “build once, deliver everywhere” ensures content is reusable across various customer interaction channels.

AI Agent & Copilot Summit is an AI-first event to define opportunities, impact, and outcomes with Microsoft Copilot and agents. Building on its 2025 success, the 2026 event takes place March 17-19 in San Diego. Get more details.

Layers of AI Support (12:34)

AI Support Fabric consists of three layers: proactive guidance, AI assistant, and human escalation. The proactive guidance layer uses the Cisco Digital Adoption Platform to surface relevant guidance at the moment of friction. The AI assistant is a multi-agent system that coordinates at machine speeds, acting like a task force for complex issues. The human escalation layer packages all relevant diagnostic information for human engineers to resolve complex issues.

Human Escalation and Safety in AI Systems (18:50)

Human escalation is especially important in security products. The human escalation layer treats escalation as a first-class feature, ensuring AI recommendations are validated and logged. The system reduces mean time to resolution by 15% to 20%, saving time and effort for both customers and engineers.

Customer Outcomes and ROI (21:38)

ROI is framed across three dimensions: resolution speed, knowledge leverage, and shift from reactive to proactive support. Resolution speed is improved by providing contextual health and reducing initial calls for information. Knowledge leverage multiplies the value of institutional support knowledge, making it available when needed. The shift from reactive to proactive support prevents issues before they become problems.

The post AI Agent and Copilot Podcast: Cisco Engineering Leader on AI’s Impact in Product Support appeared first on Cloud Wars.


How AI agents could destroy the economy

Citrini Research imagines a report from two years in the future, in which unemployment has doubled and the total value of the stock market has fallen by more than a third.

Inside Microsoft’s big Xbox leadership shake-up


Xbox fans had been anticipating the retirement of Microsoft Gaming CEO Phil Spencer for years, but what most hadn't expected was the departure of Xbox president Sarah Bond too. For many outside the company, Bond seemed like Spencer's natural successor, a deputy of sorts.

Microsoft CEO Satya Nadella and Microsoft CFO Amy Hood clearly didn't agree.

Instead of picking Bond for the role, Microsoft promoted Asha Sharma, a former Microsoft AI executive, to the top of Xbox. The decision to overlook Bond might have surprised many Xbox fans, but for the more than a dozen current and former Microsoft employees I've been speaking to, it's felt inev …

Read the full story at The Verge.


AI Agent & Copilot Podcast: Stoneridge Software CEO Eric Newell on Building Secure AI Strategies


In this episode of the AI Agent & Copilot Podcast, John Siefert is joined by Eric Newell, CEO, Stoneridge Software, who details his session at AI Agent & Copilot Summit NA 2026, which will focus on how organizations can build a productive AI strategy that moves beyond experimentation.

Key Takeaways

  • Session overview: Newell will be leading a session as part of the M365 & Work IQ masterclass, “Executive’s Guide to Rolling Out M365 Copilot.” The session will focus on how organizations can move beyond AI experimentation to build a secure and productive AI strategy. “AI is incredibly powerful,” he explains, “But you need to just make sure that you’re set up to take advantage of it, and then you build some organizational capacity to do it.”
  • AI executive briefings: For customers and other leaders, Newell shares executive-level AI education and practical guidance, grounding other leaders in what AI, LLMs, and Microsoft’s tools can do for productivity. He notes that some of these learnings will be a part of his session at the event.
  • Final thoughts: In closing, Newell adds that he’s looking forward to his session and hopes attendees bring questions focused on practical guidance.


The post AI Agent & Copilot Podcast: Stoneridge Software CEO Eric Newell on Building Secure AI Strategies appeared first on Cloud Wars.


Defense against uploads: Q&A with OSS file scanner, pompelmi

API and network traffic get all the press, but some folks are still trying to build a better upload scanner.