Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Helping Decision-Makers Say Yes to Kotlin Multiplatform (KMP)

1 Share

This post was written by external contributors from Touchlab.

Justin Mancinelli

Justin Mancinelli is VP of Client Services at Touchlab, where he leads client services strategy and complex technical delivery. He partners with engineering leaders on mobile apps, SDKs, developer tooling, Kotlin Multiplatform, and Compose Multiplatform. With more than 13 years of experience helping software businesses succeed, he focuses on turning product and engineering goals into delivery.

LinkedIn

Samuel Hill

As VP of Engineering at Touchlab, Samuel Hill leads engineering strategy and supports teams building mobile products across Android and iOS. He works with engineering leaders on Kotlin Multiplatform, architecture, development standards, and team growth. With more than 13 years of experience in mobile engineering, he focuses on strong technical delivery and cross-functional collaboration.

LinkedIn

KMP is a strategic platform

In the current competitive landscape, the traditional mobile development model, in which independent, duplicated codebases are maintained for iOS and Android, is no longer a sustainable use of capital. This approach systematically introduces feature lag, technical debt, and a fragmented engineering culture that hinders organizational agility. For leadership, adopting Kotlin Multiplatform (KMP) must be viewed as a fundamental shift in capital allocation for mobile engineering.

KMP is not merely an incremental technical upgrade – it is a strategic platform that enables a unified engineering organization. By sharing high-value business logic while preserving native performance and UI integrity, KMP enables organizations to drastically reduce the total cost of ownership (TCO) of their mobile ecosystem. This transition transforms mobile development from platform-specific silos into a high-velocity engine that accelerates roadmaps, mitigates delivery risks, and secures a competitive advantage. As organizations increasingly integrate AI into their products, Kotlin Multiplatform provides a reliable, JVM-native foundation for building and deploying AI-powered mobile and backend services without introducing additional language or runtime complexity.

Quantifiable metrics for KMP adoption

Understanding the strategic impact of Kotlin Multiplatform for your organization starts with modeling potential cost savings, development velocity improvements, and risk mitigations. The following data, synthesized from enterprise-scale implementations and market leaders, provides an empirical foundation for proposing, budgeting, and planning your KMP adoption initiative.

Improved metrics [1] and business/team impact, by advantage:

  • Code reduction – 40–60% less code, with 80% of logic shared. Impact: dramatic reduction in technical debt and long-term maintenance overhead.
  • Development velocity – 20–40% faster code reviews and 15–30% faster release cycles. Impact: increased bandwidth for senior talent and faster PR throughput.
  • Quality and reliability – 40–60% fewer bugs and 25–40% fewer platform-specific edge cases. Impact: reduced QA cycles and higher customer satisfaction through consistent behavior.
  • Timeline acceleration – 50% faster implementation, with multi-year roadmaps realized in a single quarter. Impact: drastically shortened time-to-market that makes it possible to respond to market shifts in real time and execute strategic pivots under urgent deadlines.

[1] These figures were derived from proprietary and public data gathered from Touchlab clients and community case studies (see the Proven market validation section for example data). Actual results may vary depending on architecture, team structure, and project scope.

Velocity and feature parity

KMP eliminates the feature lag that historically forces businesses to delay launching on the second platform and marketing departments to delay new feature announcements. In traditional siloed development, discrepancies in business logic and implementation speed between iOS and Android teams are inevitable. KMP solves this by enabling a single, verified implementation of business rules that serves both platforms simultaneously.

An engineer can build and test a new feature on one platform. Teams on subsequent platforms then simply hook the existing data models and logic from the shared KMP code up to their native UI. This reuse of groundwork ensures consistency from day one.
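As an illustrative sketch of that pattern (the names and the checkout domain here are hypothetical, not taken from any company mentioned in this article), the shared code might look like this, with each platform keeping only a thin native UI layer:

```kotlin
// commonMain: one verified implementation of the business rules,
// compiled for Android (JVM) and iOS (Kotlin/Native) alike.
data class CartItem(val name: String, val priceCents: Long, val qty: Int)

class CheckoutLogic {
    // Single source of truth for the total; tested once, used everywhere.
    fun total(items: List<CartItem>): Long =
        items.sumOf { it.priceCents * it.qty }

    fun canCheckOut(items: List<CartItem>): Boolean =
        items.isNotEmpty() && total(items) > 0
}

// androidMain / iosMain: the native UIs (Compose, SwiftUI) call the same
// CheckoutLogic; only the presentation layer is duplicated per platform.
```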

Beyond immediate speed, this unified architecture promotes maintainability and de-risks incremental development across platforms. Future requirements, such as a top-down mandate to migrate from one data, analytics, or streaming platform to another, are met faster by building on a stable, shared foundation that supports synchronized launches across the entire user ecosystem.

Organizational risk reduction

Adopting KMP is a primary driver for organizational risk reduction, enforcing a new foundation that prioritizes architectural discipline over the spaghetti code often found in legacy mobile apps. By centralizing core business logic, organizations gain strategic agility that de-risks the technical roadmap. This architectural flexibility allows leadership to pivot across web and mobile ecosystems at a speed impossible when logic is trapped in platform-specific silos, enabling the engineering department to meet sudden market demands.

Consolidating complex calculations and business rules into a single source of truth fundamentally lowers the probability of systemic error. When logic is duplicated across disparate codebases, an organization implicitly accepts a doubled risk of regression and a fractured quality assurance cycle. KMP mitigates this operational hazard by ensuring that a single, verified enhancement or fix propagates across the entire product line, effectively slashing the technical debt and remediation costs that typically compound in traditional multi-platform environments.

Shared logic with KMP naturally mandates a clean separation of concerns, moving the organization away from fragile, UI-entangled code. The clear architecture empowers teams to achieve significantly higher automated test coverage, which removes the fear of the unknown that often plagues legacy systems. As the codebase becomes more predictable and less reliant on manual intervention, the organization achieves a level of stability where innovation can occur without the constant threat of destabilizing critical business functions.

Engineering culture and talent

The shift to KMP directly affects talent retention and internal mobility within the engineering organization. By moving away from platform-specific constraints, KMP allows teams to transition from isolated silos to a unified model where developers function as mobile engineers. This shift creates a more flexible and responsive technical workforce where engineering resources are allocated based on business priorities rather than purely on platform and language expertise.

Architectural alignment simplifies the codebase and clarifies the path to productivity for new hires. By maintaining a single logic layer instead of two separate implementations, organizations typically see a 30–50% reduction in onboarding time. Engineers can focus on mastering a well-structured system that minimizes technical debt and cognitive overhead often found in siloed environments.

Proven market validation

KMP has proven its benefit at world-class organizations that require stability and scale. The following companies have either been Touchlab clients or discussed their data publicly with Touchlab and JetBrains:

  • Bitkey shares 95% of its mobile codebase with KMP and was able to tear down silos so that Android and iOS engineers became mobile engineers, picking up tickets no matter the platform.
  • Blackstone achieved a 50% increase in implementation speed within six months of code consolidation, sharing ~90% of business logic with KMP.
  • Duolingo saved 6–12 engineer-months by leveraging KMP to deliver iOS and web implementations after the initial Android implementation, which took nine months. Adopting KMP and delivering the iOS version of Adventures took five engineer-months; bringing the same KMP codebase to the web took only one and a half.
  • Forbes achieved significant savings in engineering time and effort by consolidating over 80% of logic across platforms, sharing ~90% of business logic in total.
  • Google has been investing in and transitioning to KMP for several years, stating that KMP allows for “flexibility and speed in delivering valuable cross-platform experiences”. The Google Workspace team found that iOS runtime performance and app size with KMP were on par with those of the existing code.
  • Philips effectively halved the time to develop features on both Android and iOS.
  • An information security company re-targeted its mobile app to the web in three weeks for a press conference after a third-party vendor blocked the release of its mobile apps. Thanks to KMP, the already implemented and tested code was easy to call from JavaScript.
  • A national media company built its KMP Identity SDK for use across brand apps on Android, iOS, and web, with a team half the size of that typically allocated for platform-specific projects.
  • A world leader in tabletop gaming accelerated a multi-year mobile roadmap into a single quarter with KMP to meet the needs of explosive growth and demographic shift towards mobile users.

For more stories discussing real-world strategies, integration approaches, and gains from KMP, check out the Kotlin Multiplatform case studies collected by JetBrains.

Strategic recommendation

Kotlin Multiplatform is a future-proof architectural standard developed by JetBrains and supported by Google. It offers a low-risk, high-reward path for organizations looking to modernize their mobile strategy. Most organizations that adopt KMP for shared logic see a measurable ROI within three to six months.

The strategic recommendation is to initiate a pilot project focusing on pure business logic areas, such as calculations, data models, and business rules. With a conservative sharing potential of 75% in these areas, scaling KMP will allow your organization to eliminate redundant effort and transition toward a high-velocity, unified engineering future.
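A pilot along these lines can start with nothing more than a commonMain module of pure logic plus a commonTest suite. The sketch below uses hypothetical names and assumes a standard KMP project layout with the kotlin.test library available:

```kotlin
// commonMain/kotlin/DiscountRules.kt: pure business rules with no
// platform APIs, so this code is fully shareable across targets.
data class Order(val subtotalCents: Long, val itemCount: Int)

fun discountPercent(order: Order): Int = when {
    order.subtotalCents >= 10_000 -> 10  // volume discount
    order.itemCount >= 5          -> 5   // bundle discount
    else                          -> 0
}

// commonTest/kotlin/DiscountRulesTest.kt: the same test runs on every
// target, so the rule is verified once for Android, iOS, and web.
class DiscountRulesTest {
    @kotlin.test.Test
    fun volumeDiscountWins() {
        kotlin.test.assertEquals(10, discountPercent(Order(12_000, 2)))
    }
}
```

Because nothing here touches a platform API, the entire module falls inside the 75%+ sharing potential described above.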

The Touchlab acceleration factor: While the long-term gains of KMP are inherent to the technology, expert guidance from experienced Kotlin Multiplatform practitioners, such as Touchlab, can help minimize the initial learning curve and accelerate adoption. Specialized assistance early in the adoption process prevents the trial-and-error phase that can stall pilot projects, ensuring the first success occurs quickly and the architectural benefits begin compounding immediately. When scaling challenges arise, Touchlab’s tools and experience take your KMP teams to the next level. Find out what Touchlab can do for you at https://touchlab.co.

Read the whole story
alvinashcraft
35 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Markdown + Astro = ❤️


Markdown is a great invention that lets us write less markup. It also handles typographical matters for us, like converting straight apostrophes (') into proper opening or closing quotes (‘ or ’).

Although Astro has built-in support for Markdown via .md files, I’d argue that your Markdown experience can be enhanced in two ways:

  1. MDX
  2. Markdown Component

I’ve covered these in depth in Practical Astro: Content Systems.

We’re going to focus on MDX today.

MDX

MDX is a superset of Markdown. It lets you use components in Markdown and simple JSX in addition to all other Markdown features.

For Astro, you can also use components from any frontend framework that you have installed. So you can do something like:

---
# Frontmatter...
---

import AstroComp from '@/components/AstroComp.astro'
import SvelteComp from '@/components/SvelteComp.svelte'

<AstroComp> ... </AstroComp>
<SvelteComp> ... </SvelteComp>

It can be a great substitute for content-heavy stuff because it lets you write markup like the following.

<div class="card">
  ## Card Title

  Content goes here

  - List
  - Of
  - Items

  Second paragraph
</div>

Astro will convert the MDX into the following HTML:

<div class="card">
  <h2>Card Title</h2>

  <p>Content goes here</p>

  <ul>
    <li>List</li>
    <li>Of</li>
    <li>Items</li>
  </ul>

  <p>Second paragraph</p>
</div>

Notice what I did above:

  • I used ## instead of a full h2 tag.
  • I used - instead of <ul> and <li> to denote lists.
  • I didn’t need any paragraph tags.

Writing the whole thing in HTML directly would have been somewhat of a pain.

Installing MDX

Astro folks have built an integration for MDX so it’s easy-peasy to add it to your project. Just follow these instructions.

Three Main Ways to Use MDX

These methods also work with standard Markdown files.

  1. Import it directly into an Astro file
  2. Through content collections
  3. Through a layout

Import it Directly

The first way is simply to import your MDX file and use it directly as a component.

---
import MDXComp from '../components/MDXComp.mdx'
---

<MDXComp />

Because of this, MDX can kinda function like a partial.

Through Content Collections

First, you feed your MDX into a content collection. Note that you have to add the mdx pattern to your glob here.

// src/content.config.js
import { defineCollection } from 'astro:content';
import { glob } from 'astro/loaders';

const blog = defineCollection({
  loader: glob({ pattern: "**/*.{md,mdx}", base: "./src/blog" }),
});

export const collections = { blog };

Then you retrieve the MDX file from the content collection.

---
import { getEntry, render } from 'astro:content'
const { slug } = Astro.props
const post = await getEntry('blog', slug)
const { Content } = await render(post)
---

<Content />

As you’re doing this, you can pass components into the MDX files so you don’t have to import them individually in every file.

For example, here’s how I would pass the Image component from Splendid Labz into each of my MDX files.

---
import { Image } from '@splendidlabz/astro'
// ...
const { Content } = await render(post)
const components = { Image }
---

<Content {components} />

In my MDX files, I can now use Image without importing it.

<Image src="..." alt="..." />

Use a Layout

Finally, you can add a layout frontmatter in the MDX file.

---
title: Blog Post Title
layout: @/layouts/MDX.astro
---

This layout frontmatter should point to an Astro file.

In that file:

  • You can extract frontmatter properties from Astro.props.content.
  • The MDX content can be rendered with <slot>.
---
import Base from './Base.astro'
const props = Astro.props.content
const { title } = props
---

<Base>
  <h1>{title}</h1>
  <slot />
</Base>

Caveats

Formatting and Linting Fails

ESLint and Prettier don’t format MDX files well, so you’ll end up manually indenting most of your markup.

This is fine for small amounts of markup. But if you have lots of them… then the Markdown Component will be a much better choice.

More on that in another upcoming post.

RSS Issues

The Astro RSS integration doesn’t support MDX files out of the box.

Thankfully, this can be handled easily with Astro containers. I’ll show you how to do this in Practical Astro.

Taking it Further

I’ve been building with Astro for 3+ years, and I kept running into the same friction points on content-heavy sites: blog pages, tag pages, pagination, and folder structures that get messy over time.

So I built Practical Astro: Content Systems, 7 ready-to-use solutions for Astro content workflows (MDX is just one of them). You get both the code and the thinking behind it.

If you want a cleaner, calmer content workflow, check it out.

I also write about Astro Patterns and Using Tailwind + CSS together on my blog. Come by and say hi!


Markdown + Astro = ❤️ originally handwritten and published with love on CSS-Tricks. You should really get the newsletter as well.


Introducing the CLI Generator


Turn any OpenAPI spec or Postman Collection into a fully featured command-line tool

Today, we’re excited to announce that Postman users can now easily generate a fully functioning command-line interface (CLI) application based on their Postman Collections or OpenAPI specifications. We’ve designed our CLI with two main end users in mind: agents (LLMs) and humans.

CLI for Agents

As our world continues to become more AI-centric, CLIs bridge the gap between agents and APIs. These tools allow agents to interact with existing APIs in a myriad of ways to query data or run operations, combining these into any imaginable workflow.

CLIs are a natural way for AI agents to interact with web services. They abstract away the details that LLMs shouldn’t have to worry about – like crafting the right HTTP request, or managing authentication tokens – and let them focus on getting the data they need. CLIs are truly the SDKs for AI agents.

We’ve included several features to help agents leverage our generated CLIs, including:

  • An easy-to-explore CLI that automatically lists which commands and flags are available at each step
  • An automatically generated --help flag for each command
  • Generated SKILL.md files, which agents can read to quickly learn about the whole CLI
  • Support for loading credentials from environment files, so the agent can proceed with fewer prompts

CLI for Humans

By writing a few commands, humans can also leverage CLIs to explore or test their APIs – allowing them to quickly write CI/CD workflows, share commands with others, or even figure out what command they could build next. Similar to agents, they often need help to understand which commands and flags are available. However, there are some features that are most helpful to humans, including:

  • Interactive password prompts for sensitive credentials
  • Automatically formatting JSON responses so they’re more readable
  • Allowing complex request bodies to be read from a file, using the --body-file flag
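For instance, the last of those features might look like this with the made-up posttunes CLI from this post (the create-playlist command and the JSON payload are hypothetical illustrations; --body-file is the flag named above):

```shell
# Keep a complex request body in a file instead of escaping it inline.
cat > new-playlist.json <<'EOF'
{ "name": "Focus Mix", "public": false }
EOF

# Hypothetical command on a generated posttunes CLI (guarded so this
# snippet is a no-op when no such CLI has been generated yet).
if command -v posttunes >/dev/null 2>&1; then
  posttunes library create-playlist --body-file new-playlist.json
fi
```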

In the rest of this post, we’ll showcase some of the main features available in the generated CLIs. To make it easy to illustrate, we’ve generated a CLI based on a made-up music playback API called posttunes.

Getting Started

Generate a CLI in three easy steps. Currently, you can find the CLI generation feature within the same UI as SDK generation:

  1. From the Postman App, right-click on a Collection or API Specification → go to the “More” section → click on “Generate SDK”.
  2. Alternatively, click on the plus (+) button on the left navigation bar and choose “SDK”.
  3. From the Generate SDKs screen, choose the CLI option and then click “Generate SDK”.

Alternatively, if you wish to generate it from the Postman CLI, use the following command:

postman sdk generate <collectionId / specId> -l cli

(Where collectionId / specId are the Postman collection / specification identifiers)

Product Overview

The generated CLI will include a command for every single endpoint in your API. For OpenAPI specs, commands are organized around your URL hierarchy (or, alternatively, OpenAPI tags); for Collections, they are organized around your folder structure.

Each command automatically includes some helpful hints on how to run it. For example, we will see the following response when running the generated posttunes CLI:

> posttunes

posttunes CLI

Usage:
  posttunes [command]

Available Commands:
  albums      Get information about albums
  artists     Get information about artists
  config      Manage posttunes CLI configuration
  help        Help about any command
  library
  player
  setup-auth  Configure authentication credentials
  users

Flags:
  -h, --help   help for posttunes

Use "posttunes [command] --help" for more information about a command.

Calling the albums command will show you the available operations:

> posttunes albums

Usage:
  posttunes albums [command]

Available Commands:
  get                      Get Album
  get-tracks               Get Album Tracks
  get-users-saved-albums   Get User's Saved Albums

Flags:
  -h, --help   help for albums

Use "posttunes albums [command] --help" for more information about a command.

Running a specific operation with the --help flag shows users which parameters are required and which are optional:

> posttunes albums get-tracks --help

Get Album Tracks

Usage:
  posttunes albums get-tracks [flags]

Flags:
  -h, --help            help for get-tracks
      --id string       The posttunes ID of the album.
      --limit int       The maximum number of items to return. Default: 10. Minimum: 1. Maximum: 50.
      --offset int      The index of the first item to return. Default: 0 (the first item). Use with limit to get the next set of items.

Authentication

Currently, our CLI generator supports the following authentication mechanisms:

  1. Basic Authentication (username and password)
  2. API Key
  3. OAuth 2.0 (Client Credentials and Authorization Code)

To set up your authentication, simply run the setup-auth command and it will present you with whatever authentication options are available:

> posttunes setup-auth

Configure authentication credentials

Usage:
  posttunes setup-auth [command]

Available Commands:
  oauth       Configure OAuth authentication credentials

Running the oauth authorization-code command will result in the following flow:

> posttunes setup-auth oauth authorization-code

Visit this URL to authorize:
https://accounts.posttunes.com/authorize?client_id=...&redirect_uri=http%3A%2F%2F127.0.0.1%3A8777%2Fcallback&response_type=code&state=...

Waiting for authorization...
Authorization successful!

The above flow will redirect you to login and authorize the user with posttunes in a browser, resulting in an access token you can use to call other endpoints. Note that not all authentication flows require a browser prompt (authorization code is one of the few that does).

A Skills Directory for AI Agents

One of the design goals for the CLI generator was to make the resulting tool work as well for AI agents as it does for humans. To that end, every generated CLI includes a .claude/skills/ directory — a set of structured Markdown documents that describe every command in a machine-readable format.

.claude/
└── skills/
    ├── albums/
    │   ├── SKILL.md            ← describes the `albums` command group
    │   ├── get/
    │   │   └── SKILL.md        ← describes `posttunes albums get`
    │   └── get-tracks/
    │       └── SKILL.md        ← describes `posttunes albums get-tracks`
    └── artists/
        ├── SKILL.md
        └── ...

Although an AI agent is able to learn about the CLI by running it, having the Skills documentation has its advantages. Organizing the Skills in small documents (around each operation) allows the LLM to progressively load this information in small batches, and only when needed. This can save on tokens and time, preventing the agent from having to learn about all CLI commands at once.

Get started today

Check out the CLI Generator Documentation for more information on getting started. Our team is excited to see what CLIs you will build!

The post Introducing the CLI Generator appeared first on Postman Blog.


Guidance Counselor 2.0 with David McCarter

Join Taylor Desseyn and David McCarter live on the Guidance Counselor 2.0 podcast on April 21, 2026, at 9:30 AM CST. They'll discuss strategies for success in the tech job market, insights from McCarter's book "Rock Your Career," and provide valuable advice for preparing for technical interviews. Don't miss it!

Sanitizing Data in Document Pipelines: A Practical Approach with TX Text Control in C# .NET

This article explores the importance of data sanitization in document processing pipelines and explains how to use TX Text Control effectively to sanitize data in C# .NET applications. Additionally, we will discuss common challenges associated with handling user-generated content and offer practical solutions to help you maintain the integrity and security of your document processing workflows.


Eldert Grootenboer on Reliability in Azure Messaging


Episode 899

Eldert Grootenboer on Reliability in Azure Messaging

Azure Service Bus PM Eldert Grootenboer talks about new features in Service Bus and Event Hubs that add redundancy and reliability to these queueing services.

He describes the configuration options of geo-replication and the trade-offs involved to help you decide which configuration to select.
