CSS Gap Decorations Level 1 Updated Working Draft

The CSS Working Group has published an updated Working Draft of CSS Gap Decorations Level 1.

This module builds upon existing support for column rules in multicolumn layout. It extends support for gap decorations to other container layout types such as grid and flex. It also introduces several new properties to support decorations in the row direction, as well as additional controls such as styling and placement variations.
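
For example, under the draft a grid container could draw rules in both gap directions. A rough sketch using property names from the Working Draft (the syntax may still change before the spec stabilizes):

```css
.grid {
  display: grid;
  gap: 1em;
  /* column-rule now applies beyond multicolumn layout: */
  column-rule: 1px solid gray;
  /* new row-direction counterpart introduced by this module: */
  row-rule: 1px dotted gray;
}
```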

This updated draft includes one major change and two new major features since the FPWD:

  • The former -outset properties are now -inset, with positive/negative semantics adjusted accordingly.
  • Authors can now specify different decoration offsets for edges vs. interior intersections, and start vs. end sides.
  • Authors can now control visibility of gap decorations next to empty areas using the *-rule-visibility-items properties.

A number of smaller changes based on author feedback and CSSWG discussion are also included. A complete list can be found in the Changes section.

Please send feedback by either filing an issue in GitHub (preferable) or sending mail to the (archived) public mailing list www-style@w3.org with the spec code ([css-gaps-1]) and your comment topic in the subject line. (Alternatively, you can email one of the editors and ask them to forward your comment.)


Announcing TypeScript 6.0 RC

Today we are excited to announce the Release Candidate (RC) of TypeScript 6.0! To get started using the RC, you can get it through npm with the following command:

npm install -D typescript@rc

TypeScript 6.0 is a unique release in that we intend for it to be the last release based on the current JavaScript codebase. As announced last year (with recent updates here), we are working on a new codebase for the TypeScript compiler and language service written in Go that takes advantage of the speed of native code and shared-memory multi-threading. This new codebase will be the foundation of TypeScript 7.0 and beyond. TypeScript 6.0 will be the immediate precursor to that release, and in many ways it will act as the bridge between TypeScript 5.9 and 7.0. As such, most changes in TypeScript 6.0 are meant to help align and prepare for adopting TypeScript 7.0.

With that said, there are some new features and improvements that are not just about alignment. Let’s take a look at some of the highlights of this release, followed by a more detailed look at what’s changing for 7.0 and how to prepare for it.

What’s New Since the Beta?

Since TypeScript 6.0 beta, we have made a few noteworthy changes – mostly to align with the behavior of TypeScript 7.0.

One adjustment is in type-checking for function expressions in generic calls, especially those occurring in generic JSX expressions (see this pull request). This will typically catch more bugs in existing code, though you may find that some generic calls may need an explicit type argument.

We have also extended our deprecation of import assertion syntax (i.e. import ... assert {...}) to import() calls like import(..., { assert: {...} }).

Finally, we have updated the DOM types to reflect the latest web standards, including some adjustments to the Temporal APIs as well.

Less Context-Sensitivity on this-less Functions

When parameters don’t have explicit types written out, TypeScript can usually infer them based on an expected type, or even through other arguments in the same function call.

declare function callIt<T>(obj: {
    produce: (x: number) => T,
    consume: (y: T) => void,
}): void;

// Works, no issues.
callIt({
    produce: (x: number) => x * 2,
    consume: y => y.toFixed(),
});

// Works, no issues even though the order of the properties is flipped.
callIt({
    consume: y => y.toFixed(),
    produce: (x: number) => x * 2,
});

Here, TypeScript can infer the type of y in the consume function based on the inferred T from the produce function, regardless of the order of the properties. But what about if these functions were written using method syntax instead of arrow function syntax?

declare function callIt<T>(obj: {
    produce: (x: number) => T,
    consume: (y: T) => void,
}): void;

// Works fine, `x` is inferred to be a number.
callIt({
    produce(x: number) { return x * 2; },
    consume(y) { return y.toFixed(); },
});

callIt({
    consume(y) { return y.toFixed(); },
    //                  ~
    // error: 'y' is of type 'unknown'.

    produce(x: number) { return x * 2; },
});

Strangely enough, the second call to callIt results in an error because TypeScript is not able to infer the type of y in the consume method. What’s happening here is that when TypeScript is trying to find candidates for T, it will first skip over functions whose parameters don’t have explicit types. It does this because certain functions may need the inferred type of T to be correctly checked – in our case, we need to know the type of T to analyze our consume function.

These functions are called contextually sensitive functions – basically, functions that have parameters without explicit types. Eventually the type system will need to figure out types for these parameters – but this is a bit at odds with how inference works in generic functions because the two "pull" on types in different directions.

function callFunc<T>(callback: (x: T) => void, value: T) {
    return callback(value);
}

callFunc(x => x.toFixed(), 42);
//       ^
// We need to figure out the type of `x` here,
// but we also need to figure out the type of `T` to check the callback.

To solve this, TypeScript skips over contextually sensitive functions during type argument inference, and instead checks and infers from other arguments first. If skipping over contextually sensitive functions doesn’t work, inference just continues across any unchecked arguments, going left-to-right in the argument list. In the example immediately above, TypeScript will skip over the callback during inference for T, but will then look at the second argument, 42, and infer that T is number. Then, when it comes back to check the callback, it will have a contextual type of (x: number) => void, which allows it to infer that x is a number as well.

So what’s going on in our earlier examples?

// Arrow syntax - no errors.
callIt({
    consume: y => y.toFixed(),
    produce: (x: number) => x * 2,
});

// Method syntax - errors!
callIt({
    consume(y) { return y.toFixed(); },
    //                  ~
    // error: 'y' is of type 'unknown'.

    produce(x: number) { return x * 2; },
});

In both examples, produce is assigned a function with an explicitly-typed x parameter. Shouldn’t they be checked identically?

The issue is subtle: most functions (like the ones using method syntax) have an implicit this parameter, but arrow functions do not. Any usage of this could require "pulling" on the type of T – for example, knowing the type of the containing object literal could in turn require the type of consume, which uses T.

But we’re not using this! Sure, the function might have a this value at runtime, but it’s never used!

TypeScript 6.0 takes this into account when it decides if a function is contextually sensitive or not. If this is never actually used in a function, then it is not considered contextually sensitive. That means these functions will be seen as higher-priority when it comes to type inference, and all of our examples above now work!
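
Conversely, based on our reading of this change, a method that actually uses this remains contextually sensitive, so ordering can still matter in that case (a sketch in the style of the error examples above):

```typescript
callIt({
    consume(y) {
        this.produce;       // any real use of `this`...
        return y.toFixed(); // ...keeps this method contextually sensitive
        // error: 'y' is of type 'unknown'.
    },
    produce: (x: number) => x * 2,
});
```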

This change comes thanks to the work of Mateusz Burzyński.

Subpath Imports Starting with #/

When Node.js added support for modules, it added a feature called "subpath imports": a field in package.json called imports that allows packages to create internal aliases for modules within the package.

{
    "name": "my-package",
    "type": "module",
    "imports": {
        "#root": "./dist/index.js",
        "#root/*": "./dist/*"
    }
}

This allows modules in my-package to import from #root instead of having to use a relative path like ../../index.js, and allows any module within the package to write something like

import * as utils from "#root/utils.js";

instead of using a relative path like the following.

import * as utils from "../../utils.js";

One minor annoyance with this feature has been that developers always had to write something after the # when specifying a subpath import. Here, we used root, but that segment is a bit useless since there is no directory we’re mapping over other than ./dist/.

Developers who have used bundlers are also accustomed to using path-mapping to avoid long relative paths. A familiar convention with bundlers has been to use a simple @/ as the prefix. Unfortunately, subpath imports could not start with #/ at all, leading to a lot of confusion for developers trying to adopt them in their projects.

But more recently, Node.js added support for subpath imports starting with #/. This allows packages to use a simple #/ prefix for their subpath imports without needing to add an extra segment.

{
    "name": "my-package",
    "type": "module",
    "imports": {
        "#": "./dist/index.js",
        "#/*": "./dist/*"
    }
}

This is supported in newer Node.js 20 releases, and so TypeScript now supports it under the options node20, nodenext, and bundler for the --moduleResolution setting.
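
With a mapping like the one above, modules inside my-package can then drop the extra segment (utils.js is hypothetical here):

```typescript
// Resolves to ./dist/utils.js via the "#/*" mapping above:
import * as utils from "#/utils.js";
```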

This work was done thanks to magic-akari, and the implementing pull request can be found here.

Combining --moduleResolution bundler with --module commonjs

TypeScript’s --moduleResolution bundler setting could previously only be used with --module esnext or --module preserve. TypeScript 6.0 now allows combining it with --module commonjs; with the deprecation of --moduleResolution node (a.k.a. --moduleResolution node10), this combination is often the most suitable upgrade path for many projects.

Projects will often want to instead plan out a migration towards either

  • --module preserve and --moduleResolution bundler
  • --module nodenext

depending on your project type (e.g. bundled web app, Bun app, or Node.js app).
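
As a sketch, a tsconfig.json using the newly allowed combination might contain:

```json
{
    "compilerOptions": {
        "module": "commonjs",
        "moduleResolution": "bundler"
    }
}
```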

More information can be found at this implementing pull request.

The --stableTypeOrdering Flag

As part of our ongoing work on TypeScript’s native port, we’ve introduced a new flag called --stableTypeOrdering intended to assist with 6.0-to-7.0 migrations.

Today, TypeScript assigns type IDs (internal tracking numbers) to types in the order they are encountered, and uses these IDs to sort union types in a consistent manner. A similar process occurs for properties. As a result, the order in which things are declared in a program can have possibly surprising effects on things like declaration emit.

For example, consider the declaration emit from this file:

// Input: some-file.ts
export function foo(condition: boolean) {
    return condition ? 100 : 500;
}

// Output: some-file.d.ts
export declare function foo(condition: boolean): 100 | 500;
//                                               ^^^^^^^^^
//             Note the order of this union: 100, then 500.

If we add an unrelated const above foo, the declaration emit changes:

// Input: some-file.ts
const x = 500;
export function foo(condition: boolean) {
    return condition ? 100 : 500;
}

// Output: some-file.d.ts
export declare function foo(condition: boolean): 500 | 100;
//                                               ^^^^^^^^^
//                           Note the change in order here.

This happens because the literal type 500 gets a lower type ID than 100 because it was processed first when analyzing the const x declaration. In very rare cases this change in ordering can even cause errors to appear or disappear based on program processing order, but in general, the main place you might notice this ordering is in the emitted declaration files, or in the way types are displayed in your editor.

One of the major architectural improvements in TypeScript 7 is parallel type checking, which dramatically improves overall check time. However, parallelism introduces a challenge: when different type-checkers visit nodes, types, and symbols in different orders, the internal IDs assigned to these constructs become non-deterministic. This in turn leads to confusing non-deterministic output, where two files with identical contents in the same program can produce different declaration files, or even calculate different errors when analyzing the same file.

To fix this, TypeScript 7.0 sorts its internal objects (e.g. types and symbols) according to a deterministic algorithm based on the content of the object. This ensures that all checkers encounter the same object order regardless of how and when they were created. As a consequence, in the given example, TypeScript 7 will always print 100 | 500, removing the ordering instability entirely.

This means that TypeScript 6 and 7 can and do sometimes display different ordering. While these ordering changes are almost always benign, if you’re comparing compiler outputs between runs (for example, checking emitted declaration files in 6.0 vs 7.0), these different orderings can produce a lot of noise that makes it difficult to assess correctness. Occasionally though, you may witness a change in ordering that causes a type error to appear or disappear, which can be even more confusing.

To help with this situation, in 6.0, you can specify the new --stableTypeOrdering flag. This makes 6.0’s type ordering behavior match 7.0’s, reducing the number of differences between the two codebases. Note that we don’t necessarily encourage using this flag all the time as it can add a substantial slowdown to type-checking (up to 25% depending on codebase).
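
Enabling the flag is a one-line addition to a project’s tsconfig.json:

```json
{
    "compilerOptions": {
        // ...
        "stableTypeOrdering": true
    }
}
```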

If you encounter a type error using --stableTypeOrdering, it is typically due to inference differences: the previous inference without --stableTypeOrdering happened to work based on the old ordering of types in your program. To help with this, you’ll often benefit from providing an explicit type somewhere. Often, this will be a type argument:

- someFunctionCall(/*...*/);
+ someFunctionCall<SomeExplicitType>(/*...*/);

or a variable annotation for an argument you intend to pass into a call.

- const someVariable = { /*... some complex object ...*/ };
+ const someVariable: SomeExplicitType = { /*... some complex object ...*/ };

someFunctionCall(someVariable);

Note that this flag is only intended to help diagnose differences between 6.0 and 7.0 – it is not intended to be used as a long-term feature.

See more at this pull request.

es2025 option for target and lib

TypeScript 6.0 adds support for the es2025 option for both target and lib. While there are no new JavaScript language features in ES2025, this new target adds new types for built-in APIs (e.g. RegExp.escape), and moves a few declarations from esnext into es2025 (e.g. Promise.try, Iterator methods, and Set methods). Work to enable the new target was contributed thanks to Kenta Moriuchi.
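
Opting in is a matter of setting the new value in tsconfig.json (the "dom" entry here is just an illustration of combining it with other libs):

```json
{
    "compilerOptions": {
        "target": "es2025",
        "lib": ["es2025", "dom"]
    }
}
```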

New Types for Temporal

The long-awaited Temporal proposal has reached stage 3 and is expected to be added to JavaScript in the near future. TypeScript 6.0 now includes built-in types for the Temporal API, so you can start using it in your TypeScript code today via --target esnext or "lib": ["esnext"] (or the more-granular temporal.esnext).

let yesterday = Temporal.Now.instant().subtract({
    hours: 24,
});

let tomorrow = Temporal.Now.instant().add({
    hours: 24,
});

console.log(`Yesterday: ${yesterday}`);
console.log(`Tomorrow: ${tomorrow}`);

Temporal is already usable in several runtimes, so you can start experimenting with it today. Documentation on the Temporal APIs is available on MDN, though it may still be incomplete.

This work was contributed thanks to GitHub user Renegade334.

New Types for "upsert" Methods (a.k.a. getOrInsert)

A common pattern with Maps is to check if a key exists, and if not, set and fetch a default value.

function processOptions(compilerOptions: Map<string, unknown>) {
    let strictValue: unknown;
    if (compilerOptions.has("strict")) {
        strictValue = compilerOptions.get("strict");
    }
    else {
        strictValue = true;
        compilerOptions.set("strict", strictValue);
    }
    // ...
}

This pattern can be tedious. ECMAScript’s "upsert" proposal recently reached stage 4 and introduces two new methods on Map and WeakMap:

  • getOrInsert
  • getOrInsertComputed

These methods have been added to the esnext lib so that you can start using them immediately in TypeScript 6.0.

With getOrInsert, we can replace our code above with the following:

function processOptions(compilerOptions: Map<string, unknown>) {
    let strictValue = compilerOptions.getOrInsert("strict", true);
    // ...
}
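
For illustration, getOrInsert behaves roughly like this hand-written helper (the helper is our own sketch, not part of the proposal, and can stand in on runtimes that don’t ship the method yet):

```typescript
// Rough stand-in for Map.prototype.getOrInsert:
// return the existing value for `key`, or insert `defaultValue` and return it.
function getOrInsert<K, V>(map: Map<K, V>, key: K, defaultValue: V): V {
    if (!map.has(key)) {
        map.set(key, defaultValue);
    }
    return map.get(key)!;
}

const compilerOptions = new Map<string, unknown>();

const strictValue = getOrInsert(compilerOptions, "strict", true);
// strictValue === true, and the map now contains "strict"

const again = getOrInsert(compilerOptions, "strict", false);
// again === true: the existing entry wins over the default
```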

getOrInsertComputed works similarly, but is for cases where the default value may be expensive to compute (e.g. it requires many computations or allocations, or performs long-running synchronous I/O). Instead, it takes a callback that will only be called if the key is not already present.

someMap.getOrInsertComputed("someKey", () => {
    return computeSomeExpensiveValue(/*...*/);
});

This callback is also given the key as an argument, which can be useful for cases where the default value is based on the key.

someMap.getOrInsertComputed(someKey, computeSomeExpensiveDefaultValue);

function computeSomeExpensiveValue(key: string) {
    // ...
}

This update was contributed thanks to GitHub user Renegade334.

RegExp.escape

When constructing some literal string to match within a regular expression, it is important to escape special regular expression characters like *, +, ?, (, ), etc. The RegExp Escaping ECMAScript proposal has reached stage 4, and introduces a new RegExp.escape function that takes care of this for you.

function matchWholeWord(word: string, text: string) {
    const escapedWord = RegExp.escape(word);
    const regex = new RegExp(`\\b${escapedWord}\\b`, "g");
    return text.match(regex);
}

RegExp.escape is available in the es2025 lib, so you can start using it in TypeScript 6.0 today.

This work was contributed thanks to Kenta Moriuchi.

The dom lib Now Contains dom.iterable and dom.asynciterable

TypeScript’s lib option allows you to specify which global declarations your target runtime has. One option is dom to represent web environments (i.e. browsers, which implement the DOM APIs). Previously, the DOM APIs were partially split out into dom.iterable and dom.asynciterable for environments that didn’t support Iterables and AsyncIterables. This meant that you had to explicitly add dom.iterable to use iteration methods on DOM collections like NodeList or HTMLCollection.

In TypeScript 6.0, the contents of lib.dom.iterable.d.ts and lib.dom.asynciterable.d.ts are fully included in lib.dom.d.ts. You can still reference dom.iterable and dom.asynciterable in your configuration file’s "lib" array, but they are now just empty files.

// Before TypeScript 6.0, this required "lib": ["dom", "dom.iterable"]
// Now it works with just "lib": ["dom"]
for (const element of document.querySelectorAll("div")) {
    console.log(element.textContent);
}

This is a quality-of-life improvement that eliminates a common point of confusion, since no major modern browser lacks these capabilities. If you were already including both dom and dom.iterable, you can now simplify to just dom.

See more at this issue and its corresponding pull request.

Breaking Changes and Deprecations in TypeScript 6.0

TypeScript 6.0 arrives as a significant transition release, designed to prepare developers for TypeScript 7.0, the upcoming native port of the TypeScript compiler. While TypeScript 6.0 maintains full compatibility with your existing TypeScript knowledge and continues to be API compatible with TypeScript 5.9, this release introduces a number of breaking changes and deprecations that reflect the evolving JavaScript ecosystem and set the stage for TypeScript 7.0.

In the two years since TypeScript 5.0, we’ve seen ongoing shifts in how developers write and ship JavaScript:

  • Virtually every runtime environment is now "evergreen". True legacy environments (ES5) are vanishingly rare.
  • Bundlers and ESM have become the most common module targets for new projects, though CommonJS remains a major target. AMD and other in-browser userland module systems are much rarer than they were in 2012.
  • Almost all packages can be consumed through some module system. UMD packages still exist, but virtually no new code is available only as a global variable.
  • tsconfig.json is nearly universal as a configuration mechanism.
  • Appetite for "stricter" typing continues to grow.
  • TypeScript build performance is top of mind. Despite the gains of TypeScript 7, performance must always remain a key goal, and options which can’t be supported in a performant way need to be more strongly justified.

So TypeScript 6.0 and 7.0 are designed with these realities in mind. For TypeScript 6.0, these deprecations can be ignored by setting "ignoreDeprecations": "6.0" in your tsconfig; however, note that TypeScript 7.0 will not support any of these deprecated options.

Some necessary adjustments can be automatically performed with a codemod or tool. For example, the experimental ts5to6 tool can automatically adjust baseUrl and rootDir across your codebase.

Up-Front Adjustments

We’ll cover specific adjustments below, but note up-front that some deprecations and behavior changes do not necessarily produce an error message that points directly at the underlying issue. Many projects will need to do at least one of the following:

  • Set the "types" array in tsconfig, typically to "types": ["node"].

    "types": ["*"] will restore the 5.9 behavior, but we recommend using an explicit array to improve build performance and predictability.

    You’ll typically know this is the issue if you see a lot of type errors related to missing identifiers or unresolved built-in modules.

  • Set "rootDir": "./src" if you were previously relying on this being inferred.

    You’ll often know this is the issue if you see files being written to ./dist/src/index.js instead of ./dist/index.js.

Simple Default Changes

Several compiler options now have updated default values that better reflect modern development practices.

  • strict is now true by default: The appetite for stricter typing continues to grow, and we’ve found that most new projects want strict mode enabled. If you were already using "strict": true, nothing changes for you. If you were relying on the previous default of false, you’ll need to explicitly set "strict": false in your tsconfig.json.

  • module defaults to esnext: Similarly, the new default module is esnext, acknowledging that ESM is now the dominant module format.

  • target defaults to current-year ES version: The new default target is the most recent supported ECMAScript spec version (effectively a floating target). Right now, that target is es2025. This reflects the reality that most developers are shipping to evergreen runtimes and don’t need to transpile down to older ECMAScript versions.

  • noUncheckedSideEffectImports is now true by default: This helps catch issues with typos in side-effect-only imports.

  • libReplacement is now false by default: This flag previously incurred a large number of failed module resolutions on every run, which in turn increased the number of locations we needed to watch under --watch and editor scenarios. In a new project, libReplacement never does anything until other explicit configuration takes place, so it makes sense to turn it off by default for better performance.

If these new defaults break your project, you can specify the previous values explicitly in your tsconfig.json.
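
For example, a project that depended on the previous defaults might pin them explicitly. A sketch (keep only the entries you actually relied on, and note that target must now be a fixed version of es2015 or later):

```json
{
    "compilerOptions": {
        // Restore pre-6.0 behavior where needed:
        "strict": false,
        "module": "commonjs",
        "target": "es2022",
        "noUncheckedSideEffectImports": false,
        "libReplacement": true
    }
}
```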

rootDir now defaults to .

rootDir controls the directory structure of your output files relative to the output directory. Previously, if you did not specify a rootDir, it was inferred based on the common directory of all non-declaration input files. But this often meant that it was impossible to know if a file belonged to a project without trying to load and parse that project. It also meant that TypeScript had to spend more time inferring that common source directory by analyzing every file path in the program.

In TypeScript 6.0, the default rootDir will always be the directory containing the tsconfig.json file. rootDir will only be inferred when using tsc from the command line without a tsconfig.json file.

If you have source files any level deeper than your tsconfig.json directory and were relying on TypeScript to infer a common root directory for source files, you’ll need to explicitly set rootDir:

  {
      "compilerOptions": {
          // ...
+         "rootDir": "./src"
      },
      "include": ["./src"]
  }

Likewise, if your tsconfig.json referenced files outside of its containing directory, you would need to adjust your rootDir to include those files.

  {
      "compilerOptions": {
          // ...
+         "rootDir": "../src"
      },
      "include": ["../src/**/*.tests.ts"]
  }

See more at the discussion here and the implementation here.

types now defaults to []

In a tsconfig.json, the types field of compilerOptions specifies a list of package names to be included in the global scope during compilation. Typically, packages in node_modules are automatically included via imports in your source code; but for convenience, TypeScript would also include all packages in node_modules/@types by default, so that you can get global declarations like process or the "fs" module from @types/node, or describe and it from @types/jest, without needing to import them directly.

In a sense, the types value previously defaulted to "enumerate everything in node_modules/@types". This can be very expensive, as a normal repository setup these days might transitively pull in hundreds of @types packages, especially in multi-project workspaces with flattened node_modules. Modern projects almost always need only @types/node, @types/jest, or a handful of other common global-affecting packages.

In TypeScript 6.0, the default types value will be [] (an empty array). This change prevents projects from unintentionally pulling in hundreds or even thousands of unneeded declaration files at build time. Many projects we’ve looked at have improved their build time anywhere from 20-50% just by setting types appropriately.

This will affect many projects. You will likely need to add "types": ["node"] or a few others:

  {
      "compilerOptions": {
          // Explicitly list the @types packages you need
+         "types": ["node", "jest"]
      }
  }

You can also specify a * entry to re-enable the old enumeration behavior:

  {
      "compilerOptions": {
          // Load ALL the types - the default from TypeScript 5.9 and before.
+         "types": ["*"]
      }
  }

If you end up with new error messages like the following:

Cannot find module '...' or its corresponding type declarations.
Cannot find name 'fs'. Do you need to install type definitions for node? Try `npm i --save-dev @types/node` and then add 'node' to the types field in your tsconfig.
Cannot find name 'path'. Do you need to install type definitions for node? Try `npm i --save-dev @types/node` and then add 'node' to the types field in your tsconfig.
Cannot find name 'process'. Do you need to install type definitions for node? Try `npm i --save-dev @types/node` and then add 'node' to the types field in your tsconfig.
Cannot find name 'Bun'. Do you need to install type definitions for Bun? Try `npm i --save-dev @types/bun` and then add 'bun' to the types field in your tsconfig.
Cannot find name 'describe'. Do you need to install type definitions for a test runner? Try `npm i --save-dev @types/jest` or `npm i --save-dev @types/mocha` and then add 'jest' or 'mocha' to the types field in your tsconfig.

it’s likely that you need to add some entries to your types field.

See more at the proposal here along with the implementing pull request here.

Deprecated: target: es5

The ECMAScript 5 target was important for a long time to support legacy browsers; but its successor, ECMAScript 2015 (ES6), was released over a decade ago, and all modern browsers have supported it for many years. With Internet Explorer’s retirement, and the universality of evergreen browsers, there are very few use cases for ES5 output today.

TypeScript’s lowest target will now be ES2015, and the target: es5 option is deprecated. If you were using target: es5, you’ll need to migrate to a newer target or use an external compiler. If you still need ES5 output, we recommend using an external compiler to either directly compile your TypeScript source, or to post-process TypeScript’s outputs.

See more about this deprecation here along with its implementing pull request.

Deprecated: --downlevelIteration

--downlevelIteration only has effects on ES5 emit, and since --target es5 has been deprecated, --downlevelIteration no longer serves a purpose.

Subtly, using --downlevelIteration false with --target es2015 did not error in TypeScript 5.9 and earlier, even though it had no effect. In TypeScript 6.0, setting --downlevelIteration at all will lead to a deprecation error.

See the implementation here.

Deprecated: --moduleResolution node (a.k.a. --moduleResolution node10)

--moduleResolution node encoded a specific version of Node.js’s module resolution algorithm that most accurately reflected the behavior of Node.js 10. Unfortunately, this target (and its name) ignores many updates to Node.js’s resolution algorithm that have occurred since then, and it is no longer a good representation of the behavior of modern Node.js versions.

In TypeScript 6.0, --moduleResolution node (specifically, --moduleResolution node10) is deprecated. Users who were using --moduleResolution node should usually migrate to --moduleResolution nodenext if they plan on targeting Node.js directly, or --moduleResolution bundler if they plan on using a bundler or Bun.
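
In tsconfig.json terms, the typical migration for a project targeting Node.js directly looks like the following:

```json
{
    "compilerOptions": {
        // previously: "moduleResolution": "node"
        "module": "nodenext",
        "moduleResolution": "nodenext"
    }
}
```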

See more at this issue and its corresponding pull request.

Deprecated: amd, umd, systemjs, and none values of module

The following values of module are deprecated and no longer supported:

  • --module amd
  • --module umd
  • --module systemjs
  • --module none

AMD, UMD, and SystemJS were important during the early days of JavaScript modules when browsers lacked native module support. The semantics of "none" were never well-defined and often led to confusion. Today, ESM is universally supported in browsers and Node.js, and both import maps and bundlers have become favored ways of filling in the gaps. If you’re still targeting these module systems, consider migrating to an appropriate ECMAScript module-emitting target, adopting a bundler or different compiler, or staying on TypeScript 5.x until you can migrate.

This also implies dropped support for the amd-module directive, which will no longer have any effect.

See more at the proposal issue along with the implementing pull request.

Deprecated: --baseUrl

The baseUrl option is most commonly used in conjunction with paths, typically as a prefix for every value in paths. Unfortunately, baseUrl is also considered a look-up root for module resolution.

For example, given the following tsconfig.json

{
  "compilerOptions": {
    // ...
    "baseUrl": "./src",
    "paths": {
      "@app/*": ["app/*"],
      "@lib/*": ["lib/*"]
    }
  }
}

and an import like

import * as someModule from "someModule.js";

TypeScript will resolve this to src/someModule.js, even if the developer only intended to add mappings for modules starting with @app/ and @lib/.

In the best case, this leads to "worse-looking" paths that bundlers would ignore; worse, it often meant that many import paths that would never have worked at runtime were considered "just fine" by TypeScript.

Path mappings have not required specifying baseUrl for a long time, and in practice, most projects that use baseUrl only use it as a prefix for their paths entries. In TypeScript 6.0, baseUrl is deprecated and will no longer be considered a look-up root for module resolution.

Developers who used baseUrl as a prefix for path-mapping entries can simply remove baseUrl and add the prefix to their paths entries:

  {
    "compilerOptions": {
      // ...
-     "baseUrl": "./src",
      "paths": {
-       "@app/*": ["app/*"],
-       "@lib/*": ["lib/*"]
+       "@app/*": ["./src/app/*"],
+       "@lib/*": ["./src/lib/*"]
      }
    }
  }

Developers who actually did use baseUrl as a look-up root can also add an explicit path mapping to preserve the old behavior:

{
  "compilerOptions": {
    // ...
    "paths": {
      // A new catch-all that replaces the baseUrl:
      "*": ["./src/*"],

      // Every other path now has an explicit common prefix:
      "@app/*": ["./src/app/*"],
      "@lib/*": ["./src/lib/*"],
    }
  }
}

However, this is extremely rare. We recommend most developers simply remove baseUrl and add the appropriate prefixes to their paths entries.

See more at this issue and the corresponding pull request.

Deprecated: --moduleResolution classic

The --moduleResolution classic setting is deprecated. The classic resolution strategy was TypeScript’s original module resolution algorithm, and predates Node.js’s resolution algorithm becoming a de facto standard. Today, all practical use cases are served by nodenext or bundler. If you were using classic, migrate to one of these modern resolution strategies.

See more at this issue and the implementing pull request.

Deprecated: --esModuleInterop false and --allowSyntheticDefaultImports false

The following settings can no longer be set to false:

  • esModuleInterop
  • allowSyntheticDefaultImports

esModuleInterop and allowSyntheticDefaultImports were originally opt-in to avoid breaking existing projects. However, the behavior they enable has been the recommended default for years. Setting them to false often led to subtle runtime issues when consuming CommonJS modules from ESM. In TypeScript 6.0, the safer interop behavior is always enabled.

If you have imports that rely on the old behavior, you may need to adjust them:

// Before (with esModuleInterop: false)
import * as express from "express";

// After (with esModuleInterop always enabled)
import express from "express";

See more at this issue and its implementing pull request.

Deprecated: --alwaysStrict false

The alwaysStrict flag controls strict-mode checking and the emit of the "use strict"; directive. In TypeScript 6.0, all code will be assumed to be in JavaScript strict mode, a set of JS semantics that most noticeably affects syntactic corner cases around reserved words. If you have "sloppy mode" code that uses reserved words like await, static, private, or public as regular identifiers, you’ll need to rename them. If you relied on the subtle semantics of this in non-strict code, you may need to adjust your code as well.
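As a minimal illustration (the identifier names are made up), this is the kind of sloppy-mode code that needs a rename:

```typescript
// ❌ In strict mode (always assumed in TypeScript 6.0), reserved words
// can no longer be used as ordinary identifiers:
//
//   var static = 1;   // SyntaxError under strict mode
//   var private = 2;  // SyntaxError under strict mode
//
// ✅ The fix is a rename:
const staticCount = 1;
const privateCount = 2;

console.log(staticCount + privateCount); // 3
```

The this-related differences are subtler (for example, this is undefined rather than the global object in a free-standing strict-mode function call), so that code needs case-by-case review.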

See more at this issue and its corresponding pull request.

Deprecated: outFile

The --outFile option is deprecated in TypeScript 6.0. It was originally designed to concatenate multiple input files into a single output file; however, external bundlers like webpack, Rollup, esbuild, Vite, Parcel, and others now do this job faster, better, and with far more configurability. Dropping this option simplifies the implementation and allows us to focus on what TypeScript does best: type-checking and declaration emit. If you’re currently using --outFile, you’ll need to migrate to an external bundler. Most modern bundlers have excellent TypeScript support out of the box.

Deprecated: Legacy module Syntax for Namespaces

Early versions of TypeScript used the module keyword to declare namespaces:

// ❌ Deprecated syntax - now an error
module Foo {
    export const bar = 10;
}

This syntax was later aliased to the modern preferred form using the namespace keyword:

// ✅ The correct syntax
namespace Foo {
    export const bar = 10;
}

When namespace was introduced, the module syntax was simply discouraged. A few years ago, the TypeScript language service started marking the keyword as deprecated, suggesting namespace in its place.

In TypeScript 6.0, using module where namespace is expected is now a hard deprecation. This change is necessary because module blocks are an ECMAScript proposal whose syntax would conflict with the legacy TypeScript form.

The ambient module declaration form remains fully supported:

// ✅ Still works perfectly
declare module "some-module" {
    export function doSomething(): void;
}

See this issue and its corresponding pull request for more details.

Deprecated: assert Keyword on Imports

The assert keyword was proposed to the JavaScript language via the import assertions proposal; however, that proposal eventually morphed into the import attributes proposal, which uses the with keyword instead of assert.

Thus, the assert syntax is now deprecated in TypeScript 6.0, and using it will lead to an error:

// ❌ Deprecated syntax - now an error.
import blob from "./blob.json" assert { type: "json" }
//                             ~~~~~~
// error: Import assertions have been replaced by import attributes. Use 'with' instead of 'assert'.

Instead, use the with syntax for import attributes:

// ✅ Works with the new import attributes syntax.
import blob from "./blob.json" with { type: "json" }

See more at this issue and its corresponding pull request.

Deprecated: no-default-lib Directives

The /// <reference no-default-lib="true"/> directive has been largely misunderstood and misused. In TypeScript 6.0, this directive is no longer supported. If you were using it, consider using --noLib or --libReplacement instead.

See more here and at the corresponding pull request.

Specifying Command-Line Files When tsconfig.json Exists is Now an Error

Previously, if you ran tsc foo.ts in a folder where a tsconfig.json exists, the config file was completely ignored. This was often confusing if you expected the config file’s checking and emit options to apply to the input file.

In TypeScript 6.0, if you run tsc with file arguments in a directory containing a tsconfig.json, an error will be issued to make this behavior explicit:

error TS5112: tsconfig.json is present but will not be loaded if files are specified on commandline. Use '--ignoreConfig' to skip this error.

If you did want to ignore the tsconfig.json and just compile foo.ts with TypeScript’s defaults, you can use the new --ignoreConfig flag:

tsc --ignoreConfig foo.ts

See more at this issue and its corresponding pull request.

Preparing for TypeScript 7.0

TypeScript 6.0 is designed as a transition release. While options deprecated in TypeScript 6.0 will continue to work without errors when "ignoreDeprecations": "6.0" is set, those options will be removed entirely in TypeScript 7.0 (the native TypeScript port). If you’re seeing deprecation warnings after upgrading to TypeScript 6.0, we strongly recommend addressing them before adopting TypeScript 7.0 (or trying native previews) in your project.

As for the schedule, we expect TypeScript 7.0 to follow soon after TypeScript 6.0. This should help us maintain continuity while giving us a faster feedback loop for migration issues discovered during adoption.

What’s Next?

At this point, TypeScript 6.0 is feature-complete, and we anticipate very few changes apart from critical bug fixes to the compiler. Over the next few weeks, we’ll focus on addressing issues reported on the 6.0 branch, so we encourage you to try the RC and share feedback.

We also publish nightly builds on npm and in Visual Studio Code, which can provide a faster snapshot of recently fixed issues.

We are also continuing to work on TypeScript 7.0, and we publish nightly builds of our native previews along with a VS Code extension too. Feedback on both 6.0 and 7.0 is very much appreciated, and we encourage you to try out both if you can.

So give TypeScript 6.0 RC a try in your project, and let us know what you think!

Happy Hacking!

– Daniel Rosenwasser and the TypeScript Team

The post Announcing TypeScript 6.0 RC appeared first on TypeScript.


Nearly half of all companies now use Rust in production, survey finds


A decade into tracking the Rust programming language, the 2025 State of Rust Survey, released this week, paints a picture of a language that has moved well past its early-adopter phase and is quietly consolidating inside enterprise codebases around the world.

The survey, conducted by the Rust Survey Team from November 17 to December 17, 2025, drew 7,156 responses. While that represents a slight dip from last year’s 7,310, the team says the numbers reflect a maturing community rather than a retreating one. “Overall, no big surprises and a few trends confirmed,” the survey team writes in its analysis.

Non-trivial use

Nearly half of all organizations represented in the survey (48.8%) report making non-trivial use of Rust, up from 38.7% in 2023. That’s a 10-point jump in two years, and the survey team describes the trend as structural. This suggests that Rust has moved beyond hobbyist and systems-programmer territory.

“The steady growth may indicate a structural market presence of Rust in companies,” the survey says. “Codebases consolidate, and the quantity of Rust code overall keeps increasing.”

That consolidation is apparent in how developers are using the language day to day. Some 55.1% of respondents say they now use Rust daily, the highest rate recorded across four years of tracking. And 56.8% rate themselves as productive Rust writers, up sharply from 42.3% in 2022.

Andrew Cornwall, an analyst at Forrester Research, tells The New Stack that he doesn’t see large-scale adoption of Rust at the enterprise.

“[Rust is] finding a place as a substitute for C among some developers,” Cornwall says. “However, outside of embedded devices, the amount of C written at most enterprises is dwarfed by the amount of Python and Java. Rust also faces competition from ecosystems like Go, which are close in speed but include a garbage collector that can make development easier for both people and agents.”

Rust has gained traction in organizations that provide software and services to the federal government, which calls for the use of memory-safe languages for critical systems.

According to the survey, the top enterprise deployment domains are server-side and backend applications at 51.7%, followed by cloud computing applications at 25.3%, and distributed systems at roughly 22%. Computer security and embedded systems round out the top five, underscoring Rust’s particular appeal in contexts where memory safety and performance are non-negotiable.

Other findings

The top reason non-users give for staying away is simply that they haven’t gotten around to it yet — cited by 60.8% of that group — rather than difficulty, missing libraries, or community concerns. That’s a meaningful change in posture.

For organizations already running Rust in production, 84.8% say Rust has helped them achieve their goals, 78.5% say adoption was worth the cost, and 65.4% say they would use it again, while 31% of respondents acknowledged adoption challenges.

Developers say they are satisfied with Rust’s pace of evolution, with 57.6% saying they are happy with how fast Rust is moving (25.5% wish it would move faster).

Meanwhile, the stabilization of two long-awaited features in 2025 — let chains and async closures — landed well with the community. Generic const expressions and improved trait methods have now risen to the top items on the community wish list.

Compile times remain the most persistent source of frustration, the survey shows. More than 27% of respondents call slow compilation a significant problem, and it has held that position across multiple survey cycles. High disk usage from build artifacts is a close second. The Rust team has acknowledged both issues but has yet to deliver a satisfactory resolution.

An interesting finding involves how developers are learning and getting help with Rust. There was a noticeable dip in attendance at meetups, forums, and other community learning spaces, prompting the survey team to flag what it sees as an emerging shift: developers are increasingly routing their questions to LLMs rather than to community channels. Word clouds from open-ended responses prominently feature ChatGPT, Claude, and Gemini alongside traditional resources like The Rust Programming Language book and official documentation.

Now in its tenth year, the State of Rust Survey shows that Rust in 2025 is a language that has earned its place in production infrastructure.

The post Nearly half of all companies now use Rust in production, survey finds appeared first on The New Stack.


The case for running AI agents on Markdown files instead of MCP servers


A venture capitalist is running his entire company on twelve Markdown files. No web application. No workflow engine. No orchestration runtime. Just structured documents in a git repo that teach Claude Code how to draft emails, triage support tickets, prepare board metrics, and manage product launches.

CompanyOS, which founder Brad Feld open-sourced in February 2026, connects to eight model context protocol (MCP) servers to access APIs for Gmail, Linear, Help Scout, and other services. But the intelligence lives in the fashionable-once-again Markdown. Each skill file encodes the workflow, guardrails, tone calibration, and decision logic. The MCP servers are plumbing. If you disconnect them, the skills still run. You just copy and paste instead of auto-sending.

Feld is not alone. Sentry’s David Cramer, who built Sentry‘s own MCP server, writes bluntly that “many MCP servers don’t need to exist” because they are either poor API wrappers or can be replaced with a skill file.

The examples continue: Supabase open-sourced an agent-skills repository that separates timeless development practices from dynamic API interactions. Microsoft’s .NET Skills Executor, which shipped three weeks ago, orchestrates SKILL.md files that invoke MCP tools as a subordinate layer. And a growing number of practitioners are discovering what Júlio Falbo documented in his widely cited post, “Markdown is the New API.” The GitHub MCP server consumed roughly 50,000 tokens of context (later trimmed to around 23,000) to teach an agent how to interact with GitHub. A SKILL.md file saying “use the gh CLI for these operations” achieved the same result in about 200 tokens.

Something is shifting. Developers are ripping out MCP servers and replacing them with Markdown files. Not because MCP is broken, but because many MCP servers were built to solve the wrong kind of problem.

The Two Kinds of Problems

Every task an AI agent performs falls into one of two categories: It either needs to know something or to do something. The confusion between these two categories is driving most of the architectural waste in agent systems today.

When an agent needs to know something, you are dealing with a knowledge problem. Coding standards, deployment procedures, triage workflows, company policies, API usage patterns. This knowledge is relatively stable. It changes on the timescale of weeks or months. It can be expressed in natural language. And critically, it fits inside the context window of modern LLMs without any runtime infrastructure.

When an agent needs to do something, you are dealing with an execution problem. Querying a database, creating a GitHub issue, sending an email, and reading a Slack channel. Execution requires a runtime. It needs authentication, network access, error handling, and state management. This is what MCP was designed for, and it handles it well.

The problem is that many teams built MCP servers for knowledge problems. Someone wanted their agent to understand how to interact with GitHub, so they built or installed an MCP server that exposes dozens of tools for repository management, pull request workflows, issue tracking, and CI/CD operations. The agent now has access to everything. It also has to process a massive tool schema on every invocation, consuming tens of thousands of tokens just to understand what’s available before it can decide what to do.

A skill file that says “use the gh CLI, prefer squash merges, always run tests before pushing, and format commit messages as conventional commits” encodes the same workflow knowledge in a fraction of the context window budget. The agent already knows how to use a CLI. It just needed the institutional knowledge about how your team uses it.
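For illustration only, such a skill might look like the following hypothetical SKILL.md (the file name, frontmatter fields, and team rules here are invented for the example, not a documented schema):

```markdown
---
name: github-workflow
description: How this team works with GitHub via the gh CLI
---

# GitHub workflow

- Use the `gh` CLI for all GitHub operations; do not call the REST API directly.
- Branch names follow `feat/`, `fix/`, or `chore/` prefixes.
- Always run the test suite before pushing.
- Open pull requests with `gh pr create --fill`; prefer squash merges.
- Format commit messages as Conventional Commits (`feat:`, `fix:`, `chore:`).
```

The point is the token budget: the agent already knows how to drive a CLI, so the file only has to supply the team's conventions.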

The Decision Framework

Think of this framework as three layers that map to three distinct questions.

The first question is whether the agent needs to know something. If the answer involves coding standards, deployment processes, triage workflows, voice and tone guidelines, or any form of institutional knowledge, that belongs in a skill. Markdown files. Version-controlled in git. Reviewed in pull requests like any other code artifact.

The second question is whether the agent needs to do something. If the answer requires calling an API, querying a database, reading from a message queue, or interacting with any external system at runtime, that belongs in MCP. A running server with authentication, error handling, and proper observability.

The third question is where things get interesting. Does the agent need to know how to do something well? This is the hybrid case, the most common in production systems. The answer here is a skill that references MCP tools. The skill encodes the workflow, the sequencing, the edge cases, and the judgment calls. MCP provides the execution layer underneath.

Feld’s co-support skill is a clean example of this third pattern. The skill file defines the entire support triage workflow. It knows how to categorize issues by severity, what tone to use with different customer segments, when to escalate versus resolve, and what information to include in internal notes. The Help Scout MCP server handles API calls, reads conversations, posts replies, and tags tickets. But the skill works even without the MCP server. Without API access, it still triages a pasted customer message, drafts the response in the correct tone, and formats it as copy-ready text. The thinking survives. Only the plumbing disappears.

The 50x Token Tax

The cost of getting this layering wrong is not abstract. It shows up directly in your context window budget, and by extension in your API bill and your agent’s reasoning quality.

Consider a concrete example. The GitHub MCP server, one of the most popular in the ecosystem, exposes tools for repository management, file operations, search, issues, pull requests, code review, branches, and more. When an agent loads this server’s tool schema, it consumes roughly 23,000 to 50,000 tokens of context window space, depending on the version. That is the context window capacity that the agent can no longer use to reason about your actual task.

“A SKILL.md file that encodes your team’s GitHub workflow…typically runs 200 to 500 tokens. The agent gets the same operational knowledge with 100x less context consumption.”

A SKILL.md file that encodes your team’s GitHub workflow, including branch naming conventions, PR review requirements, CI expectations, and merge strategies, typically runs 200 to 500 tokens. The agent gets the same operational knowledge with 100x less context consumption. And because the skill is a focused, curated document rather than a raw API surface, the agent makes better decisions. It knows your team’s conventions, not just the universe of possible GitHub API calls.

This is not an argument against the GitHub MCP server. There are genuine execution tasks (creating issues, posting review comments, merging pull requests) that require API access. The argument is that loading a full MCP server to teach an agent how your team uses GitHub is like importing an entire database driver library to share a few configuration values. You are paying an infrastructure tax for a knowledge problem.

At scale, this tax compounds. An enterprise agent system connecting to a dozen MCP servers might consume 200,000 to 400,000 tokens in tool schemas alone. That is half or more of the available context window for most models, burned before the agent processes a single user request. Replacing the knowledge components with skill files can reclaim most of that budget for actual reasoning.

What Production Systems Look Like

The pattern emerging across early adopters follows a consistent shape.

Feld’s CompanyOS runs 12 skill files totaling about 2,000 lines of Markdown, connected to 8 MCP servers. The skills handle everything from email voice calibration to root cause analysis using Toyota’s Five Whys method. Every skill has a “standalone mode” that works without any MCP connections. The MCP servers are strictly for API execution, sending the email, querying the database, and searching the ticket system.

Supabase’s open-source agent-skills repository takes a similar approach. Skill files encode development practices that are stable across versions, things like database migration patterns, edge function deployment conventions, and testing strategies. These are complemented by MCP servers that handle dynamic API documentation and real-time schema introspection. The boundary is clean. If the knowledge is timeless, it goes into a skill. If it requires a live connection, it goes through MCP.

Microsoft’s .NET Skills Executor, released in early February, makes the layering explicit in its architecture. SKILL.md files define workflows. The executor resolves dependencies, including MCP tool invocations, at runtime. The skill is the orchestration layer. MCP provides the function calls. This is probably the clearest signal that the industry is converging on a two-layer model rather than an MCP-for-everything approach.

Anthropic’s own Claude Code implementation follows this pattern internally. The skills system that ships with Claude Code uses structured Markdown files to encode best practices for document creation, code generation, and tool usage. These skill files reference MCP tools when execution is needed, but keep the workflow logic in Markdown, which can be version-controlled, reviewed, and customized by users.

The Git Advantage

One benefit of skills that gets underappreciated is the operational model they enable. Skill files are plain text. They live in git. They go through pull requests. They have blame history and diff views and branch strategies.

This matters more than it might seem. When your agent’s behavior is encoded in an MCP server, changing that behavior means modifying server code, redeploying, and hoping you have adequate test coverage. When the behavior is encoded in a skill file, changing it means editing a Markdown document and committing the changes. The feedback loop is minutes, not hours. And the change is visible to everyone on the team in a format they can read without understanding the server’s implementation language.

Feld’s CompanyOS leans into this heavily. His email voice calibration, the rules that determine how co-comms adjusts tone for different recipients, is a section in a Markdown file. When he wants to change how the system communicates with investors versus customers, he edits a paragraph and commits. No deployment. No restart. No risk of breaking API integrations.

For platform engineering teams managing agent systems across an organization, this operational model is significantly more sustainable than maintaining a fleet of MCP servers that each encode institutional knowledge in application code.

What to Do on Monday Morning

If you are a platform engineer or team lead evaluating your agent architecture, here is a practical starting point.

Audit your current MCP servers and ask, for each tool they expose, whether that tool solves a knowledge problem or an execution problem. If a tool exists primarily to teach the agent how to use an API rather than to call that API, it is a candidate for extraction into a skill file.

Start with the highest-token-cost servers. The ones with the largest tool schemas are likely the ones encoding the most knowledge alongside their execution capabilities. Extract the knowledge into SKILL.md files and leave the execution tools in MCP.

Adopt the standalone test that Feld uses. Every skill should produce useful output even without MCP connections. If disconnecting the MCP server makes the skill completely non-functional, you probably have an execution concern that belongs in MCP. If the skill still generates the right analysis, recommendation, or draft, you have validated that the knowledge layer is properly separated.

Version your skills in git alongside your application code. Treat them as first-class artifacts with the same review process you use for infrastructure configuration. Skill files that encode business logic, compliance requirements, or security policies deserve the same rigor as Terraform modules or Kubernetes manifests.

The Layered Future

The skills-versus-MCP conversation is not a competition. It is an architectural clarification that the ecosystem needs.

MCP won the protocol war. Over 30,000 servers are indexed across registries. Every major cloud provider, tool vendor, and AI company supports it. That is not changing. What is changing is the recognition that MCP was designed for tool execution, and that treating it as the sole mechanism for everything an agent needs to know creates systems that are expensive, fragile, and hard to maintain.

“The agent does not need another orchestration layer. It needs domain knowledge in a format it already understands. And for most of the knowledge problems teams are solving today, a Markdown file is better architecture than a running server.”

The emerging two-layer model is clean. Skills provide domain knowledge in a format that is cheap to process, easy to version, and accessible to every team member. MCP provides tool execution with proper authentication, error handling, and observability. The best agent systems use both, with a clear boundary between them.

Feld’s 12 Markdown files will not replace the enterprise MCP infrastructure. But they demonstrate a principle that scales beyond any individual implementation. The agent does not need another orchestration layer. It needs domain knowledge in a format it already understands. And for most of the knowledge problems teams are solving today, a Markdown file is a better architecture than a running server.

The post The case for running AI agents on Markdown files instead of MCP servers appeared first on The New Stack.


What's new in TensorFlow 2.21

Google has officially launched LiteRT, the successor to TFLite, which offers significantly faster GPU and NPU acceleration alongside seamless support for PyTorch and JAX. The update also introduces lower-precision data type support for increased efficiency and a commitment to more frequent security and dependency updates across the TensorFlow ecosystem. This transition solidifies LiteRT as Google's primary high-performance framework for deploying GenAI and advanced on-device inference.

Celebrating Women in AI: 3 Questions with Cecilia Liu on Leading Docker’s MCP Strategy


To celebrate International Women’s Day, we sat down with Cecilia Liu, Senior Product Manager at Docker, for three questions about the vision and strategy behind Docker’s MCP solutions. From shaping product direction to driving AI innovation, Cecilia plays a key role in defining how Docker enables secure, scalable AI tooling.


Cecilia leads product management for Docker’s MCP Catalog and Toolkit, our solution for running MCP servers securely and at scale through containerization. She drives Docker’s AI strategy across both enterprise and developer ecosystems, helping organizations deploy MCP infrastructure with confidence while empowering individual developers to seamlessly discover, integrate, and use MCP in their workflows. With a technical background in AI frameworks and an MBA from NYU Stern, Cecilia bridges the worlds of AI infrastructure and developer tools, turning complex challenges into practical, developer-first solutions.

What products are you responsible for?

I own Docker’s MCP solution. At its core, it’s about solving the problems that anyone working with MCP runs into: how do you find the right MCP servers, how do you actually use them without a steep learning curve, and how do you deploy and manage them reliably across a team or organization.

How does Docker’s MCP solution benefit developers and enterprise customers?

Dev productivity is where my heart is. I want to build something that meaningfully helps developers at every stage of their cycle — and that’s exactly how I think about Docker’s MCP solution.

For end-user developers and vibe coders, the goal is simple: you shouldn’t need to understand the underlying infrastructure to get value from MCP. As long as you’re working with AI, we make it easy to discover, configure, and start using MCP servers without any of the usual setup headaches. One thing I kept hearing in user feedback was that people couldn’t even tell if their setup was actually working. That pushed us to ship in-product setup instructions that walk you through not just configuration, but how to verify everything is running correctly. It sounds small, but it made a real difference.

For developers building MCP servers and integrating them into agents, I’m focused on giving them the right creation and testing tools so they can ship faster and with more confidence. That’s a big part of where we’re headed.

And for security and enterprise admins, we’re solving real deployment pain, making it faster and cheaper to roll out and manage MCP across an entire organization. Custom catalogs, role-based access controls, audit logging, policy enforcement. The goal is to give teams the visibility and control they need to adopt AI tooling confidently at scale.

Customers love us for all of the above, and there’s one more thing that ties it together: the security that comes built-in with Docker. That trust doesn’t happen overnight, and it’s something we take seriously across everything we ship.

What are you excited about when it comes to the future of MCP?

What excites me most is honestly the pace of change itself. The AI landscape is shifting constantly, and with every new tool that makes AI more powerful, there’s a whole new set of developers who need a way to actually use it productively. That’s a massive opportunity.

MCP is where that’s happening right now, and the adoption we’re seeing tells me the need is real. But what gets me out of bed is knowing the problems we’re solving: discoverability, usability, deployment. They are all going to matter just as much for whatever comes next. We’re not just building for today’s tools. We’re building the foundation that developers will reach for every time something new emerges.

Cecilia is speaking about scaling MCP for enterprises at the MCP Dev Summit in NYC on April 3, 2026. If you’re attending, be sure to stop by Docker’s booth (D/P9).

