Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Warp’s gamble: Going open source to take on closed-source rivals

1 Share
Abstract pattern of warped pink, magenta, and lime green stripes bending into a vortex-like curve.

Warp, the popular Rust-based agentic development environment, has released its client as open source.

Warp began in 2022 as, believe it or not, a terminal program for Macs that could also double as an IDE. From there, it evolved into what the company calls an agentic development environment (ADE) and became available on Linux and Windows as well. That’s a lot of change in a short time, and now the Warp client is going open source under the AGPL.

In a blog post announcing the shift on Tuesday, CEO and founder Zach Lloyd writes that “the Warp client is now open source, and the community can participate in building it using an agent-first workflow managed by Oz, our cloud agent orchestration platform.”

OpenAI is the “founding sponsor of the new, open‑source Warp repository,” and the agent workflows that power it are built on GPT models. Warp describes this as “our vision of how software will be built in the future,” with humans supervising “a fleet of agents” that handle most of the implementation work.

The company argues that the long‑standing bottleneck in development is no longer typing code, but all the “human-in-the-loop activities around the code: speccing the product and verifying behavior.”

Agents, the company adds, can already “handle the implementation heavy lifting really well,” so why not let contributors focus on higher‑level design and verification? The argument makes sense.

So what does that have to do with open source? Warp’s leadership explained its decision as a mix of practical product concerns and a bet on where AI‑assisted development is headed. The company says it believes it “can ship a better Warp, more quickly, if we open source and work with our community to help supervise a fleet of agents.”

Well, that has certainly always been a big reason why countless other companies have embraced open source. Usually, though, companies start with an open-source project and then turn it into a product or service. We’ll see if running that path in reverse works out for Warp.

A second motivation is to give developers more say over the shape of “agentic development.” Warp states that “there isn’t a full-featured open agentic development environment on the market.” 

The company presents the open‑sourced client as an alternative to closed tools from larger incumbents, and as a starting point for others who want to build their own tools on top of Warp and Oz.

As part of the shift, Warp says it is moving “from a closed product development process to an open one.” Public GitHub issues will now be the “source of truth” for feature tracking, with the company promising to publish its ADE roadmap and hold technical and product discussions in the open.

For now, the Warp open‑source repo is tightly coupled to Warp’s commercial Oz orchestration platform. The company emphasizes that “Warp’s new open-source agent workflows are powered by OpenAI models, with OpenAI supporting the next generation of collaborative software development.”

In the blog post, OpenAI engineering lead Thibault Sottiaux adds, “Open source has long been central to how developers learn, build, and push the field forward. We’re excited to support experiments that explore how AI can help maintainers and contributors collaborate more effectively at scale.” That said, Warp notes that contributors are “free to use other coding agents as well,” but says its preference is for Oz, which it claims has “the correct skills and verification loops built-in.”

There’s more than a licensing change happening here. The company is also rolling out several product updates, which it describes as making the tool “more open and customizable.” Those include:

  • Support for “a much wider range of open source models,” including Kimi, MiniMax, and Qwen, plus an “auto (open)” routed option that picks what Warp deems the best open model for a given task.
  • A more flexible UI configuration so users can run Warp as “just a terminal,” add lightweight features such as diff view and file tree, or turn it into a “full-fledged ADE with built-in agents.”
  • A “long-overdue” settings file designed to give both users and agents programmatic control over configuration and easier portability across machines.

Warp hopes that by improving the program and opening the client, it can “build a successful business” in a market filled with “highly funded, closed-source competitors.”

Without the ability to out-spend rivals, the company is making a gamble on its ability to innovate. The company’s blog post states that “Warp is a smart way for us to accelerate product development.”

“We need to build our business by offering the best possible product to the most excited community,” the blog post reads, acknowledging the challenge — and inherent risk — ahead.

Warp also presents the move as the fulfillment of an original plan from its early “Show HN” launch, saying “the plan was always to open source the client,” but that internal debates about the trade‑offs have continued “every year.” The rise of AI agents, Lloyd writes, finally shifted the calculus: “We could just keep going with our current model, privately guessing at the roadmap and scaling more and more agents to build internally, but that feels like a missed opportunity.”

Will this prove to be Warp’s golden chance? We’ll find out. 

The post Warp’s gamble: Going open source to take on closed-source rivals appeared first on The New Stack.

Read the whole story
alvinashcraft
just a second ago
reply
Pennsylvania, USA
Share this story
Delete

Back Build Awesome Pro and make it easier to build for the web!


The Build Awesome (11ty) Kickstarter (Final_FINAL_v2) is live! We’re trying to make it easier for anyone to build, publish, and maintain web sites!



Working with the Postman CLI


The Postman CLI is the official command-line tool for Postman. It lets you run collections, publish workspaces, lint API specs, trigger monitors, and more, all from a terminal or CI/CD pipeline. This article covers everything you need to go from installing to a working CI/CD pipeline with the Postman CLI.

Postman CLI vs Newman

Before we get into it, you may already know Newman, the open-source collection runner also built by Postman. The tools are similar, but we recommend the Postman CLI for several reasons:

Feature                          | Postman CLI                    | Newman
Maintained                       | Yes, by Postman                | No longer maintained
Sends results to Postman Cloud   | Yes, if given an API key       | No, requires a reporter plugin
Package signed by Postman        | Yes                            | No
Supports governance checks       | Yes                            | No
Supports login/logout            | Yes                            | No
Usable as a Node.js library      | No (CLI only; npm-installable) | Yes
Supports multi-protocol requests | Yes                            | No
Supports v3 collections          | Yes                            | No

In short: move away from Newman, which we no longer maintain. The Postman CLI comes with native Postman Cloud integration.

Install

To install the Postman CLI, you can either use npm or download it using a shell command.

npm (recommended for most teams):

npm install -g postman-cli

macOS or Linux (shell script):

curl -o- "https://dl-cli.pstmn.io/install/unix.sh" | sh

Windows (PowerShell):

powershell -ExecutionPolicy ByPass -c "iex ((New-Object System.Net.WebClient).DownloadString('https://dl-cli.pstmn.io/install/windows.ps1'))"

Verify the install:

postman --version

PATH issue? If the command isn’t found after install, your shell may not have npm’s global bin directory in its PATH. Run npm prefix -g (or npm bin -g on npm 8 and earlier) to find the global prefix, then add its bin directory to your ~/.zshrc or ~/.bashrc.

Authenticate

The Postman CLI requires some form of authentication to connect to your account. You can authenticate once with the login command or use the --with-api-key flag when working in a CI pipeline.

Interactive login (browser-based):

Run the command below and follow the prompt.

postman login

Non-interactive login (CI/CD, scripting):

You need an API key to use this method. You can generate one at web.postman.co/settings/me/api-keys.

postman login --with-api-key "$POSTMAN_API_KEY"

Note: Always use a secret or environment variable for the API key in scripts. Do not hardcode it.
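As a minimal sketch (reusing the POSTMAN_API_KEY variable name from the command above), a CI script can fail fast when the secret was never injected, rather than letting the login fail later with a less obvious error:

```shell
# Abort immediately if the API key secret is missing from the
# environment. ${VAR:?message} makes the shell exit with the
# message when VAR is unset or empty.
require_key() {
  : "${POSTMAN_API_KEY:?POSTMAN_API_KEY is not set}"
}
```

Call require_key at the top of the script, then run the login command as shown above.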

Sign out:

To sign out, use the logout command.

postman logout

Setup

To follow the rest of this tutorial, click the Run in Postman button below to fork the collection into your own Workspace.

The collection contains a basic demo banking API that we will use throughout.

Run a Collection on the CLI

In the Postman app, you run collections with the collection runner. On the CLI, the equivalent is the command below.

It runs a collection locally and sends the results to your Postman Cloud account if connected.

postman collection run <collection-id-or-path>

If you have your Workspace connected to a local git repo, or you have the collection file available locally, you can provide a path to that file. Otherwise, to get the remote collection ID, follow the steps below.

Open the right sidebar if it isn’t already open, then click the info icon. You will see an option to copy the collection’s ID.

Copy Collection ID

This ID can be used in the command above to run the collection along with any tests, scripts, workflows, and assertions it contains.

When you run the collection on the CLI, you will see output like the following.

Run Collection from the command line

Run with an environment

You can specify an environment for the run using the --environment flag, or -e for short.

postman collection run <collection-id> --environment <environment-id>

# or shorthand:

postman collection run <collection-id> -e <environment-id>

Override variables without an environment file

You can also override individual variables by passing key-value pairs with --env-var:

postman collection run <collection-id> \
  --env-var "baseUrl=http://localhost:3000" \
  --env-var "apiKey=test-key"

Run a specific folder

To run a specific folder in the collection, get the folder UID the same way you got the collection ID, and pass it with the -i flag.

postman collection run <collection-id> -i <folder-uid>

Stop on first failure

You can stop a collection run at the first failure. This is helpful for debugging, and useful in CI when you want fast feedback rather than a full-suite run.

postman collection run <collection-id> --bail

Set output format

You can configure the output format of a collection run using the Postman CLI’s built-in reporters, specified with the --reporters flag (or -r for short).

postman collection run <collection-id> --reporters json

postman collection run <collection-id> --reporters junit

postman collection run <collection-id> --reporters html

JUnit output is widely supported by CI/CD platforms like GitHub Actions, Jenkins, and CircleCI for test result visualization.
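As a sketch of why that matters, a CI step can also gate directly on the report file. This assumes the run exported its JUnit results to results.xml; the exact export flag varies by CLI version, so check postman collection run --help for your install:

```shell
# Fail a CI step when a JUnit report records any failures.
# A clean report carries failures="0" on every <testsuite>.
check_junit() {
  file="$1"
  if grep -Eq 'failures="[1-9]' "$file"; then
    echo "test failures found in $file"
    return 1
  fi
  echo "all tests passed"
}
```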

Run a Single Request

For quick debugging without running a full collection, you can use the Postman CLI much like curl to send a single request.

postman request GET https://api.example.com/health \
  --header "Authorization: Bearer $TOKEN"

With a request body:

postman request POST https://api.example.com/users \
  --header "Content-Type: application/json" \
  --body '{"name": "Alice", "email": "alice@example.com"}'

Save the response to a file:

postman request GET https://api.example.com/users --output ./response.json

Publish to Postman Cloud (Native Git)

If you’re using Native Git, two commands handle the handoff from your local git repository to Postman Cloud:

postman workspace prepare: validates your local collections and environments. It checks for valid JSON structure, resolves references, and flags issues before they reach CI.

postman workspace prepare

postman workspace push: publishes your local workspace state to Postman Cloud. This updates the Cloud View that your API consumers see.

postman workspace push

# Skip the confirmation prompt (for CI):

postman workspace push -y

The typical flow looks like this:

git commit → git push → CI runs collection → postman workspace push -y → Cloud View updated

This is the command that makes Native Git work end-to-end. Your CI pipeline runs workspace push -y only on merges to main, so consumers always see a validated, CI-approved state.
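That main-only gate can be sketched as a small shell function (the branch argument is an assumption here; in GitHub Actions you would derive it from the ref, as the workflow template later in this article does with an if: condition):

```shell
# Sketch: publish to Postman Cloud only from main. Assumes a
# `postman` binary on PATH; the branch name comes from the CI system.
publish_if_main() {
  branch="$1"
  if [ "$branch" != "main" ]; then
    echo "skip: not publishing from branch '$branch'"
    return 0
  fi
  # Validate first; a failed prepare stops the push, so a broken
  # workspace never reaches the Cloud View.
  postman workspace prepare && postman workspace push -y
}
```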

If your collections or environments are in non-default locations:

postman workspace push \
  --collections-dir ./custom/collections \
  --environments-dir ./custom/environments

To learn more about Native Git, pushing changes to the cloud from local, and collaborating on a git-backed collection, read the article Collaborating on APIs with Postman Team Workspaces and Native Git.

Lint an OpenAPI Spec

The Postman CLI also has a spec lint command that validates an OpenAPI spec against the governance rules your team has configured in Postman.

Note: This feature is available only to Enterprise users.

Navigate to this URL and copy the YAML OpenAPI Specification. Paste the copied YAML file into the spec hub and copy the spec ID.

Add a new spec

If you have your Workspace connected to Native git, you can use the local YAML file as well.

postman spec lint <spec-id>

or

postman spec lint ./postman/specs/openapi.yaml

Fail on errors only (ignore warnings):

postman spec lint ./postman/specs/openapi.yaml --fail-severity ERROR

Output as JSON (for programmatic processing):

postman spec lint ./postman/specs/openapi.yaml --output JSON

Trigger a Monitor

You can run a cloud monitor from the CLI. This is useful for on-demand health checks or triggering from a deploy script.

First, create the monitor.

Next, trigger the monitor run using the CLI with the command below:

postman monitor run <monitor-id>


You can wait for the run to complete before exiting:

postman monitor run <monitor-id> --timeout 60000

Or if you want the command to exit 0 regardless of monitor results (e.g., for informational runs):

postman monitor run <monitor-id> --suppress-exit-code

GitHub Actions CI/CD Template

Here’s a complete example workflow that validates collections on every PR and publishes to Postman Cloud on every merge to main. This is the recommended pattern for teams using Native Git:

name: Postman CI

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
  workflow_dispatch:

jobs:
  postman:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-node@v4
        with:
          node-version: 22
          cache: "npm"

      - run: npm install

      - name: Install Postman CLI
        run: curl -o- "https://dl-cli.pstmn.io/install/unix.sh" | sh

      - name: Start server
        run: |
          npm run dev &
          echo $! > pidfile
          for i in {1..20}; do
            nc -z localhost 3000 && echo "Server ready" && break
            echo "Waiting for server..." && sleep 2
          done

      - name: Authenticate
        run: postman login --with-api-key "${{ secrets.POSTMAN_API_KEY }}"

      - name: Run collection
        run: postman collection run ./postman/collections/my-api.postman_collection.json

      - name: Publish to Postman Cloud
        if: github.ref == 'refs/heads/main' && github.event_name == 'push'
        run: |
          postman workspace prepare
          postman workspace push -y

      - name: Stop server
        if: always()
        run: kill $(cat pidfile) 2>/dev/null || pkill -f "npm run dev" || true

A few things worth noting:

  • postman collection run runs on every PR, so you catch failures before anything merges

  • postman workspace push -y runs only on merges to main, so consumers see a clean, validated state

  • postman workspace prepare validates before pushing, so you fail fast rather than publishing a broken workspace

  • Store POSTMAN_API_KEY in GitHub → Settings → Secrets and variables → Actions

Quick Reference

# Auth
postman login --with-api-key "$POSTMAN_API_KEY"
postman logout

# Run a collection
postman collection run <id-or-path>
postman collection run <id> -e <env-id>
postman collection run <id> --env-var "key=value"
postman collection run <id> -i <folder-uid>
postman collection run <id> -d data.csv -n 10
postman collection run <id> --bail
postman collection run <id> --reporters junit

# Run a single request
postman request GET https://api.example.com/endpoint

# Native Git: publish to Cloud
postman workspace prepare
postman workspace push -y

# Lint an OpenAPI spec
postman spec lint ./postman/specs/openapi.yaml

# Trigger a monitor
postman monitor run <monitor-id>


The post Working with the Postman CLI appeared first on Postman Blog.


Bitwarden CLI compromised (News)


Bitwarden’s CLI got hit by the Checkmarx supply-chain campaign, TypeScript 7.0 beta lands with the Go-rewritten compiler running ~10x faster than 6.0, and pgBackRest lost its maintainer of thirteen years, leaving anyone running production Postgres with a real dependency-trust task this week. We’ve also got Ubuntu 26.04 LTS shipping with TPM-backed full-disk encryption, and Matz dropping Spinel as an AOT path that takes Ruby to native binaries. This week was a good reminder that the tools we depend on are all moving at once: security, performance, and maintenance aren’t isolated threads.

View the newsletter

Join the discussion

Changelog++ members save 2 minutes on this episode because they made the ads disappear. Join today!

Sponsors:

  • Coder.com – Secure environments where devs and agents work in parallel. Open by design. Secure by default.

Download audio: https://op3.dev/e/https://pscrb.fm/rss/p/https://cdn.changelog.com/uploads/news/185/changelog-news-185.mp3

Kayla + Maddy LIVE - Plants, Trains, and Aspire

From: kayla.cinnamon
Duration: 0:00
Views: 36

🎙️ New to streaming or looking to level up? Check out StreamYard and get $10 discount! 😍 https://streamyard.com/pal/d/5873078476275712


Control your desktop terminal from your phone — GitHub Copilot CLI remote sessions

From: Gerald Versluis
Duration: 1:49
Views: 69

With GitHub Copilot CLI you can now work from EVERYWHERE! Connect your phone or any device with your desktop, which is completely pre-configured for your needs, and start building!

💝 Join this channel to get access to perks:
https://www.youtube.com/channel/GeraldVersluis/join

🛑 Don't forget to subscribe to my channel for more cool content: https://www.youtube.com/GeraldVersluis/?sub_confirmation=1

🎥 Video edited with Camtasia (ref): https://techsmith.z6rjha.net/AJoeD

🙋‍♂️ Also find my...
Blog: https://blog.verslu.is
All the rest: https://jfversluis.dev

#githubcopilot #copilotcli #githubcopilotcli
