
A Non-Engineer Look at Vibe Coding Mistakes (And How to Avoid Them)


By 2026, I don’t need to tell you vibe coding is useful; you’ve probably already tried it. Still, treating it as a way of working that will solve all of your problems is probably a bit too optimistic.

It’s easy to get carried away while you’re typing prompts, but there are several real challenges with vibe coding that you need to think about. It almost always works, right up until it doesn’t.

Where vibes are immaculate

From a personal (and business) standpoint, vibe coding at the start of a project is probably its best use. Since I feel a personal example is better than three abstract claims, I’ll share my own.

Throughout my career, I often got overwhelmed by tasks and had trouble organizing them in a way that my brain likes: projects, writing, editing, sending interview questions and getting them back, for example. None of these are big tasks on their own, but they’re very different in nature.

Finding a good task manager app was next to impossible, but I managed to create a simple dashboard web app that has everything I need and nothing more.

For many, my AI tool might be useless, but for me it’s the only one that works precisely the way my brain does.

The appeal is real and undeniable. Because you know best what you want, getting AI to do it is just a matter of sending messages and tailoring the “thing” into whatever you’d like. For me it was a real solution to a problem I face every day, but for someone else it might be a prototype of a new app or service that went from brain to screen in three hours.

Where the vibes get bad

The problems start when vibe coding moves into production, and I’ve seen that firsthand. While we all know what things should look like for the end user, most of us from non-technical fields have no idea what’s happening under the hood. That’s why you need to think about more than just vibes.

Code you don’t understand is code you can’t maintain. Easy enough; if you create a beast of a tool, app, or an entire service you intend to share with others, it’s very tricky to maintain it if you know nothing about it.

For example, it’s fine to create a massive SaaS-style solution for your own use, since you can troubleshoot issues with AI in your own time; there’s no real risk. But if you’re shipping the service to others, what happens when something breaks at 3 am for someone who paid for it?

Security gaps stack up. Every security expert will tell you that nobody cares about security until something goes wrong. With vibe coding and AI, the “wrong” part can be built in during the creation of the tools. AI sometimes embeds API keys or other secrets directly in the code, skips proper sanitization, and leaves a lot of doors open in general. If you’re not savvy and don’t think about this, you’re going to have a bad time. Also, remember that hackers use AI to find weaknesses, too.
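
To make that first failure mode concrete, here is a minimal Python sketch of the difference. The key, variable name, and error message are all made up for illustration:

    import os

    # What vibe-coded tools sometimes ship: the secret is baked into the source,
    # so anyone who can read the repo (or its git history) owns your account.
    API_KEY = "sk-live-1234-definitely-not-real"  # hypothetical leaked secret

    # Safer: read the secret from the environment at runtime and fail loudly.
    api_key = os.environ.get("PAYMENT_API_KEY")  # hypothetical variable name
    if not api_key:
        raise RuntimeError("PAYMENT_API_KEY is not set; refusing to start")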

The DORA AI report shows what we’re all thinking. The 2024 DORA report found that AI adoption actually correlated with a 7.2% reduction in delivery stability. More telling: 39% of respondents said they had little to no trust in AI-generated code, yet nearly everyone was using it anyway.

The risk here is over-reliance on AI tools that leave you, their creator, sitting on the sidelines wondering what’s even going on.

Keep an eye out for over-vibing

In my experience, there are a few signals that you’re in too deep with vibe coding and that it’s starting to become a liability. I’m not saying you should go and seek professional help, but for some of these, help really is the only solution.

  • You can’t explain what the code does, only what you asked the tool to make. If you try to explain it to a real engineer, the conversation is short and unpleasant for both sides.
  • You’re using AI to explain code that AI wrote, so any mistake in the code can carry straight into the explanation.
  • You’ve stopped writing tests because “AI will catch it”. And AI is, of course, not catching it for some reason.
  • You’re patching AI patches. The original AI solution had a bug, you asked AI to fix it, that fix introduced another issue, and now you’re three layers deep.

I know that asking a senior engineer to help you fix your vibe coded app might be one of the more stressful experiences in your life, but if you’re set to be a vibe coder, that’s a chance you’ve got to take.

I’m not trying to ruin your vibe, but…

Don’t get me wrong here; I’m not saying you should ditch Claude Code and start typing everything manually (or by pasting from StackOverflow).

What I’m saying is that the ones who make the most out of vibe coding apps are the ones who could, theoretically, make the same apps themselves.

What I’ve found in the last month is that AI adoption and the tendency to vibe code things is a pendulum: either you think it’s bad and complex and don’t want to touch it, or, once you do start, you can’t get enough of it. I’ve heard comparisons to “prime Call of Duty”, a reference only gamers will understand, but it paints the picture.

A good vibe coding experience is like driving a convertible on a coastal road at sunset.

In 2026, people are exceptionally interested in making anything and everything with Claude Code and other tools, and the truth is that we don’t really need 90% of it.

So when vibing, I’d advise you to think small, and think about problems that you actually have. It’s okay to create a task management tool just for yourself, with no plan to make money on it, and it’s okay to vibe code a prototype of a new idea before presenting it.

I’d just like you to understand that using a tool mainly used by software engineers does NOT make you one, and that you need to spend a lot of time understanding what the tool built in order to get close to the “real world.”

FAQ:

What is vibe coding?

Vibe coding is a development approach where you describe what you want in natural language and let AI generate the code. It’s fast and easy, but works best when the person in charge knows what good code looks like.

What are the biggest risks of vibe coding?

The main risks are shipping code you don’t understand, accumulating security gaps from unvalidated outputs, and losing the ability to debug or maintain what you’ve built.

Is vibe coding bad for production?

Not by itself, but it carries risks in production environments, mainly around security, maintainability, and code that can’t be explained without AI.

Can non-developers use vibe coding to build apps?

Yes, but with limits. Personal tools and prototypes are fair game. Anything you ship to paying users needs someone who can own the code and knows how to fix it.

Does AI-generated code affect software delivery?

According to the 2024 DORA report, AI adoption correlated with a 7.2% reduction in delivery stability, suggesting that higher productivity on paper doesn’t mean better software.

The post A Non-Engineer Look at Vibe Coding Mistakes (And How to Avoid Them) appeared first on ShiftMag.


Building the Future of IDEs: Inside the First JetBrains Codex Hackathon


39 projects, 6 finalists, and a weekend of IDE-native AI in San Francisco.

Earlier this month, we brought developers together in San Francisco for the inaugural JetBrains x Codex Hackathon. Over the course of one weekend, teams built 39 IDE-native AI projects, from which six finalists emerged. The event highlighted just how rapidly developers are turning AI inside the IDE into sophisticated workflows, tools, and products.

Why it matters

AI in the IDE is a vibrant conversation in software right now. From agents and copilots to context windows and orchestration, developers are brimming with ideas about where this is all heading. A hackathon remains one of the best formats for channeling that energy into working code, giving people the space to pursue ambitious projects within a weekend window.

The response was immense: 443 developers applied, resulting in 39 completed projects. Roughly half were IDE plugins or tools built on the IntelliJ Platform SDK – the kind of work that directly shapes the future of how engineering teams build. JetBrains believes in the power of convening a room like this, fueled by working tools and a real deadline. Leading technologists gave up their Sunday to judge, because the conversations and innovations that took place in that room were worth showing up for.

What got built

The weekend’s top prize went to hyperreasoning, a solo build by Aditya Mangalampalli. One person, one laptop, 24 hours – and a coding agent that decides which reasoning paths are worth exploring before it generates a single line of code. It’s the kind of project that justifies the hackathon format: a dormant idea finally prototyped by someone with the conviction to see it through solo over the course of a weekend. We’ll share the full story in a follow-up blog post.

Second place went to Scopecreep (Bhavik Sheoran, Kenneth Ross, Roman Javadyan, and Joon Im). Third place went to mesh-code (Ayush Ojha, Coco Cao, Kush Ise, and AL DRAM). Both teams, along with our three other finalists, will get their own spotlights in the next blog post.

Zooming out from individual projects, a few things stood out across the finalist pool. Roughly half the submissions were JetBrains plugins or IDE-native tools, built directly against the IntelliJ Platform SDK rather than around it. Two of our six finalists were solo builders – a remarkable feat in a format that usually rewards more hands. The work that impressed the judges wasn’t necessarily the work that generated code the fastest; it was the work that gave developers more visibility into what their agents were doing, more guardrails around it, and a clearer sense of when to trust the output.

That last part matters. Speed makes for good demos, but the projects people in the room kept coming back to were those building toward something that lasts past the demo: correctness, safety, context, and review. Better, not just faster.

“The most valuable part was building directly against the IDE rather than around it.”

– Participant feedback

The partners who made it possible

A weekend like this only happens when the entire ecosystem shows up. OpenAI brought Codex to the heart of the event, sent a judge, and gave every attendee ChatGPT Pro credits. Cerebral Valley managed the experience seamlessly from start to finish. AuthZed contributed two judges and provided cloud credits for every builder in the room.

Nebius sent a judge and backed the winning team, while Supabase and BKey each sent judges and anchored key layers of the tech stack. Clerk and Vercel joined as community partners, and Shack15 hosted us. A stack that tight is precisely what allows a two-person team to ship something production-looking by Sunday night.


Teaching software development the easy way using GitLab


For instructors teaching software development, one of the biggest logistical challenges is assignment distribution and feedback at scale. How do you give large groups of students access to course materials, keep solution code private, and still deliver meaningful, contextual feedback without lots of administrative overhead?

The GitLab for Education program provides qualifying institutions with free access to GitLab Ultimate, enabling instructors to build professional-grade workflows that mirror real-world software development environments. In this article, you'll learn how Stephen G. Dame, a lecturer in the Computing and Software Systems department at the University of Washington, Bothell, uses simple workflows in GitLab to manage everything from course materials to student feedback across multiple classes.

From aerospace to academia: Bringing GitLab to the classroom

Dame came to academia with years of experience as a chief software engineer at Boeing Commercial Airplanes, where GitLab was used for aerospace projects. As an adjunct professor, he became an early advocate for GitLab within the university, joining the GitLab for Education program to access the full feature set needed to run structured, scalable course workflows.

"GitLab provides the greatest way to organize multiple classes, student assignments, lectures, and code samples through the use of Groups and Subgroups, which I found to be unique to GitLab compared to other repository platforms."

  – Stephen G. Dame, University of Washington, Bothell

Set up groups: Build the right structure before writing a line of code

The foundation of an effective GitLab-based course is a well-planned group hierarchy. GitLab's Groups and Subgroups allow instructors to model the natural structure of a university (institution, course, and role) with precise, inheritable permissions at every level.

Dame's structure places the university at the root (UWTeaching), with each course occupying its own subgroup (e.g. css430). Within each course sit repositories for lecture-materials and code, alongside dedicated Subgroups for students and graders. Instructor materials remain private, while student and grader subgroups are configured with controlled permissions so that assignment briefs and solutions are visible only to the right people.

Screenshot of GitLab group hierarchy — institution, course subgroup, and per-student subgroups

Permissions cascade downward through the hierarchy via Manage > Members, allowing Dame to add students to a course's students subgroup with Reporter access and an expiration date tied to the end of the academic quarter. Students can clone and pull from assignment repositories but cannot push — keeping solution code firmly under instructor control.

Students are guided to set up SSH keys across all their working environments (local machines, cloud shells, virtual machines) so they can clone repositories and receive weekly updates via git pull. They copy relevant code into their own private repositories to manage their own version history.

Tip for large classes: For larger cohorts, adding students by hand is impractical. GitLab's REST API lets you automate subgroup creation and membership from a list of usernames. Below is a sample Python script that handles this:

    import gitlab

    # Connect to your GitLab instance
    gl = gitlab.Gitlab('https://gitlab.com', private_token='YOUR_PRIVATE_TOKEN')

    # Target parent group ID (e.g., the ID for "css430 > students")
    parent_group_id = 12345678

    # Set expiration: typically the beginning of the next month after quarter end
    expiry_date = '2025-01-01'

    # List of collected student usernames
    student_list = ['alice_css430', 'bob_css430', 'carol_css430', 'dave_css430', 'eve_css430']

    for username in student_list:
        try:
            # 1. Create a personal subgroup for the student
            subgroup = gl.groups.create({
                'name': username,
                'path': username,
                'parent_id': parent_group_id,
                'visibility': 'private'
            })

            # 2. Add student to the new subgroup with Expiration
            user = gl.users.list(username=username)[0]
            subgroup.members.create({
                'user_id': user.id,
                'access_level': gitlab.const.REPORTER_ACCESS,
                'expires_at': expiry_date
            })
            print(f"Success: Subgroup created and student added for {username}")
        except Exception as e:
            print(f"Error processing {username}: {e}")

GitLab also publishes an open source project that automates class management and provides additional tooling for this workflow.

Give feedback where the work actually lives

Once the structure is in place, the feedback workflow is where GitLab's value becomes most apparent to students. Dame asks students to submit assignments by opening a merge request in their repository. This gives instructors an immediate, clean diff of everything the student has written.

A GitLab merge request showing the inline code comment function for an instructor

Instructors can click any line of code and leave an inline comment: not just flagging what is wrong, but explaining why, and pointing to what to look at next. Students receive this feedback in direct context with their code, which is far more actionable than a comment at the bottom of a submitted document.
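
This kind of inline feedback can also be scripted. As a rough sketch (not Dame's actual workflow), the same python-gitlab library used in the script above can post an inline merge request comment; the project path, merge request number, file, and line below are all hypothetical:

    import gitlab

    gl = gitlab.Gitlab('https://gitlab.com', private_token='YOUR_PRIVATE_TOKEN')

    # Hypothetical student repository and merge request IID
    project = gl.projects.get('UWTeaching/css430/students/alice_css430/assignment1')
    mr = project.mergerequests.get(1)

    # Inline comments are "discussions" anchored to a position in the diff
    diff = mr.diffs.list()[0]  # most recent diff version
    mr.discussions.create({
        'body': 'Consider what happens here when the queue is empty.',
        'position': {
            'position_type': 'text',
            'base_sha': diff.base_commit_sha,
            'head_sha': diff.head_commit_sha,
            'start_sha': diff.start_commit_sha,
            'new_path': 'scheduler.c',  # hypothetical file in the student repo
            'new_line': 42,             # line in the new version of the file
        },
    })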

Join GitLab for Education

Setting up your first GitLab assignment takes some initial effort, but once the structure is in place it largely runs itself. The real payoff goes beyond organization: Students graduate having worked daily in an environment that mirrors professional software development, building habits around version control and code review rather than learning them as abstract concepts.

If you are just getting started, keep it simple. Begin with a single course group, one assignment template, and a basic pipeline. The structure will grow naturally alongside your confidence with the platform.

Make sure to sign up for GitLab for Education so that you and your students can access all top-tier features, including unlimited reviewers on merge requests, additional compute minutes, and expanded storage.

Apply to the GitLab for Education program today.


Warp’s gamble: Going open source to take on closed-source rivals

1 Share

Warp, the popular Rust-based agentic development environment, has released its client as open source.

Warp began in 2022 as, believe it or not, a terminal program for Macs. In addition, you could use Warp as an IDE. From there, it evolved into what the company calls an agentic development environment (ADE) and became available on Linux and Windows as well. That’s a lot of change in a short time, and now the Warp client is going open source under the AGPL.

In a blog post announcing the shift on Tuesday, CEO and founder Zach Lloyd writes that “the Warp client is now open source, and the community can participate in building it using an agent-first workflow managed by Oz, our cloud agent orchestration platform.”

OpenAI is the “founding sponsor of the new, open‑source Warp repository,” and the agent workflows that power it are built on GPT models. Warp describes this as “our vision of how software will be built in the future,” with humans supervising “a fleet of agents” that handle most of the implementation work.

The company argues that the long‑standing bottleneck in development is no longer typing code, but all the “human-in-the-loop activities around the code: speccing the product and verifying behavior.”

Besides, agents can already “handle the implementation heavy lifting really well,” so why not enable contributors to focus on higher‑level design and verification? Their argument makes sense.

And what does that have to do with open source? Warp’s leadership explained its decision to go open source as a mix of practical product concerns and a bet on where AI‑assisted development is headed. The company says it believes it “can ship a better Warp, more quickly, if we open source and work with our community to help supervise a fleet of agents.”

Well, that has certainly always been a big reason why countless other companies have embraced open source. Usually, though, companies start with an open-source project and then turn it into a product or service. We’ll see if this flip-around of the usual open-source-to-product path works out for Warp.

A second motivation is to give developers more say over the shape of “agentic development.” Warp states that “there isn’t a full-featured open agentic development environment on the market.” 

The company presents the open‑sourced client as an alternative to closed tools from larger incumbents, and as a starting point for others who want to build their own tools on top of Warp and Oz.

As part of the shift, Warp says it is moving “from a closed product development process to an open one.” Public GitHub issues will now be the “source of truth” for feature tracking, with the company promising to publish its ADE roadmap and hold technical and product discussions in the open.

For now, the Warp open‑source repo is tightly coupled to Warp’s commercial Oz orchestration platform. The company emphasizes that “Warp’s new open-source agent workflows are powered by OpenAI models, with OpenAI supporting the next generation of collaborative software development.”

In the blog post, OpenAI engineering lead Thibault Sottiaux adds, “Open source has long been central to how developers learn, build, and push the field forward. We’re excited to support experiments that explore how AI can help maintainers and contributors collaborate more effectively at scale.” That said, Warp notes that contributors are “free to use other coding agents as well,” but says its preference is for Oz, which it claims has “the correct skills and verification loops built-in.”

There’s more than a licensing change happening here.  The company is rolling out several product updates, which it describes as making the tool “more open and customizable.” Those include:

  • Support for “a much wider range of open source models,” including Kimi, MiniMax, and Qwen, plus an “auto (open)” routed option that picks what Warp deems the best open model for a given task.
  • A more flexible UI configuration so users can run Warp as “just a terminal,” add lightweight features such as diff view and file tree, or turn it into a “full-fledged ADE with built-in agents.”
  • A “long-overdue” settings file designed to give both users and agents programmatic control over configuration and easier portability across machines.

What all this means, Warp hopes, is that improving the program and opening the client will help the company “build a successful business” in a market filled with “highly funded, closed-source competitors.”

Without the ability to out-spend rivals, the company is making a slight gamble on its ability to innovate. The company’s blog post states that “Warp is a smart way for us to accelerate product development.

“We need to build our business by offering the best possible product to the most excited community,” the blog post reads, acknowledging the challenge — and inherent risk — ahead.

Warp also presents the move as the fulfillment of an original plan from its early “Show HN” launch, saying “the plan was always to open source the client,” but that internal debates about the trade‑offs have continued “every year.” The rise of AI agents, Lloyd writes, finally shifted the calculus: “We could just keep going with our current model, privately guessing at the roadmap and scaling more and more agents to build internally, but that feels like a missed opportunity.”

Will this prove to be Warp’s golden chance? We’ll find out. 

The post Warp’s gamble: Going open source to take on closed-source rivals appeared first on The New Stack.


Back Build Awesome Pro and make it easier to build for the web!


The Build Awesome (11ty) Kickstarter (Final_FINAL_v2) is live! We’re trying to make it easier for anyone to build, publish, and maintain web sites!



Working with the Postman CLI


The Postman CLI is the official command-line tool for Postman. It lets you run collections, publish workspaces, lint API specs, trigger monitors, and more, all from a terminal or CI/CD pipeline. This article covers everything you need to go from installing to a working CI/CD pipeline with the Postman CLI.

Prerequisites

Postman CLI vs Newman

Before we get into it, you may already know Newman, the open-source collection runner also built by Postman. They’re similar tools, but we recommend that you always use the Postman CLI, for several reasons:

                                   Postman CLI                   Newman
Maintenance                        Maintained by Postman         No longer maintained
Sends results to Postman Cloud     Yes, if provided an API key   No, requires a reporter plugin
Package signed by Postman          Yes                           No
Supports governance checks         Yes                           No
Supports login/logout              Yes                           No
Usable as a Node.js library        No (you can npm install it)   Yes
Supports multi-protocol requests   Yes                           No
Supports v3 collections            Yes                           No

In short, move away from Newman, as we no longer maintain it; the Postman CLI comes with native Postman Cloud integration.

Install

To install the Postman CLI, you can either use npm or download it using a shell command.

npm (recommended for most teams):

npm install -g postman-cli

macOS or Linux (shell script):

curl -o- "https://dl-cli.pstmn.io/install/unix.sh" | sh

Windows (PowerShell):

powershell -ExecutionPolicy ByPass -c "iex ((New-Object System.Net.WebClient).DownloadString('https://dl-cli.pstmn.io/install/windows.ps1'))"

Verify the install:

postman --version

PATH issue? If the command isn’t found after install, your shell may not have the npm global bin directory in its PATH. Run npm bin -g to find it (on npm 9 and later, where npm bin was removed, use $(npm prefix -g)/bin instead), then add that directory to your ~/.zshrc or ~/.bashrc.

Authenticate

The Postman CLI requires some form of authentication to connect to your account. You can authenticate once with the login command or use the --with-api-key flag when working in a CI pipeline.

Interactive login (browser-based):

Run the command below and follow the prompt.

postman login

Non-interactive login (CI/CD, scripting):

You need an API key to use this method. You can get one at web.postman.co/settings/me/api-keys.

postman login --with-api-key "$POSTMAN_API_KEY"

Note: Always use a secret or environment variable for the API key in scripts. Do not hardcode it.

Sign out:

To sign out, use the logout command.

postman logout

Setup

To follow the rest of this tutorial, click the Run in Postman button below. This button lets you fork the collection we’ll use for the rest of this demo into your own Workspace.

This collection contains a basic demo banking API that we will be using for the rest of this tutorial.

Run a Collection on the CLI

In the Postman app, you can run collections using the built-in collection runner. To do the same from the CLI, use the command below.

postman collection run <collection-id-or-path>

This runs the collection locally and sends the results to your Postman Cloud account if you’re connected. If you have your Workspace connected to a local git repo, or you have the collection file available locally, you can provide a path to that file. Otherwise, to get the remote collection UID, follow the steps below.

Open the right sidebar if it isn’t already open, and click the info icon. You will see an option to copy the ID of the collection.

Copy Collection ID

This UID can be used in the above command to run the collection alongside any tests, scripts, workflows, and assertions it has.

When you run the collection on the CLI, you will see the following output.

Run Collection from the command line

Run with an environment

You can specify an environment to run your collection against using the --environment flag, or -e for short.

postman collection run <collection-id> --environment <environment-id>

# or shorthand:

postman collection run <collection-id> -e <environment-id>

Override variables without an environment file

Variables within an environment file can also be overridden by specifying key-value pairs with --env-var:

postman collection run <collection-id> \

--env-var "baseUrl=http://localhost:3000" \

--env-var "apiKey=test-key"

Run a specific folder

To run a specific folder in the collection, get the folder UID the same way you got the collection UID, and specify it using the -i flag.

postman collection run <collection-id> -i <folder-uid>

Stop on first failure

You can stop a collection run at the first failure. This is helpful for debugging, and useful in CI when you want fast feedback rather than running the entire suite.

postman collection run <collection-id> --bail

Set output format

You can configure the output format of your collections using the built-in reporters of the Postman CLI. You can specify a reporter using the -r flag.

postman collection run <collection-id> --reporters json

postman collection run <collection-id> --reporters junit

postman collection run <collection-id> --reporters html

JUnit output is widely supported by CI/CD platforms like GitHub Actions, Jenkins, and CircleCI for test result visualization.
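
If you want to gate a build on that report yourself rather than rely on the CI platform’s UI, a few lines of Python can do it. This is a minimal sketch that assumes the JUnit XML from the run has been saved to results.xml (the filename is illustrative):

    import sys
    import xml.etree.ElementTree as ET

    # A JUnit report is either a single <testsuite> root or a <testsuites>
    # wrapper; each suite carries "tests", "failures", and "errors" attributes.
    root = ET.parse('results.xml').getroot()
    suites = [root] if root.tag == 'testsuite' else list(root.iter('testsuite'))

    failed = sum(int(s.get('failures', 0)) + int(s.get('errors', 0)) for s in suites)
    total = sum(int(s.get('tests', 0)) for s in suites)

    print(f'{failed} of {total} tests failed')
    sys.exit(1 if failed else 0)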

Run a Single Request

For quick debugging without running a full collection, you can use the Postman CLI much as you would use curl on the command line to run a single request.

postman request GET https://api.example.com/health \

--header "Authorization: Bearer $TOKEN"

With a request body:

postman request POST https://api.example.com/users \

--header "Content-Type: application/json" \

--body '{"name": "Alice", "email": "alice@example.com"}'

Save the response to a file:

postman request GET https://api.example.com/users --output ./response.json

Publish to Postman Cloud (Native Git)

If you’re using Native Git, two commands handle the handoff from your local git repository to Postman Cloud:

Postman Workspace prepare: postman workspace prepare validates your local collections and environments; it checks for valid JSON structure, resolves references, and flags issues before they reach CI.

postman workspace prepare

Postman Workspace push: postman workspace push publishes your local workspace state to Postman Cloud. This updates the Cloud View that your API consumers see.

postman workspace push

# Skip the confirmation prompt (for CI):

postman workspace push -y

The typical flow looks like this:

git commit → git push → CI runs collection → postman workspace push -y → Cloud View updated

This is the command that makes Native Git work end-to-end. Your CI pipeline runs workspace push -y only on merges to main, so consumers always see a validated, CI-approved state.

If your collections or environments are in non-default locations:

postman workspace push \

--collections-dir ./custom/collections \

--environments-dir ./custom/environments

To learn more about native git, pushing changes to the cloud from local, and collaborating with a git-backed collection, read this article – Collaborating on APIs with Postman Team Workspaces and Native Git

Lint an OpenAPI Spec

Postman CLI also has the spec lint command that validates an OpenAPI spec against your team’s governance rules configured in Postman.

Note: This feature is available only to Enterprise users.

Navigate to this URL and copy the YAML OpenAPI specification. Paste it into the Spec Hub and copy the spec ID.

Add a new spec

If you have your Workspace connected to Native git, you can use the local YAML file as well.

postman spec lint <spec-id>

OR

postman spec lint ./postman/specs/openapi.yaml

Fail on errors only (ignore warnings):

postman spec lint ./postman/specs/openapi.yaml --fail-severity ERROR

Output as JSON (for programmatic processing):

postman spec lint ./postman/specs/openapi.yaml --output JSON

Trigger a Monitor

You can run a cloud monitor from the CLI. This is useful for on-demand health checks or triggering from a deploy script.

First, create the monitor.

Next, trigger the monitor run using the CLI with the command below

postman monitor run <monitor-id>


You can wait for the run to complete before exiting:

postman monitor run <monitor-id> --timeout 60000

Or if you want the command to exit 0 regardless of monitor results (e.g., for informational runs):

postman monitor run <monitor-id> --suppress-exit-code

GitHub Actions CI/CD Template

Here’s a complete example workflow that validates collections on every PR and publishes to Postman Cloud on every merge to main. This is the recommended pattern for teams using Native Git:

name: Postman CI

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
  workflow_dispatch:

jobs:
  postman:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-node@v4
        with:
          node-version: 22
          cache: "npm"

      - run: npm install

      - name: Install Postman CLI
        run: curl -o- "https://dl-cli.pstmn.io/install/unix.sh" | sh

      - name: Start server
        run: |
          npm run dev &
          echo $! > pidfile
          for i in {1..20}; do
            nc -z localhost 3000 && echo "Server ready" && break
            echo "Waiting for server..." && sleep 2
          done

      - name: Authenticate
        run: postman login --with-api-key "${{ secrets.POSTMAN_API_KEY }}"

      - name: Run collection
        run: postman collection run ./postman/collections/my-api.postman_collection.json

      - name: Publish to Postman Cloud
        if: github.ref == 'refs/heads/main' && github.event_name == 'push'
        run: |
          postman workspace prepare
          postman workspace push -y

      - name: Stop server
        if: always()
        run: kill $(cat pidfile) 2>/dev/null || pkill -f "npm run dev" || true

A few things worth noting:

  • postman collection run runs on every PR, so you catch failures before anything merges

  • postman workspace push -y runs only on merges to main, so consumers see a clean, validated state

  • postman workspace prepare validates before pushing, so you fail fast rather than publish a broken workspace

  • Store POSTMAN_API_KEY in GitHub → Settings → Secrets and variables → Actions

Quick Reference

# Auth
postman login --with-api-key "$POSTMAN_API_KEY"
postman logout

# Run a collection
postman collection run <id-or-path>
postman collection run <id> -e <env-id>
postman collection run <id> --env-var "key=value"
postman collection run <id> -i <folder-uid>
postman collection run <id> -d data.csv -n 10
postman collection run <id> --bail
postman collection run <id> --reporters junit

# Run a single request
postman request GET https://api.example.com/endpoint

# Native Git: publish to Cloud
postman workspace prepare
postman workspace push -y

# Lint an OpenAPI spec
postman spec lint ./postman/specs/openapi.yaml

# Trigger a monitor
postman monitor run <monitor-id>

Resources

The post Working with the Postman CLI appeared first on Postman Blog.
