Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

htmxRazor 1.2.0: Toast Notifications, Pagination, and the End of CSS Specificity Fights


The first feature release after htmxRazor hit 1.1 is here, and it targets the three complaints I hear most from .NET developers building server-rendered apps with htmx: “I need toast notifications,” “I need pagination that works with htmx from the start,” and “your CSS keeps fighting with mine.”

Version 1.2.0 addresses all three. Here is what shipped.

Toast Notifications That Actually Work with htmx

Every htmx-powered app needs toast notifications. A user submits a form, the server processes it, and you need to tell them what happened. Until now, your options in the ASP.NET Core world were to wire up a JavaScript toast library by hand or build your own partial-view-plus-htmx-oob-swap plumbing.

htmxRazor 1.2.0 ships a complete toast notification system. Drop a <rhx-toast-container> on your layout, then trigger toasts from the server using the HxToast() or HxToastOob() extension methods. The component handles auto-dismiss timers, severity variants (success, warning, danger, info), stacking when multiple toasts fire at once, and aria-live announcements so screen readers pick up every notification automatically.
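The tag and method names above come from the release notes, but the parameters in this sketch are assumptions, not the documented API:

```cshtml
@* _Layout.cshtml: drop one container near the end of <body> for the whole app *@
<rhx-toast-container />

@* In a handler, attach a toast to the htmx response. HxToast() is named in the
   release notes; this particular signature is hypothetical: *@
@* return PartialView("_OrderRow", model).HxToast("Order saved", "success"); *@
```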

No JavaScript. No third-party library. One Tag Helper and a server-side method call.

Pagination Built for htmx

Pagination is another pattern that shows up in nearly every production app, yet nobody had shipped a .NET Tag Helper that wires up htmx navigation correctly. The new <rhx-pagination> component gives you page buttons, ellipsis for large ranges, first/last/prev/next controls, and size variants. All page transitions happen through htmx, so you get partial page updates without full reloads.
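A hedged sketch of what the markup might look like; only the `<rhx-pagination>` tag name is from the release notes, the attribute names are illustrative (hx-get and hx-target are standard htmx attributes):

```cshtml
@* Attribute names here are assumptions for illustration. *@
<rhx-pagination page="@Model.Page"
                total-pages="@Model.TotalPages"
                hx-get="/orders"
                hx-target="#order-list" />
```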

If you have been hand-coding pagination partials on every project, this replaces all of that with a single component.

CSS Cascade Layers: No More Specificity Wars

This is the change that will matter most to teams adopting htmxRazor in existing applications.

Every component library ships CSS that eventually collides with your own styles. You write a rule, the library’s rule wins because of higher specificity, and you start sprinkling !important everywhere. It is a familiar and miserable cycle.

Version 1.2.0 wraps all htmxRazor component CSS inside @layer declarations. Cascade layers let the browser resolve specificity in a predictable order: any CSS you write outside a layer will always beat CSS inside one. That means your application styles win by default, with zero specificity hacks needed.
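The mechanism is easy to see in miniature. In this sketch the layer name is an assumption; what matters is that unlayered author CSS beats layered CSS regardless of selector specificity:

```css
/* Library CSS ships inside a layer (layer name here is illustrative). */
@layer htmxrazor {
  .rhx-btn.rhx-btn-primary { background: rebeccapurple; } /* specificity 0,2,0 */
}

/* Your app CSS, outside any layer, wins even with lower specificity. */
.rhx-btn { background: steelblue; } /* specificity 0,1,0 — still applied */
```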

This single change makes htmxRazor significantly easier to adopt in brownfield projects that already have their own stylesheets.

Accessibility: ARIA Live Region Manager

The new <rhx-live-region> component solves a problem that most developers do not realize they have until an accessibility audit flags it. When htmx swaps content on the page, screen readers do not automatically announce the change. Users who rely on assistive technology can miss critical updates entirely.

The live region manager listens for htmx swaps and pushes announcements to screen readers with configurable politeness levels (polite or assertive) and atomic update control. If you care about building applications that work for all of your users, this component closes a real gap.
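As a sketch only: the politeness and atomicity options are described above, but these exact attribute names are assumptions:

```cshtml
@* Hypothetical attributes; only the <rhx-live-region> tag name is documented above. *@
<rhx-live-region politeness="polite" atomic="true" />
```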

View Transitions and hx-on:* Support

Two smaller additions round out the release. The new rhx-transition and rhx-transition-name attributes let you wire up the View Transitions API for animated page transitions with no custom JavaScript. And the hx-on:* dictionary attribute on the base Tag Helper class brings full support for htmx 2.x event handler attributes across every component in the library.

Upgrade Now

dotnet add package htmxRazor --version 1.2.0

Browse the full docs and live demos at https://htmxRazor.com, and check the source at https://github.com/cwoodruff/htmxRazor.

htmxRazor is MIT licensed and accepting contributions. If toast notifications, proper pagination, or cascade layers solve a problem you have been working around, give 1.2.0 a try.

The post htmxRazor 1.2.0: Toast Notifications, Pagination, and the End of CSS Specificity Fights first appeared on Chris Woody Woodruff | Fractional Architect.


iOS 26.4 Beta 3



Apple released iOS 26.4 Beta 3 on March 2, 2026, continuing the testing cycle for the upcoming platform update. While this beta primarily focuses on stability improvements, the broader iOS 26.4 release introduces several changes that developers should be aware of across media experiences, system behavior, and platform capabilities.

Below is a breakdown of the most relevant updates in the iOS 26.4 cycle.


Native video podcasts support

iOS 26.4 introduces native video podcast support in the Apple Podcasts ecosystem. Users can now seamlessly switch between audio and video versions of podcast episodes within the Podcasts app without losing playback position.

The implementation relies on Apple’s HTTP Live Streaming (HLS) technology, enabling adaptive video streaming, full screen playback, and the ability to download video podcast episodes for offline viewing.

For media platforms and podcast creators, this update signals Apple’s push toward richer multimedia podcast formats. Video podcasting has become a major distribution channel across platforms such as YouTube and Spotify, and Apple’s native support positions the Podcasts ecosystem to compete more directly in this space.

Although iOS does not expose new developer APIs specifically for podcast video, applications that distribute or aggregate podcast content may increasingly need to support both audio and video podcast formats.


Encrypted RCS messaging improvements

iOS 26.4 expands Apple’s messaging capabilities with support for encrypted RCS messaging, improving security for conversations that use the Rich Communication Services protocol.

RCS enables more modern messaging features compared to traditional SMS, including improved media sharing and richer conversation capabilities. The addition of encryption strengthens the privacy model of these communications.

For developers working with messaging integrations, notification processing, or communication features inside apps, this change reflects Apple’s continued movement toward more secure messaging infrastructure across platforms.


Apple Intelligence expansion and AI powered media features

iOS 26.4 continues Apple’s integration of Apple Intelligence, with new AI powered capabilities appearing in system applications such as Apple Music.

One example is Playlist Playground, which allows users to generate music playlists using natural language prompts. The system can interpret descriptions and automatically create playlists based on mood, activity, or context.

This feature demonstrates Apple’s broader strategy of embedding generative AI capabilities directly into system apps and services. For developers, the growing presence of Apple Intelligence across the system may influence user expectations around AI assisted content creation and automation within applications.


Ambient media and system widget additions

The iOS 26.4 beta introduces new ambient music widgets that allow users to quickly start background sound environments directly from the home screen or lock screen.

These widgets provide quick access to soundscapes designed for focus, relaxation, or background listening. While primarily a user facing feature, it reflects Apple’s continued expansion of system level media experiences and widget based interactions.

Applications that provide media or background audio experiences may need to consider how users interact with these features alongside system level controls.


System improvements and platform stability

Later beta builds such as iOS 26.4 Beta 3 focus heavily on platform stability and bug fixes. Apple continues refining system behavior, background execution handling, and application lifecycle interactions as the update moves closer to public release.

Developers testing applications on the iOS 26.4 beta should validate compatibility against the latest SDK and verify behavior across updated frameworks and system services.


Jensen Huang Calls OpenClaw "Most Important Software Release Ever"

From: AIDailyBrief
Duration: 7:51

Jensen Huang declared OpenClaw the most important software release ever. OpenClaw's explosive adoption has ignited a global agent arms race, with Chinese founders and cloud providers launching hosted instances and agentic apps. OpenAI and Anthropic revenues surged toward tens of billions in ARR while Google unveiled Gemini-powered cinematic video overviews fusing imagery, audio and narration.

The AI Daily Brief helps you understand the most important news and discussions in AI.
Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Get it ad free at http://patreon.com/aidailybrief
Learn more about the show https://aidailybrief.ai/


NanoClaw can stuff each AI agent into its own Docker container to deal with OpenClaw’s security mess


On the one hand, I feel a bit conflicted pointing out the recognised security issues with OpenClaw, even as serious AI thought leaders are naming their agents “Arnold” and shouting orders at them. I feel duty-bound to take their enthusiasm seriously, while also stressing that this whole area remains problematic.

Enter NanoClaw. And it’s more than just a very small claw.

Firstly, NanoClaw can isolate each agent in its own container. So the agentic loop starts with no knowledge of other agents, and only knows about the resources you tell it about.

The other intriguing thing is that the app isn’t a large configuration file talking to a monolith; it is just the code you need (and the appropriate Claude skills) — with Claude hacking itself as needed. Given that code is “cheap” now, and Claude editing is reliable, this does make sense. And it does keep the code size down.

WhatsApp? No thanks.

My first question was… how does a bot get access to WhatsApp? This is the preferred contact choice of most OpenClaw users (and NanoClaw). The problem is that unless you have a business account, WhatsApp gives you no sanctioned way to attach an arbitrary new automated user. On closer inspection, it appears that the WhatsApp connection relies on a module called Baileys that talks to WhatsApp Web over its WebSocket interface, which Meta strongly discourages. In fact, accounts using unauthorised methods to connect to WhatsApp are actively monitored and restricted.

I’m hardly going to encourage using such a method, but fortunately, we don’t have to. I do pay for a Slack workspace, and while connecting to Slack is a little painful, it is at least fully accounted for.

Installing

I have Claude installed, of course, connected to a “Pro” account. With the instructions, I do the usual thing:

git clone https://github.com/qwibitai/nanoclaw.git


Then I ran Claude within the new directory with /setup:

I have Docker Desktop installed, as this part requires:

On a Mac, you will see the familiar Docker icon in the Menu Bar if you didn’t start it yourself.

Then we move to how you are connecting to Claude:

Usually, I have to remember to turn the API key off because it’s more expensive than a subscription. This is the first time I’ve seen the two options mentioned side by side—a good sign.

Then we get to the contentious bit:

As I’ve indicated, I don’t think WhatsApp is appropriate, so I’ll be using Slack.

Then we were given the great Slack sidequest:

I now have to find two tokens, but not with my sword and trusty shield, but with the Slack API. I only recommend this campaign for an experienced adventurer. Onwards.

Generating the tokens in Slack

Fortunately, there are some good instructions on the Slack skill, and Claude is patient. First, we need to generate the tokens and scopes.

On Slack, I found the appropriate dialog:

We need to turn on Socket Mode:

Then we need to subscribe to a set of bot events:

And add scopes for OAuth – these limit what the NanoClaw App can do in the account:

And finally, you get to install your new app and fetch the final dungeon key token:

I have slain the dragon / found the treasure / defeated the Rocketeer. Well, not quite.

Claude crashed. But I quickly got back to where I was; Claude fixed the errant Slack script and accepted my two tokens for its .env file:

Then it was a case of introducing NanoClaw into my Slack channel.

I suspected we were done with Slack itself, but we needed to give it access to my server folders. Remember, this is what we did with Claude Cowork to give it real power:

A nicer way to select the folders would be cool, but I added the folders I was happy for NanoClaw to see:

And then I was able to communicate with NanoClaw on my Slack channel, after getting the correct Claude auth token:

My initial attempt to confirm that NanoClaw could see my folders on my Mac failed:

This is both good and bad. It proved that the agent is sitting in a container and not part of a single app. And of course, I asked Claude to fix itself. I had been tailing the log, so I could relay all the problems back to Claude, which eventually mapped the folders in a way that the NanoClaw agent could understand:

Note how it refers to “the agent” as a separate entity. So I had a back-and-forth between the NanoClaw agent and Claude. I’m still very much the engineer here – but the separation of control is fine. The errors were the ones we all make, not understanding what Linux wants. No one understands what Linux wants.

Eventually, it fixed its internal database and restarted what it needed to for the container. And with the new mapping, I could see my Documents folder:

To verify, I added a new file and confirmed the directory really was mapped live. Eventually, the agent did reflect that the file was there.


Now this isn’t running on a Mac Mini under my cupboard, but on my laptop. So I won’t be asking for a research document based on a report in my inbox at 2 a.m. while out for a jog, but if I were into that sort of thing, NanoClaw can clearly provide this fairly safely.

While I did need to play engineer to get everything working, in reality, I was telling Claude my problems, and Claude fixed them. For that, I got a direct connection from my mobile Slack app to my server. I like the fact that Claude thought of the agent sitting in the container as quite separate from itself, and overall, this is certainly a much more sensible and secure setup if you really want to be a “power user” who really needs a secretary to yell at.

The post NanoClaw can stuff each AI agent into its own Docker container to deal with OpenClaw’s security mess appeared first on The New Stack.


Open-source coding agents like OpenCode, Cline, and Aider are solving a huge headache for developers


AI coding agents are proliferating, but the economics of running large language models (LLMs) are breaking down as developers juggle multiple APIs and seriously unpredictable token bills. This is particularly problematic when agents start making dozens of model calls just to complete a single request.

The response from the developer community has been open-source coding agents, which operate one layer above the models. They keep costs consistent because they’re independent of — and work across — many LLMs.

OpenCode is one such player, which last week introduced OpenCode Go, a $10-per-month subscription designed to make those workloads easier to manage.

The agent layer takes center stage

The rise of coding agents such as OpenCode also points to a shift in where value may sit in the AI software stack. Much of the early attention in generative AI centred on the capabilities of LLMs themselves. Tools like OpenCode scan repositories, interpret developer instructions, break tasks into multiple steps, run commands, and apply changes across a project. In effect, they translate a model’s general reasoning ability into concrete actions inside a codebase.

A growing number of similar open projects are exploring this space. Alongside OpenCode, tools such as Kilo Code (16.3k stars on GitHub as of writing) are experimenting with similar open-agent architectures while introducing their own paid tiers to cover infrastructure costs. Cline, an open-source VS Code extension that emerged from an Anthropic “Build with Claude” Hackathon in 2024, has 58.7k GitHub stars. Meanwhile, Aider (currently at 41.6k GitHub stars) has evolved over the years and is one of the most established open-source coding agents.

Projects like these mark the emergence of a new layer of developer tooling built around LLMs. The agent is the interface that developers interact with: software that interprets tasks, navigates repositories, and coordinates the model calls that produce the final output.

And much like the broader software sphere, subscriptions have become the standard way to package them. Tools such as Anthropic’s Claude Code, OpenAI’s Codex, and Cursor combine model access with an assistant that can read repositories, propose edits, and execute tasks across a project. The subscription layer typically bundles model usage into a single monthly plan, reflecting the heavy prompt traffic these systems generate.

OpenCode approaches the problem from a slightly different angle. It’s an open-source coding agent that runs in the terminal (a desktop app is also available in beta) and connects to whichever models developers want to use. OpenCode acts as a neutral layer between a developer and the models, allowing the same agent to operate with systems from OpenAI, Anthropic, Google, or open models hosted elsewhere.

Open source agents are pulling ahead

OpenCode quietly emerged in 2024 from the team behind Serverless Stack (SST), an open source framework for building applications on Amazon Web Services (AWS). Several of the same developers are involved, including Dax Raad, alongside Jay V and Frank Wang, who run developer tools company Anomaly.

Throughout 2025, the project gained significant traction. According to Runa Capital’s ROSS Index of fast-growing commercial open source startups, the OpenCode repository reached 44.6K GitHub stars by the end of last year, placing it among the fastest-rising projects. The repo has continued to grow, too, passing 117K stars as of this writing in March 2026.

Part of the appeal lies in OpenCode’s flexibility. Many of the major coding agents are tightly aligned with a particular model provider — for example, Anthropic’s Claude Code or OpenAI’s Codex. Cursor, for its part, exposes a curated set of models inside its editor environment. OpenCode, however, allows developers to connect their own providers and API keys, supporting dozens of model providers and even locally hosted systems.

That flexibility becomes more relevant as model providers tighten control over how their systems are accessed. Anthropic, for example, recently tightened Claude restrictions after discovering that some third-party tools — including OpenCode — were routing Claude Code subscription access through external agents. The change prevents Claude Code subscription credentials from being used outside Anthropic’s own tooling, although developers can still access Claude models through the standard API inside tools such as OpenCode.

The move appears aimed at a pattern some developers had adopted: running intensive agent loops through flat-rate subscriptions that would otherwise cost far more under usage-based API pricing. By contrast, OpenAI models remain usable inside third-party agents such as OpenCode, reflecting growing competition between model providers seeking to win over the developer community.

OpenCode Go builds on that model flexibility by offering a bundled option. Instead of requiring developers to connect external providers themselves, the $10-per-month plan includes access to several models directly inside the tool, including GLM-5 from Zhipu, Kimi K2.5 from Moonshot AI, and MiniMax M2.5. All three models come from Chinese AI labs and are widely considered cheaper to run than many Western frontier systems, helping make a low-cost subscription feasible for a tool that may generate large volumes of model calls.

Coding agents, of course, tend to generate bursts of model activity rather than sustained activity. A single request can trigger dozens of model calls as the agent scans a repository, proposes changes, runs commands, and revises its output. That pattern can produce large volumes of tokens in a short period.
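To make the burst economics concrete, here is a hypothetical back-of-the-envelope sketch. The call counts, token sizes, and per-million-token prices are all invented for illustration, not real provider rates:

```python
# Hypothetical back-of-the-envelope: why one agent request can cost real money.
# Call counts, token sizes, and prices are invented for illustration only.

def agent_request_cost(calls: int, avg_prompt_tokens: int, avg_output_tokens: int,
                       price_in_per_mtok: float, price_out_per_mtok: float) -> float:
    """Dollar cost of one user request that fans out into many model calls."""
    total_in = calls * avg_prompt_tokens      # context re-sent on every call
    total_out = calls * avg_output_tokens     # generated diffs, plans, commands
    return (total_in / 1_000_000) * price_in_per_mtok \
         + (total_out / 1_000_000) * price_out_per_mtok

# One "simple" request: 30 calls, each re-sending ~8k tokens of repo context,
# at illustrative rates of $3 / $15 per million input / output tokens.
print(f"${agent_request_cost(30, 8000, 1000, 3.00, 15.00):.2f}")  # → $1.17
```

Re-sending repository context on every call is what dominates the bill, which is why flat-rate subscriptions and cheaper models change the calculus so much.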

Open source keeps this new agent layer malleable, allowing developers to inspect, modify, and swap the components that shape how those agents behave. That token-intensive behavior is also what makes OpenCode Go’s pricing noteworthy: A relatively low $10/month open-source subscription signals that the cost of running these models has dropped enough to make a low-margin subscription viable, which is a meaningful signal about where the underlying economics are heading.

The post Open-source coding agents like OpenCode, Cline, and Aider are solving a huge headache for developers appeared first on The New Stack.


Collaborating on APIs with Postman Team Workspaces and Native Git

1 Share

As a developer, your code is probably already version-controlled in Git, but the surrounding API artifacts (OpenAPI specs, collections, environments, etc.) are often scattered across tools, scripts, and projects. This challenge is at the heart of what Postman provides – a unified workspace with a rich toolset for working with, managing, and collaborating on APIs. Now, with the new native Git integration, your APIs can be managed right alongside your code.

Let’s say you’re an engineer on a team building a Book API for your online business. The API implementation code lives in a GitHub repository, and multiple groups in your organization need to collaborate:

  • Your engineering team needs to design and evolve the API surface, not just the implementation.

  • QA needs collections and environments they can extend with tests and plug into CI.

  • Other internal teams and external consumers need a reliable way to discover, fork, and use the Collections.

In this post, I’m going to show you how to set up your collaboration flows and publish to the cloud to connect with your continuous integration processes. You can think of this as your inner loop of collaboration – people, teams, and processes internal to your organization. Then, in a follow up post, we will cover the outer loop of external collaboration. Let’s jump in.

Prerequisites

To follow along, you’ll need:

Phase 1 – Set up collaboration flows internally for your engineering team

Step 1 – Clone the repo

The repository contains a simple demo book API that has some basic CRUD operations. Run this command to clone the repository and create a feature branch:

gh repo clone Postman-Devrel/Book-API 
OR 
git clone https://github.com/Postman-Devrel/Book-API.git 

git checkout -b feat/branch-name

Step 2 – Connect your Workspace to git

The Workspace defines the collaboration boundary. Connecting it to a local Git repo ensures API artifacts are versioned, reviewable, and branch-aware like your code.

API collaboration in Postman begins with a Workspace. This is where your collections, environments, API specs, flows, and mocks reside. It enables teammates to discover APIs, grants consumers access, and distributes the authoritative version of your API artifacts.

In the top navigation menu, click on Workspaces → Create Workspace. Give your Workspace a name (Book API) and create it. In the blank Workspace, you will see a “Connect Git” option on the left side of the footer and an “Open Folder” option in the left sidebar.

Empty Workspace

Select either of the options and choose the folder of the git repo you just cloned. Your local files will appear on the left sidebar. Click on the “Generate config file” prompt at the top of the file view.

You will land on the Workspace configuration UI. Let’s break this down:

  1. In the left sidebar, view your existing code base files from the cloned repo, including two folders: .postman and postman. The .postman folder contains Workspace configuration options, while the postman folder holds all Workspace resources and artifacts. Toggle these folders in the UI to view their contents.

  2. In the footer’s left section, the current branch name appears. The active environment is Local View. Click the branch name to switch branches or to Cloud View. Local View stores local git resources on disk; Cloud View hosts resources in Postman Cloud. Cloud view shows what is deployed, while local view displays what is in development.
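Under stated assumptions (the resource subfolders and application files shown here are illustrative, based on the two folders described above), the connected repo ends up looking roughly like this:

```
Book-API/
├── .postman/               # Workspace configuration (the generated config file)
├── postman/                # Workspace resources and artifacts
│   ├── collections/
│   └── environments/
├── src/                    # the Book API implementation code
└── package.json
```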

To get a clearer idea of Local vs Cloud View, let’s switch between the two. Click on the name of your branch to the left of the footer, and you will see a list of branches available in this repo and a search field. Click on “Postman cloud” above the search field to switch to Cloud View. The difference might not be striking until you start generating collections and other artifacts in your Workspace. Switch back to Local and continue to the next step.

Step 3 – Create a Collection with Agent Mode

Next, click on the Ask AI button. Agent Mode will automatically read all the files in this git repo, check for exposed API interfaces, and generate a collection for you to start working with.

You can see the generated collection files in your file system. Click on the Local Items toggle of the left navigation sidebar to see the generated collection.

Switch back and forth between Local and Cloud view to confirm that this collection is only available locally in your Workspace. Stay in local view and continue to the next step.

Step 4 – Modify Workspace artifacts

Now, let us make some changes to the Collection we just created in this Workspace using Agent Mode.

First, test out the requests in this collection. You need to install the required dependencies and start the Node.js server.

Click on the terminal button at the footer and run the following command.

yarn install
yarn dev

Now, navigating back to the collection, run any request and confirm that you get a valid API response back. Click on the Save Response button to save this response as an example for this request. Repeat the same procedure for some or all of the other requests in this collection so they each have a saved example.

On your terminal, if you run git status, you will see all the changes that have been made and are available to be staged for commit.

Next, open the right sidebar and ask Agent Mode to generate some tests for your collection (Prompt: Generate comprehensive tests for all the requests in this collection). Agent Mode will automatically modify each request and add tests to its respective test scripts.

Step 5 – Invite Your Team

To follow this step, you need a Team Workspace. This is included in the Postman Team plan. If you don’t have a Team plan, skip to step #6. You can follow the rest of the article.

Now, you need to invite members of your team to this Workspace so they can similarly connect it to their local git repos. Navigate to the Workspace settings by clicking on the ellipsis icon to the right of the search bar in the left sidebar. Select Workspace Overview and navigate to settings. Click on Manage People and select the appropriate team to invite, or type in each user’s email individually. Default their roles to the Viewer role, and only give edit access to those who need to maintain the Workspace itself.

Step 6 – Commit and Push Changes

Commit your Postman artifacts alongside your code changes.

git add .
git commit -m "feat: commit message"
git push origin feat/branch-name

The collection change is now part of the same commit as the code change. You can push these changes and open a pull request as you normally would. The PR contains both the implementation and the API artifact changes.

Anyone reviewing the code can pull these changes locally and similarly connect their Workspace to git locally to try out the changes.

Phase 2 – Automate the distribution of these changes

After development, the next step is distributing these APIs to the appropriate consumers, either internally within your organisation or externally to partners or customers.

All changes so far have been local. When updates occur, you must distribute them to consumers by pushing changes to the Cloud View. This can be done manually or automatically in CI using Postman CLI.

To push manually, click on the ellipsis icon beside the search bar in the left sidebar and select the “Push to Postman Cloud” option.

Publish to Cloud via the CI

Before pushing changes that update the Cloud View your consumers depend on, ensure they can be tested and validated automatically in your CI pipeline: the collection runs without unexpected failures, your specs lint successfully, and so on.

Automation helps you build confidence in your workflows. Here’s a GitHub Actions workflow example that validates on every PR, and publishes to Postman Cloud when changes are merged to main:

name: Publish to Postman Workspace

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
  workflow_dispatch:

jobs:
  postman:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - run: curl -o- "https://dl-cli.pstmn.io/install/unix.sh" | sh

      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: "npm"

      - name: Install dependencies
        run: npm install

      - name: Start server in background
        run: |
          npm run dev &
          echo $! > pidfile
          # Wait for server to be ready (port 3000)
          for i in {1..20}; do
            if nc -z localhost 3000; then
              echo "Server is up!"
              break
            fi
            echo "Waiting for server..."
            sleep 2
          done

      - name: Validate Collection
        env:
          POSTMAN_API_KEY: ${{ secrets.POSTMAN_API_KEY }}
        run: |
          postman login --with-api-key "$POSTMAN_API_KEY"
          postman collection run ./postman/collections/Book-API

      - name: Push to Postman Cloud
        if: github.event_name == 'push'
        env:
          POSTMAN_API_KEY: ${{ secrets.POSTMAN_API_KEY }}
        run: |
          postman login --with-api-key "$POSTMAN_API_KEY"
          postman workspace prepare
          postman workspace push -y

      - name: Stop server
        if: always()
        run: |
          if [ -f pidfile ]; then
            kill $(cat pidfile) || true
            rm pidfile
          fi

This action file is already present in the .github/workflows folder of the cloned repository. You can copy and paste it if you’re working from a different repository.

It ensures that when the main branch updates, the collection runs, and all workspace changes are pushed to your consumers in Postman after the validation steps pass and your pull request is merged.

Note that the API keys are securely stored and referenced using GitHub Secrets.

Ensure you update the path in the “Validate Collection” step of the actions file to the relative path of your collection folder. In my case, that line becomes postman collection run ./postman/collections/Book-API

If the collection run is successful, the workspace changes are pushed to the Postman Cloud and made available to your consumers. Switch to Postman Cloud from the footer to confirm that the changes were pushed.

How you set this up is your choice. Note that Cloud view shows what is deployed, while local view shows what is in development.

A merge to main can trigger deployment. You can also use a different trigger to push to the cloud after successful deployment.

Conclusion

By now, you’ve:

  • Connected your Postman Team Workspace to a Git repository

  • Standardized how your engineering team designs, tests, and reviews API artifacts alongside code

  • Automated how trusted, validated changes are promoted to Postman Cloud for consumers via CI

This completes the internal collaboration loop: engineers can safely evolve APIs, reviewers can validate changes in pull requests, and consumers always get an up‑to‑date, reliable version of your Collections, environments, and other API artifacts.

However, most APIs don’t live in an engineering silo. Documentation writers, QA engineers, frontend teams, and even external partners often need to propose changes, improve docs, or extend tests—without having direct access to the codebase or your internal Git workflows.

In the second part of this article (Enabling Collaboration with External Teams), we’ll look at how to safely open up this workflow to non‑engineering contributors using Postman Cloud, Workspace forks, and pull requests on Collections themselves.

Resources

The post Collaborating on APIs with Postman Team Workspaces and Native Git appeared first on Postman Blog.
