Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

ChatGPT’s new Images 2.0 model is surprisingly good at generating text

1 Share
ChatGPT Images 2.0, the newest image-generation model from OpenAI, shows just how much AI capabilities have evolved over the last few years.
Read the whole story
alvinashcraft
18 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

AI backlash is coming for elections

Graphic photo illustration of a voting sign that reads “Vote here”.

Ask Americans how they feel about AI and most say they have concerns. Communities have mounted resistance to data center projects, stalling them across the US. On social media, anger at AI companies and executives is unrestrained - sometimes to the point of condoning violence.

But look at the issues that most campaigns are focused on, and AI is far less prevalent, experts say.

More than 60 percent of both Republicans and Democrats polled by Ipsos earlier this year agree that the government should regulate AI for economic stability and public safety, and that the technology's development should slow down. Still, "when you just ask folks, 'w …

Read the full story at The Verge.


Job Cuts Driven By AI Are Rising On Wall Street

Firms like Bank of America, Citi, Wells Fargo, and others are reporting strong profits while reducing head count and automating more work. "All of them credited A.I. to some degree ... in areas ranging from the so-called back office, where tens of thousands of employees fill out paperwork to comply with various laws and regulations, to the front office, where seven-figure salaried professionals put together complicated financial transactions for corporate clients," reports the New York Times. From the report: Less than four months ago, Bank of America's chief executive, Brian T. Moynihan, volunteered in a TV interview what he would say to his 210,000 employees about the chance of artificial intelligence replacing human work. "You don't have to worry," he said. "It's not a threat to their jobs." Last week, after Bank of America reported $8.6 billion in profit for the first quarter -- $1.6 billion more than the same period a year earlier -- Mr. Moynihan struck a different tone. The bank's bottom line, he said, was helped by shedding 1,000 jobs through attrition by "eliminating work and applying technology," which he repeatedly specified was artificial intelligence. He predicted more of that in the months and years to come. "A.I. gives us places to go we haven't gone," Mr. Moynihan said. The veneer of Wall Street's longstanding assertion -- that A.I. will enhance human work, not replace it -- is rapidly peeling away, as evidenced by the current quarterly earnings season. JPMorgan Chase, Citi, Bank of America, Goldman Sachs, Morgan Stanley and Wells Fargo racked up $47 billion in collective profits, up 18 percent, while shedding 15,000 employees. All of them credited A.I. 
to some degree with helping cut jobs and automate work in areas ranging from the so-called back office, where tens of thousands of employees fill out paperwork to comply with various laws and regulations, to the front office, where seven-figure salaried professionals put together complicated financial transactions for corporate clients. Unlike executives in Silicon Valley, few major financial figures are stating outright that A.I. is eliminating jobs. Citi, for example, has pledged to shrink its work force by 20,000 people through what one executive described to financial analysts last week as the company's "productivity and efficiency journey." The bank is paying for A.I. software from Anthropic, Google, Microsoft and OpenAI, to automatically read legal documents, approve account openings, send invoices for trades and organize sensitive customer data, among other tasks, according to public statements by bank executives and two people familiar with Citi's systems. Among the recent job cuts at Citi were scores of employees who were part of the bank's "A.I. Champions and Accelerators" program, according to the two people, who were not permitted by the bank to speak publicly. The program involves Citi employees who perform their day jobs while also working to persuade their colleagues to adopt A.I. technologies.

Read more of this story at Slashdot.


Tim Cook's Legacy Is Turning Apple Into a Subscription

The soon-to-exit Apple CEO went all in on services. Now, the incoming CEO, John Ternus, will need to embrace the AI era.

Announcing TypeScript 7.0 Beta


Today we are absolutely thrilled to announce the release of TypeScript 7.0 Beta!

If you haven’t been following TypeScript 7.0’s development, this release is significant in that it is built on a completely new foundation. Over the past year, we have been porting the existing TypeScript codebase from TypeScript (as a bootstrapped codebase that compiles to JavaScript) over to Go. With a combination of native code speed and shared memory parallelism, TypeScript 7.0 is often about 10 times faster than TypeScript 6.0.

Don’t let the “beta” label fool you – you can probably start using this in your day-to-day work immediately. The new Go codebase was methodically ported from our existing implementation rather than rewritten from scratch, and its type-checking logic is structurally identical to TypeScript 6.0. This architectural parity ensures the compiler continues to enforce the exact same semantics you already rely on. TypeScript 7.0 has been evaluated against the enormous test suite we’ve built up over the span of a decade, and is already in use in multiple multi-million line-of-code codebases both inside and outside Microsoft. It is highly stable, highly compatible, and ready to be put to the test in your daily workflows and CI pipelines today.

For over a year we’ve been working with many internal Microsoft teams, along with teams at companies like Bloomberg, Canva, Figma, Google, Lattice, Linear, Miro, Notion, Slack, Vanta, Vercel, VoidZero, and more to try out pre-release builds of TypeScript 7.0 on their codebases. The feedback has been overwhelmingly positive, with many teams reporting similar speedups, shaving off a majority of their build times, and enjoying a much more lightweight and fluid editing experience. In turn, we feel confident that the beta is in great shape, and we can’t wait for you to try it out soon.

Using TypeScript 7.0 Beta

To get TypeScript 7.0 Beta, you can install it via npm:

npm install -D @typescript/native-preview@beta

Note: the package name will eventually be typescript in a future release.

From there, you can run tsgo in place of the tsc executable.

> npx tsgo --version
Version 7.0.0-beta

The tsgo executable has the same behavior on all TypeScript code as tsc from TypeScript 6.0 – just much faster.

To try out the editing experience, you can install the TypeScript Native Preview extension for VS Code. The editor support is rock-solid, and has been widely used by many teams for months now. It’s an easy low-friction way to try TypeScript 7.0 out on your codebase immediately. It uses the same foundation as the command line experience, so you get the same performance improvements in your editor as you do on the command line. Notably, it’s also built on the language server protocol, making it easy to run in most modern editors or even tools like Copilot CLI.

Running Side-by-Side with TypeScript 6.0

To help you transition from TypeScript 6.0 to TypeScript 7.0, this beta release is available through the @typescript/native-preview package name using the tsgo entry point. This enables easy validation and comparison between tsc and tsgo.
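For example, you might validate both compilers against the same project and compare their diagnostics and timings. A sketch (both commands use standard tsc-style flags, which tsgo also accepts):

```
# Type-check the same project with each compiler and compare results
npx tsc -p ./tsconfig.json --noEmit
npx tsgo -p ./tsconfig.json --noEmit
```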

However, as we mentioned above, the stable release of TypeScript 7.0 will be published under the typescript package and will use the tsc entry point.

Additionally, even though 7.0 Beta is close to production-ready, we won’t have a stable programmatic API until at least TypeScript 7.1, several months from now. Given this, we have made it a priority to ensure TypeScript 7.0 can run side-by-side with TypeScript 6.0 for the foreseeable future without any conflicts around “which tsc is which?”

As part of the 6.0/7.0 transition, we’ve published a new compatibility package, @typescript/typescript6. This package exposes a new entry point, tsc6, so that (if needed) you can run the stable release of TypeScript 7.0 (which will provide a tsc binary) side-by-side with 6.0 without naming conflicts. It also re-exports the TypeScript 6.0 API, so that tsc can refer to TypeScript 7 while other tooling continues to rely on 6.0.

Because some tools like typescript-eslint expect to import from typescript directly via peer dependencies, we recommend achieving this via npm aliases. You should be able to run the following command:

npm install -D typescript@npm:@typescript/typescript6

or modify your package.json as follows:

{
  "devDependencies": {
    "typescript": "npm:@typescript/typescript6@^6.0.0"
  }
}

In the future we will have more specific guidance for using a TS7-powered tsc alongside a TS6-powered tsc6.

Parallelization and Controls

TypeScript 7.0 now performs many steps in parallel, including parsing, type-checking, and emitting. Some of these steps, like parsing and emitting, can mostly be done independently across files. As such, parallelization automatically scales well with larger codebases with relatively little overhead. But not every step in a TypeScript build is easily parallelizable.

Checker Parallelization

Other steps, like type-checking, have more complex dependencies across files. Most files end up relying on the same type information from their dependencies and the global scope, and so running type-checkers completely independently would be wasteful – both in computation and memory. On the other hand, type-checking occasionally relies on the relative ordering of information in a program, and so type-checking from scratch must always check the same files in an identical order to ensure the same results.

To enable parallelization while avoiding these pitfalls, TypeScript 7.0 creates a fixed number of type-checker workers with their own view of the world. These type-checking workers may end up duplicating some common work, but given the same input files, they will always divide them identically and produce the same results.
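The deterministic division described here might be sketched roughly like this (a toy model; partitionFiles and the round-robin strategy are illustrative, not the compiler’s actual implementation):

```typescript
// Toy model of dividing an ordered file list among a fixed number of
// checker workers. Given the same input order and worker count, the
// assignment is always identical, which keeps results reproducible.
function partitionFiles(files: string[], workers: number): string[][] {
  const buckets: string[][] = Array.from({ length: workers }, () => []);
  files.forEach((file, index) => {
    buckets[index % workers].push(file); // deterministic round-robin
  });
  return buckets;
}

// Two runs over the same program always produce the same split:
const split = partitionFiles(["a.ts", "b.ts", "c.ts", "d.ts", "e.ts"], 2);
console.log(split); // [["a.ts", "c.ts", "e.ts"], ["b.ts", "d.ts"]]
```

Each worker would then check its bucket with its own view of the program, duplicating some shared work in exchange for determinism.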

The default number of type-checking workers is 4, but it can be configured with the new --checkers flag. On machines with more CPU cores, increasing this number can further speed up builds on larger codebases, though typically at the cost of increased memory usage. Likewise, on machines with fewer CPU cores (e.g. CI runners), you may want to decrease this number to avoid unnecessary overhead.

In rare cases, varying the number of --checkers may surface order-dependent results. Standardizing on a fixed number of checkers across your team can help ensure everyone gets the same results, but that choice is up to each team’s discretion.

Project Reference Builder Parallelization

TypeScript 7.0 can parallelize builds within a project, and it can now also build multiple projects at once. This behavior can be configured with the new --builders flag, which controls how many project reference builders run in parallel. This can be particularly helpful for monorepos with many projects.

Like --checkers, increasing the number of builders can speed up builds, but may come at the cost of increased memory usage. It also has a multiplicative effect with --checkers, so it’s important to find the right balance for your machine and codebase. For example, building with --checkers 4 --builders 4 allows up to 16 type-checkers to run at once, which may be excessive.

Unlike --checkers, varying the number of builders should not produce different results; however, building project references is fundamentally bottlenecked by the dependency graph of projects (with the exception of type-checking on codebases that leverage --isolatedDeclarations and separate syntactic declaration file emit).

Single-Threaded Mode

In some cases, it can be helpful to enforce single-threaded operation throughout the compiler. This may be useful for debugging, comparing performance with TypeScript 6 and 7, when orchestrating parallel builds externally, or for running in environments with very limited resources. To enable single-threaded mode, you can use the new --singleThreaded flag. This will not only cap the number of type-checking workers to 1, but also ensure parsing and emitting are done in a single thread.
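Putting these knobs together, invocations might look like the following (a sketch; the worker counts are illustrative and should be tuned to your machine):

```
# Larger build machine: more checkers plus parallel project builders
npx tsgo --build --checkers 8 --builders 2

# Constrained CI runner, debugging, or external orchestration:
npx tsgo --build --singleThreaded
```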

Updates Since 5.x, and New Behaviors from 6.0

TypeScript 7.0 is designed to be compatible with TypeScript 6.0’s type-checking and command-line behavior. Any TypeScript code that compiles cleanly with TypeScript 6.0 (with the stableTypeOrdering flag on, and without the ignoreDeprecations flag set) should compile identically in TypeScript 7.0.

With that said, TypeScript 7.0 adopts 6.0’s new defaults, and provides hard errors in the face of any flags and constructs deprecated in TypeScript 6.0. This is notable as 6.0 is still relatively new, and many projects will need to adapt to its new behaviors. We encourage developers to adopt TypeScript 6.0 to make the transition to TypeScript 7.0 easier, and you can also read the TypeScript 6.0 release blog post for more details on these deprecations.

At a glance, the notable default changes to configuration are:

  • strict is true by default.
  • module defaults to esnext.
  • target defaults to the current stable ECMAScript version immediately preceding esnext.
  • noUncheckedSideEffectImports is true by default.
  • libReplacement is false by default.
  • stableTypeOrdering is true by default, and cannot be turned off.
  • rootDir now defaults to ./, and inner source directories must be explicitly set.
  • types now defaults to [], and the old behavior can be restored by setting it to ["*"].

We believe the rootDir and types changes may be the most “surprising” changes, but they can be mitigated easily. Projects where the tsconfig.json sits outside of a directory like src will simply need to include rootDir to preserve the same directory structure.

  {
      "compilerOptions": {
          // ...
+         "rootDir": "./src"
      },
      "include": ["./src"]
  }

For the types change, projects that depend on specific global declarations will need to list them explicitly. For example,

  {
      "compilerOptions": {
          // Explicitly list the @types packages you need (e.g. bun, mocha, jasmine, etc.)
+         "types": ["node", "jest"]
      }
  }

The deprecations that have turned into hard errors with no-op behavior are:

  • target: es5 is no longer supported.
  • downlevelIteration is no longer supported.
  • moduleResolution: node/node10 are no longer supported, with nodenext and bundler being recommended instead.
  • module: amd, umd, systemjs, none are no longer supported, with esnext or preserve being recommended in conjunction with bundlers or browser-based module resolution.
  • baseUrl is no longer supported, and paths can be updated to be relative to the project root instead of baseUrl.
  • moduleResolution: classic is no longer supported, and bundler or nodenext are the recommended replacements.
  • esModuleInterop and allowSyntheticDefaultImports cannot be set to false.
  • alwaysStrict is assumed to be true and can no longer be set to false.
  • The module keyword cannot be used in namespace declarations.
  • The asserts keyword cannot be used on imports, and must use the with keyword instead (to align with developments on ECMAScript’s import attribute syntax).
  • /// <reference no-default-lib /> directives are no longer respected under skipDefaultLibCheck.
  • Command line builds cannot take file paths when the current directory contains a tsconfig.json file unless passed an explicit --ignoreConfig flag.

JavaScript Differences

As we ported the existing codebase, we also took the opportunity to revisit how our JavaScript support works.

TypeScript originally supported JavaScript files by using JSDoc comments and recognizing certain code patterns for analysis and type inference. Much of the time, this was based on popular coding patterns, but occasionally it was based on whatever Closure and the JSDoc documentation generator happened to understand. While this approach was helpful for developers with loosely typed JSDoc codebases, it required a number of compromises and special cases to work well, and diverged in a number of ways from TypeScript’s analysis of .ts files.

In TypeScript 7.0, we have reworked our JavaScript support to be more consistent with how we analyze TypeScript files. Some of the differences include:

  • Values cannot be used where types are expected – instead, write typeof someValue
  • @enum is not specially recognized anymore – create a @typedef on (typeof YourEnumDeclaration)[keyof typeof YourEnumDeclaration].
  • A standalone ? is no longer usable as a type – use any instead.
  • @class does not make a function a constructor – use a class declaration instead.
  • Postfix ! (Closure’s non-nullable marker) is not supported – write T instead of T!.
  • Type names must be defined within a @typedef tag (i.e. /** @typedef {T} TypeAliasName */), not adjacent to an identifier (i.e. /** @typedef {T} */ TypeAliasName;).
  • Closure-style function syntax (e.g. function(string): void) is no longer supported – use TypeScript shorthands instead (e.g. (s: string) => void).
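For instance, the @enum migration described above might look like this (the names here are illustrative):

```javascript
// Old Closure-style pattern, no longer specially recognized:
// /** @enum {number} */ const Direction = { Up: 0, Down: 1 };

// Replacement: a plain frozen object plus a typedef over its value type.
const Direction = Object.freeze({ Up: 0, Down: 1 });

/** @typedef {(typeof Direction)[keyof typeof Direction]} DirectionValue */

/**
 * @param {DirectionValue} d - note the `typeof Direction` query above:
 *   values can no longer be used directly where types are expected.
 */
function move(d) {
  return d === Direction.Up ? "up" : "down";
}
```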

Additionally, some JavaScript patterns, like aliasing this and reassigning the entirety of a function’s prototype are no longer specially treated.

While some of our JS support is in flux, we have been updating this CHANGES.md file to capture the differences between TypeScript 6.0 and 7.0 in more detail.

Editor Experience

TypeScript 7.0’s performance improvements are not limited to the command line – they extend to the editor experience too. The TypeScript Native Preview extension for VS Code provides a seamless way to try out TypeScript 7.0 in your editor, and has seen widespread use.

Since it first debuted, we’ve added in missing functionality like auto-imports, expandable hovers, inlay hints, code lenses, go-to-source-definition, JSX linked editing and tag completions, and more. Additionally, we’ve rebuilt much of our testing and diagnostics infrastructure to make sure the quality bar is high.

This extension respects most of the same configuration settings as the built-in TypeScript extension for Visual Studio Code, along with most of the same features. While a few things are still coming (like semantics-enhanced highlighting, more-specific import management commands, etc.), the extension is already powerful, stable, and fast.

Upcoming Work

In the coming weeks, we expect to ship a more efficient implementation of --watch, and meet parity on declaration file emit from JavaScript files. We will also be working on minor editor feature gaps like “find file references” from the file explorer, and surfacing the more granular “sort imports” and “remove unused imports” commands instead of just the more general “organize imports” command.

Beyond this, we’ll be developing a stable programmatic API for TypeScript 7.1 or later, improving our real-world testing infrastructure, and addressing feedback.

The Road to TypeScript 7.0

With TypeScript 7.0 Beta now available, the team is focusing on bug fixes, compatibility work, editor polish, and performance improvements as we move toward a stable release. Our current plan is to release TypeScript 7.0 within the next two months, with a release candidate available a few weeks prior. The release candidate will be the point where we expect TypeScript 7’s behavior to be finalized, with changes after that focused on critical fixes to regressions.

Between now and then, we would especially appreciate feedback from trying TypeScript 7.0 on real projects. If you run into any issues, please let us know on the issue tracker for microsoft/typescript-go so we can make sure the stable release is in great shape.

We also encourage you to share your experience using TypeScript 7.0 and tag @typescriptlang.org on Bluesky or @typescript@fosstodon.org on Mastodon, or @typescript on Twitter.

Our team is incredibly excited for you to try this release out, so try it today and let us know what you think. Happy hacking!

– The TypeScript Team

The post Announcing TypeScript 7.0 Beta appeared first on TypeScript.


Eliminate context switching with Postman Native Git and MCP


Eliminate context switching with Postman Native Git and MCP

You’re in the zone. You’ve been heads-down on a backend service, the code is flowing, and you’re ready to verify everything works. You fire up your API tests, and… something’s off. A test fails. An error message is wrong. A response isn’t matching what you expected.

Now the familiar dance begins: jump to your code editor to fix the issue, switch to the terminal to commit and push, open GitHub in the browser to create a PR, wait for the review, merge it, then try to remember where you were. Research from Gloria Mark at UC Irvine shows it takes an average of 23 minutes to regain focus after a context switch. The American Psychological Association estimates that this kind of task-switching can consume up to 40% of your productive time.

That’s a lot of lost flow state for what amounts to a one-line fix.

In this post, I’ll walk through how to use Postman’s Native Git integration, Postman Agent Mode, and the GitHub MCP server to stay in one tool from the moment you spot a bug to the moment your PR is merged. I’ve embedded the video walkthrough above, but this post breaks down each step so you can follow along at your own pace.

What you’ll need

Before getting started, make sure you have:

Connect your local repository with Native Git

The first step is connecting your local Git repository to Postman so your collections, environments, and code all live in the same workspace. Native Git stores Postman collections as files directly in your repository, which means your API definitions sit right alongside your application code.

In the Postman desktop app:

  1. Click the Local file system folder in the left sidebar
  2. Select Open folders and choose the directory containing your Git repository
  3. Connect it to your workspace when prompted

Once connected, your local collections appear directly in Postman. If your repo has collections and your application code together, you get a single view of everything: your API requests, test scripts, and the service code they test against.

Worth noting: Native Git works with the Postman Collection v3 format, which uses YAML instead of the older v2.1 JSON. YAML collections produce much cleaner Git diffs, making code reviews significantly easier. If your collections are still on v2.1, don’t worry. Postman Agent Mode can handle the migration automatically, and I’ll show you how that happened to me later in this walkthrough.

Set up the GitHub MCP server

The Model Context Protocol (MCP) is what gives Postman Agent Mode the ability to interact directly with GitHub. Instead of you manually switching to a browser to create branches, push commits, and merge PRs, Agent Mode uses the GitHub MCP server to do it all through natural language prompts.

To configure it:

  1. Click the settings gear icon in the footer of the Postman desktop app
  2. Select Configure MCP servers
  3. Find the GitHub MCP server and click Add server
  4. Click Edit to open the configuration

You’ll need to paste in your GitHub personal access token. To generate one:

  1. Go to GitHub and open Settings > Developer settings > Personal access tokens
  2. Select Fine-grained tokens and click Generate new token
  3. Give it a name (something like “postman-workflow”) and set your preferred expiration
  4. Under Repository access, choose the repositories you want to work with
  5. Under Permissions, grant read and write access to the repository contents and metadata

Paste the token into the MCP server configuration in Postman and click Update. If everything is configured correctly, approximately 84 tools appear in the MCP server status. That confirms Postman Agent Mode can now create branches, commit code, open PRs, and merge them, all without you leaving the app. Your config should look something like this:

{
    "mcpServers": {
        "GitHub Remote MCP Server": {
            "url": "https://api.githubcopilot.com/mcp/",
            "headers": {
                "X-MCP-Toolsets": "all",
                "Authorization": "Bearer github_pat_your-token"
            }
        }
    }
}

Create a feature branch with Agent Mode

Here’s where the workflow starts to feel different. Normally, you’d open a terminal, run git checkout -b fix/error-message, and then switch back to your editor. With Agent Mode, you stay right where you are.

Open the Agent Mode prompt in Postman and type something like:

Create a new branch called fix/update-error-message in my auth-service repo

Agent Mode uses the GitHub MCP server to create the branch and switch to it. You can confirm this in the Native Git status bar at the bottom of the Postman app, where your current branch name appears. No terminal. No browser tab. No context switch.

Once you’re on the new branch, you can also ask Agent Mode to check which branch you’re on, list recent commits, or view the diff of what’s changed. It’s like having Git built into your API development workflow.

Make your code changes

With the local file system connected, you can edit files directly in Postman. In the video walkthrough, I made a straightforward change: updating an error message in an auth controller.

// Before
res.status(401).json({ error: "Invalid credentials" });

// After
res.status(401).json({ error: "Invalid credentials. Please try again." });

This is a small example, but the principle applies to any code change. Because Native Git is watching your local files, Postman sees the modification immediately. There’s no import/export cycle, no copy-pasting between tools.

The key insight here is that your collections and your application code coexist in the same repository. When you fix a bug in your service code, the collection that tests that endpoint is right there in the same workspace. That tight coupling between implementation and testing is what makes this workflow so effective.
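Because the collection lives beside the code, the test script covering this endpoint can assert the new message directly. A sketch using Postman’s pm sandbox API (the test name is illustrative):

```
pm.test("returns the clarified auth error", () => {
  pm.response.to.have.status(401);
  pm.expect(pm.response.json().error)
    .to.eql("Invalid credentials. Please try again.");
});
```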

Test your changes with Local Mocks

Before committing anything, you want to make sure your change didn’t break something else. This is where Local Mocks come in.

Local Mocks run on your machine alongside your development server, giving you a controlled environment for testing without hitting external services. In the walkthrough:

  1. Go to your local items in the left sidebar
  2. Find your mock server (in this case, the login service mocks)
  3. Switch the environment to your Local Mocks environment, which points to your local server (for example, localhost:4500)

{
  "base_url": "http://localhost:4500",
  "auth_endpoint": "/api/v1/auth/login"
}

With your local server running and the environment configured, you can ask Agent Mode to run all the tests on the collection:

Run all tests on the login service collection

Agent Mode runs through each request in the collection, validates the test scripts, and reports results. If something fails, you can fix it and rerun without leaving the workspace. If everything passes, you’re ready to commit.

I’ve found this pattern of edit-then-test-locally catches issues that would otherwise slip through to a PR review. You get that fast feedback loop without the overhead of pushing to a remote branch and waiting for CI to run.

Commit and push changes with the GitHub MCP

Your tests passed. Time to get your changes into the repository. Normally, this is where you’d open a terminal and run through git add, git commit, and git push. Instead, prompt Agent Mode:

Commit my changes to the fix/update-error-message branch with the message "Update auth error message for clarity"

Agent Mode stages your changes, creates the commit, and pushes it to your remote branch, all through the GitHub MCP server.

When Agent Mode committed the changes, it noticed that my collections were still in the v2.1 JSON format. It automatically migrated them to the Postman Collection v3 YAML format and included those changes in the same commit. That’s a migration I needed to do eventually but hadn’t prioritized. Agent Mode identified it, handled it, and pushed everything together.

The commit ended up being bigger than my original one-line fix, but every additional change was meaningful. That’s the kind of proactive assistance that saves real time, especially when it’s something you’d eventually have to do manually.

Create and merge a pull request

The final step in the workflow is creating a PR and getting it merged. Still in Agent Mode:

Create a pull request for the fix/update-error-message branch

Agent Mode creates the PR on GitHub with a description based on your commit messages and the changes in the branch. You can review the PR details right in the Agent Mode response, including the PR URL, title, and summary of changes.

Once the PR is approved (or if you’re working solo and don’t require approvals):

Merge the pull request

And that’s it. Your code is in the main branch. You’ve gone from spotting a bug in a test response to merging a fix into production without opening a browser, a terminal, or a separate code editor.

What this changes

Let me step back and put this in perspective. The traditional workflow for fixing a bug found during API testing looks something like this:

| Step | Tool | Context switch |
| --- | --- | --- |
| Spot the error | Postman | |
| Fix the code | Code editor | Yes |
| Stage and commit | Terminal | Yes |
| Push to remote | Terminal | |
| Create a PR | GitHub (browser) | Yes |
| Merge the PR | GitHub (browser) | |
| Go back to testing | Postman | Yes |

That’s 4 context switches for a single bug fix. At 23 minutes of recovery time per switch, you’re looking at over an hour of lost focus for what might be a 30-second code change. Research from Parnin and DeLine found that interrupted tasks take twice as long and produce twice as many errors.

With Native Git, Agent Mode, and the GitHub MCP server, the entire workflow stays in Postman:

| Step | Tool | Context switch |
| --- | --- | --- |
| Spot the error | Postman | |
| Create a branch | Postman (Agent Mode) | No |
| Fix the code | Postman (local files) | No |
| Test the fix | Postman (Local Mocks) | No |
| Commit and push | Postman (Agent Mode) | No |
| Create and merge PR | Postman (Agent Mode) | No |
| Continue testing | Postman | No |

Zero context switches. You stay in your flow state the entire time.

Things to watch for

Token permissions matter. If your GitHub personal access token doesn’t have the right permissions, Agent Mode won’t be able to create branches or push commits. Make sure you’ve granted read and write access to repository contents and metadata. The GitHub fine-grained token docs walk through the exact permissions you need.

The collection v3 migration is automatic but worth understanding. If Agent Mode migrates your collections from v2.1 JSON to v3 YAML, review the changes before merging. The migration is well-tested, but it’s good practice to understand what changed in your repository. The YAML format is more readable in diffs, which is a win for code reviews going forward.

Check your branch before committing. Always confirm you’re on the right branch before asking Agent Mode to commit changes. The Native Git status bar in the Postman footer shows your current branch, making this a quick visual check.

Resources

The post Eliminate context switching with Postman Native Git and MCP appeared first on Postman Blog.
