Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

No more tokens! Locking down npm Publish Workflows


With the recent spate of high-profile npm security incidents involving compromised deployment workflows, I decided it would be prudent to do a full inventory of my npm security footprint (especially for 11ty).

Just in the last few months:

  • November 2025: Shai-Hulud v2 (PostHog) (and PostHog post-mortem): Worm infected ×834 packages. Propagated via preinstall npm script.
  • September 2025
    1. Shai-Hulud (@ctrl/tinycolor, CrowdStrike): Worm infected ×526 packages. Propagated via postinstall npm script.
    2. DuckDB: a targeted phishing email (defeating 2FA) pointed to the fake domain npmjs.help. Compromised packages were published with a token created by the attacker.
    3. debug and chalk: same as above: a targeted phishing email (defeating 2FA).
  • August 2025: S1ngularity (Nx) (and Nx post-mortem): well-meaning but insecure code (from approved authors) was merged, allowing arbitrary commands to be executed via content in Pull Requests to the repo. Compromised packages were published via a stolen npm token.
The insecure YAML from the S1ngularity attack:
# Some content omitted for brevity
on:
  pull_request:
    types: [opened, edited, synchronize, reopened]
  # …
jobs:
  validate-pr-title:
    # …
    steps:
      # …
      - name: Create PR message file
        run: |
          mkdir -p /tmp
          cat > /tmp/pr-message.txt << 'EOF'
          ${{ github.event.pull_request.title }}

          ${{ github.event.pull_request.body }}
          EOF

      - name: Validate PR title
        run: |
          echo "Validating PR title: ${{ github.event.pull_request.title }}"
          node ./scripts/commit-lint.js /tmp/pr-message.txt

Given the attack vectors of recent incidents, any packages using GitHub Actions (or other CI) to publish should be considered to have an elevated risk (and this was very common across 11ty’s numerous packages).

I’ve been pretty cautious about npm tokens. I have each repository set up (painstakingly) to use extremely granular tokens (access to publish one package and one package only). This limits the blast radius of any compromise to a single package and has helped manage my blood pressure (I accidentally leaked a token earlier this year).

Security Checklist

I’ve completed my review and made a bunch of changes to improve my security footprint on GitHub and npm, noted below. The suggestions below avoid introducing additional third-party tooling, which may seem to shrink your security footprint in the short term while actually enlarging it in the long term.

Caveat: my current workflow uses GitHub Releases to trigger a GitHub Action workflow to publish packages to npm (and this advice may vary a bit if you’re using different tools like GitLab or pnpm or yarn, sorry).

  1. Use Two-Factor Authentication (2FA) for both GitHub AND npm, for every person that has access to publish. This is table-stakes. No compromises. Require 2FA everywhere.
    • On GitHub, go to your organization’s Settings page and navigate to Authentication Security. Check the Require Two-factor authentication for everyone and Only allow secure two-factor methods checkboxes.
    • npm requires you to specify this on a per-package basis, which I describe in the Restrict Publishing Access section below.
  2. When logging into npm and GitHub, use your password manager exclusively! Never type in a password or a 2FA code manually. Your password manager will help ensure that you don’t put in your credentials on a compromised (but realistic looking) domain.
    • Would you know that npmjs.help was a spoofed domain? Maybe on your average day, but on your worst day? When you didn’t sleep well the night before? 😴
  3. Review GitHub users that have the Write role in your repositories (Write can create releases).
  4. Find any repositories using npm tokens and delete the tokens in the settings for both GitHub and npm. We’re moving to a post-token world.
  5. Switch to Trusted Publishers (OIDC) in the Settings tab for each npm package. This also sets up releases to include provenance (which is great).
    • This scopes your credentials to one specific GitHub Action (you specify which file to point to in .github/workflows/) and allows you to remove any references to tokens in the GitHub Actions YAML configuration file.
    • The big goal here for me was to completely separate my publish workflow and credentials and disallow any access to those credentials from other workflows in the repository (usually unit tests that run on every commit to the repo). You could also use GitHub Environments to achieve this. This limits the blast radius from worm propagation (via postinstall or preinstall) to publish events only (not every commit), which happen far less frequently.
  6. Restrict npm Publishing Access in the Settings tab for each npm package. Use Require two-factor authentication and disallow tokens (recommended). Death to tokens!
  7. Check in your lock file (e.g. package-lock.json for npm). This is something I’ve personally felt a bit of resistance to, mostly because I hated managing git conflicts in these files (but dependabot has helped there). It is especially important when using a release script that uses npm packages to generate release artifacts. Prefer npm ci over npm install in your release script.
  8. GitHub Actions configuration files should pin the full commit SHA for uses: dependencies (e.g. pin actions/checkout to a full commit SHA instead of a mutable tag like @v4; I did this for eleventy-plugin-vite). I learned that Dependabot can update and manage these too!
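To make the SHA-pinning rule in item 8 auditable, here is a small hypothetical Python helper (the function name and sample workflow are illustrative, not from the original post) that flags uses: references not pinned to a full 40-character commit SHA:

```python
import re

# Hypothetical audit helper (not from the original post): flag `uses:`
# references in a GitHub Actions workflow that are not pinned to a full
# 40-character commit SHA.
PINNED_SHA = re.compile(r"@[0-9a-f]{40}$")

def unpinned_uses(workflow_text: str) -> list[str]:
    """Return action references pinned to a tag/branch instead of a full SHA."""
    flagged = []
    for raw in workflow_text.splitlines():
        line = raw.strip()
        if line.startswith("uses:") or line.startswith("- uses:"):
            ref = line.split("uses:", 1)[1].strip().split()[0]
            if not PINNED_SHA.search(ref):
                flagged.append(ref)
    return flagged

sample = """
jobs:
  build:
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@8f152de45cc393bb48ce5d89d36b731f54556e65
"""
print(unpinned_uses(sample))  # ['actions/checkout@v4']
```

Running something like this across .github/workflows/*.yml in CI would catch tag-pinned actions before they ship.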

Other good ideas

Given the above changes, I would consider the following items not to be of immediate urgency (though they’re still recommended).

  • GitHub: Enable Immutable Releases preferably at the organization level. This will ensure no one can change tags and release contents after a release has been shipped.
  • Use a package manager cooldown.
  • Reduce dependencies! Every third party dependency has some risk associated with it, as you’re inheriting a bit of those developers’ security footprint too. It’s worth noting that the work being done by the folks at e18e to reduce dependency counts is making great headway to improve the ecosystem at large. You can do this in your own projects! I’m proud of the work we’ve done on @11ty/eleventy over the years (source: v3.1.0 release notes):
    Version   Production Dep Count   Production Size
    v3.1.0    ×142                   21.4 MB
    v3.0.0    ×187                   27.4 MB
    v2.0.1    ×215                   36.4 MB
    v1.0.2    ×356                   73.3 MB
  • Some folks recommend disabling scripts when installing (via npm config set ignore-scripts true or via stock use of pnpm). This might be marginally useful in some cases, but in my opinion it’s just a short-term response to common attack patterns we’ve already seen. Importing (or requiring) a compromised or malicious package can execute arbitrary commands without using a preinstall or postinstall script at all. If you really need to lock down your environment, you might consider running a Virtual Machine, a Dev Container, and/or using Node.js’ Permissions model or stock Deno.

Stay safe out there, y’all!


Shared by alvinashcraft · Pennsylvania, USA

GCast 205: Mastering GitHub Copilot course, Integrating MCP with Copilot


In this video, I walk through the excellent tutorial "Integrate MCP with GitHub Copilot." In this lesson, you learn to use GitHub Copilot to interact with an MCP server, extending the AI functionality of Copilot.

Links:
https://github.com/microsoft/Mastering-GitHub-Copilot-for-Paired-Programming/


Carlos Robles on Developer Tools for SQL Server


Episode 879

Carlos Robles on Developer Tools for SQL Server

Carlos Robles is a Product Manager for Microsoft Database Developer Tools.

He talks about Visual Studio Code extensions, using GitHub Copilot with SQL Server, SQL containers, integration with Microsoft Fabric, upcoming features, and the retirement of Azure Data Studio.

Links:
http://aka.ms/vscode-mssql-roadmap


Jennifer Wadella on The 7 Deadly Sins of Management


Episode 878

Jennifer Wadella on The 7 Deadly Sins of Management

Jennifer Wadella has learned that there is no universal advice for managers to apply in every organization and in every situation. But she has found some things that managers should always avoid doing. She shares her seven deadly sins of management.


Trump embraces gas guzzlers and air pollution by weakening fuel economy standards

Motorists drive on Interstate 210 during the morning commute on December 03, 2025 in Pasadena, California.

President Donald Trump announced a new plan that lets carmakers pollute more by making less fuel efficient vehicles. The National Highway Traffic Safety Administration (NHTSA) said today that it’ll roll back fuel economy rules finalized last year by the Biden administration for model year 2022-2031 vehicles.

The Trump administration has eliminated incentives for EV purchases, stymied energy efficiency policies and gutted pollution regulations in general. The president wants the US to produce more oil and gas, and says that his agenda will boost business for American automakers. Critics contend that Americans will ultimately pay for these measures with higher fuel costs, as well as health risks and climate disasters stemming from tailpipe emissions.

“Slashing fuel economy standards will increase costs for drivers and threaten the progress made in reducing dangerous air pollution and preventing adverse health outcomes for children, older adults, and communities who live near busy roads,” Darien Davis, government affairs advocate on climate change and clean energy at the League of Conservation Voters, said in a statement emailed to The Verge.


NHTSA proposed a federal fuel economy standard of around 34.5 miles per gallon of gas by 2031. That’s far lower than the bar Biden set last year of reaching an average of roughly 50.4 miles per gallon by 2031.

The agency previously estimated that the higher standards set in 2024 would collectively save Americans $23 billion in fuel costs over the years, or about $600 for each passenger car and light truck owner over the lifetime of their vehicle. The rules were expected to cut down gasoline use by 70 billion gallons through 2050. That would avoid 710 million metric tons of planet-heating carbon dioxide pollution, equivalent to taking more than 165.6 million gas-guzzling passenger vehicles off the road for a year. Trump claimed without evidence that his latest action would shave $1,000 off the price of a car, while clean energy advocates expect the rollback to lead to higher fuel costs.

Automakers would likely have had to sell more EVs in order to meet the higher Biden-era standards for fleetwide fuel economy. Trump’s Transportation Secretary Sean Duffy accused the Biden administration in June of illegally using Corporate Average Fuel Economy (CAFE) standards to mandate EV sales. In July, Republicans eliminated fines for carmakers failing to meet CAFE standards in their giant spending bill and sunsetted tax credits for EVs. “America Now Effectively Has No Fuel Economy Rules,” reads a July headline from Kelley Blue Book.

GM had paid $128.2 million in CAFE penalties for 2016 and 2017, Reuters reports. Stellantis, which owns Chrysler, has paid more than $590 million in penalties since 2016. Leadership for both companies joined the president in the Oval Office today as he announced the new CAFE standards. 

“We’ve just freed you up, so you’re going to have a good day, you’re going to have a good number of years,” Trump said to auto industry leaders during the announcement. 

“Today is a victory of common sense and affordability,” Ford CEO Jim Farley later responded. 

The US Department of Transportation is expected to post the proposal for public comment before finalizing standards next year.


Beyond the Chat Window: How Change-Driven Architecture Enables Ambient AI Agents

1 Share

AI agents are everywhere now. Powering chat interfaces, answering questions, helping with code. We've gotten remarkably good at this conversational paradigm. But while the world has been focused on chat experiences, something new is quietly emerging: ambient agents. These aren't replacements for chat; they're an entirely new category of AI system that operates in the background, sensing, processing, and responding to the world in real time. And here's the thing: this is a new frontier. The infrastructure we need to build these systems barely exists yet.

Or at least, it didn't until now.

Two Worlds: Conversational and Ambient 

Let me paint you a picture of the conversational AI paradigm we know well. You open a chat window. You type a question. You wait. The AI responds. Rinse and repeat. It's the digital equivalent of having a brilliant assistant sitting at a desk, ready to help when you tap them on the shoulder. 

Now imagine a completely different kind of assistant. One that watches for important changes, anticipates needs, and springs into action without being asked. That's the promise of ambient agents: AI systems that, as LangChain puts it, "listen to an event stream and act on it accordingly, potentially acting on multiple events at a time."

This isn't an evolution of chat; it's a fundamentally different interaction paradigm. Both have their place. Chat is great for collaboration and back-and-forth reasoning. Ambient agents excel at continuous monitoring and autonomous response. Instead of human-initiated conversations, ambient agents operate through detecting changes in upstream systems and maintaining context across time without constant prompting. 

The use cases are compelling and distinct from chat. Imagine a project management assistant that operates in two modes: you can chat with it to ask, "summarize project status", but it also runs in the background, constantly monitoring new tickets that are created, or deployment pipelines that fail, automatically reassigning tasks. Or consider a DevOps agent that you can query conversationally ("what's our current CPU usage?") but also monitors your infrastructure continuously, detecting anomalies and starting remediation before you even know there's a problem. 

The Challenge: Real-Time Change Detection 

Here's where building ambient agents gets tricky. While chat-based agents work perfectly within the request-response paradigm, ambient agents need something entirely different: continuous monitoring and real-time change detection. How do you efficiently detect changes across multiple data sources? How do you avoid the performance nightmare of constant polling? How do you ensure your agent reacts instantly when something critical happens? 

Developers trying to build ambient agents hit the same wall: creating a reliable, scalable change detection system is hard. You either end up with:

  • Polling hell: constantly querying databases, burning through resources, and still missing changes between polls
  • Legacy system rewrites: massive, expensive multi-year projects to rewrite legacy systems so that they produce domain events
  • Webhook spaghetti: managing dozens of event sources, each with different formats and reliability guarantees

This is where the story takes an interesting turn. 

Enter Drasi: The Change Detection Engine You Didn't Know You Needed 

Drasi is not another AI framework. Instead, it solves the problem that ambient agents need solved: intelligent change detection. Think of it as the sensory system for your AI agents, the infrastructure that lets them perceive changes in the world. 

Drasi is built around three simple components:

  • Sources: Connectivity to the systems that Drasi can observe as sources of change (PostgreSQL, MySQL, Cosmos DB, Kubernetes, EventHub)
  • Continuous Queries: Graph-based queries (using Cypher/GQL) that monitor for specific change patterns
  • Reactions: What happens when a continuous query detects changes, or lack thereof

But here's the killer feature: Drasi doesn't just detect that something changed. It understands what changed and why it matters, and even if something should have changed but did not. Using continuous queries, you can define complex conditions that your agents care about, and Drasi handles all the plumbing to deliver those insights in real time. 

The Bridge: langchain-drasi Integration 

Now, detecting changes is only part of the challenge. You need to connect those changes to your AI agents in a way that makes sense. That's where langchain-drasi comes in, a purpose-built integration that bridges Drasi's change detection with LangChain's agent frameworks. It achieves this by leveraging the Drasi MCP Reaction, which exposes Drasi continuous queries as MCP resources. 

The integration provides a simple Tool that agents can use to:

  • Discover available queries automatically
  • Read current query results on demand
  • Subscribe to real-time updates that flow directly into agent memory and workflow

 

Here's what this looks like in practice:

from langchain_drasi import create_drasi_tool, MCPConnectionConfig

# Configure the connection to the Drasi MCP server
mcp_config = MCPConnectionConfig(server_url="http://localhost:8083")

# Create the tool with notification handlers
# (buffer_handler and console_handler are assumed to be constructed earlier)
drasi_tool = create_drasi_tool(
    mcp_config=mcp_config,
    notification_handlers=[buffer_handler, console_handler]
)

# Now your agent can discover and subscribe to data changes:
# no more polling, no more webhooks, just reactive intelligence

 

The beauty is in the notification handlers: pre-built components that determine how changes flow into your agent's consciousness: 

  • BufferHandler: Queues changes for sequential processing
  • LangGraphMemoryHandler: Automatically integrates changes into agent checkpoints
  • LoggingHandler: Integrates with standard logging infrastructure

This isn't just plumbing; it's the foundation for what we might call "change-driven architecture" for AI systems. 

Example: The Seeker Agent Has Entered the Chat 

Let's make this concrete with my favorite example from the langchain-drasi repository: a hide and seek inspired non-player character (NPC) AI agent that seeks human players in a multi-player game environment. 

The Scenario 

Imagine a game where players move around a 2D map, updating their positions in a PostgreSQL database. But here's the twist: the NPC agent doesn't have omniscient vision. It can only detect players under specific conditions: 

  • Stationary targets: When a player doesn't move for more than 3 seconds (they're exposed)
  • Frantic movement: When a player moves more than once in less than a second (panicking reveals your position)

This creates interesting strategic gameplay: players must balance staying still (safe from detection but vulnerable if found) with moving carefully (one move per second is the sweet spot). The NPC agent seeks based on these glimpses of player activity. These detection rules are defined as Drasi continuous queries that monitor the player positions table.

For reference, these are the two continuous queries we will use: 

When a player doesn't move for more than 3 seconds: this is a great example of detecting the absence of change, using the trueLater function:

MATCH
    (p:player { type: 'human' })
WHERE drasi.trueLater(
        drasi.changeDateTime(p) <= (datetime.realtime() - duration{ seconds: 3 }),
        drasi.changeDateTime(p) + duration{ seconds: 3 }
      )
RETURN
    p.id,
    p.x,
    p.y

 

When a player moves more than once in less than a second: this is an example of using the previousValue function to compare the current state with a prior state:

 

MATCH 
    (p:player { type: 'human' })
WHERE drasi.changeDateTime(p).epochMillis - drasi.previousValue(drasi.changeDateTime(p).epochMillis) < 1000
RETURN
    p.id,
    p.x,
    p.y

 

Here's the neat part: you can dynamically adjust the game's difficulty by adding or removing queries with different conditions; no code changes required, just deploy new Drasi queries. 

The traditional approach would have your agent constantly polling the data source checking these conditions: "Any player moves? How about now? Now? Now?" 

The Workflow in Action 

The agent operates through a LangGraph based state machine with two distinct phases: 

1. Setup Phase (First Run Only) 

  • Setup queries prompt - Prompts the AI model to discover available Drasi queries
  • Setup queries call model - The AI model calls the Drasi tool with the discover operation
  • Setup queries tools - Executes the Drasi tool calls to subscribe to relevant queries
  • This phase loops until the AI model has discovered and subscribed to all relevant queries

2. Main Seeking Loop (Continuous) 

  • Check sensors - Consumes any new Drasi notifications from the buffer into the workflow state
  • Evaluate targets - Uses the AI model to parse sensor data and extract target positions
  • Select and plan - Selects the closest target and plans a path
  • Execute move - Executes the next move via the game API
  • The loop continues indefinitely, reacting to new notifications

No polling. No delays. No wasted resources checking positions that don't meet the detection criteria. Just pure, reactive intelligence flowing from meaningful data changes to agent actions. The continuous queries act as intelligent filters, only alerting the agent when relevant changes occur. 
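The select-and-plan step above can be sketched in a few lines. This is a hypothetical illustration (the function name and signature are mine, not the repository's code): pick the closest detected target and compute the NPC's next one-tile move toward it.

```python
import math

# Hypothetical sketch of the "select and plan" step: choose the closest
# detected target and step one tile toward it. Names are illustrative,
# not taken from the langchain-drasi example code.
def next_move(npc: tuple[int, int], targets: list[tuple[int, int]]) -> tuple[int, int]:
    if not targets:
        return npc  # no notifications consumed, so nothing to chase
    tx, ty = min(targets, key=lambda t: math.dist(npc, t))
    dx = (tx > npc[0]) - (tx < npc[0])  # -1, 0, or +1 per axis
    dy = (ty > npc[1]) - (ty < npc[1])
    return (npc[0] + dx, npc[1] + dy)

print(next_move((0, 0), [(3, 1), (10, 10)]))  # (1, 1): one step toward (3, 1)
```

In the actual agent this logic runs only when the check-sensors step has drained fresh notifications, which is what keeps the loop reactive rather than polling.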

The full implementation is available in the langchain-drasi repository.

The Bigger Picture: Change-Driven Architecture 

What we're seeing with Drasi and ambient agents isn't just a new tool, it's a new architectural pattern for AI systems. The core idea is profound: AI agents can react to the world changing, not just wait to be asked about it. This pattern enables entirely new categories of applications that complement traditional chat interfaces. 

The example might seem playful, but it demonstrates that AI agents can perceive and react to their environment in real time. Today it's seeking players in a game. Tomorrow it could be: 

  • Managing city traffic flows based on real-time sensor data 
  • Coordinating disaster response as situations evolve 
  • Optimizing supply chains as demand patterns shift 
  • Protecting networks as threats emerge 

The change detection infrastructure is here. The patterns are emerging. The only question is: what will you build? 

Where to Go from Here 

Ready to dive deeper? Here are your next steps: 

  • Explore Drasi: Head to drasi.io and discover the power of the change detection platform
  • Try langchain-drasi: Clone the GitHub repository and run the Hide-and-Seek example yourself
  • Join the conversation: The space is new and needs diverse perspectives. Join the community on Discord and let us know if you have built ambient agents and what challenges you faced with real-time change detection.
