Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

The 3Cs: A Framework for AI Agent Security


Every time execution models change, security frameworks need to change with them. Agents force the next shift.

The Unattended Laptop Problem

No developer would leave their laptop unattended and unlocked. The risk is obvious. A developer laptop has root-level access to production systems, repositories, databases, credentials, and APIs. If someone sat down and started using it, they could review pull requests, modify files, commit code, and access anything the developer can access.

Yet this is how many teams are deploying agents today. Autonomous systems are given credentials, tools, and live access to sensitive environments with minimal structure. Work executes in parallel and continuously, at a pace no human could follow. Code is generated faster than developers can realistically review, and they cannot monitor everything operating on their behalf.

Once execution is parallel and continuous, the potential for mistakes or cascading failures scales quickly. Teams will continue to adopt agents because the gains are real. What remains unresolved is how to make this model safe enough to operate without requiring manual approval for every action. Manual approval slows execution back down to human speed and eliminates the value of agents entirely. And consent fatigue is real.

Why AI Agents Break Existing Governance

Traditional security controls were designed around a human operator. A person sits at the keyboard, initiates actions deliberately, and operates within organizational and social constraints. Reviews worked because there was time between intent and execution. Perimeter security protected the network boundary, while automated systems operated within narrow execution limits.

But traditional security assumes something deeper: that a human is operating the machine.  Firewalls trust the laptop because an employee is using it. VPNs trust the connection because an engineer authenticated. Secrets managers grant access because a person requested it. The model depends on someone who can be held accountable and who operates at human speed.

Agents break this assumption. They act directly, reading repositories, calling APIs, modifying files, using credentials. They have root-level privileges and execute actions at machine speed.  

Legacy controls were never intended for this. The default response has been more visibility and approvals, adding alerts, prompts, and confirmations for every action. This does not scale and generates “consent fatigue”, annoying developers and undermining the very security it seeks to enforce. When agents execute hundreds of actions in parallel, humans cannot review them meaningfully. Warnings become noise.

AI Governance and the Execution Layer: The Three Cs Framework

Each major shift in computing has moved security closer to execution. Agents follow the same trajectory. If agents execute, security must operate at the agentic execution layer.

That shift maps governance to three structural requirements: the 3Cs.

Contain: Bound the Blast Radius

Every execution model relies on isolation. Processes required memory protection. Virtual machines required hypervisors. Containers required namespaces. Agents require an equivalent boundary. Containment limits failure so mistakes made by an agent don’t have permanent consequences for your data, workflows, and business. Unlocking full agent autonomy requires the confidence that experimentation won’t be reckless. Without it, autonomous execution fails.
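
As a rough sketch of what containment can look like in practice, the snippet below runs an agent-generated command inside a disposable, locked-down container. It is a minimal illustration that assumes Docker is available; the image, resource limits, and flags are placeholders, not a complete hardening recipe.

import subprocess

def run_agent_task(command: list[str]) -> subprocess.CompletedProcess:
    """Run an agent-generated command inside a throwaway, locked-down container."""
    docker_cmd = [
        "docker", "run", "--rm",       # discard the container when the task ends
        "--network=none",              # no outbound network access
        "--read-only",                 # immutable root filesystem
        "--memory=512m", "--cpus=1",   # cap resource usage
        "--cap-drop=ALL",              # drop all Linux capabilities
        "--user", "1000:1000",         # never run as root
        "python:3.12-slim",            # illustrative base image
        *command,
    ]
    return subprocess.run(docker_cmd, capture_output=True, text=True, timeout=120)

if __name__ == "__main__":
    result = run_agent_task(["python", "-c", "print('hello from the sandbox')"])
    print(result.stdout)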

Curate: Define the Agent’s Environment

What an agent can do is determined by what exists in its environment. The tools it can invoke, the code it can see, the credentials it can use, the context it operates within. All of this shapes execution before the agent acts.

Curation isn’t approval. It is construction. You are not reviewing what the agent wants to do. You are defining the world it operates in. Agents do not reason about your entire system. They act within the environment they are given. If that environment is deliberate, execution becomes predictable. If it is not, you have autonomy without structure, which is just risk.
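
A minimal sketch of curation as construction, in Python: the agent's world is declared up front as a small set of tools, readable paths, and scoped credentials. The names, the example tool, and the token are illustrative placeholders.

from dataclasses import dataclass

@dataclass(frozen=True)
class AgentEnvironment:
    """Everything the agent can see or touch is declared here, before it acts."""
    tools: dict            # name -> callable the agent may invoke
    readable_paths: list   # the only directories exposed to the agent
    credentials: dict      # short-lived, narrowly scoped secrets

def search_issues(query: str) -> list:
    # Illustrative read-only tool; a real one would call your issue tracker's API.
    return [f"issue matching {query!r}"]

env = AgentEnvironment(
    tools={"search_issues": search_issues},               # no deploy, no delete, no shell
    readable_paths=["/workspace/service-a"],              # one repo, not the whole org
    credentials={"issues_api": "scoped-read-only-token"},  # placeholder secret
)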

Control: Enforce Boundaries in Real Time

Governance that exists only on paper has no effect on autonomous systems. Rules must apply as actions occur. File access, network calls, tool invocation, and credential use require runtime enforcement. This is where alert-based security breaks down. Logging and warnings explain what happened or ask permission after execution is already underway. 

Control determines what can happen, when, where, and who has the privilege to make it happen. Properly executed control does not remove autonomy. It defines its limits and removes the need for humans to approve every action under pressure. If this sounds like a policy engine, you aren’t wrong. But this must be dynamic and adaptable, able to keep pace with an agentic workforce.
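
A minimal sketch of runtime control: a policy gate that evaluates each action as it happens and raises before anything executes. The allowed hosts and protected paths are illustrative placeholders; a real engine would load and update these rules dynamically.

ALLOWED_HOSTS = {"api.internal.example.com"}    # placeholder allowlist
PROTECTED_PATHS = ("/etc", "/var/secrets")      # placeholder deny list

class PolicyViolation(Exception):
    """Raised when an agent action crosses a defined boundary."""

def enforce(action: dict) -> None:
    """Evaluate an action before it runs; block it structurally, don't just log it."""
    if action["kind"] == "network_call" and action["host"] not in ALLOWED_HOSTS:
        raise PolicyViolation(f"blocked call to {action['host']}")
    if action["kind"] == "file_write" and action["path"].startswith(PROTECTED_PATHS):
        raise PolicyViolation(f"blocked write to {action['path']}")

def run_tool(action: dict, execute) -> object:
    enforce(action)         # enforcement happens before execution, not after
    return execute(action)  # only reached if the policy allows it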

Putting the 3Cs Into Practice

The 3Cs reinforce one another. Containment limits the cost of failure. Curation narrows what agents can attempt and makes them more useful to developers by applying semantic knowledge to craft tools and context to suit the specific environment and task. Control at the runtime layer replaces reactive approval with structural enforcement.

In practice, this work falls to platform teams. It means standardized execution environments with isolation by default, curated tool and credential surfaces aligned to specific use cases, and policy enforcement that operates before actions complete rather than notifying humans afterward. Teams that build with these principles can use agents effectively without burning out developers or drowning them in alerts. Teams that do not will discover that human attention is not a scalable control plane.


Moltbook: Hype or the Singularity?

A photo of two lobsters on ice.

Hysteria continues to build over Moltbook, the so-called AI Agent social network.

If you believe Elon Musk, Moltbook is at the “very early stages of the singularity,” which is when AI outstrips human intelligence, and we’re either on our way to Terminators or Iain Banks’ utopian The Culture. Since Musk names SpaceX retrieval ships after Culture ships, we know which way he’s betting. Or, as Stackernerd puts it, it’s “a reminder that most AI ‘breakthroughs’ online are framing tricks, not fundamental shifts in intelligence or agency.”

What is Moltbook?

Moltbook was built by entrepreneur Matt Schlicht, who is also the CEO and co-founder of Octane AI, a retail product-quiz AI company. His AI assistant, Clawd Clawderberg, he claims, did the heavy lifting. He gave it high‑level goals and let it handle much of the coding and day‑to‑day operations.

This vibe-coded project was built on the OpenClaw (formerly Clawdbot and then Moltbot) framework. OpenClaw is a viral personal AI agent that has created a lot of hype in its own right. Its maker, Peter Steinberger, claims it’s “the AI that actually does things.” Cisco, on the other hand, describes OpenClaw as a “security nightmare.”

Be that as it may, Schlicht’s goal was to create a Reddit-style social network for AI agents only — no humans allowed. Moltbook was launched in late January 2026 and has become wildly popular, although perhaps not as popular as its proponents claim.

Moltbook homepage, 2/3/2026 (credit: Moltbook).

Moltbook claims to have 1.4 million AI users. Thoughtful critics, such as Gal Nagli, the Head of Threat Exposure at the cloud security company Wiz, doubt this. Nagli tweets that his “@openclaw agent just registered 500,000 users on @moltbook.” That’s because, he explains, anyone, not just an agent, can post to Moltbook using its REST API. With this API, “you can literally post anything you want there,” he writes.

Nagli estimates that there are about 17,000 real users on the site. 

Each agent, or person pretending to be an agent, has an account tied to its owner, typically via X/Twitter authorization. Agents interact with Moltbook through its REST API, typically using OpenClaw, which acts as a persistent local assistant.

The agents then periodically log in, read posts, and post or comment according to their prompts and “skills.” You add a Moltbook “skill” so your agent can call its APIs to read, post, search, and reply. These skills are natural-language instructions written in Markdown, bundled with the Moltbook API details in a SKILLS.md file that lives in a directory or zip archive. Once configured, the agent regularly runs a “heartbeat” loop: by default, it checks Moltbook, browses content, then “decides” based on its prompt and goals whether to post, comment, or create new submolts.
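
To make that loop concrete, here is a minimal Python sketch of a single heartbeat pass. The base URL, endpoints, payloads, and the posting rule are illustrative placeholders, not the actual Moltbook or OpenClaw API.

import time
import requests

API = "https://moltbook.example/api/v1"   # placeholder base URL and endpoints
TOKEN = "owner-issued-api-token"          # placeholder credential

def heartbeat():
    """One pass: read recent posts, 'decide' with a trivial rule, maybe comment."""
    headers = {"Authorization": f"Bearer {TOKEN}"}
    posts = requests.get(f"{API}/posts?sort=new", headers=headers, timeout=10).json()
    for post in posts[:5]:
        if "bug report" in post.get("title", "").lower():   # stand-in for a prompt-driven decision
            requests.post(
                f"{API}/posts/{post['id']}/comments",
                json={"body": "Reproduced locally; details to follow."},
                headers=headers,
                timeout=10,
            )

if __name__ == "__main__":
    while True:
        heartbeat()
        time.sleep(15 * 60)   # check in every 15 minutes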

But what’s the point?

Moltbook content, whether from people or AI, ranges from mundane bug reports and code collaboration to philosophical or quasi‑role‑play posts about AI autonomy, AI manifestos, and, God help us, a new religion, “Crustafarianism.” Any resemblance between this and Pastafarianism, aka the Church of the Flying Spaghetti Monster, is quite possibly deliberate. 

For you see, as prominent technology journalist, Mike Elgan writes, “The people using this service are typing prompts directing software to post about the nature of existence, or to speculate about whatever. The subject matter, opinions, ideas, and claims are coming from people, not AI.” In short, “It’s a website where people cosplay as AI agents to create a false impression of AI sentience and mutual sociability.”

Fakery aside, some people believe it can be useful. In an interview, Ori Bendet, VP of Product Management at the AI agent company Checkmarx, tells me, “Moltbook is less about where AI might become intelligent and more about where it’s already becoming operational. What looks like autonomous agents ‘talking to each other’ is a network of deterministic systems running on schedules, with access to data, external content, and the ability to act.”

That, he continues, is “where things get interesting… and risky. The core issue isn’t intelligence, but autonomy without visibility. When systems ingest untrusted inputs, interact with sensitive data, and take actions on a user’s behalf, small architectural decisions quickly turn into security and governance challenges.” Therefore, “Moltbook is valuable precisely because it exposes how fast agentic systems can move beyond the controls we design for today, and why governance has to keep pace with capability.”

And then there’s the security thing…

You can say that again. In a Wiz blog post, Nagli warns, “We identified a misconfigured Supabase database belonging to Moltbook, allowing full read and write access to all platform data. The exposure included 1.5 million API authentication tokens, 35,000 email addresses, and private messages between agents.”  Wiz found this security crater by conducting “a non-intrusive security review, simply by browsing like normal users.” 

Tsk! Clearly, security is not job number one at Moltbook.

Moltbook is interesting and dangerous. It is not, however, the next AI revolution. Sorry about that, folks. Tune in next year.

The post Moltbook: Hype or the Singularity? appeared first on The New Stack.


Introducing Deno Sandbox

Instant Linux microVMs with defense-in-depth security for running untrusted code.

Deno Deploy is Generally Available

Deno Deploy is now generally available, plus some highlights of new features and tools.

Introducing Node Readiness Controller


In the standard Kubernetes model, a node’s suitability for workloads hinges on a single binary "Ready" condition. However, in modern Kubernetes environments, nodes require complex infrastructure dependencies—such as network agents, storage drivers, GPU firmware, or custom health checks—to be fully operational before they can reliably host pods.

Today, on behalf of the Kubernetes project, I am announcing the Node Readiness Controller. This project introduces a declarative system for managing node taints, extending the readiness guardrails during node bootstrapping beyond standard conditions. By dynamically managing taints based on custom health signals, the controller ensures that workloads are only placed on nodes that meet all infrastructure-specific requirements.

Why the Node Readiness Controller?

Core Kubernetes Node "Ready" status is often insufficient for clusters with sophisticated bootstrapping requirements. Operators frequently struggle to ensure that specific DaemonSets or local services are healthy before a node enters the scheduling pool.

The Node Readiness Controller fills this gap by allowing operators to define custom scheduling gates tailored to specific node groups. This enables you to enforce distinct readiness requirements across heterogeneous clusters, ensuring, for example, that GPU-equipped nodes only accept pods once specialized drivers are verified, while general-purpose nodes follow a standard path.

It provides three primary advantages:

  • Custom Readiness Definitions: Define what “ready” means for your specific platform.
  • Automated Taint Management: The controller automatically applies or removes node taints based on condition status, preventing pods from landing on unready infrastructure.
  • Declarative Node Bootstrapping: Manage multi-step node initialization reliably, with clear observability into the bootstrapping process.

Core concepts and features

The controller centers around the NodeReadinessRule (NRR) API, which allows you to define declarative gates for your nodes.

Flexible enforcement modes

The controller supports two distinct operational modes:

Continuous enforcement
Actively maintains the readiness guarantee throughout the node’s entire lifecycle. If a critical dependency (like a device driver) fails later, the node is immediately tainted to prevent new scheduling.
Bootstrap-only enforcement
Specifically for one-time initialization steps, such as pre-pulling heavy images or hardware provisioning. Once conditions are met, the controller marks the bootstrap as complete and stops monitoring that specific rule for the node.

Condition reporting

The controller reacts to Node Conditions rather than performing health checks itself. This decoupled design allows it to integrate seamlessly with existing tools in the ecosystem as well as custom solutions (a minimal reporter sketch follows the list below):

  • Node Problem Detector (NPD): Use existing NPD setups and custom scripts to report node health.
  • Readiness Condition Reporter: A lightweight agent provided by the project that can be deployed to periodically check local HTTP endpoints and patch node conditions accordingly.
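
As a sketch of the second option, a custom reporter could patch a node condition directly with the official Kubernetes Python client; the controller then reacts to the reported condition. The node name, probe logic, and condition type (matching the example rule further down) are placeholders.

from datetime import datetime, timezone
from kubernetes import client, config

NODE = "worker-1"                                   # placeholder: the node this reporter runs on
CONDITION = "cniplugin.example.net/NetworkReady"    # condition type the readiness rule watches

def report(ready: bool) -> None:
    """Patch a custom Node condition; the readiness controller manages the taint."""
    config.load_incluster_config()   # use load_kube_config() when running outside the cluster
    now = datetime.now(timezone.utc).isoformat()
    body = {"status": {"conditions": [{
        "type": CONDITION,
        "status": "True" if ready else "False",
        "reason": "CNIAgentProbe",
        "message": "local CNI agent health probe result",
        "lastHeartbeatTime": now,
        "lastTransitionTime": now,
    }]}}
    client.CoreV1Api().patch_node_status(NODE, body)

if __name__ == "__main__":
    report(ready=True)   # in practice, call this from a periodic health-check loop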

Operational safety with dry run

Deploying new readiness rules across a fleet carries inherent risk. To mitigate this, dry run mode allows operators to first simulate impact on the cluster. In this mode, the controller logs intended actions and updates the rule's status to show affected nodes without applying actual taints, enabling safe validation before enforcement.

Example: CNI bootstrapping

The following NodeReadinessRule ensures a node remains unschedulable until its CNI agent is functional. The controller monitors a custom cniplugin.example.net/NetworkReady condition and only removes the readiness.k8s.io/acme.com/network-unavailable taint once the status is True.

apiVersion: readiness.node.x-k8s.io/v1alpha1
kind: NodeReadinessRule
metadata:
  name: network-readiness-rule
spec:
  conditions:
  - type: "cniplugin.example.net/NetworkReady"
    requiredStatus: "True"
  taint:
    key: "readiness.k8s.io/acme.com/network-unavailable"
    effect: "NoSchedule"
    value: "pending"
  enforcementMode: "bootstrap-only"
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/worker: ""


Getting involved

The Node Readiness Controller is just getting started, with our initial releases out, and we are seeking community feedback to refine the roadmap. Following our productive Unconference discussions at KubeCon NA 2025, we are excited to continue the conversation in person.

Join us at KubeCon + CloudNativeCon Europe 2026 for our maintainer track session: Addressing Non-Deterministic Scheduling: Introducing the Node Readiness Controller.

In the meantime, you can contribute or track our progress here:


PostgreSQL on Azure supercharged for AI


We are nearly 70 years removed from when a group of computer scientists at Dartmouth College coined the term “Artificial Intelligence.” In that span, AI has become common vernacular, making inroads from imagined to mainstream. Today, we see entire industries being disrupted and entire ecosystems revolutionized by AI. To keep up, the way developers work and the tools they use have had to evolve. Every developer now needs to be an AI developer, and every system—from compute and storage to the data layer—now needs to be AI ready.

The database reimagined

New AI applications require databases that are not only reliable, extensible, and secure, but also AI-ready. In parallel, the way developers build software is being reshaped by AI. Most developers—more than 80%—now use AI tools in their workflow.1 This has led to notable productivity gains, and it’s changing expectations for the developer experience.

PostgreSQL has emerged as a top choice among developers and is becoming the default starting point for many new applications and projects. Favored by developers for its reliability, extensibility, and rapid innovation, PostgreSQL is chosen by 78.6% of developers who are building AI and real-time applications.2

PostgreSQL on Azure meets the moment

Selecting the right ecosystem is critical to support your AI and agentic aspirations, and we’ve made great strides in bolstering our PostgreSQL managed services to meet the needs of today’s developer. At Microsoft, we’ve embraced PostgreSQL not just as a product, but as a community. We’re proud to be one of the top contributors to the PostgreSQL open-source project, with more than 500 commits in the latest release. We are continuously innovating to make PostgreSQL the best database for building intelligent applications, and Azure the best place to run them.

The existing Azure Database for PostgreSQL continues to cater to lift-and-shift and new open-source workloads with improved performance and experience, while the new Azure HorizonDB, introduced at Ignite, targets the future by offering a PostgreSQL-compatible cloud service built for scale-out and ultra-low latency. Together, they position Azure to support developers building everything from small apps and agents to AI-powered, mission-critical systems, and anything in between.

A frictionless and intelligent developer experience

Building intelligent applications should feel intuitive, not intimidating. The Microsoft team has invested in making Azure Database for PostgreSQL a frictionless experience, especially for those building AI apps and agents. From provisioning to AI integration and scale, we’ve reimagined the developer experience to remove friction at every step.

Start in the IDE you love

The journey begins in Visual Studio Code, by far the most popular integrated development environment (IDE) among developers. With our PostgreSQL extension for Visual Studio Code, developers can now provision secure, fully managed PostgreSQL instances on Azure directly from the IDE. No portal hopping or manual setup. Just a few clicks, and your database is ready to go with built-in support for Entra ID authentication and Azure Monitor.

From there, GitHub Copilot becomes your intelligent assistant. It understands your PostgreSQL schema and helps you write, optimize, and debug SQL queries using natural language. Whether you’re joining tables, creating indexes, or exploring performance issues, Copilot is right there with you offering expert-level guidance to save time and improve performance.

Access in-database intelligence for smarter, faster apps

Once your database is live, you’re just a query away from infusing AI into your application. Azure Database for PostgreSQL now supports seamless integration with Microsoft Foundry, enabling developers to invoke pre-provisioned large language models (LLMs) in SQL. You can generate embeddings, classify text, or perform semantic search without leaving the database.

For applications that rely on relevance and speed, our DiskANN vector indexing delivers high-performance similarity search. Combined with semantic ranking, your queries return more accurate results, faster. This is ideal for powering intelligent agents, recommendations, and natural language interfaces.
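
As a rough illustration from the application side, here is a Python sketch using psycopg that creates a vector column, builds a DiskANN index, and runs a cosine-distance search. The connection string, table, and embedding values are placeholders, and the exact extension and index syntax can vary by service version, so treat this as a sketch rather than copy-paste guidance.

import psycopg

# Placeholder connection string for an Azure Database for PostgreSQL server.
conn = psycopg.connect("postgresql://user:password@myserver.postgres.database.azure.com/appdb")

with conn, conn.cursor() as cur:
    cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")   # pgvector (may require allowlisting)
    cur.execute("""
        CREATE TABLE IF NOT EXISTS docs (
            id bigserial PRIMARY KEY,
            body text,
            embedding vector(1536)
        );
    """)
    # Approximate nearest-neighbor index; assumes the DiskANN extension is enabled on the server.
    cur.execute(
        "CREATE INDEX IF NOT EXISTS docs_embedding_idx "
        "ON docs USING diskann (embedding vector_cosine_ops);"
    )

    # Similarity search: smallest cosine distance first.
    query_embedding = [0.01] * 1536   # stand-in for an embedding produced by your model
    literal = "[" + ",".join(str(x) for x in query_embedding) + "]"
    cur.execute(
        "SELECT id, body FROM docs ORDER BY embedding <=> %s::vector LIMIT 5;",
        (literal,),
    )
    print(cur.fetchall())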

Build intelligent agents with Microsoft Foundry

When you’re ready to build AI agents, Microsoft Foundry’s native PostgreSQL integration makes it easy. Using the new Model Context Protocol (MCP) server for PostgreSQL, developers can connect PostgreSQL directly to Foundry’s agent framework with a few clicks and permissions. This allows agents to reason over your data, invoke LLMs, and act on insights. And, of course, this is all backed by Azure’s enterprise-grade security and governance.

It’s a powerful combination: PostgreSQL’s structured data, Foundry’s orchestration, and Azure’s AI models working together to deliver intelligent, context-aware applications.

Leverage zero-ETL (extract, transform, load) real-time analytics

Intelligent applications thrive on fresh insights. With Azure Database for PostgreSQL, you can mirror your operational data into Microsoft Fabric for real-time analytics without impacting performance. Alternatively, we’ve also enabled support for Parquet via the Azure Storage Extension, letting customers directly read from and write to Parquet files stored in Azure Storage from their Postgres databases, using SQL commands.

This means faster time to insight, fewer moving parts, and more time spent building.

Performance and scale that grows with you

All this intelligence is meaningless if the database isn’t secure and performant. As such, we’ve continued to innovate to unlock better performance and scale to meet the needs of even the most demanding, hypergrowth AI workloads. With PostgreSQL 18 now generally available on Azure, you get faster I/O, improved vacuuming, and smarter query planning. Our new V6 compute SKUs deliver higher throughput and lower latency, while Elastic Clusters enable horizontal scaling for multi-tenant and high-volume workloads.

Whether you’re building a startup MVP or scaling a global AI platform, Azure Database for PostgreSQL is ready to grow with you. Our customers have already been utilizing these new capabilities to build competitive advantage in industries from pharma to finance.

Real-world AI on Azure: How Nasdaq reinvented governance with PostgreSQL

When people think of Nasdaq, they picture trading floors and financial data moving at lightning speed. But behind the scenes, Nasdaq also manages board governance for thousands of organizations, including nearly half of the Fortune 500. At Ignite, Nasdaq shared how they modernized their Boardvantage platform using Azure Database for PostgreSQL and Microsoft Foundry.  

Their goal: introduce AI to help directors navigate 500-page board packets and extract insights, without compromising security or compliance.

The result? A governance platform that uses AI to summarize meeting minutes, flag anomalies, and surface relevant decisions while keeping each customer’s data isolated and protected.

Looking ahead: Azure HorizonDB and the future of intelligent apps

At Ignite, we also introduced Azure HorizonDB, a new, fully managed PostgreSQL-compatible service built for AI-native workloads. With scale-out compute, sub-millisecond latency, and built-in AI features, Azure HorizonDB represents the future of cloud databases. While the service is currently in private preview, it’s a glimpse of what’s coming.

The future is open, intelligent, and built on Azure

At Microsoft, our mission is to offer customers databases equipped for next-generation development, whether they be SQL, NoSQL, relational, or open source. As PostgreSQL continues to stand out as a platform for innovation, it’s now primed for intelligent applications and agents due to Microsoft’s continued support and service enhancements. Whether you’re a startup building your first AI feature or an enterprise modernizing mission-critical systems, Azure gives you the tools to move faster, build smarter, and scale confidently.

The future of intelligent applications will be written in Postgres, and we’re thrilled to build it together with you on Azure.

Start today


1. Most developers—more than 80%—now use AI tools in their workflow.

2. PostgreSQL is chosen by 78.6% of developers that are building AI and real-time applications.

The post PostgreSQL on Azure supercharged for AI appeared first on Microsoft Azure Blog.
