Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Deploy Moltbot on AWS or Hetzner Securely with Pulumi and Tailscale

Update (January 2026): Clawdbot is now Moltbot (and Clawd is now Molty). Anthropic asked for the change due to trademark issues. The CLI command is now moltbot and the new handle is @moltbot.

Moltbot is everywhere right now. The open-source AI assistant gained 9,000 GitHub stars in a single day, received public praise from former Tesla AI head Andrej Karpathy, and has sparked a global run on Mac Minis as developers scramble to give this “lobster assistant” a home. Users are calling it “Jarvis living in a hard drive” and “Claude with hands”—the personal AI assistant that Siri promised but never delivered.

The Mac Mini craze is real: people are buying dedicated hardware just to run Moltbot, with some enthusiasts purchasing 40 Mac Minis at once. Even Logan Kilpatrick from Google DeepMind couldn’t resist ordering one. But here’s the thing: you don’t actually need a Mac Mini. Moltbot runs anywhere: on a VPS, in the cloud, or on that old laptop gathering dust.

With all this hype, I had to try it myself. But instead of clicking through the AWS console or running manual commands on a VPS, I wanted to do it right from the start: infrastructure as code with Pulumi. Why? Because when I inevitably want to tear it down, spin up a new instance, or deploy to a different region, I don’t want to remember which buttons I clicked three weeks ago. I want a single pulumi up command.

Dan got the assignment right:

Dan’s tweet suggesting Hetzner VMs instead of Mac Minis for Moltbot

In this post, I’ll show you how to deploy Moltbot to AWS or Hetzner Cloud (if you want European data residency or just want to spend less). We’ll use Pulumi to define the infrastructure and Tailscale to keep your AI assistant off the public internet.

What is Moltbot?

Moltbot is an open-source AI assistant created by Peter Steinberger that runs on your own infrastructure. It connects to WhatsApp, Slack, Discord, Google Chat, Signal, and iMessage. It can control browsers, generate videos and images, clone your voice for voice notes, and run scheduled tasks via cron. There’s a skills system for extending functionality, and you can run it on pretty much anything: Mac Mini, Raspberry Pi, VPS, laptop, or gaming PC.

The difference from cloud-hosted AI? Moltbot runs on your server, not Anthropic’s. It’s available 24/7 across all your devices, can schedule automated tasks, and keeps your entire conversation history locally. Check the official Moltbot documentation for the full feature list.

Prerequisites

Before getting started, ensure you have:

  • Pulumi CLI installed and configured
  • A Pulumi Cloud account
  • AWS account (for AWS deployment)
  • Hetzner Cloud account (for European deployment)
  • Anthropic API key
  • Node.js 18+ installed
  • Tailscale account with HTTPS enabled (one-time setup in admin console)
This guide uses Anthropic’s API, but Moltbot works with other providers too. Check the providers documentation if you’d rather use OpenAI, Google Gemini, or a local model via Ollama.

Understanding Moltbot architecture

Moltbot uses a gateway-centric architecture where a single daemon acts as the control plane for all messaging, tool execution, and client connections:

Component         Port     Description
Gateway           18789    WebSocket server handling channels, nodes, sessions, and hooks
Browser control   18791    Headless Chrome instance for web automation
Docker sandbox    -        Isolated container environment for running tools safely

The Gateway connects to messaging platforms (WhatsApp, Slack, Discord, etc.), the CLI, the web UI, and mobile apps. The Browser component lets Moltbot open web pages, fill forms, scrape data, and download files. Docker sandboxing runs bash commands in isolated containers so your bot can execute code without risking your host system.

Setting up ESC for secrets management

Deploying Moltbot means handling sensitive credentials: API keys, auth tokens, cloud provider secrets. You don’t want these hardcoded or scattered across environment variables. Pulumi ESC (Environments, Secrets, and Configuration) stores them securely and passes them directly to your Pulumi program.

Create a new ESC environment:

pulumi env init <your-org>/moltbot-secrets

Add your secrets to the environment:

values:
  anthropicApiKey:
    fn::secret: "sk-ant-xxxxx"
  tailscaleAuthKey:
    fn::secret: "tskey-auth-xxxxx"
  tailnetDnsName: "tailxxxxx.ts.net"
  hcloudToken:
    fn::secret: "your-hetzner-api-token"
  pulumiConfig:
    anthropicApiKey: ${anthropicApiKey}
    tailscaleAuthKey: ${tailscaleAuthKey}
    tailnetDnsName: ${tailnetDnsName}
    hcloud:token: ${hcloudToken}

To find your Tailnet DNS name, go to the Tailscale admin console, look under the DNS section, and find your tailnet name (e.g., tailxxxxx.ts.net). This is the domain suffix used for all machines in your Tailscale network.

Then create a Pulumi.dev.yaml file in your project to reference the environment:

environment:
 - <your-org>/moltbot-secrets

This approach keeps your secrets out of your codebase and passes them directly to Moltbot during automated onboarding.

Securing with Tailscale

By default, deploying Moltbot exposes SSH (port 22), the gateway (port 18789), and browser control (port 18791) to the public internet. This is convenient for testing but not ideal for production use.

Tailscale creates a secure mesh VPN that lets you access your Moltbot instance without exposing unnecessary ports publicly. When you provide a Tailscale auth key, the Pulumi program:

  1. Removes gateway and browser ports from public access
  2. Keeps SSH as fallback for debugging if Tailscale setup fails
  3. Installs Tailscale on the instance during provisioning (after other dependencies)
  4. Enables Tailscale SSH so you can SSH via Tailscale without managing keys
  5. Joins your Tailnet automatically using the auth key
The Pulumi program installs Docker, Node.js, and Moltbot first, then configures Tailscale last. This ensures that even if the Tailscale auth key is invalid or expired, you can still SSH in via the public IP to troubleshoot.

To generate a Tailscale auth key:

  1. Go to Tailscale Admin Console
  2. Click “Generate auth key”
  3. Enable “Reusable” if you plan to redeploy
  4. Copy the key and add it to your ESC environment

Deploying to AWS

Let’s walk through the complete AWS deployment. Create a new Pulumi project:

mkdir moltbot-aws && cd moltbot-aws
pulumi new typescript

Install the required dependencies:

npm install @pulumi/aws @pulumi/tls
Do not use t3.micro instances for Moltbot. The 1 GB memory is insufficient for installation. Use t3.medium (4 GB) or t3.large (8 GB) instead.

The Pulumi program

Running Moltbot on AWS means setting up a VPC, subnets, security groups, an EC2 instance, SSH keys, and a cloud-init script that installs everything. That’s a lot of clicking in the AWS console. The Pulumi program below defines all of it in code.

Replace the contents of index.ts with the following:

import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";
import * as tls from "@pulumi/tls";

const config = new pulumi.Config();

const instanceType = config.get("instanceType") ?? "t3.medium";
const anthropicApiKey = config.requireSecret("anthropicApiKey");
const model = config.get("model") ?? "anthropic/claude-sonnet-4";
const enableSandbox = config.getBoolean("enableSandbox") ?? true;
const gatewayPort = config.getNumber("gatewayPort") ?? 18789;
const browserPort = config.getNumber("browserPort") ?? 18791;

const tailscaleAuthKey = config.requireSecret("tailscaleAuthKey");
const tailnetDnsName = config.require("tailnetDnsName");

// Generate a random token for gateway authentication
const gatewayToken = new tls.PrivateKey("moltbot-gateway-token", {
 algorithm: "ED25519",
}).publicKeyOpenssh.apply(key => {
 const hash = require("crypto").createHash("sha256").update(key).digest("hex");
 return hash.substring(0, 48);
});

const sshKey = new tls.PrivateKey("moltbot-ssh-key", {
 algorithm: "ED25519",
});

const vpc = new aws.ec2.Vpc("moltbot-vpc", {
 cidrBlock: "10.0.0.0/16",
 enableDnsHostnames: true,
 enableDnsSupport: true,
 tags: { Name: "moltbot-vpc" },
});

const gateway = new aws.ec2.InternetGateway("moltbot-igw", {
 vpcId: vpc.id,
 tags: { Name: "moltbot-igw" },
});

const subnet = new aws.ec2.Subnet("moltbot-subnet", {
 vpcId: vpc.id,
 cidrBlock: "10.0.1.0/24",
 mapPublicIpOnLaunch: true,
 tags: { Name: "moltbot-subnet" },
});

const routeTable = new aws.ec2.RouteTable("moltbot-rt", {
 vpcId: vpc.id,
 routes: [
 {
 cidrBlock: "0.0.0.0/0",
 gatewayId: gateway.id,
 },
 ],
 tags: { Name: "moltbot-rt" },
});

new aws.ec2.RouteTableAssociation("moltbot-rta", {
 subnetId: subnet.id,
 routeTableId: routeTable.id,
});

const securityGroup = new aws.ec2.SecurityGroup("moltbot-sg", {
 vpcId: vpc.id,
 description: "Security group for Moltbot instance",
 ingress: [
 {
 description: "SSH access (fallback)",
 fromPort: 22,
 toPort: 22,
 protocol: "tcp",
 cidrBlocks: ["0.0.0.0/0"],
 },
 ],
 egress: [
 {
 fromPort: 0,
 toPort: 0,
 protocol: "-1",
 cidrBlocks: ["0.0.0.0/0"],
 },
 ],
 tags: { Name: "moltbot-sg" },
});

const keyPair = new aws.ec2.KeyPair("moltbot-keypair", {
 publicKey: sshKey.publicKeyOpenssh,
});

const ami = aws.ec2.getAmiOutput({
 owners: ["099720109477"],
 mostRecent: true,
 filters: [
 { name: "name", values: ["ubuntu/images/hvm-ssd-gp3/ubuntu-noble-24.04-amd64-server-*"] },
 { name: "virtualization-type", values: ["hvm"] },
 ],
});

const userData = pulumi
 .all([tailscaleAuthKey, anthropicApiKey, gatewayToken])
 .apply(([tsAuthKey, apiKey, gwToken]) => {
 return `#!/bin/bash
set -e

export DEBIAN_FRONTEND=noninteractive

# System updates
apt-get update
apt-get upgrade -y

# Install Docker
curl -fsSL https://get.docker.com | sh
systemctl enable docker
systemctl start docker
usermod -aG docker ubuntu

# Install NVM and Node.js for ubuntu user
sudo -u ubuntu bash << 'UBUNTU_SCRIPT'
set -e
cd ~

# Install NVM
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.1/install.sh | bash

# Load NVM
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"

# Install Node.js 22
nvm install 22
nvm use 22
nvm alias default 22

# Install Moltbot
npm install -g moltbot@beta

# Add NVM to bashrc if not already there
if ! grep -q 'NVM_DIR' ~/.bashrc; then
 echo 'export NVM_DIR="$HOME/.nvm"' >> ~/.bashrc
 echo '[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"' >> ~/.bashrc
fi
UBUNTU_SCRIPT

# Set environment variables for ubuntu user
echo 'export ANTHROPIC_API_KEY="${apiKey}"' >> /home/ubuntu/.bashrc

# Install and configure Tailscale
echo "Installing Tailscale..."
curl -fsSL https://tailscale.com/install.sh | sh
tailscale up --authkey="${tsAuthKey}" --ssh || echo "WARNING: Tailscale setup failed. Run 'sudo tailscale up' manually."

# Enable systemd linger for ubuntu user (required for user services to run at boot)
loginctl enable-linger ubuntu

# Start user's systemd instance (required for user services during cloud-init)
systemctl start user@1000.service

# Run Moltbot onboarding as ubuntu user (skip daemon install, do it separately)
echo "Running Moltbot onboarding..."
sudo -H -u ubuntu ANTHROPIC_API_KEY="${apiKey}" GATEWAY_PORT="${gatewayPort}" bash -c '
export HOME=/home/ubuntu
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"

moltbot onboard --non-interactive --accept-risk \
 --mode local \
 --auth-choice apiKey \
 --gateway-port $GATEWAY_PORT \
 --gateway-bind loopback \
 --skip-daemon \
 --skip-skills || echo "WARNING: Moltbot onboarding failed. Run moltbot onboard manually."
'

# Install daemon service with XDG_RUNTIME_DIR set
echo "Installing Moltbot daemon..."
sudo -H -u ubuntu XDG_RUNTIME_DIR=/run/user/1000 bash -c '
export HOME=/home/ubuntu
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"

moltbot daemon install || echo "WARNING: Daemon install failed. Run moltbot daemon install manually."
'

# Configure gateway for Tailscale Serve (trustedProxies + skip device pairing + set token)
echo "Configuring gateway for Tailscale Serve..."
sudo -H -u ubuntu GATEWAY_TOKEN="${gwToken}" python3 << 'PYTHON_SCRIPT'
import json
import os
config_path = "/home/ubuntu/.moltbot/moltbot.json"
with open(config_path) as f:
 config = json.load(f)
config["gateway"]["trustedProxies"] = ["127.0.0.1"]
config["gateway"]["controlUi"] = {
 "enabled": True,
 "allowInsecureAuth": True
}
config["gateway"]["auth"] = {
 "mode": "token",
 "token": os.environ["GATEWAY_TOKEN"]
}
with open(config_path, "w") as f:
 json.dump(config, f, indent=2)
print("Configured gateway with trustedProxies, controlUi, and token")
PYTHON_SCRIPT

# Enable Tailscale HTTPS proxy (requires HTTPS to be enabled in Tailscale admin console)
echo "Enabling Tailscale HTTPS proxy..."
tailscale serve --bg ${gatewayPort} || echo "WARNING: tailscale serve failed. Enable HTTPS in your Tailscale admin console first."

echo "Moltbot setup complete!"
`;
 });

const instance = new aws.ec2.Instance("moltbot-instance", {
 ami: ami.id,
 instanceType: instanceType,
 subnetId: subnet.id,
 vpcSecurityGroupIds: [securityGroup.id],
 keyName: keyPair.keyName,
 userData: userData,
 userDataReplaceOnChange: true,
 rootBlockDevice: {
 volumeSize: 30,
 volumeType: "gp3",
 },
 tags: { Name: "moltbot" },
});

export const publicIp = instance.publicIp;
export const publicDns = instance.publicDns;
export const privateKey = sshKey.privateKeyOpenssh;

// Construct the Tailscale MagicDNS hostname from the private IP
// AWS private IPs like 10.0.1.15 become hostnames like ip-10-0-1-15
const tailscaleHostname = instance.privateIp.apply(ip =>
 `ip-${ip.replace(/\./g, "-")}`
);

export const tailscaleUrl = pulumi.interpolate`https://${tailscaleHostname}.${tailnetDnsName}/`;
export const tailscaleUrlWithToken = pulumi.interpolate`https://${tailscaleHostname}.${tailnetDnsName}/?token=${gatewayToken}`;
export const gatewayTokenOutput = gatewayToken;
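
The hostname derivation at the end is just a string transform: AWS sets the instance's OS hostname from its private IP, and Tailscale MagicDNS registers the machine under that name. As a standalone sketch:

```typescript
// AWS derives the OS hostname from the private IP (10.0.1.15 -> ip-10-0-1-15),
// and Tailscale MagicDNS registers the machine under that hostname.
function tailscaleHostname(privateIp: string): string {
  return `ip-${privateIp.replace(/\./g, "-")}`;
}

function tailscaleUrl(privateIp: string, tailnetDnsName: string): string {
  return `https://${tailscaleHostname(privateIp)}.${tailnetDnsName}/`;
}

console.log(tailscaleUrl("10.0.1.15", "tailxxxxx.ts.net"));
// → https://ip-10-0-1-15.tailxxxxx.ts.net/
```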

Deploying to Hetzner

Hetzner Cloud is a solid choice if you need European data residency or want to spend less money. Spoiler: it’s a lot less money.

Hetzner has similar concepts to AWS but different names. EC2 instances become Servers. Security groups become Firewalls. Same idea, different provider. The resource types come from @pulumi/hcloud.

Create a new project for Hetzner:

mkdir moltbot-hetzner && cd moltbot-hetzner
pulumi new typescript

Install the Hetzner provider:

npm install @pulumi/hcloud @pulumi/tls
The default server type cax21 is an ARM-based (Ampere) instance with 4 vCPUs and 8 GB RAM. ARM instances cost less for the same compute. If you need x86 architecture, use ccx13 or similar CCX series instead.

The Hetzner Pulumi program

Replace index.ts with the following:

import * as pulumi from "@pulumi/pulumi";
import * as hcloud from "@pulumi/hcloud";
import * as tls from "@pulumi/tls";

const config = new pulumi.Config();

const serverType = config.get("serverType") ?? "cax21";
const location = config.get("location") ?? "fsn1";
const anthropicApiKey = config.requireSecret("anthropicApiKey");
const model = config.get("model") ?? "anthropic/claude-sonnet-4";
const enableSandbox = config.getBoolean("enableSandbox") ?? true;
const gatewayPort = config.getNumber("gatewayPort") ?? 18789;
const browserPort = config.getNumber("browserPort") ?? 18791;

const tailscaleAuthKey = config.requireSecret("tailscaleAuthKey");
const tailnetDnsName = config.require("tailnetDnsName");

// Generate a random token for gateway authentication
const gatewayToken = new tls.PrivateKey("moltbot-gateway-token", {
 algorithm: "ED25519",
}).publicKeyOpenssh.apply(key => {
 const hash = require("crypto").createHash("sha256").update(key).digest("hex");
 return hash.substring(0, 48);
});

const sshKey = new tls.PrivateKey("moltbot-ssh-key", {
 algorithm: "ED25519",
});

const hcloudSshKey = new hcloud.SshKey("moltbot-sshkey", {
 publicKey: sshKey.publicKeyOpenssh,
});

const firewallRules: hcloud.types.input.FirewallRule[] = [
 {
 direction: "out",
 protocol: "tcp",
 port: "any",
 destinationIps: ["0.0.0.0/0", "::/0"],
 description: "Allow all outbound TCP",
 },
 {
 direction: "out",
 protocol: "udp",
 port: "any",
 destinationIps: ["0.0.0.0/0", "::/0"],
 description: "Allow all outbound UDP",
 },
 {
 direction: "out",
 protocol: "icmp",
 destinationIps: ["0.0.0.0/0", "::/0"],
 description: "Allow all outbound ICMP",
 },
 {
 direction: "in",
 protocol: "tcp",
 port: "22",
 sourceIps: ["0.0.0.0/0", "::/0"],
 description: "SSH access (fallback)",
 },
];

const firewall = new hcloud.Firewall("moltbot-firewall", {
 rules: firewallRules,
});

const userData = pulumi
 .all([tailscaleAuthKey, anthropicApiKey, gatewayToken])
 .apply(([tsAuthKey, apiKey, gwToken]) => {
 return `#!/bin/bash
set -e

export DEBIAN_FRONTEND=noninteractive

# System updates
apt-get update
apt-get upgrade -y

# Install Docker
curl -fsSL https://get.docker.com | sh
systemctl enable docker
systemctl start docker

# Create ubuntu user (Hetzner uses root by default)
useradd -m -s /bin/bash -G docker ubuntu || true

# Install NVM and Node.js for ubuntu user
sudo -u ubuntu bash << 'UBUNTU_SCRIPT'
set -e
cd ~

# Install NVM
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.1/install.sh | bash

# Load NVM
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"

# Install Node.js 22
nvm install 22
nvm use 22
nvm alias default 22

# Install Moltbot
npm install -g moltbot@beta

# Add NVM to bashrc if not already there
if ! grep -q 'NVM_DIR' ~/.bashrc; then
 echo 'export NVM_DIR="$HOME/.nvm"' >> ~/.bashrc
 echo '[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"' >> ~/.bashrc
fi
UBUNTU_SCRIPT

# Set environment variables for ubuntu user
echo 'export ANTHROPIC_API_KEY="${apiKey}"' >> /home/ubuntu/.bashrc

# Install and configure Tailscale
echo "Installing Tailscale..."
curl -fsSL https://tailscale.com/install.sh | sh
tailscale up --authkey="${tsAuthKey}" --ssh || echo "WARNING: Tailscale setup failed. Run 'sudo tailscale up' manually."

# Enable systemd linger for ubuntu user (required for user services to run at boot)
loginctl enable-linger ubuntu

# Start user's systemd instance (required for user services during cloud-init)
systemctl start user@1000.service

# Run Moltbot onboarding as ubuntu user (skip daemon install, do it separately)
echo "Running Moltbot onboarding..."
sudo -H -u ubuntu ANTHROPIC_API_KEY="${apiKey}" GATEWAY_PORT="${gatewayPort}" bash -c '
export HOME=/home/ubuntu
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"

moltbot onboard --non-interactive --accept-risk \
 --mode local \
 --auth-choice apiKey \
 --gateway-port $GATEWAY_PORT \
 --gateway-bind loopback \
 --skip-daemon \
 --skip-skills || echo "WARNING: Moltbot onboarding failed. Run moltbot onboard manually."
'

# Install daemon service with XDG_RUNTIME_DIR set
echo "Installing Moltbot daemon..."
sudo -H -u ubuntu XDG_RUNTIME_DIR=/run/user/1000 bash -c '
export HOME=/home/ubuntu
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"

moltbot daemon install || echo "WARNING: Daemon install failed. Run moltbot daemon install manually."
'

# Configure gateway for Tailscale Serve (trustedProxies + skip device pairing + set token)
echo "Configuring gateway for Tailscale Serve..."
sudo -H -u ubuntu GATEWAY_TOKEN="${gwToken}" python3 << 'PYTHON_SCRIPT'
import json
import os
config_path = "/home/ubuntu/.moltbot/moltbot.json"
with open(config_path) as f:
 config = json.load(f)
config["gateway"]["trustedProxies"] = ["127.0.0.1"]
config["gateway"]["controlUi"] = {
 "enabled": True,
 "allowInsecureAuth": True
}
config["gateway"]["auth"] = {
 "mode": "token",
 "token": os.environ["GATEWAY_TOKEN"]
}
with open(config_path, "w") as f:
 json.dump(config, f, indent=2)
print("Configured gateway with trustedProxies, controlUi, and token")
PYTHON_SCRIPT

# Enable Tailscale HTTPS proxy (requires HTTPS to be enabled in Tailscale admin console)
echo "Enabling Tailscale HTTPS proxy..."
tailscale serve --bg ${gatewayPort} || echo "WARNING: tailscale serve failed. Enable HTTPS in your Tailscale admin console first."

echo "Moltbot setup complete!"
`;
 });

const server = new hcloud.Server("moltbot-server", {
 serverType: serverType,
 location: location,
 image: "ubuntu-24.04",
 sshKeys: [hcloudSshKey.id],
 firewallIds: [firewall.id.apply(id => Number(id))],
 userData: userData,
 labels: {
 purpose: "moltbot",
 },
});

export const ipv4Address = server.ipv4Address;
export const privateKey = sshKey.privateKeyOpenssh;

// Construct the Tailscale MagicDNS hostname from the server name
// Hetzner servers use their name as the hostname
const tailscaleHostname = server.name;

export const tailscaleUrl = pulumi.interpolate`https://${tailscaleHostname}.${tailnetDnsName}/`;
export const tailscaleUrlWithToken = pulumi.interpolate`https://${tailscaleHostname}.${tailnetDnsName}/?token=${gatewayToken}`;
export const gatewayTokenOutput = gatewayToken;

You can find both programs in the Pulumi examples repo under moltbot/:

GitHub repository: pulumi/examples
github.com/pulumi/examples

Cost comparison

Before deploying, let’s compare the costs between AWS and Hetzner for running Moltbot 24/7:

                AWS (t3.medium)          Hetzner (cax21)
vCPUs           2                        4
Memory          4 GB                     8 GB
Storage         30 GB gp3 (+$2.40/mo)    80 GB NVMe (included)
Traffic         Pay per GB               20 TB included
Architecture    x86 (Intel/AMD)          ARM (Ampere)
Hourly price    $0.0416                  €0.0104 (~$0.011)
Monthly price   ~$33 (with storage)      €6.49 (~$7)
Annual cost     ~$396                    ~$84

Hetzner gives you double the vCPUs, double the RAM, at less than a quarter of the price. The trade-off? ARM architecture instead of x86. But Moltbot doesn’t care - it’s just Node.js and Docker.

Prices are for on-demand instances as of January 2026. AWS prices are for us-east-1; Hetzner prices exclude VAT. Both include standard networking and storage. Check AWS EC2 pricing and Hetzner Cloud pricing for current rates.
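
The monthly and annual figures above follow directly from the hourly rates. A quick sanity check (the EUR-to-USD rate here is an assumption; plug in the current rate):

```typescript
// Sanity-check the cost table using 730 hours/month of on-demand usage.
const HOURS_PER_MONTH = 730;

const awsCompute = 0.0416 * HOURS_PER_MONTH; // t3.medium, us-east-1
const awsStorage = 2.4;                      // 30 GB gp3
const awsMonthly = awsCompute + awsStorage;

const hetznerMonthlyEur = 6.49;              // cax21 flat monthly rate
const eurToUsd = 1.08;                       // assumed exchange rate
const hetznerMonthlyUsd = hetznerMonthlyEur * eurToUsd;

console.log(`AWS: ~$${awsMonthly.toFixed(2)}/mo, ~$${(awsMonthly * 12).toFixed(0)}/yr`);
console.log(`Hetzner: ~$${hetznerMonthlyUsd.toFixed(2)}/mo, ~$${(hetznerMonthlyUsd * 12).toFixed(0)}/yr`);
// AWS: ~$32.77/mo, ~$393/yr
// Hetzner: ~$7.01/mo, ~$84/yr
```

Hetzner's flat monthly rate also means no surprise line items: storage and 20 TB of traffic are already in the €6.49.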

Running the deployment

With your ESC environment configured in Pulumi.dev.yaml, deploy with:

pulumi up

After deployment completes, you’ll see outputs similar to:

Outputs:
 gatewayTokenOutput : "786c099cc8f8bf20dbebf40b8b51b75cf5cdab25..."
 privateKey : [secret]
 publicDns : "ec2-x-x-x-x.compute-1.amazonaws.com"
 publicIp : "x.x.x.x"
 tailscaleUrl : "https://ip-10-0-1-x.tailxxxxx.ts.net/"
 tailscaleUrlWithToken: "https://ip-10-0-1-x.tailxxxxx.ts.net/?token=786c099..."

The tailscaleUrlWithToken output provides the complete URL with authentication token. Copy and paste it into your browser to access the Moltbot web UI.

Output names vary slightly between providers: AWS uses publicIp and publicDns, while Hetzner uses ipv4Address. The Tailscale hostname is derived from the instance’s private IP (AWS) or server name (Hetzner).

Automated onboarding

The Pulumi program runs Moltbot’s non-interactive onboarding during instance provisioning. It uses your Anthropic API key from ESC, binds the gateway to loopback with Tailscale Serve as the HTTPS proxy, generates a secure gateway token (exported in Pulumi outputs), installs the daemon as a systemd user service, and configures trustedProxies and controlUi.allowInsecureAuth to skip device pairing when accessed via Tailscale.

The cloud-init script runs moltbot onboard --non-interactive with all necessary flags, then configures the gateway for secure Tailscale access. Your instance is ready as soon as provisioning finishes.
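
For reference, the gateway settings written during provisioning leave ~/.moltbot/moltbot.json looking roughly like this (the field names come from the configuration step in the cloud-init script; the token is the generated 48-character value exported by Pulumi):

```json
{
  "gateway": {
    "trustedProxies": ["127.0.0.1"],
    "controlUi": {
      "enabled": true,
      "allowInsecureAuth": true
    },
    "auth": {
      "mode": "token",
      "token": "<48-character token from Pulumi outputs>"
    }
  }
}
```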

Access the web UI

The easiest way to access the Moltbot web UI is to use the tailscaleUrlWithToken output from Pulumi:

# Get the full URL with token
pulumi stack output tailscaleUrlWithToken

Copy and paste this URL into your browser. The URL includes both the Tailscale MagicDNS hostname and the authentication token, so you can access the web UI directly.

Finding your Tailnet DNS name: Go to the Tailscale admin console and look under the DNS section for your tailnet name (e.g., tailxxxxx.ts.net). You can also find your machines and their MagicDNS hostnames in the Machines tab.
Token-based authentication provides an additional layer of security on top of Tailscale’s network-level authentication. Only devices on your Tailnet can reach the URL, and the token prevents unauthorized access if someone gains access to your Tailnet.

From the web UI, you can connect messaging channels (WhatsApp, Discord, Slack), configure skills and integrations, and manage settings.

Verify the deployment

After deployment completes, SSH into your instance to verify everything is running:

# Check your Tailscale admin console for the new machine
ssh ubuntu@<tailscale-ip>

# Check Moltbot gateway status
systemctl --user status moltbot-gateway

Test your assistant

The cloud-init script needs a few minutes to finish. It installs Docker, Node.js, Moltbot, and Tailscale, then runs onboarding and starts the daemon. If you hit the URL right after pulumi up completes, the gateway probably won’t be ready yet. Give it 2-3 minutes.

Open the gateway dashboard using the tailscaleUrlWithToken output and use the built-in chat to test your assistant:

Moltbot gateway dashboard showing a chat conversation

Your personal AI assistant is now running 24/7 on your own infrastructure, accessible securely through Tailscale.

Security considerations

When self-hosting an AI assistant, security matters. Moltbot’s rapid adoption meant thousands of instances spun up in days, and not everyone locked them down. The community noticed:

Tweet warning about exposed Moltbot gateways with zero auth

The tweet isn’t exaggerating. A quick Shodan search shows exposed gateways on port 18789 with shell access, browser automation, and API keys up for grabs:

Shodan search showing exposed Moltbot instances on port 18789

Don’t let your instance be one of them.

Concern               Without Tailscale                       With Tailscale
SSH access            Public (port 22 open)                   Public fallback + Tailscale SSH
Gateway access        Public (port 18789 open)                Private (Tailscale only)
Browser control       Public (port 18791 open)                Private (Tailscale only)
API keys in transit   Exposed if gateway accessed over HTTP   Protected by Tailscale encryption
Attack surface        3 open ports                            1 open port (SSH fallback)
SSH remains accessible as a fallback even with Tailscale enabled. This allows you to troubleshoot if Tailscale fails to connect. Once you’ve confirmed Tailscale is working, you can manually remove the SSH ingress rule from your security group for maximum security.
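
To confirm your own instance isn't one of the exposed ones, a plain TCP probe against the public IP is enough. This is a generic connectivity check, not part of Moltbot's tooling; a locked-down deployment should report 18789 and 18791 as closed:

```typescript
import * as net from "node:net";

// Resolves true if something accepts a TCP connection on host:port
// within the timeout, false otherwise.
function probePort(host: string, port: number, timeoutMs = 2000): Promise<boolean> {
  return new Promise(resolve => {
    const socket = net.connect({ host, port });
    socket.setTimeout(timeoutMs);
    socket.once("connect", () => { socket.destroy(); resolve(true); });
    socket.once("timeout", () => { socket.destroy(); resolve(false); });
    socket.once("error", () => resolve(false));
  });
}

(async () => {
  const host = process.argv[2] ?? "127.0.0.1"; // pass your instance's public IP
  for (const port of [22, 18789, 18791]) {
    console.log(port, (await probePort(host, port)) ? "OPEN" : "closed");
  }
})();
```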

My recommendations:

  • Always use Tailscale for production
  • Rotate your auth keys periodically
  • Use Pulumi ESC for secrets instead of hardcoding
  • Enable Tailscale SSH to avoid managing keys manually
  • Monitor your Tailscale admin console for unauthorized devices
  • Remove the SSH fallback after confirming Tailscale works if you want zero public ports

What’s next?

Now that Moltbot is running, you can install skills (voice generation, video creation, browser automation), set up scheduled tasks with cron, invite colleagues to your Tailnet for shared access, or connect additional channels like WhatsApp and Discord.

Conclusion

Deploying Moltbot with infrastructure as code means you can reproduce your setup anytime, version control it, and tear it down with a single command. Adding Tailscale keeps it private - no exposed ports, no hoping you configured your firewall correctly at 2am.

If you run into issues or have questions, drop by the Pulumi Community Slack or GitHub Discussions.

New to Pulumi? Get started here.


Is AI Killing Software? — With Bret Taylor


Bret Taylor is the CEO of Sierra and OpenAI's board chair. Taylor joins Big Technology Podcast to discuss how AI is reshaping software, from vibe coding to the rise of AI agents that will replace dashboards, forms, and the way we interact with technology. We also cover OpenAI's decision to introduce ads, whether AI progress is actually slowing down, and what Bret has learned from working with Sam Altman, Mark Zuckerberg, Elon Musk, and Sheryl Sandberg. Hit play for an essential conversation on the future of software with someone who's been at the center of every major tech shift for two decades.

---

---

Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice.

Want a discount for Big Technology on Substack + Discord? Here’s 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b

Learn more about your ad choices. Visit megaphone.fm/adchoices





Download audio: https://pdst.fm/e/tracking.swap.fm/track/t7yC0rGPUqahTF4et8YD/pscrb.fm/rss/p/traffic.megaphone.fm/AMPP7933875812.mp3?updated=1769608388

Team Happiness as the True Measure of Scrum Master Success in Construction | Felipe Engineer-Manriquez


Agile in Construction: Team Happiness as the True Measure of Scrum Master Success in Construction With Felipe Engineer-Manriquez

Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes.

 

"The teams that are having fun and are light-hearted, making jokes—these are high-performing teams almost 99% of the time. But the teams that are overly sarcastic or too quiet? They're burning out." - Felipe Engineer-Manriquez

 

Felipe offers a refreshingly human definition of success for Scrum Masters: team happiness. After years of traumatic experiences in construction—days when he pounded his steering wheel in frustration during his commute—Felipe developed what he calls being a "human thermometer." He can sense a team's emotional state within 5 minutes of being with them. His proxy for success is a simple Likert scale of 1-5: 5 is Nirvana (working at Google with massages), and 1 is wanting to jump out the window. Felipe emphasizes that most people in construction internalize stress and push it down, so you have to ask directly. When he asked an estimator this question, the man quietly admitted he was at a 2—ready to walk away. Without asking, Felipe would never have known. The key insight: schedule improvements happen as teams move closer to a 5. And the foundation of it all? Understanding. "People do not have an overt need to be loved," Felipe shares from his Scrum training. "They have an overt need to be understood." A successful Scrum Master meddles appropriately, runs toward problems, and focuses on understanding teammates before trying to implement change.

 

Self-reflection Question: If you asked each of your team members to rate their happiness from 1-5 today, what do you think they would say, and what would you learn that you don't currently know?

Featured Retrospective Format for the Week: Start/Stop/Keep

Felipe's favorite retrospective format is Start/Stop/Keep—but his approach to introducing it is what makes the difference. He connects it to something construction teams already know: the post-mortem. He explains the morbid origin of the term (surgeons standing around a dead patient discussing what went wrong) to emphasize the seriousness of learning. Then he reframes the retrospective as a recurring post-mortem—a "lessons learned" cycle. Start: What should we begin doing that will make things better? Stop: What should we no longer do that doesn't add value? Keep: What good things are we doing that we want to maintain? Felipe uses silent brainstorming so everyone has time to think, then makes responses visible on a whiteboard or digital display. The cadence scales with sprint length—45 minutes for a week, 2 hours for two weeks, half a day for a month. His current team committed to monthly retrospectives and pre-writes their Start/Stop/Keep items, making the facilitated session efficient and focused.

 

[The Scrum Master Toolbox Podcast Recommends]

🔥In the ruthless world of fintech, success isn't just about innovation—it's about coaching!🔥

Angela thought she was just there to coach a team. But now, she's caught in the middle of a corporate espionage drama that could make or break the future of digital banking. Can she help the team regain their mojo and outwit their rivals, or will the competition crush their ambitions? As alliances shift and the pressure builds, one thing becomes clear: this isn't just about the product—it's about the people.

 

🚨 Will Angela's coaching be enough? Find out in Shift: From Product to People—the gripping story of high-stakes innovation and corporate intrigue.

 

Buy Now on Amazon

 

[The Scrum Master Toolbox Podcast Recommends]

 

About Felipe Engineer-Manriquez

 

Felipe Engineer-Manriquez is a best-selling author, international speaker, and host of The EBFC Show. A force in Lean and Agile, he helps teams build faster with less effort. Felipe trains and coaches changemakers worldwide—and wrote Construction Scrum to make work easier, better, and faster for everyone.

 

You can link with Felipe Engineer-Manriquez on LinkedIn.

 

You can also find Felipe at thefelipe.bio.link, check out The EBFC Show podcast, and join the EBFC Scrum Community of Practice.





Download audio: https://traffic.libsyn.com/secure/scrummastertoolbox/20260129_Felipe_Engineer_Thu.mp3?dest-id=246429
Read the whole story
alvinashcraft
17 seconds ago
reply
Pennsylvania, USA
Share this story
Delete

The One Thing That Finally Made Us Start Streaming

From: Coding After Work
Duration: 2:42
Views: 1

Starting streaming is hard—not because of talent, but because most new streamers overthink gear, setup, and timing. I did the same thing, and it almost stopped me from starting at all.

In this video, I share the best advice I got before I started streaming: just press stream. Your first stream won’t be perfect. Your first podcast won’t be perfect. Your first YouTube video won’t be perfect—and that’s completely fine. The fastest way to improve is to start, learn, and keep going.

This advice applies to streaming, content creation, YouTube, podcasts, and even coding. Build it, ship it, refactor it, and make the next version better. Waiting for perfect only slows you down.

If you’re a new streamer, content creator, or developer thinking about starting—this video is for you.
👉 Subscribe for more videos about streaming, coding, Blazor, and creating things in public.


Random.Code() - Still Working on params in Rocks

From: Jason Bock
Duration: 0:00
Views: 0

In the last stream, I ran into a snag with optionals and ref-based params. In this stream, I probably won't resolve it completely, but let's try and make more progress.

#dotnet #csharp #mocking

https://github.com/JasonBock/Rocks/issues/268


API Design: Don't try to guess


I was reviewing some code, and I ran into the following snippet. Take a look at it:


public void AddAttachment(string fileName, Stream stream)
{
    ValidationMethods.AssertNotNullOrEmpty(fileName, nameof(fileName));
    if (stream == null)
        throw new ArgumentNullException(nameof(stream));

    string type = GetContentType(fileName);

    _attachments.Add(new PutAttachmentCommandData("__this__", fileName, stream, type, changeVector: string.Empty));
}

private static string GetContentType(string fileName)
{
    var extension = Path.GetExtension(fileName);
    if (string.IsNullOrEmpty(extension))
        return "image/jpeg"; // Default fallback

    return extension.ToLowerInvariant() switch
    {
        ".jpg" or ".jpeg" => "image/jpeg",
        ".png" => "image/png",
        ".webp" => "image/webp",
        ".gif" => "image/gif",
        ".pdf" => "application/pdf",
        ".txt" => "text/plain",
        _ => "application/octet-stream"
    };
}

I don’t like this code because the API is trying to guess the intent of the caller. We are making some reasonable inferences here, for sure, but we are also ensuring that any future progress will require us to change our code, instead of letting the caller do that.

In fact, the caller probably knows a lot more than we do about what is going on. They know if they are uploading an image, and probably in what format too. They know that they just uploaded a CSV file (and that we need to classify it as plain text, etc.).

This is one of those cases where the best option is not to try to be smart. I recommended that we write the function to let the caller deal with it.

It is important to note that this is meant to be a public API in a library that is shipped to external customers, so changing something in the library is not easy (change, release, deploy, update - that can take a while). We need to make sure that we aren’t blocking the caller from doing things they may want to.

This is a case of trying to help the user but ending up crippling what they can do with the API.
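For illustration, here is a minimal sketch of the caller-driven alternative being recommended. This is not RavenDB's actual API; the `contentType` parameter and the guard-clause wording are assumptions made for the example.

```csharp
// Hypothetical caller-driven version: the caller states the content type
// explicitly instead of the library guessing it from the file extension.
public void AddAttachment(string fileName, Stream stream, string contentType)
{
    if (string.IsNullOrEmpty(fileName))
        throw new ArgumentException("File name is required.", nameof(fileName));
    if (stream == null)
        throw new ArgumentNullException(nameof(stream));
    if (string.IsNullOrEmpty(contentType))
        throw new ArgumentException("Content type is required.", nameof(contentType));

    _attachments.Add(new PutAttachmentCommandData("__this__", fileName, stream, contentType, changeVector: string.Empty));
}

// The caller, who knows what they uploaded, makes the decision at the call site:
//   AddAttachment("report.csv", csvStream, "text/plain");
//   AddAttachment("photo.jpg", jpgStream, "image/jpeg");
```

The library can still offer extension-based inference as an optional convenience overload, but the explicit parameter keeps the decision with the caller, so new content types never require a library change, release, and redeploy.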
