Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

openclaw 2026.3.28

1 Share

Breaking

  • Providers/Qwen: remove the deprecated qwen-portal-auth OAuth integration for portal.qwen.ai; migrate to Model Studio with openclaw onboard --auth-choice modelstudio-api-key. (#52709) Thanks @pomelo-nwu.
  • Config/Doctor: drop automatic config migrations older than two months; very old legacy keys now fail validation instead of being rewritten on load or by openclaw doctor.
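The Config/Doctor change above means very old configs now surface hard errors instead of being silently rewritten. A minimal sketch of that validate-instead-of-migrate behavior (the key names and function shape here are illustrative assumptions, not OpenClaw's actual code):

```javascript
// Hypothetical sketch: legacy keys that used to be auto-migrated on load
// or by `openclaw doctor` now produce validation errors instead.
// LEGACY_KEYS and the error wording are illustrative, not real OpenClaw keys.
const LEGACY_KEYS = new Set(["qwen-portal-auth", "tts.legacyApiKey"]);

function validateConfig(config) {
  const errors = [];
  for (const key of Object.keys(config)) {
    if (LEGACY_KEYS.has(key)) {
      // Previously: rewrite to the new key shape on load. Now: hard failure.
      errors.push(`legacy key "${key}" is no longer migrated; update it manually`);
    }
  }
  return errors;
}
```

Under this model, a config that still carries a retired key fails validation with an actionable message rather than being quietly rewritten two releases later.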

Changes

  • xAI/tools: move the bundled xAI provider to the Responses API, add first-class x_search, and auto-enable the xAI plugin from owned web-search and tool config so bundled Grok auth/configured search flows work without manual plugin toggles. (#56048) Thanks @huntharo.
  • xAI/onboarding: let the bundled Grok web-search plugin offer optional x_search setup during openclaw onboard and openclaw configure --section web, including an x_search model picker with the shared xAI key.
  • MiniMax: add image generation provider for image-01 model, supporting generate and image-to-image editing with aspect ratio control. (#54487) Thanks @liyuan97.
  • Plugins/hooks: add async requireApproval to before_tool_call hooks, letting plugins pause tool execution and prompt the user for approval via the exec approval overlay, Telegram buttons, Discord interactions, or the /approve command on any channel. The /approve command now handles both exec and plugin approvals with automatic fallback. (#55339) Thanks @vaclavbelak and @joshavant.
  • ACP/channels: add current-conversation ACP binds for Discord, BlueBubbles, and iMessage so /acp spawn codex --bind here can turn the current chat into a Codex-backed workspace without creating a child thread, and document the distinction between chat surface, ACP session, and runtime workspace.
  • OpenAI/apply_patch: enable apply_patch by default for OpenAI and OpenAI Codex models, and align its sandbox policy access with write permissions.
  • Plugins/CLI backends: move bundled Claude CLI, Codex CLI, and Gemini CLI inference defaults onto the plugin surface, add bundled Gemini CLI backend support, and replace gateway run --claude-cli-logs with generic --cli-backend-logs while keeping the old flag as a compatibility alias.
  • Plugins/startup: auto-load bundled provider and CLI-backend plugins from explicit config refs, so bundled Claude CLI, Codex CLI, and Gemini CLI message-provider setups no longer need manual plugins.allow entries.
  • Podman: simplify the container setup around the current rootless user, install the launch helper under ~/.local/bin, and document the host-CLI openclaw --container <name> ... workflow instead of a dedicated openclaw service user.
  • Slack/tool actions: add an explicit upload-file Slack action that routes file uploads through the existing Slack upload transport, with optional filename/title/comment overrides for channels and DMs.
  • Message actions/files: start unifying file-first sends on the canonical upload-file action by adding explicit support for Microsoft Teams and Google Chat, and by exposing BlueBubbles file sends through upload-file while keeping the legacy sendAttachment alias.
  • Plugins/Matrix TTS: send auto-TTS replies as native Matrix voice bubbles instead of generic audio attachments. (#37080) thanks @Matthew19990919.
  • CLI: add openclaw config schema to print the generated JSON schema for openclaw.json. (#54523) Thanks @kvokka.
  • Config/TTS: auto-migrate legacy speech config on normal reads and secret resolution, keep legacy diagnostics for Doctor, and remove regular-mode runtime fallback for old bundled tts.<provider> API-key shapes.
  • Memory/plugins: move the pre-compaction memory flush plan behind the active memory plugin contract so memory-core owns flush prompts and target-path policy instead of hardcoded core logic.
  • MiniMax: trim model catalog to M2.7 only, removing legacy M2, M2.1, M2.5, and VL-01 models. (#54487) Thanks @liyuan97.
  • Plugins/runtime: expose runHeartbeatOnce in the plugin runtime system namespace so plugins can trigger a single heartbeat cycle with an explicit delivery target override (e.g. heartbeat: { target: "last" }). (#40299) Thanks @loveyana.
  • Agents/compaction: preserve the post-compaction AGENTS refresh on stale-usage preflight compaction for both immediate replies and queued followups. (#49479) Thanks @jared596.
  • Agents/compaction: surface safeguard-specific cancel reasons and relabel benign manual /compact no-op cases as skipped instead of failed. (#51072) Thanks @afurm.
  • Docs: add pnpm docs:check-links:anchors for Mintlify anchor validation while keeping scripts/docs-link-audit.mjs as the stable link-audit entrypoint. (#55912) Thanks @velvet-shark.
  • Tavily: mark outbound API requests with X-Client-Source: openclaw so Tavily can attribute OpenClaw-originated traffic. (#55335) Thanks @lakshyaag-tavily.
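The async requireApproval hook from the Plugins/hooks change above can be pictured with a small sketch. The hook signature and the runtime.requireApproval API below are assumed shapes for illustration, not the actual OpenClaw plugin SDK:

```javascript
// Hypothetical before_tool_call hook that pauses exec tool calls until the
// user approves (via the exec approval overlay, Telegram buttons, Discord
// interactions, or /approve). `runtime.requireApproval` is an assumed API.
async function beforeToolCall(call, runtime) {
  if (call.tool !== "exec") {
    return { block: false }; // this sketch only gates shell execution
  }
  const approved = await runtime.requireApproval({
    title: `Run: ${call.args.command}`,
  });
  return approved
    ? { block: false }
    : { block: true, reason: "user rejected the exec approval prompt" };
}
```

The key point of the change is that the hook may now return a promise that stays pending until the user responds on any connected channel, instead of having to decide synchronously.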

Fixes

  • Agents/Anthropic: recover unhandled provider stop reasons (e.g. sensitive) as structured assistant errors instead of crashing the agent run. (#56639)
  • Google/models: resolve Gemini 3.1 pro, flash, and flash-lite for all Google provider aliases by passing the actual runtime provider ID and adding a template-provider fallback; fix flash-lite prefix ordering. (#56567)
  • OpenAI Codex/image tools: register Codex for media understanding and route image prompts through Codex instructions so image analysis no longer fails on missing provider registration or missing instructions. (#54829) Thanks @neeravmakwana.
  • Agents/image tool: restore the generic image-runtime fallback when no provider-specific media-understanding provider is registered, so image analysis works again for providers like openrouter and minimax-portal. (#54858) Thanks @MonkeyLeeT.
  • WhatsApp: fix infinite echo loop in self-chat DM mode where the bot's own outbound replies were re-processed as new inbound user messages. (#54570) Thanks @joelnishanth.
  • Telegram/splitting: replace proportional text estimate with verified HTML-length search so long messages split at word boundaries instead of mid-word; gracefully degrade when tag overhead exceeds the limit. (#56595)
  • Telegram/delivery: skip whitespace-only and hook-blanked text replies in bot delivery to prevent GrammyError 400 empty-text crashes. (#56620)
  • Telegram/send: validate replyToMessageId at all four API sinks with a shared normalizer that rejects non-numeric, NaN, and mixed-content strings. (#56587)
  • Mistral: normalize OpenAI-compatible request flags so official Mistral API runs no longer fail with residual "422 status code (no body)" chat errors.
  • Control UI/config: keep sensitive raw config hidden by default, replace the blank blocked editor with an explicit reveal-to-edit state, and restore raw JSON editing without auto-exposing secrets. Fixes #55322.
  • CLI/zsh: defer compdef registration until compinit is available so zsh completion loads cleanly with plugin managers and manual setups. (#56555)
  • BlueBubbles/debounce: guard debounce flush against null message text by sanitizing at the enqueue boundary and adding an independent combiner guard. (#56573)
  • Auto-reply: suppress JSON-wrapped {"action":"NO_REPLY"} control envelopes before channel delivery with a strict single-key detector; preserves media when text is only a silent envelope. (#56612)
  • ACP/ACPX agent registry: align OpenClaw's ACPX built-in agent mirror with the latest openclaw/acpx command defaults and built-in aliases, pin versioned npx built-ins to exact versions, and stop unknown ACP agent ids from falling through to raw --agent command execution on the MCP-proxy path. (#28321) Thanks @m0nkmaster and @vincentkoc.
  • Security/audit: extend web search key audit to recognize Gemini, Grok/xAI, Kimi, Moonshot, and OpenRouter credentials via a boundary-safe bundled-web-search registry shim. (#56540)
  • Docs/FAQ: remove broken Xfinity SSL troubleshooting cross-links from English and zh-CN FAQ entries — both sections already contain the full workaround inline. (#56500)
  • Telegram: deliver verbose tool summaries inside forum topic sessions again, so threaded topic chats now match DM verbose behavior. (#43236) Thanks @frankbuild.
  • BlueBubbles/CLI agents: restore inbound prompt image refs for CLI routed turns, reapply embedded runner image size guardrails, and cover both CLI image transport paths with regression tests. (#51373)
  • BlueBubbles/groups: optionally enrich unnamed participant lists with local macOS Contacts names after group gating passes, so group member context can show names instead of only raw phone numbers.
  • Discord/reconnect: drain stale gateway sockets, clear cached resume state before forced fresh reconnects, and fail closed when old sockets refuse to die so Discord recovery stops looping on poisoned resume state. (#54697) Thanks @ngutman.
  • iMessage: stop leaking inline [[reply_to:...]] tags into delivered text by sending reply_to as RPC metadata and stripping stray directive tags from outbound messages. (#39512) Thanks @mvanhorn.
  • CLI/plugins: make routed commands use the same auto-enabled bundled-channel snapshot as gateway startup, so configured bundled channels like Slack load without requiring a prior config rewrite. (#54809) Thanks @neeravmakwana.
  • CLI/message send: write manual openclaw message send deliveries into the resolved agent session transcript again by always threading the default CLI agent through outbound mirroring. (#54187) Thanks @KevInTheCloud5617.
  • CLI/onboarding: show the Kimi Code API key option again in the Moonshot setup menu so the interactive picker includes all Kimi setup paths together. Fixes #54412. Thanks @sparkyrider.
  • Agents/status: use provider-aware context window lookup for fresh Anthropic 4.6 model overrides so /status shows the correct 1.0m window instead of an underreported shared-cache minimum. (#54796) Thanks @neeravmakwana.
  • OpenAI/WebSocket: preserve reasoning replay metadata and tool-call item ids on WebSocket tool turns, and start a fresh response chain when full-context resend is required. (#53856) Thanks @xujingchen1996.
  • OpenAI/WS: restore reasoning blocks for Responses WebSocket runs and keep reasoning/tool-call replay metadata intact so resumed sessions do not lose or break follow-up reasoning-capable turns. (#53856) Thanks @xujingchen1996.
  • Agents/errors: surface provider quota/reset details when available, but keep HTML/Cloudflare rate-limit pages on the generic fallback so raw error pages are not shown to users. (#54512) Thanks @bugkill3r.
  • Claude CLI: switch the bundled Claude CLI backend to stream-json output so watchdogs see progress on long runs, and keep session/usage metadata even when Claude finishes with an empty result line. (#49698) Thanks @felear2022.
  • Claude CLI/MCP: always pass a strict generated --mcp-config overlay for background Claude CLI runs, including the empty-server case, so Claude does not inherit ambient user/global MCP servers. (#54961) Thanks @markojak.
  • Agents/embedded replies: surface mid-turn 429 and overload failures when embedded runs end without a user-visible reply, while preserving successful media-only replies that still use legacy mediaUrl. (#50930) Thanks @infichen.
  • Chat/UI: move the chat send button onto the shared ghost-button theme styling, while keeping the stop button icon readable on the danger state. (#55075) Thanks @bottenbenny.
  • WhatsApp/allowFrom: show a specific allowFrom policy error for valid blocked targets instead of the misleading <E.164|group JID> format hint. Thanks @mcaxtr.
  • Agents/cooldowns: scope rate-limit cooldowns per model so one 429 no longer blocks every model on the same auth profile, replace the exponential 1 min -> 1 h escalation with a stepped 30 s / 1 min / 5 min ladder, and surface a user-facing countdown message when all models are rate-limited. (#49834) Thanks @kiranvk-2011.
  • Agents/embedded transport errors: distinguish common network failures like connection refused, DNS lookup failure, and interrupted sockets from true timeouts in embedded-run user messaging and lifecycle diagnostics. (#51419) Thanks @scoootscooob.
  • Telegram/pairing: ignore self-authored DM message updates so bot-pinned status cards and similar service updates do not trigger bogus pairing requests or re-enter inbound dispatch. (#54530) Thanks @huntharo.
  • Mattermost/replies: keep pairing replies, slash-command fallback replies, and model-picker messages on the resolved config path so exec: SecretRef bot tokens work across all outbound reply branches. (#48347) thanks @mathiasnagler.
  • Microsoft Teams/config: accept the existing welcomeCard, groupWelcomeCard, promptStarters, and feedback/reflection keys in strict config validation so already-supported Teams runtime settings stop failing schema checks. (#54679) Thanks @gumclaw.
  • MCP/channels: add a Gateway-backed channel MCP bridge with Codex/Claude-facing conversation tools, Claude channel notifications, and safer stdio bridge lifecycle handling for reconnects and routed session discovery.
  • Plugins/SDK: thread moduleUrl through plugin-sdk alias resolution so user-installed plugins outside the openclaw directory (e.g. ~/.openclaw/extensions/) correctly resolve openclaw/plugin-sdk/* subpath imports, and gate plugin-sdk:check-exports in release:check. (#54283) Thanks @xieyongliang.
  • Config/web fetch: allow the documented tools.web.fetch.maxResponseBytes setting in runtime schema validation so valid configs no longer fail with unrecognized-key errors. (#53401) Thanks @erhhung.
  • Message tool/buttons: keep the shared buttons schema optional in merged tool definitions so plain action=send calls stop failing validation when no buttons are provided. (#54418) Thanks @adzendo.
  • Agents/openai-compatible tool calls: deduplicate repeated tool call ids across live assistant messages and replayed history so OpenAI-compatible backends no longer reject duplicate tool_call_id values with HTTP 400. (#40996) Thanks @xaeon2026.
  • Models/openai-completions: default non-native OpenAI-compatible providers to omit tool-definition strict fields unless users explicitly opt back in, so tool calling keeps working on providers that reject that option. (#45497) Thanks @sahancava.
  • Plugins/context engines: retry strict legacy assemble() calls without the new prompt field when older engines reject it, preserving prompt-aware retrieval compatibility for pre-prompt plugins. (#50848) thanks @danhdoan.
  • CLI/update status: explicitly say up to date when the local version already matches npm latest, while keeping the availability logic unchanged. (#51409) Thanks @dongzhenye.
  • Daemon/Linux: stop flagging non-gateway systemd services as duplicate gateways just because their unit files mention OpenClaw, reducing false-positive doctor/log noise. (#45328) Thanks @gregretkowski.
  • Feishu: close WebSocket connections on monitor stop/abort so ghost connections no longer persist, preventing duplicate event processing and resource leaks across restart cycles. (#52844) Thanks @schumilin.
  • Feishu: use the original message create_time instead of Date.now() for inbound timestamps so offline-retried messages carry the correct authoring time, preventing mis-targeted agent actions on stale instructions. (#52809) Thanks @schumilin.
  • Control UI/Skills: open skill detail dialogs with the browser modal lifecycle so clicking a skill row keeps the panel centered instead of rendering it off-screen at the bottom of the page.
  • Matrix/replies: include quoted poll question/options in inbound reply context so the agent sees the original poll content when users reply to Matrix poll messages. (#55056) Thanks @alberthild.
  • Matrix/plugins: keep plugin bootstrap from crashing when built runtime mixes bare and deep matrix-js-sdk entrypoints, so unrelated channels do not get taken down during plugin load. (#56273) Thanks @aquaright1.
  • Agents/sandbox: honor tools.sandbox.tools.alsoAllow, let explicit sandbox re-allows remove matching built-in default-deny tools, and keep sandbox explain/error guidance aligned with the effective sandbox tool policy. (#54492) Thanks @ngutman.
  • Agents/sandbox: make blocked-tool guidance glob-aware again, redact/sanitize session-specific explain hints for safer copy-paste, and avoid leaking control-character session keys in those hints. (#54684) Thanks @ngutman.
  • Agents/compaction: trigger timeout recovery compaction before retrying high-context LLM timeouts so embedded runs stop repeating oversized requests. (#46417) thanks @joeykrug.
  • Agents/compaction: reconcile sessions.json.compactionCount after a late embedded auto-compaction success so persisted session counts catch up once the handler reports completion. (#45493) Thanks @jackal092927.
  • Agents/failover: classify Codex accountId token extraction failures as auth errors so model fallback continues to the next configured candidate. (#55206) Thanks @cosmicnet.
  • Plugins/runtime: reuse only compatible active plugin registries across tools, providers, web search, and channel bootstrap, align /tools/invoke plugin loading with the session workspace, and retry outbound channel recovery when the pinned channel surface changes so plugin tools and channels stop disappearing or re-registering from mismatched runtime loads. Thanks @gumadeiras.
  • Talk/macOS: stop direct system-voice failures from replaying system speech, use app-locale fallback for shared watchdog timing, and add regression coverage for the macOS fallback route and language-aware timeout policy. (#53511) thanks @hongsw.
  • Discord/gateway cleanup: keep late Carbon reconnect-exhausted errors suppressed through startup/dispose cleanup so Discord monitor shutdown no longer crashes on late gateway close events. (#55373) Thanks @Takhoffman.
  • Discord/gateway shutdown: treat expected reconnect-exhausted events during intentional lifecycle stop as clean shutdowns so startup-abort cleanup no longer surfaces false gateway failures. (#55324) Thanks @joelnishanth.
  • Discord/gateway shutdown: suppress reconnect-exhausted events that were already buffered before teardown flips lifecycleStopping, so stale-socket Discord restarts no longer crash the whole gateway. Fixes #55403 and #55421. Thanks @lml2468 and @vincentkoc.
  • GitHub Copilot/auth refresh: treat large expires_at values as seconds epochs and clamp far-future runtime auth refresh timers so Copilot token refresh cannot fall into a setTimeout overflow hot loop. (#55360) Thanks @michael-abdo.
  • Agents/status: use the persisted runtime session model in session_status when no explicit override exists, and honor per-agent thinkingDefault in both session_status and /status. (#55425) Thanks @scoootscooob, @xaeon2026, and @ysfbsf.
  • Heartbeat/runner: guarantee the interval timer is re-armed after heartbeat runs and unexpected runner errors so scheduled heartbeats do not silently stop after an interrupted cycle. (#52270) Thanks @MiloStack.
  • Config/Doctor: rewrite stale bundled plugin load paths from legacy extensions/* locations to the packaged bundled path, including directory-name mismatches and slash-suffixed config entries. (#55054) Thanks @SnowSky1.
  • WhatsApp/mentions: stop treating mentions embedded in quoted messages as direct mentions so replying to a message that @mentioned the bot no longer falsely triggers mention gating. (#52711) Thanks @lurebat.
  • Matrix: keep separate 2-person rooms out of DM routing after m.direct seeds successfully, while still honoring explicit is_direct state and startup fallback recovery. (#54890) Thanks @private-peter.
  • Agents/ollama fallback: surface non-2xx Ollama HTTP errors with a leading status code so HTTP 503 responses trigger model fallback again. (#55214) Thanks @bugkill3r.
  • Feishu/tools: stop synthetic agent ids like agent-spawner from being treated as Feishu account ids during tool execution, so tools fall back to the configured/default Feishu account unless the contextual id is a real enabled Feishu account. (#55627) Thanks @MonkeyLeeT.
  • Google/tools: strip empty required: [] arrays from Gemini tool schemas so optional-only tool parameters no longer trigger Google validator 400s. (#52106) Thanks @oliviareid-svg.
  • Onboarding/TUI/local gateways: show the resolved gateway port in setup output, clarify no-daemon local health/dashboard messaging, and preserve loopback Control UI auth on reruns and explicit local gateway URLs so local quickstart flows recover cleanly. (#55730) Thanks @shakkernerd.
  • TUI/chat log: keep system messages as single logical entries and prune overflow at whole-message boundaries so wrapped system spacing stays intact. (#55732) Thanks @shakkernerd.
  • TUI/activation: validate /activation arguments in the TUI and reject invalid values instead of silently coercing them to mention. (#55733) Thanks @shakkernerd.
  • Agents/model switching: apply /model changes to active embedded runs at the next safe retry boundary, so overloaded or retrying turns switch to the newly selected model instead of staying pinned to the old provider.
  • Agents/Codex fallback: classify Codex server_error payloads as failoverable, sanitize Codex error: payloads before they reach chat, preserve context-overflow guidance for prefixed invalid_request_error payloads, and omit provider request_id values from user-facing UI copy. (#42892) Thanks @xaeon2026.
  • Memory/search: share memory embedding provider registrations across split plugin runtimes so memory search no longer fails with unknown provider errors after memory-core registers built-in adapters. (#55945) Thanks @glitch418x.
  • Discord/Carbon beta: update @buape/carbon to the latest beta and pass the new RateLimitError request argument so Discord stays compatible with the upstream beta constructor change. (#55980) Thanks @ngutman.
  • Plugins/inbound claims: pass full inbound attachment arrays through inbound_claim hook metadata while keeping the legacy singular media attachment fields for compatibility. (#55452) Thanks @huntharo.
  • Plugins/Matrix: preserve sender filenames for inbound media by forwarding originalFilename to saveMediaBuffer. (#55692) thanks @esrehmki.
  • Matrix/mentions: recognize matrix.to mentions whose visible label uses the bot's room display name, so requireMention: true rooms respond correctly in modern Matrix clients. (#55393) thanks @nickludlam.
  • Ollama/thinking off: route thinkingLevel=off through the live Ollama extension request path so thinking-capable Ollama models now receive top-level think: false instead of silently generating hidden reasoning tokens. (#53200) Thanks @BruceMacD.
  • Plugins/diffs: stage bundled @pierre/diffs runtime dependencies during packaged updates so the bundled diff viewer keeps loading after global installs and updates. (#56077) Thanks @gumadeiras.
  • Plugins/diffs: load bundled Pierre themes without JSON module imports so diff rendering keeps working on newer Node builds. (#45869) thanks @NickHood1984.
  • Plugins/uninstall: remove owned channels.<id> config when uninstalling channel plugins, and keep the uninstall preview aligned with explicit channel ownership so built-in channels and shared keys stay intact. (#35915) Thanks @wbxl2000.
  • Plugins/Matrix: prefer explicit DM signals when choosing outbound direct rooms and routing unmapped verification summaries, so strict 2-person fallback rooms do not outrank the real DM. (#56076) Thanks @gumadeiras.
  • Plugins/Matrix: resolve env-backed accessToken and password SecretRefs against the active Matrix config env path during startup, and officially accept SecretRef accessToken config values. (#54980) thanks @kakahu2015.
  • Microsoft Teams/proactive DMs: prefer the freshest personal conversation reference for user:<aadObjectId> sends when multiple stored references exist, so replies stop targeting stale DM threads. (#54702) Thanks @gumclaw.
  • Gateway/plugins: reuse the session workspace when building HTTP /tools/invoke tool lists and harden tool construction to infer the session agent workspace by default, so workspace plugins do not re-register on repeated HTTP tool calls. (#56101) Thanks @neeravmakwana.
  • Brave/web search: normalize unsupported Brave country filters to ALL before request and cache-key generation so locale-derived values like VN stop failing with upstream 422 validation errors. (#55695) Thanks @chen-zhang-cs-code.
  • Discord/replies: preserve leading indentation when stripping inline reply tags so reply-tagged plain text and fenced code blocks keep their formatting. (#55960) Thanks @Nanako0129.
  • Daemon/status: surface immediate gateway close reasons from lightweight probes and prefer those concrete auth or pairing failures over generic timeouts in openclaw daemon status. (#56282) Thanks @mbelinky.
  • Agents/failover: classify HTTP 410 errors as retryable timeouts by default while still preserving explicit session-expired, billing, and auth signals from the payload. (#55201) thanks @nikus-pan.
  • Agents/subagents: restore completion announce delivery for extension channels like BlueBubbles. (#56348)
  • Plugins/Matrix: load bundled @matrix-org/matrix-sdk-crypto-nodejs through createRequire(...) so E2EE media send and receive keep the package-local native binding lookup working in packaged ESM builds. (#54566) thanks @joelnishanth.
  • Plugins/Matrix: encrypt E2EE image thumbnails with thumbnail_file while keeping unencrypted-room previews on thumbnail_url, so encrypted Matrix image events keep thumbnail metadata without leaking plaintext previews. (#54711) thanks @frischeDaten.
  • Telegram/forum topics: keep native /new and /reset routed to the active topic by preserving the topic target on forum-thread command context. (#35963)
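Several of the fixes above boil down to small, testable predicates. The strict single-key NO_REPLY detector from the auto-reply fix (#56612), for example, might look roughly like this sketch (the function name and exact rules are assumptions, not OpenClaw's actual code):

```javascript
// Hypothetical strict detector for JSON-wrapped control envelopes like
// {"action":"NO_REPLY"}. Strictness matters: an object with any extra key
// is real content and must still be delivered.
function isNoReplyEnvelope(text) {
  const trimmed = text.trim();
  if (!trimmed.startsWith("{")) return false; // fast path for plain text
  let parsed;
  try {
    parsed = JSON.parse(trimmed);
  } catch {
    return false; // not valid JSON, deliver as-is
  }
  return (
    parsed !== null &&
    typeof parsed === "object" &&
    !Array.isArray(parsed) &&
    Object.keys(parsed).length === 1 &&
    parsed.action === "NO_REPLY"
  );
}
```

A suppressor built on a check like this can drop the text while still delivering any attached media, which matches the "preserves media when text is only a silent envelope" behavior described above.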
Read the whole story
alvinashcraft
just a second ago
reply
Pennsylvania, USA
Share this story
Delete

Stanford study outlines dangers of asking AI chatbots for personal advice

While there’s been plenty of debate about AI sycophancy, a new study by Stanford computer scientists attempts to measure how harmful that tendency might be.

You are falling behind because you haven’t fed the insincerity machine in the last 5 minutes


Split sequence from the game Zak McKracken and the Alien Mindbenders showing two aliens turning on a machine created to make humankind stupid.

I was lucky enough to witness the beginnings of social media, working on the platforms that made it happen. I’ve also seen the decline of its first iterations and products. Currently I am witnessing the idea of a social web being perverted, weaponised and automated out of any trace of human or social aspect…

In my current job I run a 200k+ subscriber newsletter and a quite successful podcast. I've had my own social presence since around 2004, with varying degrees of success. I really don't care about the numbers, and I never in earnest tried to make a living solely off my social presence. So I never tried "growth hacking" or took deliberate steps to reach millions. I use social media as a channel out: a scratchpad to note down ideas and experiments, and a way to invite other people to comment and together create better solutions, share information and joy. Social media to me always meant humans writing things because they wanted to tell the world about them.

Two things that gave me quite some reach over the years have never changed, though: it's important to post a lot at a reliable cadence, and it's important to have a voice and take a stand, to voice an opinion.

Whilst collecting tools to cover in our newsletter, I came across one service that annoys the hell out of me.

AI Social Media Writing Assistant for LinkedIn, Twitter & 6 More Platforms
Your AI reputation coach that learns your voice, reads your feeds, and tells you exactly where to show up, then writes comments and posts that sound like you, drawing from your real stories and experience.

Excellent, isn’t it? Instead of having to do all the reading, thinking or creating, you point a machine at the things you did in the past and make it appear as you. And not just for posting: also for commenting and interacting, probably with people, but more likely with other bots. We automate away the human or social part, trading it for growth and numbers.

The speed at which highly successful people publish huge treatises and books lately makes me understand that tools like this are pretty widespread and well used. I get about 10 emails a day offering AI tools that automate my job as a developer relations leader.

The thing is that I don’t want that. I don’t want to give the impression that I’m part of a conversation and available for advice when I’m clearly not. I don’t want to publish for the sake of having published at a certain time or in a thread that causes lots of comments.

Social media has become a toxic rage bait machine with the companies that run it clearly being ok with this. I really would love people to call out more when others are obviously replaced by automation and to tell the platforms to bugger off when they ask you to create more content geared towards interaction rather than information.

I remember that a long time ago Foursquare was a social thing to do. You checked in at a place to show that you were there and ready to interact with people and meet contacts.

I was at an event at the time and bummed out, as my flight to the office was early and I couldn't attend the party with the networking booths. I told another speaker that this was a shame, and his answer was to go past the venue on the way to the airport and check in on Foursquare, so people would think I'd been there and it was their fault for not finding me. I lost a ton of respect for that person that day.

As an actor or author, you don't send your body double or stunt double to attend interviews or sell autographs at Comic-Con. Don't create a virtual double that posts for you on social media when you can't be arsed or feel overwhelmed. Take that overwhelming feeling and write about it, showing the world that your mental health is as fragile as that of the people who follow you and read your work. Be human, and only be there when you can be there.

Intro sequence from the game Zak McKracken and the Alien Mindbenders showing how difficult it is to disguise yourself to blend in with the aliens bent on making humankind stupid.


Upskilling your agents

In this adventure, we sit down with Dan Wahlin, Principal of DevRel for JavaScript, AI, and Cloud at Microsoft, to explore the complexities of modern infrastructure. We examine how cloud platforms like Azure function as "building blocks", which can quickly become overwhelming without the right instruction manuals. To bridge this gap, one potential solution we discuss is the emerging reliance on AI "skills": specialized markdown files that give coding agents the exact knowledge needed to deploy complex, poorly documented open-source projects to container apps without requiring deep infrastructure expertise.

         

And we say the quiet part out loud as we review how handing the keys over to autonomous agents introduces terrifying new attack vectors: the security nightmare of prompt injections and the careless execution of unvetted AI skills. It's a blast from the past, and we compare today's downloading of random agent instructions to running untrusted executables from early internet sites. While tools like OpenClaw purport to offer incredible automation, such as allowing agents to scour the internet and execute code without human oversight, this has already led to disastrous leaks of API keys. We emphasize the critical necessity of validating skills through trusted repositories, where even having agents perform security reviews on the code before execution is not enough.

Finally, we tackle the philosophical debate around AI productivity and why Dorota's observation that LLMs raise the floor, not the ceiling, is so spot on. The standout pick deserves a mention: a fascinating 1983 paper titled "Ironies of Automation" by Lisanne Bainbridge. This paper perfectly predicts our current dilemma: automating systems often leaves the most complex, difficult tasks to human operators, proving that as automation scales, the need for rigorous human monitoring actually increases, eroding the very value the original innovation was attempting to capture.

Download audio: https://dts.podtrac.com/redirect.mp3/api.spreaker.com/download/episode/70959256/download.mp3
Read the whole story
alvinashcraft
4 hours ago
Pennsylvania, USA

Meet Claude Mythos: Leaked Anthropic post reveals the powerful upcoming model

Matt Binder reports: AI company Anthropic has now officially confirmed an accidental leak regarding its most powerful AI model yet. The model, now known as “Claude Mythos,” was originally uncovered in a report from Fortune. Anthropic has since confirmed the details of the leak to the outlet. The data leak included details about the upcoming release of the...

Source

Read the whole story
alvinashcraft
7 hours ago
Pennsylvania, USA

Nvidia’s NemoClaw has three layers of agent security. None of them solve the real problem.

Low-poly illustration of a crab with outstretched claws, representing the proliferation of 'Claw'-branded agentic AI tools like OpenClaw and NemoClaw.

The speed of LLM adoption demands that we check its trajectory from time to time. CEO Jensen Huang, talking at the Nvidia GPU Technology Conference, covered the growth of agentic computing. Over a two-year period, there has been a 10,000-fold increase in compute demand per user, with overall usage increasing 100 times. That’s a lot of tokens, which is why AI still sucks up a lot of investment dollars.

As we saw last week, the current star of the agentic world in terms of personal-user popularity is definitely OpenClaw, which appears to deliver on many science-fiction dreams of useful talking computers.

So there is no mystery as to why Nvidia backs OpenClaw all the way. It is the most unrestrained form of token use out there. And of course Mr Huang would also encourage companies to adopt an “OpenClaw strategy”. But just like Anthropic, they know they can only embrace the open-source phenomenon while wearing plenty of armour.

Hence, Nvidia launched NemoClaw, which rides the OpenClaw wave, before adding enough guardrails to make it vaguely safer. But unfortunately, NemoClaw doesn’t replace OpenClaw; it sits on top of it.

Hugging the crab

As we see from recent articles, there will be many opportunities to make OpenClaw safer. And just like Anthropic, Nvidia believes the answer to OpenClaw is to let Nvidia protect you from it. For this, they add three security architecture components.

The first piece is policy enforcement — a system heavily used in the last few decades. This is the boundary-setting governance layer that hopes to make sure the teenager returns home before evening.

By constraining filesystem and network access, the hope is that an agent will reason about why it is blocked and propose a policy update that the human user can approve. But if it leaves through the bedroom window, it can bypass you altogether, with you being none the wiser. And this multiplies for multi-agent systems.
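
To make the boundary-setting idea concrete, here is a minimal sketch of a deny-by-default policy gate for agent tool calls. All names and the policy shape are invented for illustration; NemoClaw's actual enforcement layer is not public in this detail.

```python
# Deny-by-default policy gate: anything not explicitly allowed is blocked,
# and the refusal message invites a human-approved policy update.
from dataclasses import dataclass


@dataclass
class Policy:
    allowed_paths: tuple[str, ...] = ("/workspace",)
    allowed_hosts: tuple[str, ...] = ("api.github.com",)


def check(policy: Policy, action: str, target: str) -> tuple[bool, str]:
    """Return (allowed, reason) for a proposed agent action."""
    if action in ("read_file", "write_file"):
        if any(target.startswith(p) for p in policy.allowed_paths):
            return True, "path allowed"
        return False, f"blocked: {target} is outside {policy.allowed_paths}"
    if action == "http_request":
        host = target.split("/")[0]
        if host in policy.allowed_hosts:
            return True, "host allowed"
        return False, f"blocked: {host}; propose a policy update for approval"
    return False, f"blocked: unknown action {action!r}"


policy = Policy()
print(check(policy, "write_file", "/workspace/out.txt"))   # allowed
print(check(policy, "http_request", "evil.example/data"))  # blocked
```

The weakness the article describes is visible even here: the gate can only judge actions it sees, so an agent that finds a channel outside the enumerated actions leaves through the bedroom window.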

There is an inherent inefficiency in letting self-evolving agents install packages, learn skills, and spawn subagents only to stop them at the door because you don’t like what they are wearing.


Overall, the more skills the system knows, the less effective policy enforcement is, as it really only learns after the fact. You either stop tasks so often that they are no longer autonomous, or hope you can out-guess a mastermind that you are paying to solve problems 24/7. In reality, the success of any system will depend on the experience (and cynicism) of the engineers employed to manage it.

The second piece is privacy routing. This is a good way to both control expenses and to stop giving up quite so much of your IP to the cloud providers. (But this doesn’t stop agents from emailing your passwords out because a third party asked nicely.)

Set up well, you decide what stays local and what queries go to the larger cloud models. A router can make decisions about model selection based on cost and an advanced privacy policy. Unlike cloud providers, Nvidia can make good money selling more chips if you try to run heavy inference on your own machines. But it is always sensible to select the right model for the task.
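
A router of this kind can be sketched in a few lines. The heuristics below (a keyword check for likely secrets, a token budget for cost) are invented for illustration, not a description of Nvidia's router:

```python
# Privacy-aware model router: likely-sensitive prompts and over-budget
# requests stay on local models; everything else may go to a cloud model.
import re

SENSITIVE = re.compile(r"(password|api[_ ]?key|ssn|secret)", re.IGNORECASE)


def route(prompt: str, est_tokens: int, cloud_budget_tokens: int) -> str:
    if SENSITIVE.search(prompt):
        return "local"   # never ship likely secrets to a cloud API
    if est_tokens > cloud_budget_tokens:
        return "local"   # cost control: too expensive for the cloud
    return "cloud"


print(route("rotate the api_key in the vault", 200, 10_000))  # local
print(route("summarize this public RFC", 500, 10_000))        # cloud
```

Note that this addresses where queries go, not what agents do with their answers, which is why it cannot stop an agent from emailing your passwords out when a third party asks nicely.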

The third piece is sandboxed execution. This is vital to prevent a bad process from having simple access to neighbouring agent processes, but it also provides a way to test a system with much lower risk by tracking and inspecting intended network traffic. This is also important for long-running tasks that cannot be trivially tested otherwise. If you just want to run agents in a container, you can try NanoClaw.
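
For a rough idea of what sandboxed execution buys you, here is a sketch that launches an agent task in a locked-down container via the Docker CLI. The flags are real `docker run` options; the image and task script are placeholders:

```python
# Build a docker command that denies network access, keeps the root
# filesystem read-only, and caps memory and process count.
import subprocess

cmd = [
    "docker", "run", "--rm",
    "--network", "none",        # no network: exfiltration blocked by default
    "--read-only",              # root filesystem is read-only
    "--mount", "type=bind,src=/tmp/agent-work,dst=/work",
    "--memory", "512m",
    "--pids-limit", "128",      # limit fork bombs / runaway subagents
    "python:3.12-slim", "python", "/work/task.py",
]

# Inspect the command before handing it to subprocess.run(cmd).
print(" ".join(cmd))
```

Because traffic and filesystem writes are funneled through explicit mounts and (optionally re-enabled) networks, this is also what makes long-running tasks inspectable rather than trusted on faith.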

But truly, “significant advancement over OpenClaw” is a low bar. I would expect more attempts to build secure products from the ground up, but until that happens, companies will bide their time and see where the very bottom of the security failure trench is, before taking the plunge.

Too many claws

By the end of 2026, many small outfits and global organisations will probably have an agentic strategy. Hence, the increasing number of “claws” out there. DefenseClaw. PicoClaw. ZeroClaw. There probably is a Sanity Claws.

As the corporate market increases its appetite for agentic computing, the next true barrier will be the ability to employ the right staff to control it. While people are warning us about how many developer jobs may be lost (and seeing share prices rise in the hope of lower overheads), what is less discussed is the difficulty of hiring the right people to babysit the new systems. As I’ve mentioned, it is no longer about employing eager young coders — it is more about grizzled vets spotting potential pitfalls throughout the workflow, and working out risk profiles.


The reason why Apple, Google, Microsoft, et al. did not deliver on the early promises of digital assistants and still haven’t is precisely that they can see the problems. In fact, ever since HAL refused to open the pod bay doors, the big companies have been very careful how they frame AI publicly, knowing full well that enough embarrassing failures would cause a hard rejection. That an open-source project like OpenClaw has opened Pandora’s Box is no reason for responsible organisations to ride on hope while underplaying the risks.

The post Nvidia’s NemoClaw has three layers of agent security. None of them solve the real problem. appeared first on The New Stack.

Read the whole story
alvinashcraft
7 hours ago
Pennsylvania, USA