Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

openclaw 2026.4.1

1 Share

Changes

  • Tasks/chat: add /tasks as a chat-native background task board for the current session, with recent task details and agent-local fallback counts when no linked tasks are visible. Related #54226. Thanks @vincentkoc.
  • Web search/SearXNG: add the bundled SearXNG provider plugin for web_search with configurable host support. (#57317) Thanks @cgdusek.
  • Amazon Bedrock/Guardrails: add Bedrock Guardrails support to the bundled provider. (#58588) Thanks @MikeORed.
  • macOS/Voice Wake: add the Voice Wake option to trigger Talk Mode. (#58490) Thanks @SmoothExec.
  • Feishu/comments: add a dedicated Drive comment-event flow with comment-thread context resolution, in-thread replies, and feishu_drive comment actions for document collaboration workflows. (#58497) Thanks @wittam-01.
  • Gateway/webchat: make chat.history text truncation configurable with gateway.webchat.chatHistoryMaxChars and per-request maxChars, while preserving silent-reply filtering and existing default payload limits. (#58900)
  • Agents/default params: add agents.defaults.params for global default provider parameters. (#58548) Thanks @lpender.
  • Agents/failover: cap prompt-side and assistant-side same-provider auth-profile retries for rate-limit failures before cross-provider model fallback, add the auth.cooldowns.rateLimitedProfileRotations knob, and document the new fallback behavior. (#58707) Thanks @Forgely3D.
  • Cron/tools allowlist: add openclaw cron --tools for per-job tool allowlists. (#58504) Thanks @andyk-ms.
  • Channels/session routing: move provider-specific session conversation grammar into plugin-owned session-key surfaces, preserving Telegram topic routing and Feishu scoped inheritance across bootstrap, model override, restart, and tool-policy paths.
  • WhatsApp/reactions: add reactionLevel guidance for agent reactions. Thanks @mcaxtr.
  • Telegram/errors: add configurable errorPolicy and errorCooldownMs controls so Telegram can suppress repeated delivery errors per account, chat, and topic without muting distinct failures. (#51914) Thanks @chinar-amrutkar.
  • ZAI/models: add glm-5.1 and glm-5v-turbo to the bundled Z.AI provider catalog. (#58793) Thanks @tomsun28.
  • Agents/compaction: resolve agents.defaults.compaction.model consistently for manual /compact and other context-engine compaction paths, so engine-owned compaction uses the configured override model across runtime entrypoints. (#56710) Thanks @oliviareid-svg.
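Several of the entries above introduce new configuration knobs. As a rough, hypothetical sketch (the surrounding file layout and every example value below are assumptions, not taken from this release), the dotted key names suggest nested settings along these lines:

```json5
{
  gateway: {
    webchat: {
      // cap for chat.history text truncation (value is a placeholder)
      chatHistoryMaxChars: 20000,
    },
  },
  agents: {
    defaults: {
      // global default provider parameters (contents are placeholders)
      params: { temperature: 0.7 },
    },
  },
  auth: {
    cooldowns: {
      // same-provider auth-profile rotations before cross-provider fallback
      rateLimitedProfileRotations: 2,
    },
  },
}
```

Per-request overrides (such as the per-request maxChars mentioned for gateway/webchat) would still apply on top of whatever defaults are set here.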

Fixes

  • Chat/error replies: stop leaking raw provider/runtime failures into external chat channels, return a friendly retry message instead, and add a specific /new hint for Bedrock toolResult/toolUse session mismatches. (#58831) Thanks @ImLukeF.
  • Gateway/reload: ignore startup config writes by persisted hash in the config reloader so generated auth tokens and seeded Control UI origins do not trigger a restart loop, while real gateway.auth.* edits still require restart. (#58678) Thanks @yelog.
  • Tasks/gateway: keep the task registry maintenance sweep from stalling the gateway event loop under synchronous SQLite pressure, so upgraded gateways stop hanging about a minute after startup. (#58670) Thanks @openperf.
  • Tasks/status: hide stale completed background tasks from /status and session_status, prefer live task context, and show recent failures only when no active work remains. (#58661) Thanks @vincentkoc.
  • Tasks/gateway: re-check the current task record before maintenance marks runs lost or prunes them, so a task heartbeat or cleanup update that lands during a sweep no longer gets overwritten by stale snapshot state.
  • Exec/approvals: honor exec-approvals.json security defaults when inline or configured tool policy is unset, and keep Slack and Discord native approval handling aligned with inferred approvers and real channel enablement so remote exec stops falling into false approval timeouts and disabled states. Thanks @scoootscooob and @vincentkoc.
  • Exec/approvals: make allow-always persist as durable user-approved trust instead of behaving like allow-once, reuse exact-command trust on shell-wrapper paths that cannot safely persist an executable allowlist entry, keep static allowlist entries from silently bypassing ask:"always", and require explicit approval when Windows cannot build an allowlist execution plan instead of hard-dead-ending remote exec. Thanks @scoootscooob and @vincentkoc.
  • Exec/cron: resolve isolated cron no-route approval dead-ends from the effective host fallback policy when trusted automation is allowed, and make openclaw doctor warn when tools.exec is broader than ~/.openclaw/exec-approvals.json so stricter host-policy conflicts are explicit. Thanks @scoootscooob and @vincentkoc.
  • Sessions/model switching: keep /model changes queued behind busy runs instead of interrupting the active turn, and retarget queued followups so later work picks up the new model as soon as the current turn finishes.
  • Gateway/HTTP: skip failing HTTP request stages so one broken facade no longer forces every HTTP endpoint to return 500. (#58746) Thanks @yelog.
  • Gateway/nodes: stop pinning live node commands to the approved node-pair record. Node pairing remains a trust/token flow, while per-node system.run policy stays in that node's exec approvals config. Fixes #58824.
  • WebChat/exec approvals: use native approval UI guidance in agent system prompts instead of telling agents to paste manual /approve commands in webchat sessions. Thanks @vincentkoc.
  • Web UI/OpenResponses: preserve rewritten stream snapshots in webchat and keep OpenResponses final streamed text aligned when models rewind earlier output. (#58641) Thanks @neeravmakwana.
  • Discord/inbound media: pass Discord attachment and sticker downloads through the shared idle-timeout and worker-abort path so slow or stuck inbound media fetches stop hanging message processing. (#58593) Thanks @aquaright1.
  • Telegram/retries: keep non-idempotent sends on the strict safe-send path, retry wrapped pre-connect failures, and preserve 429 / retry_after backoff for safe delivery retries. (#51895) Thanks @chinar-amrutkar.
  • Telegram/exec approvals: route topic-aware exec approval followups through Telegram-owned threading and approval-target parsing, so forum-topic approvals stay in the originating topic instead of falling back to the root chat. (#58783)
  • Telegram/local Bot API: preserve media MIME types for absolute-path downloads so local audio files still trigger transcription and other MIME-based handling. (#54603) Thanks @jzakirov
  • Channels/WhatsApp: pass inbound message timestamp to model context so the AI can see when WhatsApp messages were sent. (#58590) Thanks @Maninae
  • Channels/QQ Bot: keep /bot-logs export gated behind a truly explicit QQBot allowlist, rejecting wildcard and mixed wildcard entries while preserving the real framework command path. Thanks @vincentkoc.
  • Channels/plugins: keep bundled channel plugins loadable from legacy channels.<id> config even under restrictive plugin allowlists, and make openclaw doctor warn only on real plugin blockers instead of misleading setup guidance. (#58873) Thanks @obviyus
  • Plugins/bundled runtimes: restore externalized bundled plugin runtime dependency staging across packed installs, Docker builds, and local runtime staging so bundled plugins keep their declared runtime deps after the 2026.3.31 externalization change. (#58782)
  • LINE/runtime: resolve the packaged runtime contract from the built dist/plugins/runtime layout so LINE channels start correctly again after global npm installs on 2026.3.31. (#58799) Thanks @vincentkoc.
  • MiniMax/plugins: auto-enable the bundled MiniMax plugin for API-key auth/config so MiniMax image generation and other plugin-owned capabilities load without manual plugin allowlisting. (#57127) Thanks @tars90percent.
  • Ollama/model picker: show only Ollama models after provider selection in the CLI picker. (#55290) Thanks @Luckymingxuan.
  • CDP/profiles: prefer cdpPort over stale WebSocket URLs so browser automation reconnects cleanly. (#58499) Thanks @Mlightsnow.
  • Media/paths: resolve relative MEDIA paths against the agent workspace so local attachment references keep working. (#58624) Thanks @aquaright1.
  • Memory/session indexing: keep full reindexes from skipping session transcripts when sync is triggered by session-start or watch, so restart-driven reindexes preserve session memory. (#39732) Thanks @upupc.
  • Memory/QMD: prefer --mask over --glob when creating QMD collections so default memory collections keep their intended patterns and stop colliding on restart. (#58643) Thanks @GitZhangChi.
  • Subagents/tasks: keep subagent completion and cleanup from crashing when task-registry writes fail, so a corrupt or missing task row no longer takes down the gateway during lifecycle finalization. Thanks @vincentkoc.
  • Sandbox/browser: compare browser runtime inspection against agents.defaults.sandbox.browser.image so openclaw sandbox list --browser stops reporting healthy browser containers as image mismatches. (#58759) Thanks @sandpile.
  • Plugins/install: forward --dangerously-force-unsafe-install through archive and npm-spec plugin installs so the documented override reaches the security scanner on those install paths. (#58879) Thanks @ryanlee-gemini.
  • Auto-reply/commands: strip inbound metadata before slash command detection so wrapped /model, /new, and /status commands are recognized. (#58725) Thanks @Mlightsnow.
  • Agents/Anthropic: preserve thinking blocks and signatures across replay, cache-control patching, and context pruning so compacted Anthropic sessions continue working instead of failing on later turns. (#58916) Thanks @obviyus.
  • Agents/failover: unify structured and raw provider error classification so provider-specific 400/422 payloads no longer get forced into generic format failures before retry, billing, or compaction logic can inspect them. (#58856) Thanks @aaron-he-zhu.
  • Auth profiles/store: coerce misplaced SecretRef objects out of plaintext key and token fields during store load so agents without ACP runtime stop crashing on .trim() after upgrade. (#58923) Thanks @openperf.
  • ACPX/runtime: repair queue owner unavailable session recovery by replacing dead named sessions and resuming the backend session when ACPX exposes a stable session id, so the first ACP prompt no longer inherits a dead handle. (#58669) Thanks @neeravmakwana.
  • ACPX/runtime: retry dead-session queue-owner repair without --resume-session when the reported ACPX session id is stale, so recovery still creates a fresh named session instead of failing session init. Thanks @obviyus.
  • Auth/OpenAI Codex: persist plugin-refreshed OAuth credentials to auth-profiles.json before returning them, so rotated Codex refresh tokens survive restart and stop falling into refresh_token_reused loops. (#53082)
  • Discord/gateway: hand reconnect ownership back to Carbon, keep runtime status aligned with close/reconnect state, and force-stop sockets that open without reaching READY so Discord monitors recover promptly instead of waiting on stale health timeouts. (#59019) Thanks @obviyus.
Read the whole story
alvinashcraft
just a second ago
reply
Pennsylvania, USA
Share this story
Delete

Unveiling the Claude Code Leak: A Glimpse into the Future of AI Assistants

In a surprising turn of events, Anthropic’s Claude Code had its entire source code leaked to the public, showcasing not just what the tool is capable of today, but offering a sneak peek into its ambitious future. This incident underscores the rapid pace of AI development and the new horizons it could bring to business automation.

Earlier today, researcher Chiao Fan Shell uncovered something unusual on the npm registry—a source map file that acted like a decoder ring, exposing 512,000 lines of Claude Code. This file inadvertently revealed a storage bucket containing 1,900 files, fully accessible to the internet. The fallout was swift and widespread, with the code finding its way to GitHub, gathering over 1,100 stars and numerous forks.

Based on content from AI News Today | Julian Goldie Podcast

While the leak itself may feel like a misstep, the contents reveal some fascinating advancements Anthropic has in the pipeline. One key feature is Chyros, a persistent assistant mode that observes and acts autonomously within your working environment, freeing up time and mental bandwidth for business owners. Currently gated behind internal feature flags, it holds the promise of transforming how agency owners and business operators manage their workflows.

Another feature, Ultra Plan, is designed for deep, autonomous planning sessions, allowing for strategic planning without the need for constant prompting. Imagine running a complex content or product strategy session and receiving a structured output, all autonomously managed by Claude.

The leaked code also hints at a future where Claude can interact via Voice Mode, further lowering the barrier for those intimidated by traditional command-line interfaces, as well as a sophisticated Multi-Agent Coordinator to manage numerous parallel tasks seamlessly.

And let’s not forget the whimsical addition of Buddy, a Tamagotchi-style virtual pet that adds a touch of charm, customizable with various avatars and endowed with distinctive personalities.

This incident follows closely on the heels of another Anthropic mishap involving their CMS, which exposed over 3,000 internal files, including early details about an unreleased model known as Claude Mythos. Despite these recent security blunders, it’s crucial to understand that the model itself remains unaffected, with no customer data at risk. These leaks emphasize the frenzied pace of AI development, and while missteps occur, they also highlight the pioneering features that lie ahead.

To truly harness the power of Claude, businesses must begin to view it not merely as a smart assistant but rather as an integrated operating system capable of transforming operational efficiency. The businesses that adapt fastest will be those that embrace its potential to streamline and innovate.

For those eager to stay ahead and utilize Claude’s evolving capabilities, exploring platforms like the AI Profit Boardroom can offer valuable insights and community support, keeping you in sync with every new feature and update as they roll out.

As we stand on the cusp of these advancements, the question remains: will you be ready to integrate and leverage these tools when Chyros and other features become a reality, or will you find yourself still learning the basics?


Many Engineering Leaders Are Getting AI Adoption Wrong

According to CTO and AI consultant Chris Parsons, the real challenge isn’t the tools themselves; it’s having the right mindset to use them effectively.

Just introducing new tools isn’t enough to make a real impact. What truly matters is how teams work: how they build, collaborate, and keep learning along the way, says Parsons.

He explained why many engineering leaders struggle with AI adoption, and how teams can move beyond just using AI tools to creating workflows where AI truly becomes a collaborator.

AI tools can’t be treated like any other software

Generative AI has sparked huge expectations across engineering teams. Many organizations assume that introducing tools like code assistants or LLM-powered platforms will instantly boost productivity.

But Parsons argues that the real hurdle isn’t the technology itself; it’s how well the organization understands and adapts to using it.

For CTOs and engineering leaders, this often leads to a common mistake: assuming developers can adopt AI tools as quickly and easily as they would a new IDE or software library. In reality, the shift goes much deeper:

It’s a fundamentally different way of working. You can’t simply give engineers an AI tool and expect them to start using it effectively right away. It requires time, experimentation, and a real shift in how teams approach development.

Unlike traditional software, AI is inherently non-deterministic – running the same prompt can yield different results. Teams may see promising outcomes in internal tests and assume it will behave consistently in production, only to find that real users often produce very different results.
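One way to make that non-determinism visible before shipping is to replay the same prompt several times and measure how stable the output is. This is a minimal sketch only; the `run_model` callable is a hypothetical stand-in for whatever model client a team actually uses:

```python
from collections import Counter


def output_stability(run_model, prompt, trials=10):
    """Run the same prompt `trials` times and report the modal output
    and how often it appeared. A score of 1.0 means the model was fully
    deterministic for this prompt; lower scores flag prompts whose
    behavior may differ between internal tests and production."""
    outputs = [run_model(prompt) for _ in range(trials)]
    modal_output, count = Counter(outputs).most_common(1)[0]
    return modal_output, count / trials
```

Running this over a sample of real user prompts, rather than a single happy-path prompt, gives a rough sense of where consistent test results are masking variable production behavior.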

People (not frameworks) drive organizational success

Parsons’ perspective on AI adoption is also shaped by his experience scaling engineering teams. During his time at Gower Street, he helped grow a small team into an organization of more than 50 people.

Early in that journey, he focused heavily on building the most efficient team structures and processes. Over time, however, he realized that organizational success depended much more on people than on frameworks:

If your engineering manager and product manager aren’t speaking to each other, introducing a weekly meeting won’t fix the problem.

The key to AI success? Track everything

Parsons starts with a surprisingly simple recommendation for AI adoption: log everything. Every interaction, every model response, and every step in the AI pipeline should be recorded. These logs form the backbone for understanding how the system really performs in the real world.

From there, teams can go through the logs manually to see which responses are helpful, which are accurate, and which might cause problems.

At the beginning, the responses are often not that good. Sometimes they’re okay, sometimes quite bad, and occasionally surprisingly bad.

This hands-on review helps teams refine prompts, tweak workflows, and increase the share of successful responses. Parsons suggests tracking AI performance the same way teams track engineering efficiency, using metrics such as the rate of positive interactions and the number of errors.
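That log-everything-then-review loop is easy to bootstrap with plain append-only JSONL. A minimal sketch follows; the file name and record fields are illustrative choices, not a prescribed schema:

```python
import json
import time
import uuid
from pathlib import Path

LOG_PATH = Path("ai_interactions.jsonl")  # one JSON record per line, append-only


def log_interaction(prompt, response, model, meta=None):
    """Record a single model interaction so it can be reviewed later."""
    record = {
        "id": str(uuid.uuid4()),
        "ts": time.time(),
        "model": model,
        "prompt": prompt,
        "response": response,
        "meta": meta or {},
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
    return record


def review(n=10):
    """Load the last n records for the manual review pass."""
    lines = LOG_PATH.read_text(encoding="utf-8").splitlines()
    return [json.loads(line) for line in lines[-n:]]
```

JSONL keeps writes cheap and crash-safe (each line is independent), and the same file can later feed dashboards or automated scoring once the manual review has established what "good" looks like.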

You should explore meta‑prompting

One technique Parsons believes more leaders should explore is meta‑prompting – using AI to improve the prompts themselves.

Rather than trying to write the perfect prompt from the start, Parsons recommends letting the AI lead the conversation, asking one clarifying question at a time. This lets the model gather context gradually and deliver much better results.

Over time, teams can keep improving prompts by asking the AI what extra information it would have needed earlier, refining them step by step.
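The clarify-first loop Parsons describes can be sketched in a few lines. Here `ask_model` and `answer_question` are hypothetical callables standing in for the model client and the human in the loop; the READY sentinel and prompt wording are illustrative assumptions:

```python
def clarify_then_answer(task, ask_model, answer_question, max_rounds=5):
    """Let the model lead: it asks one clarifying question per round,
    then produces a final answer once it has enough context."""
    transcript = [f"Task: {task}"]
    for _ in range(max_rounds):
        reply = ask_model(
            "\n".join(transcript)
            + "\nIf you need more context, ask ONE clarifying question."
            + " If you have enough, reply exactly READY."
        )
        if reply.strip() == "READY":
            break  # the model has gathered enough context
        transcript.append(f"Q: {reply}")
        transcript.append(f"A: {answer_question(reply)}")
    return ask_model("\n".join(transcript) + "\nNow produce the final answer.")
```

The accumulated transcript is also raw material for the refinement step Parsons mentions: feed it back and ask the model what extra information it should have requested up front, then fold that into the next version of the prompt.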

Parsons sees this iterative approach as part of a bigger shift: AI is evolving from a simple assistant into a collaborator, increasingly guiding problem-solving and asking questions like a coach.

We’ll start giving AI tasks and letting it run for some time without us being involved.

That shift will mean rethinking how teams collaborate with machines. Parsons points out that even the communication tools teams rely on may need to evolve to accommodate AI as an active participant in discussions and workflows.

The post Many Engineering Leaders Are Getting AI Adoption Wrong appeared first on ShiftMag.


The Model You Love Is Probably Just the One You Use

The following article originally appeared on Medium and is being republished here with the author’s permission.

Ask 10 developers which LLM they’d recommend and you’ll get 10 different answers—and almost none of them are based on objective comparison. What you’ll get instead is a reflection of the models they happen to have access to, the ones their employer approved, and the ones that influencers they follow have been quietly paid to promote.

We’re all living inside recursively nested walled gardens, and most of us don’t realize it.

This blog's sponsor has an amazing model

The access problem

In corporate environments, the model selection often happens by accident. Someone on the team tries Claude Code one weekend, gets excited, tells the group on Slack, and suddenly the whole organization is using it. Nobody evaluated alternatives. Nobody ran a bakeoff. The decision was made by whoever had a company card and a free Saturday.

That’s not a criticism—it’s just how these things go. But it means that when that same person tells you their favorite model, they’re really telling you which model they’ve had the most reps with. There’s a genuine learning function at play: You get faster, your prompts get better, and the model starts to feel almost intuitive. It’s not that the model is objectively superior. It’s that you’ve gotten good at using it.

This matters more than people admit, because a lot of this space runs on feelings rather than evidence. People feel good about Opus right now. It feels powerful; it feels smart; it feels like you’re using the best tool available. And maybe you are. But ask someone who’s paying for their own tokens whether they feel the same way, and you tend to get a more calibrated answer. Skin in the game has a way of sharpening opinions.

The influence problem

There’s also a lot of money moving through this space in ways that don’t always get disclosed. Model providers are spending real budget to make sure the right people have the right experiences—early access, credits, invitations to the right events. Anthropic does it. OpenAI does it. This isn’t a scandal; it’s just marketing, but it muddies the signal considerably. When someone you follow is effusive about a model, it’s worth asking whether they arrived at that opinion through sustained use or through a curated demo environment.

Meanwhile, some developers—especially those building in the open—will use whatever doesn’t cost an arm and a leg. Their enthusiasm for a model might be more about its pricing tier than its capability ceiling. That’s also a valid signal, but it’s not the same signal.

The alignment problem (the other one)

Then there are the geopolitical considerations. Some developers are deliberately avoiding Qwen and GLM due to concerns about the countries they originate from. Others are using them because they’re compelling, capable models that happen to be dramatically cheaper. Both camps think the other is being naive. This is a real conversation that doesn’t have a clean answer, but it’s happening mostly under the surface.

What I’ve actually been doing

I’ve been forcing myself to test outside my comfort zone. I’ve spent the last week using Codex seriously—not casually—and my experience so far is that it’s nearly indistinguishable from Claude Sonnet 4.6 for most coding tasks, and it’s running at roughly half the cost when you factor in how efficiently it uses tokens. That’s not a small difference. I want to live with it longer before I have a firm opinion, but “a week” is the minimum threshold I’d set for any model evaluation. Anything less and you’re just rating your first impression.

I’ve also started using Qwen and GLM-5 seriously. Early results are interesting. I’ve had some compelling successes and a few jarring errors. I’ll reserve judgment.

What I’ve noticed with my own Anthropic usage is something worth naming: I default to Haiku for well-scoped, mechanical tasks. Sonnet handles almost everything else with room to spare. Opus only comes out when I need genuine breadth—architecture questions, strategic framing, anything with a genuinely wide scope. But I’ve watched people in corporate environments leave the dial on Opus permanently because they’re not paying for tokens themselves. And here’s the thing—that’s actually not always to their advantage. High-powered models overthink simple tasks. They’ll add abstractions you didn’t ask for, restructure things that didn’t need restructuring. When I have a clearly templated class to write, Haiku gets it right at a tenth of the cost, and it doesn’t second-guess the design.

The thing we should be talking about

Everyone last month was exercised about what Sam Altman said about energy consumption. Fine. But I think the more pressing question is about marketing budgets and how they’re distorting the collective understanding of these tools. The benchmarks are starting to feel managed. The influencer coverage is clearly shaped. The access programs create a positive bias among people with the largest audiences.

None of this means the models are bad. Some of them are genuinely remarkable. But when you ask someone which model to use, you’re getting an answer that’s filtered through their employer’s procurement decisions, the influencers they follow, what they can afford, and how long they’ve been using that particular tool. The answer you get tells you a lot about their situation. It tells you almost nothing about the model.

Take it all with appropriate skepticism—including this post.




Advanced Installer 23.6

Advanced Installer 23.6 was released on April 1st, 2026

Introducing Updater Cloud Add-On | Build, Host & Publish Updates Directly From Your Installer Project

Publishing updates shouldn’t slow you down. [...]