Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Fourth of July 2026 in Philly: 7 Top Ways to Celebrate America’s 250th


No one does Fourth of July like Philadelphia.

The City of Brotherly Love — the literal birthplace of the nation — pulls out all the stops each year to celebrate Independence Day. For the nation’s 250th birthday, Philly is going even bigger.

From festivals and fireworks to FIFA World Cup 26 and the FanFest, Philly visitors and residents have endless options for Fourth of July festivities.

If you’re interested in watching big names in music on one of the biggest stages in Philly, Wawa Welcome America’s concert and massive fireworks show is the place to be.

Museums and galleries across the city — including the Museum of the American Revolution, the National Constitution Center and The Franklin Institute — have spent years crafting exhibitions that are positively perfect for America’s milestone year.

Or you can head out to the countryside for major historic celebrations at Valley Forge and a number of other sites.

And that’s just the beginning. (We don’t call it a party 250 years in the making for nothing.) Read on for seven awesome ways to spend your 2026 Fourth of July in Philadelphia.

Read the whole story
alvinashcraft
14 seconds ago
Pennsylvania, USA

Microsoft Build 2026 is coming!


...And we shall be there! Not in Seattle this year but, for a change, in San Francisco. To be precise, Microsoft Build will be held at Fort Mason Center in San Francisco, CA, for two full days, June 2–3. As usual, sessions will also be broadcast online.

The main emphasis of Microsoft Build 2026 is going to be AI: not only how to use AI workflows and agents to write code and applications, but also how to provide AI capabilities to end-users to help them use those apps. Naturally, sessions will also cover how developers "supervise" output from AI agents through testing and by checking outputs for security, applicability, and so on. For more details on the sessions at Build, please follow this link.

As in previous years, DevExpress will have a booth in the Partner Hub, and we will be there to chat with attendees about what's happening with our next major releases coming in late June, as well as how we're supporting the topics highlighted in the Build sessions. We'll talk about how we're supporting AI agents when writing apps with our controls, and how we're providing AI capabilities to end-users of the apps that use those controls.

We look forward to seeing you if you're going to Microsoft Build 2026. Please stop by our booth and say hello!


1.0.37


2026-04-27

  • Location-based permission persistence is now enabled by default, so approvals carry over across sessions for the same directory
  • Add copilot completion <bash|zsh|fish> subcommand to generate static shell completion scripts for subcommands, flags, and known choice values
  • Press s in the session picker to cycle sort order: relevance, last used, created, or name
  • ACP model config options now include description and metadata for clients using the configOptions API
  • Model and effort change notification no longer appears when re-selecting the same model or effort level
  • Clipboard write no longer leaks X11 handles on Linux
  • Pending message indicator displays correctly alongside prompt frames
  • Detached HEAD detection no longer always returns false following the switch to git branch --show-current
  • Skill picker list stays fully visible when skills have errors or warnings
  • /ask responses now render markdown, including tables and formatted links
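
The new completion subcommand follows the common pattern of emitting a static script that you redirect into your shell's completion location. A minimal sketch, assuming the CLI binary is named `copilot` and using each shell's conventional completion paths (the paths are not taken from the changelog):

```shell
# Generate a static completion script per shell and place it where the
# shell autoloads completions; restart the shell (or re-source) afterwards.
copilot completion bash > ~/.local/share/bash-completion/completions/copilot
copilot completion zsh  > ~/.zfunc/_copilot   # ~/.zfunc must be on $fpath
copilot completion fish > ~/.config/fish/completions/copilot.fish
```

Because the script is static, it should be regenerated after upgrades that add new subcommands or flags.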

OpenClaw 2026.4.26


2026.4.26

Changes

  • Channels/QQBot: add full group chat support (history tracking, @-mention gating, activation modes, per-group config, FIFO message queue with deliver debounce), C2C stream_messages streaming with a StreamingController lifecycle manager, unified sendMedia with chunked upload for large files, and refactor the engine into pipeline stages, focused outbound submodules, builtin slash-command modules, and explicit DI ports via createEngineAdapters(). (#70624) Thanks @cxyhhhhh.
  • Channels/Yuanbao: register the Tencent Yuanbao external channel plugin (openclaw-plugin-yuanbao) in the official channel catalog, contract suites, and community plugin docs, with a new docs/channels/yuanbao.md quick-start guide for WebSocket bot DMs and group chats. (#72756) Thanks @loongfay.
  • Control UI/Talk: add a generic browser realtime transport contract, Google Live browser Talk sessions with constrained ephemeral tokens, and a Gateway relay for backend-only realtime voice plugins. Thanks @VACInc.
  • CLI/models: route provider-filtered model listing through an explicit source plan so user config, installed manifest rows, Provider Index previews, and scoped runtime fallbacks keep a stable authority order without adding another catalog cache. Thanks @shakkernerd.
  • Providers: add Cerebras as a bundled plugin with onboarding, static model catalog, docs, and manifest-owned endpoint metadata.
  • Memory/OpenAI-compatible: add optional memorySearch.inputType, queryInputType, and documentInputType config for asymmetric embedding endpoints, including direct query embeddings and provider batch indexing. Carries forward #63313 and #60727. Thanks @HOYALIM and @prospect1314521.
  • Ollama/memory: add model-specific retrieval query prefixes for nomic-embed-text, qwen3-embedding, and mxbai-embed-large memory-search queries while leaving document batches unchanged. Carries forward #45013. Thanks @laolin5564.
  • Plugins/providers: move pre-runtime model-id normalization, endpoint host metadata, OpenAI-compatible request-family hints, model-catalog aliases/suppressions, OpenAI stale Spark suppression, and reusable startup metadata snapshots into plugin manifests so core no longer carries bundled-provider routing tables or repeated manifest rebuilds. Thanks @shakkernerd.
  • Plugins/config: deprecate direct plugin config load/write helpers in favor of passed runtime snapshots plus transactional mutation helpers with explicit restart follow-up policy, scanner guardrails, runtime warnings, and revision-based cache invalidation.
  • Plugins/install: allow OPENCLAW_PLUGIN_STAGE_DIR to contain layered runtime-dependency roots, resolving read-only preinstalled deps before installing missing deps into the final writable root. Fixes #72396. Thanks @liorb-mountapps.
  • Control UI: add a raw config pending-changes diff panel that parses JSON5, redacts sensitive values until reveal, and avoids fake raw-edit callbacks when opening the panel. Refs #39831; supersedes #48621 and #46654. Thanks @JiajunBernoulli and @BunsDev.
  • Control UI: polish the quick settings dashboard grid so common cards align across desktop, tablet, and mobile layouts without wasting horizontal space. Thanks @BunsDev.
  • Matrix/E2EE: add openclaw matrix encryption setup to enable Matrix encryption, bootstrap recovery, and print verification status from one setup flow. Thanks @gumadeiras.
  • Agents/compaction: add an opt-in agents.defaults.compaction.maxActiveTranscriptBytes preflight trigger that runs normal local compaction when the active JSONL grows too large, requiring transcript rotation so successful compaction moves future turns onto a smaller successor file instead of raw byte-splitting history. Thanks @vincentkoc.
  • CLI/migration: add openclaw migrate with plan, dry-run, JSON, pre-migration backup, onboarding detection, archive-only reports, a Claude Code/Desktop importer, and a Hermes importer for configuration, memory/plugin hints, model providers, MCP servers, skills, commands, and supported credentials. Thanks @vincentkoc and @NousResearch.

Fixes

  • Agents/LSP: terminate bundled stdio LSP process trees during runtime disposal and Gateway shutdown, so nested children such as tsserver do not survive stop or restart. Fixes #72357. Thanks @ai-hpc and @bittoby.
  • Gateway/device tokens: stop echoing rotated bearer tokens from shared/admin device.token.rotate responses while preserving the same-device token handoff needed by token-only clients before reconnect. (#66773) Thanks @MoerAI.
  • Control UI/Talk: keep Google Live browser sessions on the WebSocket transport instead of falling back to WebRTC, validate browser Google Live WebSocket endpoints, cap Gateway relay sessions per browser connection, and remove stale browser-native voice buttons that did not use the configured Talk/TTS provider. Thanks @BunsDev.
  • Gateway/startup: reuse config snapshot plugin manifests for startup auto-enable, config validation, and plugin bootstrap planning, including authored source config and disabled setup-probe handling, so restrictive allowlists avoid duplicate manifest/config passes during boot. Thanks @shakkernerd.
  • Agents/subagents: enforce subagents.allowAgents for explicit same-agent sessions_spawn(agentId=...) calls instead of auto-allowing requester self-targets. Fixes #72827. Thanks @oiGaDio.
  • ACP/sessions_spawn: let explicit sessions_spawn(runtime="acp") bootstrap turns run while acp.dispatch.enabled=false still blocks automatic ACP thread dispatch. Fixes #63591. Thanks @moeedahmed.
  • CLI/update: install npm global updates into a verified temporary prefix before swapping the package tree into place, preventing mixed old/new installs and stale packaged files from breaking openclaw update verification. Thanks @shakkernerd.
  • Gateway: skip CLI startup self-respawn for foreground gateway runs so low-memory Linux/Node 24 hosts start through the same path as direct dist/index.js without hanging before logs. Fixes #72720. Thanks @sign-2025.
  • Google Meet: route local Chrome joins through OpenClaw browser control, grant Meet media permissions, pin local Chrome audio defaults to BlackHole 2ch, and use the configured OpenClaw browser profile so joined agents no longer show Permission needed or use raw/default Chrome state. Thanks @DougButdorf and @oromeis.
  • Plugins/discovery: follow symlinked plugin directories in global and workspace plugin roots while keeping broken links ignored and existing package safety checks in place. Fixes #36754; carries forward #72695 and #63206. Thanks @Quackstro, @ming1523, and @xsfX20.
  • Plugins/install: skip test files and directories during install security scans while still force-scanning declared runtime entrypoints, so packaged test mocks no longer block plugin installs. Fixes #66840; carries forward #67050. Thanks @saurabhjain1592 and @Magicray1217.
  • Plugins/install: allow exact package-manager peer links back to the trusted OpenClaw host package during install security scans while continuing to block spoofed or nested escaping node_modules symlinks. Carries forward #70819. Thanks @fgabelmannjr.
  • Plugins/install: resolve plugin install destinations from the active profile state dir across CLI, ClawHub, marketplace, local path, and channel setup installs, so openclaw --profile <name> plugins install ... no longer writes into the default profile. Fixes #69960; carries forward #69971. Thanks @FrancisLyman and @Sanjays2402.
  • Plugins/registry: suppress duplicate-plugin startup warnings when a tracked npm-installed plugin intentionally overrides the bundled plugin with the same id. Carries forward #48673. Thanks @abdushsk.
  • Plugins/startup: reuse canonical realpath lookups throughout each plugin discovery pass, including package and manifest boundary checks, so Windows npm-global startups no longer repeat expensive path resolution for the same plugin roots. Fixes #65733. Thanks @welfo-beo.
  • Gateway/proxy: pass ALL_PROXY / all_proxy into the global Undici env-proxy dispatcher and provider proxy-fetch helper while keeping SSRF trusted-proxy auto-upgrade on HTTP_PROXY / HTTPS_PROXY only, so gateway/provider calls honor all-proxy setups without weakening guarded fetches. Fixes #43821; carries forward #43919. Thanks @RickyTong1.
  • Reply/link understanding: keep media and link preprocessing on stable runtime entrypoints and continue with raw message content if optional enrichment fails, so URL-bearing messages are no longer dropped after stale runtime chunk upgrades. Fixes #68466. Thanks @songshikang0111.
  • Discord: persist routed model-picker overrides when the hidden /model dispatch succeeds but the bound thread session store is still stale, including LM Studio suffixed model ids. Carries forward #61473. Thanks @Nanako0129.
  • Nodes/CLI: add openclaw nodes remove --node <id|name|ip> and node.pair.remove so stale gateway-owned node pairing records can be cleaned without hand-editing state files.
  • Gateway: include the connecting client and fresh presence version in the initial hello-ok snapshot, so clients no longer need a follow-up event before seeing themselves online.
  • Docker: install the CA certificate bundle in the slim runtime image so HTTPS calls from containerized gateways no longer fail TLS setup after the bookworm-slim base switch. Fixes #72787. Thanks @ryuhaneul.
  • Providers/OpenRouter: remove retired Hunter Alpha and Healer Alpha static catalog rows and disable proxy reasoning injection for stale Hunter Alpha configs, so replies are not hidden when OpenRouter returns answer text in reasoning fields. Fixes #43942. Thanks @EvanDataForge.
  • Providers/reasoning: let Groq and LM Studio declare provider-native reasoning effort values, so Qwen thinking models receive none/default or off/on instead of OpenAI-only low/medium values. Fixes #32638. Thanks @Aqu1bp, @mgoulart, @Norpps, and @BSTail.
  • Local models: default custom providers with only baseUrl to the Chat Completions adapter and trust loopback model requests automatically, so local OpenAI-compatible proxies receive /v1/chat/completions without timing out. Fixes #40024. Thanks @parachuteshe.
  • Channels/message tool: surface Discord, Slack, and Mattermost user:/channel: target syntax in the shared message target schema and Discord ambiguity errors, so DM sends by numeric id stop burning retries before finding user:<id>. Fixes #72401. Thanks @garyd9, @hclsys, and @praveen9354.
  • Agents/tools: scope tool-loop detection history to the active run when available, so scheduled heartbeat cycles no longer inherit stale repeated-call counts from previous runs. Fixes #40144. Thanks @mattbrown319.
  • Agents/subagents: preserve requester delivery for completion announces across different channel accounts, keep same-channel thread completions routed to the child thread, and fail closed instead of guessing a child binding when requester conversation signal is missing. Thanks @sfuminya and @suyua9.
  • Agents/status: persist the post-compaction token estimate from auto-compaction when providers omit usage metadata, so /status and session lists keep showing fresh context usage after compaction. Fixes #67667; carries forward #72822. Thanks @Jimmy-xuzimo and @skylight-9.
  • Control UI: show loading, reload, and retry states when a lazy dashboard panel cannot load after an upgrade, so the Logs tab no longer appears blank on stale browser bundles. Fixes #72450. Thanks @sobergou.
  • Gateway/plugins: start the Gateway in degraded mode when a single plugin entry has invalid schema config, and let openclaw doctor --fix quarantine that plugin config instead of crash-looping every channel. Fixes #62976 and #70371. Thanks @Doraemon-Claw and @pksidekyk.
  • Agents/plugins: skip malformed plugin tools with missing schema objects and report plugin diagnostics, so one broken tool no longer crashes Anthropic agent runs. Fixes #69423. Thanks @jmnickels.
  • Agents/reasoning: recover fully wrapped unclosed <think> replies that would otherwise sanitize to empty text while keeping strict stripping for closed reasoning blocks and unclosed tails after visible text. Fixes #37696; supersedes #51915. Thanks @druide67 and @okuyam2y.
  • Control UI/Gateway: bind WebChat handshakes to their active socket and reject post-close server registrations, so aborted connects no longer leave zombie clients or misleading duplicate WebSocket connection logs. Fixes #72753. Thanks @LumenFromTheFuture.
  • Agents/fallback: split ambiguous provider failures into empty_response, no_error_details, and unclassified, and add flat fallback-step fields to structured fallback logs so primary-model failures stay visible when later fallbacks also fail. Fixes #71922; refs #71744. Thanks @andyk-ms and @nikolaykazakovvs-ux.
  • Plugins/Windows: normalize Windows absolute paths before handing bundled plugin modules to Jiti, so Feishu/Lark message sending no longer fails with unsupported c: ESM loader URLs. Fixes #72783. Thanks @jackychen-png.
  • CLI/doctor: run bundled plugin runtime-dependency repairs through the async npm installer with spinner/line progress and heartbeat updates, so long openclaw doctor --fix installs no longer look hung in TTY or piped output. Fixes #72775. Thanks @dfpalhano.
  • Feishu/Windows: normalize bundled channel sidecar loads before Jiti evaluates them, so Feishu outbound sends no longer fail with raw C: ESM loader errors on Windows. Fixes #72783. Thanks @jackychen-png.
  • Agents/tools: ignore volatile exec runtime metadata when comparing tool-loop outcomes, so enabled loop detection can stop repeated identical shell-command results instead of resetting on duration, PID, session, or cwd changes. Fixes #34574; supersedes #41502. Thanks @gucasbrg and @Zcg2021.
  • Agents/fallback: classify internal live-session model switch conflicts as unknown fallback failures instead of provider overloads, preventing local vLLM endpoints from receiving misleading overloaded cooldowns. Refs #63229. Thanks @clawdia-lobster.
  • Discord: let thread sessions inherit the parent channel's session-level /model override as a model-only fallback without enabling parent transcript inheritance. Fixes #72755. Thanks @solavrc.
  • Gateway/plugins: skip stale configured channels whose matching plugin is no longer discoverable, point cleanup at openclaw doctor --fix, and keep unrelated channel typos fatal so one missing channel plugin no longer crash-loops the Gateway. Fixes #53311. Thanks @futhgar.
  • Control UI: keep session-specific assistant identity loads authoritative after WebSocket connect, so non-main agent chat sessions do not show the main agent name in the header after bootstrap refreshes. Fixes #72776. Thanks @rockytian-top.
  • Agents/Qwen: preserve exact custom modelstudio provider configs with foreign api owners so explicit OpenAI-compatible Model Studio endpoints no longer get normalized into the bundled Qwen plugin path. Fixes #64483. Thanks @FiredMosquito831.
  • MCP/bundle-mcp: normalize CLI-native type: "http" MCP server entries to OpenClaw transport: "streamable-http" on save, repair existing configs with doctor, and keep embedded Pi from falling back to legacy SSE GET-first startup for those servers. Fixes #72757. Thanks @Studioscale.
  • OpenCode: expose Anthropic Opus/Sonnet 4.x thinking levels for proxied Claude models, so /think xhigh, /think adaptive, and /think max validate consistently with the direct Anthropic provider. Fixes #72729. Thanks @haishmg and @aaajiao.
  • Media-understanding/audio: migrate deprecated {input} placeholders in legacy audio.transcription.command configs to {{MediaPath}}, so custom audio transcribers no longer receive the literal placeholder after doctor repair. Fixes #72760. Thanks @krisfanue3-hash.
  • Ollama/WSL2: warn when GPU-backed WSL2 installs combine CUDA visibility with an autostarting ollama.service using Restart=always, and document the systemd, .wslconfig, and keep-alive mitigation for crash loops. Carries forward #61022; fixes #61185. Thanks @yhyatt.
  • Ollama/onboarding: de-dupe suggested bare local models against installed :latest tags and skip redundant pulls, so setup shows the installed model once and no longer says it is downloading an already available model. Fixes #68952. Thanks @tleyden.
  • Memory-core/doctor: keep doctor.memory.status on the cached path by default and only run live embedding pings for explicit deep probes, preventing slow local embedding backends from blocking Gateway status checks. Fixes #71568. Thanks @apex-system.
  • Memory/QMD: group same-source collections into one QMD search invocation when the installed QMD supports multiple -c filters, while keeping older QMD builds on the per-collection fallback. Fixes #72484; supersedes #72485 and #69583. Thanks @BsnizND and @zeroaltitude.
  • Memory/QMD: accept QMD status vector-count variants such as Vectors = 42, Vectors:42, and Vectors: 42 embedded, so memory status --deep no longer reports embeddings unavailable for healthy QMD wrappers. Fixes #63652; carries forward #63678. Thanks @apoapostolov and @WarrenJones.
  • Memory/QMD: skip QMD vector status probes and embedding maintenance in lexical searchMode: "search", so BM25-only QMD setups on ARM do not trigger llama.cpp/Vulkan builds during status checks or embed cycles. Fixes #59234 and #67113. Thanks @PrinceOfEgypt, @Vksh07, @Snipe76, @NomLom, @t4r3e2q1-commits, and @dmak.
  • Memory/QMD: report the live watcher dirty state in memory status, so changed QMD-backed memory files show as dirty until the queued sync finishes. Fixes #60244. Thanks @xinzf.
  • Compaction: skip oversized pre-compaction checkpoint snapshots and prune duplicate long user turns from compaction input and rotated successor transcripts, preventing retry storms from being preserved across checkpoint cycles. Fixes #72780. Thanks @SweetSophia.
  • Control UI/Cron: render cron job prompts and run summaries as sanitized markdown in the dashboard, with full-width block content, safer link clicks, and no duplicate error text when a failed run has no summary. Supersedes #48504. Thanks @garethdaine.
  • Control UI/Gateway: preserve WebChat client version labels across localhost, 127.0.0.1, and IPv6 loopback aliases on the same port, avoiding misleading vcontrol-ui connection logs while investigating duplicate-message reports. Refs #72753 and #72742. Thanks @LumenFromTheFuture and @allesgutefy.
  • Agents/reasoning: treat orphan closing reasoning tags with following answer text as a privacy boundary across delivery, history, streaming, and Control UI sanitizers so malformed local-model output cannot leak chain-of-thought text. Fixes #67092. Thanks @AnildoSilva.
  • Memory-core: run one-shot memory CLI commands through transient builtin and QMD managers so memory index, memory status --index, and memory search no longer start long-lived file watchers that can hit macOS EMFILE limits. Fixes #59101; carries forward #49851. Thanks @mbear469210-coder and @maoyuanxue.
  • Agents/ACP: ship the Claude ACP adapter with OpenClaw and require Claude result messages before idle can complete a prompt, preventing parent agents from waking early on long-running sessions_spawn(runtime: "acp", agentId: "claude") children. Fixes #72080. Thanks @siavash-saki and @iannwu.
  • CLI/tasks: route tasks --json, tasks list --json, and tasks audit --json through a lean JSON path so read-only task inspection no longer loads unrelated plugin/runtime command graphs. Fixes #66238. Thanks @ChuckChambers.
  • Memory-core: re-resolve the active runtime config whenever memory_search or memory_get executes, so provider changes made by config.patch stop leaving stale embedding backends behind in existing tool instances. Fixes #61098. Thanks @BradGroux and @Linux2010.
  • WebChat: keep bare /new and /reset startup instructions out of visible chat history while preserving /reset <note> as user-visible transcript text. Fixes #72369. Thanks @collynes and @haishmg.
  • Tasks/memory: checkpoint and truncate SQLite WAL sidecars on a timer and before close for task, Task Flow, proxy capture, and builtin memory databases, bounding long-running gateway *.sqlite-wal growth. Fixes #72774. Thanks @dfpalhano.
  • CLI/doctor: remove dangling channel config, heartbeat targets, and channel model overrides when stale plugin repair removes a missing channel plugin, preventing Gateway boot loops after failed plugin reinstalls. Fixes #65293. Thanks @yidecode.
  • Control UI/Gateway: cache, coalesce, stale-refresh, and invalidate effective tool inventory on channel registry changes while reusing the gateway-bound plugin registry and avoiding model/auth discovery, so chat runs no longer stall Control UI requests on repeated plugin/model setup. Fixes #72365; supersedes #72558. Thanks @Gabiii2398 and @1yihui.
  • Channels/setup: treat bundled channel plugins as already bundled during channels add and onboarding, enabling them without writing redundant plugins.load.paths entries or path install records. Fixes #72740. Thanks @iCodePoet.
  • WhatsApp: honor gateway HTTPS_PROXY / HTTP_PROXY env vars for QR-login WebSocket connections, while respecting NO_PROXY, so proxied networks no longer fall back to direct mmg.whatsapp.net connections that time out with 408. Fixes #72547; supersedes #72692. Thanks @mebusw and @SymbolStar.
  • Bonjour: default mDNS advertisements to the system hostname when it is DNS-safe, avoiding openclaw.local probing conflicts and Gateway restart loops on hosts such as Lobster or ubuntu. Fixes #72355 and #72689; supersedes #72694. Thanks @mscheuerlein-bot, @gcusms, @moyuwuhen601, @pavan987, @zml-0912, @hhq365, and @SymbolStar.
  • Agents/OpenAI-compatible: retry replay-safe empty stop turns once for openai-completions endpoints, so transient empty local backend responses no longer surface as “Agent couldn't generate a response” when a continuation succeeds, and restore openclaw agent --model for one-shot CLI runs. Fixes #72751. Thanks @moooV252.
  • Git hooks: skip ignored staged paths when formatting and restaging pre-commit files, so merge commits no longer abort when .gitignore newly ignores staged merged content. Fixes #72744. Thanks @100yenadmin.
  • Memory-core/dreaming: add a supported dreaming.model knob for Dream Diary narrative subagents, wired through phase config and the existing plugin subagent model-override trust gate. Refs #65963. Thanks @esqandil and @mjamiv.
  • Agents/Anthropic: remove trailing assistant prefill payloads when extended thinking is enabled, so Opus 4.7/Sonnet 4.6 requests do not fail Anthropic's user-final-turn validation. Fixes #72739. Thanks @superandylin.
  • Agents/vLLM/Qwen: add plugin-owned Qwen thinking controls for vLLM chat-template kwargs and DashScope-style top-level enable_thinking flags, including preserved thinking for agent loops. Fixes #72329. Thanks @stavrostzagadouris.
  • Memory-core/dreaming: treat request-scoped narrative fallback as expected, skip session cleanup when no subagent run was created, and remove duplicate phase-level cleanup so fallback no longer emits warning noise. Fixes #67152. Thanks @jsompis.
  • Agents/exec: apply configured tools.exec.timeoutSec to background, yieldMs, and node system.run commands when no per-call timeout is set, preventing auto-backgrounded and remote node commands from running indefinitely. Fixes #67600; supersedes #67603. Thanks @dlmpx and @kagura-agent.
  • Config/doctor: stop masking unknown-key validation diagnostics such as agents.defaults.llm, and have openclaw doctor --fix remove the retired agents.defaults.llm timeout block. Thanks @aidiffuser.
  • CLI/startup: keep the built pre-dispatch CLI graph free of package-level imports and extend packaged CLI smoke coverage to onboard and doctor help paths, preventing missing runtime dependencies such as tslog from killing onboarding before repair code can run. Fixes #63024. Thanks @hu19940121.
  • CLI/plugins: preserve unversioned ClawHub install specs so plugins update can follow newer ClawHub releases instead of pinning to the initially resolved version. Fixes #63010; supersedes #58426. Thanks @kangsen1234 and @robinspt.
  • Memory-core/subagents: tag plugin-created subagent sessions with their plugin owner so dreaming narrative cleanup can delete its own ephemeral sessions without granting broad admin session deletion. Fixes #72712. Thanks @BSG2000.
  • Gateway/models: move local-provider pricing opt-outs, OpenRouter/LiteLLM aliases, and proxy passthrough pricing lookup into plugin manifest metadata so core no longer carries extension-specific pricing tables.
  • CLI/update: honor OPENCLAW_NO_AUTO_UPDATE=1 as a gateway startup kill-switch for configured background package auto-updates, so operators can hold a deliberate downgrade during incident recovery without editing config first. Fixes #72715. Thanks @Xivi08.
  • Agents/Claude CLI: force live-session launches to include --output-format stream-json whenever OpenClaw adds --input-format stream-json, so new Claude CLI sessions no longer fail immediately while reusable sessions keep working. Fixes #72206. Thanks @kwangwonkoh and @Xivi08.
  • CLI/plugins: accept ClawHub plugin API wildcard ranges such as * without rejecting compatible plugin installs, while still requiring a valid runtime API version. Fixes #56446; supersedes #56466. Thanks @darconada and @claygeo.
  • CLI/plugins: add an explicit npm:<package> install prefix that skips ClawHub lookup for known npm packages while keeping bare package specs ClawHub-first. Fixes #55805; supersedes #54377. Thanks @Zeoy2020 and @vagusX.
  • CLI/plugins: let config-gated bundled plugins install without persisting invalid placeholder config entries, so install/uninstall sweeps can cover plugins such as memory-lancedb before the user configures credentials. Thanks @vincentkoc.
  • CLI/plugins: reject malformed ClawHub plugin specs with trailing @ before registry lookup, so empty-version typos report as invalid specs instead of package-not-found errors. Fixes #56579; supersedes #56582. Thanks @Kansodata.
  • Agents/sessions: acquire the session write lock only after cold bootstrap, plugin, and tool setup so fallback runs are not blocked by stalled pre-model startup work.
  • Browser/plugins: auto-start the bundled browser plugin when root browser config is present, including restrictive plugin allowlists, and ignore stale persisted plugin registries whose package paths no longer exist.
  • Browser: circuit-break repeated managed Chrome launch failures per profile so browser requests stop spawning Chromium indefinitely when CDP cannot start. Fixes #64271. Thanks @TheophilusChinomona.
  • Gateway/models: skip external OpenRouter and LiteLLM pricing refreshes for local/self-hosted model endpoints so startup does not wait on remote pricing catalogs for local-only Ollama, vLLM, and compatible providers.
  • CLI/plugins: stop security-blocked plugin installs from retrying as hook packs, so normal plugin packages report the scanner failure without a misleading "not a valid hook pack" follow-up. Fixes #61175; supersedes #64102. Thanks @KonsultDigital and @ziyincody.
  • Agents/Anthropic: strip stale trailing assistant prefill turns from outbound replay so context-engine short circuits cannot send unsupported assistant-prefill payloads to provider APIs. Fixes #72556. Thanks @Veda-openclaw.
  • Agents/Google: strip stale trailing assistant/model prefill turns from Gemini outbound replay so Google Generative AI requests end with a user turn or function response. Follow-up to #72556. Thanks @Veda-openclaw.
  • Control UI/Dreaming: require explicit confirmation before applying restart-impacting Dreaming mode changes, with restart warning copy and loading feedback. Fixes #63804. (#63807) Thanks @bbddbb1.
  • CLI/agent: mark Gateway-to-embedded fallback runs with meta.transport: "embedded" and meta.fallbackFrom: "gateway" in JSON output, and make the terminal diagnostic explicit so scripts and operators can distinguish fallback runs from Gateway runs. Fixes #71416. Thanks @amknight.
  • Agents/tools: normalize null or missing tool-call arguments to {} for parameterless object schemas before Pi validation, so empty-argument tools run instead of failing argument validation. Fixes #72587. Thanks @amknight.
  • Agents/subagents: clear active embedded-run state before terminal lifecycle events so post-completion cleanup no longer treats finished child runs as still active and skips archive or announcement bookkeeping. (#70187) Thanks @amknight.
  • CLI/update: keep the automatic post-update completion refresh on the core-command tree so it no longer stages bundled plugin runtime deps before the Gateway restart path, avoiding .24 update hangs and 1006 disconnect cascades. Fixes #72665. Thanks @sakalaboator and @He-Pin.
  • Control UI: make explicit Reload Config actions discard stale local config edits while passive refreshes and failed-save recovery keep pending drafts intact. Fixes #40352; carries forward #40443. Thanks @realmikechong-dotcom.
  • Agents/Bedrock: stop heartbeat runs from persisting blank user transcript turns and repair existing blank user text messages before replay, preventing AWS Bedrock ContentBlock blank-text validation failures. Fixes #72640 and #72622. Thanks @goldzulu.
  • Agents/LM Studio: promote standalone bracketed local-model tool requests into registered tool calls and hide unsupported bracket blocks from visible replies, so MemPalace MCP lookups do not print raw [tool] JSON scaffolding in chat. Fixes #66178. Thanks @detroit357.
  • Local models: warn when an assistant reply looks like a tool call but the provider emitted plain text instead of a structured tool invocation, making fake/non-executed tool calls visible in logs. Fixes #51332. Thanks @emilclaw.
  • Local models: accept persisted non-secret local auth markers for private-LAN custom OpenAI-compatible providers, so LAN Ollama configs no longer fail with missing auth when ollama-local is saved as the key. Fixes #49736. Thanks @charles-zh.
  • TUI/local models: treat visible gateway client labels such as openclaw-tui as the current requester session for session-aware tools, so Ollama tool calls no longer fail by resolving the UI label as a session id. Fixes #66391. Thanks @kickingzebra.
  • Local models: route self-hosted OpenAI-compatible model discovery through the guarded fetch path pinned to the configured host, covering vLLM and SGLang setup without reopening local/LAN SSRF probes. Supersedes #46359. Thanks @cdxiaodong.
  • Local models: classify terminated, reset, closed, timeout, and aborted model-call failures and attach a process memory snapshot to the diagnostic event, making LM Studio/Ollama RAM-pressure failures easier to prove from stability bundles. Refs #65551. Thanks @BigWiLLi111.
  • Local models: pass configured provider request timeouts through OpenAI SDK transports and the model idle watchdog so long-running local or custom OpenAI-compatible streams use one timeout knob instead of hitting the SDK's 10-minute default or the 120s idle default. Fixes #63663. Thanks @aidiffuser.
  • LM Studio: trust configured LM Studio loopback, LAN, and tailnet endpoints for guarded model requests by default, preserving explicit private-network opt-outs. Refs #60994. Thanks @tnowakow.
  • Docker/setup: route Docker onboarding defaults for host-side LM Studio and Ollama through host.docker.internal and add the Linux host-gateway mapping to the bundled Compose file, so containerized gateways can reach local providers without using container loopback. Fixes #68684; supersedes #68702. Thanks @safrano9999 and @skolez.
  • Agents/LM Studio: strip prior-turn Gemma 4 reasoning from OpenAI-compatible replay while preserving active tool-call continuation reasoning. Fixes #68704. Thanks @chip-snomo and @Kailigithub.
  • LM Studio: allow interactive onboarding to leave the API key blank for unauthenticated local servers, using local synthetic auth while clearing stale LM Studio auth profiles. Fixes #66937. Thanks @olamedia.
  • Plugins/startup/registry: reuse a Gateway PluginLookUpTable and one manifest registry pass across startup plugin IDs, plugin loading, deferred channel reloads, model pricing, read-only channel defaults, capability/provider/media resolution, manifest contracts, extractors, web fallback discovery, owner maps, and cold provider-discovery caches, with new startup-trace timing/count metrics for installed-index, manifest, startup-plan, and owner-map work. Thanks @shakkernerd and @mcaxtr.
  • Mattermost: keep direct-message replies top-level by suppressing reply roots for DM delivery while preserving channel and group thread roots, and derive inbound chat kind from the trusted channel lookup instead of the websocket event channel type. Carries forward #60115, #55186, #72305, and #72659; refs #59758, #59981, #59791, and #57565. Thanks @vincentkoc, @jwchmodx, and @hnykda.
  • Docker: pre-create /home/node/.openclaw with node ownership and private permissions so first-run Docker Compose named volumes no longer fail startup with EACCES. (#48072, #63959; fixes #61279) Thanks @timoxue and @jeanibarz.
  • CLI/Gateway: treat local restart probe policy closes for connect, exact device required, pairing, and auth failures as Gateway reachability proof without accepting empty, broad standalone token/password/scope/role, or pair-substring 1008 close reasons. Fixes #48771; carries forward #48801; related #63491. Thanks @MarsDoge and @genoooool.
  • Feishu: send outgoing interactive reply payloads as native cards with clickable buttons while preserving text, media, and document-comment fallbacks. Fixes #13175 and #58298; carries forward #47891. Thanks @Horacehxw.
  • Process/Windows: decode command stdout and stderr from raw bytes with console-codepage awareness, while preserving valid UTF-8 output and multibyte characters split across chunks. Fixes #50519. Thanks @iready, @kevinten10, @zhangyongjie1997, @knightplat-blip, @heiqishi666, and @slepybear.
  • Bonjour/Windows: hide the bundled mDNS advertiser's Windows ARP shell probe so Gateway startup no longer flashes command-prompt windows. Fixes #70238. Thanks @alexandre-leng, @PratikRai0101, @infinitypacific, and @tomerpeled.
  • Agents/bootstrap: dedupe hook-injected bootstrap context files by workspace-relative path and store normalized resolved paths so duplicate relative and absolute hook paths no longer depend on the process cwd. (#59344; fixes #59319; related #56721, #56725, and #57587) Thanks @koen666.
  • Agents/bootstrap: refresh cached workspace bootstrap snapshots on long-lived main-session turns when AGENTS.md, SOUL.md, MEMORY.md, or TOOLS.md change on disk, while preserving unchanged snapshot identity through the workspace file cache. (#64871; related #43901, #26497, #28594, #30896) Thanks @aimqwest and @mikejuyoon.
  • macOS Gateway: detect installed-but-unloaded LaunchAgent split-brain states during status, doctor, and restart, and re-bootstrap launchd supervision before falling back to unmanaged listener restarts. Fixes #67335, #53475, and #71060; refs #58890, #60885, and #70801. Thanks @ze1tgeist88, @dafacto, and @vishutdhar.
  • Plugins/install: treat mirrored core logger dependencies as staged bundled runtime deps so packaged Gateway starts do not crash when the external plugin-runtime-deps root is missing tslog. Fixes #72228; supersedes #72493. Thanks @deepujain.
  • Build/plugins: preserve active bundled runtime-dependency staging temp directories owned by live build processes so overlapping postbuild runs no longer delete each other's staged deps mid-prune. Supersedes #72220. Thanks @VACInc.
  • Plugins/install: hide bundled runtime-dependency npm child windows on Windows across Gateway startup, postinstall, and packaged staging paths so Telegram/Anthropic dependency repair no longer flashes shell windows. Fixes #72315. Thanks @athuljayaram and @joshfeng.
  • Agents/Windows: normalize lazy agent runtime imports before Node ESM loading so Windows drive-letter subagent-registry runtime paths no longer fail every agent task with ERR_UNSUPPORTED_ESM_URL_SCHEME. Fixes #72636; carries forward #72716. Thanks @Andyz-CData and @xialonglee.
  • Plugins/Windows: normalize lazy plugin service override imports before Node ESM loading so drive-letter browser-control module paths no longer fail with ERR_UNSUPPORTED_ESM_URL_SCHEME. Fixes #72573; supersedes #72599 and #72582. Thanks @llzzww316, @feineryonah-byte, and @WuKongAI-CMU.
  • Browser/plugins: load playwright-core through the browser runtime shim so packaged installs can run Playwright actions from staged plugin runtime deps after doctor/startup repair. Fixes #72168; supersedes #72238. Thanks @zdg1110 and @yetval.
  • Plugins/install: stage bundled plugin runtime dependencies before Gateway startup, drain update restarts, and materialize plugin-owned root chunks in external mirrors so staged deps resolve under native ESM. Fixes #72058; supersedes #72084. Thanks @amnesia106 and @drvoss.
  • TTS/SecretRef: resolve messages.tts.providers.*.apiKey from the active runtime snapshot so SecretRef-backed MiniMax and other TTS provider keys work in runtime reply/audio paths. Fixes #68690. Thanks @joshavant.
  • Gateway/install: surface systemd user-bus recovery hints during Linux service activation and retry via the target user scope when systemctl --user reports no-medium bus failures, without letting stale SUDO_USER override sudo -u installs. Fixes #39673; refs #44417 and #63561. Thanks @Arbor4, @myrsu, @mssteuer, and @boyuaner.
  • CLI/nodes: make unfiltered openclaw nodes list prefer the effective paired-node view used by nodes status while preserving pending rows, pairing-scope fallback, terminal-safe table rendering, and paired JSON metadata. Fixes #46871; carries forward #65772 through the ProjectClownfish #72619 repair. Thanks @skainguyen1412.
  • CLI/startup: read generated startup metadata from the bundled dist layout before falling back to live help rendering, so root/browser help and channel-option bootstrap stay on the fast path. Thanks @vincentkoc.
  • Feishu/Lark: stop treating broadcast-only @all/@_all messages as bot mentions while preserving direct bot mentions, including messages that also include @all. Fixes #37706. Thanks @JosepLee.
  • CLI/help: treat positional help invocations like openclaw channels help as help paths for startup gating, avoiding model/auth warmup while preserving positional arguments such as openclaw docs help. Thanks @gumadeiras.
  • Web search: route plugin-scoped web_search SecretRefs through the active runtime config snapshot so provider execution receives resolved credentials across app/runtime paths, including plugins.entries.brave.config.webSearch.apiKey. Fixes #68690. Thanks @VACInc.
  • Voice Call: allow SecretRef-backed Twilio auth tokens and call-specific OpenAI/ElevenLabs TTS API keys through the plugin config surface. Fixes #68690. Thanks @joshavant.
  • Google Meet/Voice Call: clean stale chrome-node realtime bridges before rejoining, expose bridge inspection, tolerate transient node input pull failures, default Chrome command-pair audio to 24 kHz PCM16 while preserving legacy 8 kHz G.711 mu-law pairs, handle Gemini Live interruptions/VAD and function-response names correctly, route stateful google_meet tools through the gateway runtime, support realtime.agentId, and send non-blocking consult continuations before long tool-backed answers finish. Fixes #72371, #72525, #72523, #72440, and #72425; (#72372, #72524, #72381, #72441, #72189, #72426) Thanks @BsnizND and @VACInc.
  • Discord/media: keep incidental Markdown image badges in final replies as text unless a channel opts into Markdown-image media extraction, while preserving Telegram Markdown-image media replies and explicit MEDIA: attachments. Fixes #72642. Thanks @solavrc and @Bartok9.
  • Matrix/E2EE: stabilize recovery and broken-device QA flows while avoiding Matrix device-cleanup sync races that could leave shutdown-time crypto work running. Thanks @gumadeiras.
  • Cron: apply cron.maxConcurrentRuns to the nested isolated-agent lane, start isolated execution timeouts only after the runner enters that lane, keep legacy flat jobs.json rows loadable, invalidate stale pending runtime slots after schedule edits, and preserve due slots for formatting-only rewrites. Fixes #72707, #27996, #71607, and #41783; carries forward #71651. Thanks @kagura-agent, @xialonglee, @fagnersouza666, @ayanesakura, and @Hurray0.
  • Cron/delivery: classify isolated successes, quiet NO_REPLY turns, model/provider failures, execution denials, --no-deliver traces, skipped-job alerts, and verified delivery outcomes correctly so cron history, retries, and failure counters reflect what actually happened. Fixes #72732, #50170, #43604, #68452, #60846, #72210, and #67172; follow-up to #54188; carries forward #43631, #68453, #72219, and #67186. Thanks @zNatix, @pixeldyn, @ChickenEggRoll, @SPFAdvisors, @anyech, @slideshow-dingo, @hatemclawbot-collab, @xydigit-sj, @oc-gh-dr, @hclsys, and @1yihui.
  • Cron/routing: preserve direct Telegram thread/account IDs, explicit Discord user:/channel: delivery targets, and session:<id> failure-destination routing so reminders, cron announcements, and failure alerts keep the intended recipient kind across direct and group chats. Fixes #44270; refs #62777; carries forward #44325, #44351, #44412, #72657, #68535, and #62798. Thanks @RunMintOn, @arkyu2077, @0xsline, @vincentkoc, @slideshow-dingo, @likewen-tech, and @neeravmakwana.
  • Subagents: keep the delegated task only in the subagent system prompt and send a short initial kickoff message, avoiding duplicate task tokens while preserving multiline task formatting. Fixes #72019; carries forward #72053. Thanks @Wizongod and @ly85206559.
  • Onboarding/GitHub Copilot: add manifest-owned --github-copilot-token support for non-interactive setup, including env fallback, tokenRef storage in ref mode, saved-profile reuse, and current Copilot default-model wiring. Refs #50002 and supersedes #50003. Thanks @scottgl9.
  • Gateway/install: add a validated --wrapper/OPENCLAW_WRAPPER service install path that persists executable LaunchAgent/systemd wrappers across forced reinstalls, updates, and doctor repairs instead of falling back to raw node/bun ProgramArguments. Fixes #69400. (#72445) Thanks @willtmc.
  • Plugins: fail plugin registration when loader-owned acceptance gates reject missing hook names or memory-only capability registration from non-memory plugins, surfacing the issue through plugin status and doctor instead of silently dropping the registration. Fixes #72459. Thanks @amknight.
  • macOS Gateway: write launchd services with a state-dir WorkingDirectory, use a durable state-dir temp path instead of freezing macOS session TMPDIR, create that temp directory before bootstrap, and label abort-shaped launchd exits as SIGABRT/abort in status output. Fixes #53679 and #70223; refs #71848. Thanks @dlturock, @stammi922, and @palladius.
  • Control UI/update: make Update now require a real gateway process replacement, report skipped/error update outcomes with stable reasons, and verify the running gateway version after restart so global installs cannot silently keep old code in memory. Fixes #62492; addresses #64892 and #63562. Thanks @IAMSamuelRodda.
  • Exec approvals: accept runtime-owned source: "allow-always" and commandText allowlist metadata in gateway and node approval-set payloads so Control UI round-trips no longer fail with unexpected property 'source'. Fixes #60000; carries forward #60064. Thanks @sd1471123, @sharkqwy, and @luoyanglang.
  • Exec/node: skip approval-plan preparation for full-trust host=node runs so interpreter and script commands no longer fail with SYSTEM_RUN_DENIED: approval cannot safely bind when effective policy is security=full and ask=off. Fixes #48457 and duplicate #69251. Thanks @ajtran303, @jaserNo1, @Blakeshannon, @lesliefag, and @AvIsBeastMC.
  • Exec/node: synthesize a local approval plan when a paired node advertises system.run without system.run.prepare, unblocking approval-required host=node exec on current macOS companion nodes while preserving remote prepare for node hosts that support it. Fixes #37591 and duplicate #66839; carries forward #69725. Thanks @soloclz.
  • Memory/QMD: prefer QMD's --mask collection pattern flag so root memory indexing stays scoped to MEMORY.md instead of widening to every markdown file in the workspace. Fixes #65480; supersedes #65481 and #66259. Thanks @ccage-simp, @Bortlesboat, @seank-com, and @crazyscience.
  • Memory/doctor: treat the specific gateway timeout after ... gateway memory probe result as inconclusive instead of reporting embeddings not ready, while preserving warnings for explicit failures. Fixes #44426; carries forward #46576 with the Greptile review feedback applied. Thanks Cengiz (@ghost).
  • Gateway/startup: defer QMD, core request handlers, setup wizard, CLI outbound senders, plugin HTTP routes, chat/session projection, node session runtime validation, embedded-run activity reads, MCP loopback server imports, channel runtime helpers, HTTP/canvas/plugin auth helpers, isolated cron imports, and hook dispatch parsing until their request or shutdown paths, while making plain gateway status use a parse-only config snapshot so no-plugin boots and status reads avoid broad runtime fanout. Thanks @vincentkoc.
  • Lobster/Gateway: memoize repeated Ajv schema compilation before loading the embedded Lobster runtime so scheduled workflows and llm.invoke loops stop growing gateway heap on content-identical schemas. Fixes #71148. Thanks @cmi525, @vsolaz, and @vincentkoc.
  • Codex harness: normalize cached input tokens before session/context accounting so prompt cache reads are not double-counted in /status, session_status, or persisted sessionEntry.totalTokens. Fixes #69298. Thanks @richardmqq.
  • Hooks/session-memory: use the host local timezone for memory filenames, fallback timestamp slugs, and markdown headers instead of UTC dates. Fixes #46703. (#46721) Thanks @Astro-Han.
  • Gateway health: preserve live runtime-backed channel/account state in gateway.health snapshots and cached refreshes while keeping raw probe payloads on sensitive/admin paths only. (#39921, #42586, #46527, #52770, #42543) Thanks @FAL1989, @rstar327, @0xble, and @ajayr.
  • Feishu: extract quoted/replied interactive-card text across schema 1.0, schema 2.0, i18n, template-variable, and post-format fallback shapes without carrying broad generated/config churn from related parser experiments. (#38776, #60383, #42218, #45936) Thanks @lishuaigit, @lskun, @just2gooo, and @Br1an67.
  • Telegram/agents: hide raw failed write/edit warning messages in Telegram when the assistant already explicitly acknowledges the failed action, while keeping warnings when the reply claims success or omits the failure; #39406 remains the broader configurable delivery-policy follow-up. Fixes #51065; covers #39631. Thanks @Bartok9 and @Bortlesboat.
  • Exec approvals: accept a symlinked OPENCLAW_HOME as the trusted approvals root while still rejecting symlinked .openclaw path components below it. (#64663) Thanks @FunJim.
  • Logging: add top-level hostname, flattened message, and available agent_id, session_id, and channel fields to file-log JSONL records for multi-agent filtering without removing existing structured log arguments. Fixes #51075. Thanks @stevengonsalvez.
  • ACP: route server logs to stderr before Gateway config/bootstrap work so ACP stdout remains JSON-RPC only for IDE integrations. Fixes #49060. Thanks @Hollychou924.
  • Logging: propagate internal request trace scopes through Gateway HTTP requests and WebSocket frames so file logs, diagnostic events, agent run traces, model-call traces, OTEL spans, and trusted provider traceparent headers share a correlatable traceId without logging raw request or model content. Fixes #40353. Thanks @liangruochong44-ui.
  • Diagnostics/OTEL: capture privacy-safe model-call request payload bytes, streamed response bytes, first-response latency, and total duration in diagnostic events, plugin hooks, stability snapshots, and OTEL model-call spans/metrics without logging raw model content. Fixes #33832. Thanks @wwh830.
  • Logging: write validated diagnostic trace context as top-level traceId, spanId, parentSpanId, and traceFlags fields in file-log JSONL records so traced requests and model calls are easier to correlate in log processors. Refs #40353. Thanks @liangruochong44-ui.
  • Logging/sessions: apply configured redaction patterns to persisted session transcript text and accept escaped character classes in safe custom redaction regexes, so transcript JSONL no longer keeps matching sensitive text in the clear. Fixes #42982. Thanks @panpan0000.
  • Agents/sessions: let sessions_spawn runtime="subagent" ignore ACP-only streamTo and resumeSessionId fields while keeping ACP passthrough and documenting streamTo as ACP-only. Fixes #43556 and #63120; covers #56326, #61724, #64714, and #67248; carries forward #68397, #65282, #58686, #56342, and #40102. Thanks @skernelx, @damselem, @Br1an67, @Mintalix, @IsaacAPerez, @vvitovec, @Sanjays2402, @shenkq97, and @1034378361.
  • Providers/Ollama: honor /api/show capabilities, custom Modelfile PARAMETER num_ctx, configured provider/model context defaults, whitelisted native params such as temperature, top_p, and think, and native thinking effort levels so local models get accurate tools, context, and thinking behavior without forcing full-context VRAM use. Fixes #64710, duplicate #65343, #68344, #44550, #52206, #49684, #68662, #48010, #71584, and #44786; supersedes #69464; carries forward #44955. Thanks @yuan-b, @netherby, @xilopaint, @Diyforfun2026, @neeravmakwana, @taitruong, @armi0024, @LokiCode404, @zhouZcong, @dshenster-byte, @tangzhi, @pandego, @maweibin, @Adam-Researchh, @EmpireCreator, @g0st1n, and @voltwake.
  • Image tool/media: honor tools.media.image.timeoutSeconds and matching per-model image timeouts in explicit image analysis, including the MiniMax VLM fallback path, so slow local vision models are not capped by hardcoded 30s/60s aborts. Fixes #67889; supersedes #67929. Thanks @AllenT22 and @alchip.
  • Providers/Ollama: strip custom provider prefixes before native chat/embedding requests, skip ambient localhost discovery unless config/auth opts in, handle custom remote api: "ollama" providers, accept OpenAI SDK-style baseURL, scope synthetic local auth and embedding bearer headers to declared host boundaries, resolve custom-named local providers for subagents, add provider-scoped model request timeouts, preserve explicit input modalities, and document params.keep_alive plus local/LAN/cloud/multi-host/web-search/embedding/thinking setup recipes. Fixes #72353, #56939, #62533, #43945, #64541, #68796, and #39690; supersedes #57116, #62549, #69261, #69857, #65143, and #66511; refs #43945; carries forward #43224 and #39785. Thanks @maximus-dss, @hclsys, @IanxDev, @tsukhani, @issacthekaylon, @Julien-BKK, @Linux2010, @hyspacex, @maxramsay, @Meli73, @LittleJakub, @Juankcba, @uninhibite-scholar, @yfge, @Skrblik, and @Mriris.
  • Providers/Ollama: move memory embeddings to /api/embed with batched input, route local web search through Ollama's signed daemon proxy while keeping cloud auth scoped, treat Ollama memory embeddings as key-optional in doctor, and keep model usage visible by estimating native transcript usage when /api/chat omits counters. Fixes #39983, #69132, and #46584; carries forward #39112. Thanks @sskkcc, @LiudengZhang, @yoon1012, @hyspacex, @fengly78, and @TylonHH.
  • Agents/Ollama: parse stringified native tool-call arguments, retry native empty/thinking-only turns, accept already-prefixed LLM task model overrides, apply provider-owned replay normalization for Cloud models, validate explicit --thinking max, show resolved thinking defaults in Control UI, and include configured provider models in models list --provider. Fixes #69735, #50052, #71697, #71584, #72407, and #65207; supersedes #69910; carries forward #66552 and #61223. Thanks @rongshuzhao, @yfge, @L3G, @ralphy-maplebots, @Hollychou924, @ismael-81, @g0st1n, @NotecAG, and @drzeast-png.
  • Providers/PDF/Ollama: add bounded network timeouts for Ollama model pulls and native Anthropic/Gemini PDF analysis requests so unresponsive provider endpoints no longer hang sessions indefinitely. Fixes #54142; supersedes #54144 and #54145. Thanks @jinduwang1001-max and @arkyu2077.
  • Docker/QA: add observability coverage to the normal Docker aggregate so QA-lab OTEL and Prometheus diagnostics run inside Docker. Thanks @vincentkoc.
  • Auto-reply: poison inbound message dedupe after replay-unsafe provider/runtime failures so retries stay safe before visible progress but cannot duplicate messages after block output, tool side effects, or session progress. Fixes #69303; keeps #58549 and #64606 as duplicate validation. Thanks @martingarramon, @NikolaFC, and @zeroth-blip.
  • Agents/model fallback: keep auto-persisted fallback model overrides selected across turns until /new or reset clears them, avoiding repeated probes of a known-bad primary while /status shows the selected and active models. Thanks @kibedu.
  • Agents/model fallback: jump directly to a known later live-session model redirect instead of walking unrelated fallback candidates, while preserving the already-landed live-session/fallback loop guard. Fixes #57471; related loop family already closed via #58496. Thanks @yuxiaoyang2007-prog.
  • Gateway/Bonjour: keep @homebridge/ciao cancellation handlers registered across advertiser restarts so late probing cancellations cannot crash Linux and other mDNS-churned gateways.
  • Plugins/startup: load the default memory-core slot during Gateway startup when permitted so active-memory recall can call memory_search and memory_get without requiring an explicit plugins.slots.memory entry, while preserving plugins.slots.memory: "none".
  • Plugins/CLI: prefer native require for compiled bundled plugin JavaScript before jiti so read-only config, status, device, and node commands avoid unnecessary transform overhead on slow hosts. Fixes #62842. Thanks @Effet.
  • Plugins/compat/CLI: inventory doctor-side deprecation migrations separately from runtime plugin compatibility, add dated records for legacy extension-api, memory registration, provider hook/type aliases, runtime aliases, channel SDK helpers, and approval/test utility shims, refresh the persisted registry after managed plugin removals, make plugin install/uninstall writes conflict-aware, clear stale denylists, and fail tracked plugin/hook updates or unloadable package installs instead of leaving stale state. Thanks @vincentkoc.
  • WebChat/Control UI: support non-video file attachments in chat uploads while preserving the existing image attachment path and MIME-sniff fallback for generic image uploads. (#70947) Thanks @IAMSamuelRodda.
  • Skills/memory: restore Chokidar v5 hot reloads by watching concrete skill and memory roots with filters, including SKILL.md removals and deleted skill folders without broad workspace recursion. Fixes #27404, #33585, and #41606. Thanks @shelvenzhou, @08820048, and @rocke2020.
  • Gateway/chat: keep duplicate attachment-backed chat.send retries with the same idempotency key on the documented in-flight path so aborts still target the real active run. Fixes #70139. Thanks @Feelw00.
  • Gateway/session rows: report the same config-resolved thinking default that runtime sessions use, including global and per-agent defaults, so Control UI and TUI default labels stay aligned. (#71779, #70981, #71033, #70302) Thanks @chen-zhang-cs-code, @SymbolStar, and @cholaolu-boop.
  • Plugins: share package entrypoint resolution between install and discovery, reject mismatched runtimeExtensions, and cache bundled runtime-dependency manifest reads during scans.
  • WhatsApp/Web: keep quiet but healthy linked-device sessions connected by basing the watchdog on WhatsApp Web transport activity, while retaining a longer app-silence cap so frame activity cannot mask a stuck session forever. Fixes #70678; carries forward the focused #71466 approach and keeps #63939 as related configurable-timeout follow-up. Thanks @vincentkoc and @oromeis.
  • Discord/gateway: count failed health-monitor restart attempts toward cooldown and hourly caps, and evict stale account lifecycle state during channel reloads so repeated Discord gateway recovery cannot loop on old status. Fixes #38596. (#40413) Thanks @jellyAI-dev and @vashquez.
  • Cron/context engine: run isolated cron jobs under run-scoped context-engine session keys so prior runs of the same job are not inherited unless the job is explicitly session-bound. (#72292) Thanks @jalehman.
  • Control UI: localize command palette labels, categories, skill shortcuts, footer hints, and connect-command copy labels while preserving localized command palette search matching. (#61130, #61119) Thanks @rubensfox20.
  • Plugins/memory-lancedb: request float embedding responses from OpenAI-compatible servers so local providers that default SDK requests to base64 no longer return dimension-mismatched LanceDB vectors while preserving configured dimensions. Fixes #45982. (#59048, #46069, #45986) Thanks @deep-introspection, @xiaokhkh, @caicongyang, and @thiswind.
  • Plugins/memory-lancedb: advance auto-capture cursors per session only after messages are processed or intentionally skipped, retry failed messages, survive compacted histories, and clear cursor state on session end. Fixes #71349; carries forward #42083. Thanks @as775116191.
  • Plugins/memory-core: respect configured memory-search embedding concurrency during non-batch indexing so local Ollama embedding backends can serialize indexing instead of flooding the server. Fixes #66822. (#66931) Thanks @oliviareid-svg and @LyraInTheFlesh.
  • Docker/update smoke: keep the package-derived update-channel fixture on package-shipped files and make its UI build stub create the asset the updater verifies. Thanks @vincentkoc.
  • Gateway/models: repair legacy models.providers.*.api = "openai" config values to openai-completions, and skip providers with future stale API enum values during startup instead of bricking the gateway. Fixes #72477. (#72542) Thanks @JooyoungChoi14 and @obviyus.
  • Gateway/skills: redact apiKey and secret-named env values from the skills.update RPC response to prevent leaking credentials into WebSocket traffic, client logs, or session transcripts. Config is still written to disk in full; only the response payload is redacted. (#69998) Thanks @Ziy1-Tan.
  • Plugins/CLI: let flag-driven openclaw channels add install the selected channel plugin from its default source without opening an interactive prompt, fixing published npm Telegram setup in stdin-closed automation.
  • Onboarding/setup: keep first-run config reads, plugin compatibility notices, OpenAI Codex auth, post-auth default-model policy lookup, skip-auth, provider-scoped model pickers, and post-model sanity checks on cold manifest/setup metadata unless the user chooses to browse all models, avoiding full plugin/provider runtime loads between prompts. Thanks @shakkernerd.
  • Gateway/Bonjour: suppress known @homebridge/ciao cancellation and network assertion failures through scoped process handlers so malformed mDNS packets or restricted VPS networking disable/restart Bonjour instead of crashing the gateway. Fixes #67578. Thanks @zenassist26-create.
  • Discord: keep late clicks on already-resolved exec approval buttons quiet when elevated mode auto-resolved the request, while still surfacing real approval submission failures. Fixes #66906. Thanks @rlerikse.
  • Telegram: send a fresh final message for long-lived preview-streamed replies so the visible Telegram timestamp reflects completion time instead of the preview creation time. Thanks @rubencu.

What Is an AI-Powered Recommendation Engine?


What are AI recommendation engines?

AI recommendation engines are systems that use AI to analyze data and user behavior to predict and suggest content, products, or actions relevant to each individual user. In basic terms, they're digital matchmakers that connect people with products, content, or services they're likely to enjoy or need. These systems are what make personalization on the internet possible.

Traditional recommendation systems were often based on simple rules. For example, a bestseller list is a rule-based suggestion – it recommends items popular among all users. Another rule might be, “If a user buys a printer, recommend ink cartridges.” While useful, these systems are static and don’t adapt to individual tastes.

AI-driven personalization, powered by machine learning (ML), is far more dynamic. Instead of relying on predefined rules, it learns from user behavior. It analyzes your past actions, like what you’ve watched, bought, liked, or even just looked at, to build a unique profile of your preferences. This allows it to deliver highly relevant suggestions that improve satisfaction and keep you engaged with the platform.

How recommendation engines work

The process of delivering a personalized recommendation can be broken down into three high-level components: data collection, model training, and inference.

  1. Data collection: The engine gathers large amounts of data. This includes explicit data, such as ratings and reviews, as well as implicit data, such as click history, purchase behavior, and viewing time. The more high-quality data the system has, the better its predictions will be.
  2. Model training: This is where machine learning happens. The collected data is used to train an algorithm to recognize patterns, enabling the model to understand relationships between users and items, as well as the attributes of the items themselves.
  3. Inference: Once the model is trained, it can make inferences. When a user interacts with the application, the engine uses the model to generate a ranked list of recommended items in real time.

Three core approaches form the foundation of most recommendation engines:

  • Collaborative filtering: This method recommends items based on the behavior of similar users. It operates on the principle that if person A has similar tastes to person B, then person A is likely to enjoy items that person B has liked.
  • Content-based filtering: This approach recommends items that are similar to those a user has previously liked. It focuses on the attributes of the items themselves. If you watch a lot of science fiction movies, it will suggest more science fiction movies.
  • Hybrid models: These models combine collaborative and content-based filtering (and other techniques) to leverage their respective strengths and minimize their weaknesses, resulting in more accurate and robust recommendations.

Modern systems also use advanced techniques such as deep learning, graph neural networks, and reinforcement learning to better understand complex patterns and adapt to changing user preferences.

Key algorithms and techniques

Let’s dive a little deeper into the core algorithms that power these systems.

Collaborative filtering: Leveraging user behavior patterns

Collaborative filtering is one of the most popular techniques because it doesn’t need to know anything about the items themselves; it just needs user interaction data. It identifies users with similar tastes and recommends items that have been popular within that group. For example, if you and another user both love a specific set of movies, the system might recommend a movie to you that the other user has seen and rated highly.
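As a minimal sketch of this idea, the snippet below implements user-based collaborative filtering with cosine similarity: it finds users whose ratings resemble yours and scores the items they liked that you haven't seen. The users, items, and ratings are purely illustrative, not drawn from any real dataset:

```python
from math import sqrt

# Ratings per user: item -> rating (illustrative data)
ratings = {
    "alice": {"matrix": 5, "inception": 4, "titanic": 1},
    "bob":   {"matrix": 5, "inception": 5, "titanic": 2},
    "carol": {"matrix": 1, "titanic": 5, "notebook": 4},
}

def cosine_similarity(a, b):
    """Cosine similarity between two users' rating vectors."""
    common = set(a) & set(b)          # items both users rated
    if not common:
        return 0.0
    dot = sum(a[i] * b[i] for i in common)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

def recommend(user, k=2):
    """Score items the user hasn't rated, weighted by how similar each rater is."""
    scores = {}
    for other, their_ratings in ratings.items():
        if other == user:
            continue
        sim = cosine_similarity(ratings[user], their_ratings)
        for item, rating in their_ratings.items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("alice"))  # → ['notebook']
```

Here "alice" gets "notebook" because "carol", who overlaps with her on two rated items, liked it. Production systems replace this brute-force loop with matrix factorization or approximate nearest-neighbor search, but the scoring principle is the same.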

Content-based filtering: Recommending based on item features

Content-based filtering relies on the characteristics of items. It creates a profile for each user based on the attributes of items they have interacted with. If you frequently read articles about machine learning and data science, the system will look for other articles with similar keywords, topics, or tags and recommend them to you. This method is particularly useful when there isn’t enough user data available.
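A content-based recommender can be sketched in a few lines by representing each item as a set of tags and ranking unseen items by their overlap with the user's profile. The items and tags below are made up for illustration; real systems typically use richer features such as TF-IDF vectors or embeddings:

```python
# Item attributes (illustrative tags)
items = {
    "dune":         {"sci-fi", "space", "epic"},
    "interstellar": {"sci-fi", "space", "drama"},
    "notebook":     {"romance", "drama"},
}

def jaccard(a, b):
    """Jaccard similarity: size of the tag overlap relative to the tag union."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend_similar(liked, k=1):
    """Rank unseen items by tag overlap with everything the user liked."""
    # The user profile is simply the union of tags from liked items.
    profile = set().union(*(items[i] for i in liked))
    candidates = {i: jaccard(profile, tags)
                  for i, tags in items.items() if i not in liked}
    return sorted(candidates, key=candidates.get, reverse=True)[:k]

print(recommend_similar({"dune"}))  # → ['interstellar']
```

A user who liked "dune" is shown "interstellar" because the two share the most tags — no other users' data is needed, which is why this method works when interaction data is scarce.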

Hybrid approaches: Combining multiple methods for better accuracy

Hybrid models are the current standard for high-performing recommendation systems. By combining collaborative and content-based methods, they can overcome common problems. For instance, collaborative filtering struggles with new items that have no interaction data (the “cold start” problem). A hybrid model can use content-based filtering to recommend a new item based on its attributes until enough user data is collected.
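One simple (and hypothetical) way to blend the two methods is interaction-weighted scoring: the fewer interactions an item has, the more weight its content-based score receives, so brand-new items still get reasonable scores during the cold-start phase:

```python
def hybrid_score(collab_score, content_score, n_interactions, min_interactions=10):
    """Blend collaborative and content scores, leaning on content for cold items.

    Illustrative scheme: weight ramps from 0 (brand-new item) to 1
    (well-established item) as interaction data accumulates.
    """
    w = min(n_interactions / min_interactions, 1.0)
    return w * collab_score + (1 - w) * content_score

# Brand-new item: no interaction data, so the content score carries it.
print(hybrid_score(collab_score=0.0, content_score=0.8, n_interactions=0))   # → 0.8
# Established item: the collaborative signal dominates.
print(hybrid_score(collab_score=0.9, content_score=0.4, n_interactions=50))  # → 0.9
```

Real hybrid systems use more sophisticated blending (e.g., learned ensemble weights), but the principle of falling back on item attributes until behavioral data arrives is the same.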

Use cases for AI recommendation engines across industries

Recommendation engines aren’t just for retail and media. Their ability to personalize experiences provides value across a wide range of sectors.

  • E-commerce and retail: This is the most classic use case. Engines power “customers who bought this also bought” sections, personalized homepages, and targeted email campaigns that drive sales and increase average order value.
  • Media and entertainment: Streaming services use recommendations to suggest movies, shows, and music, keeping users subscribed and engaged. News outlets suggest articles to increase readership and time on site.
  • Social platforms: Social media feeds are highly curated by recommendation engines that decide which posts, people, and groups to show you to maximize your engagement.
  • Enterprise applications: In a business context, engines can recommend relevant documents or experts within a knowledge management system, or suggest solutions to support agents to resolve customer issues faster.
  • Healthcare and finance: In more regulated fields, engines can suggest personalized financial products based on a user’s profile or even assist doctors by suggesting potential treatment plans based on similar patient cases.

Architecture and data considerations

Building a recommendation engine that can serve millions of users requires a robust technical architecture. These systems are a key part of a company’s data stack, interacting with various databases and processing pipelines.

A critical consideration is the choice between real-time versus batch processing. Batch processing involves periodically retraining the model with new data (e.g., once a day). Real-time processing updates recommendations instantly as a user interacts with the app. While real-time is more complex, it offers a more responsive and dynamic user experience.

Modern recommendation engines heavily rely on vector search and embeddings. An embedding is a numerical representation (a vector) of an item, like a product or a movie. Items with similar meanings or attributes will have vectors that are close to each other in a multidimensional space. Vector search enables the system to find semantically similar items at incredible speed, delivering recommendations that are more nuanced and scalable than those provided by simple keyword matching.
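At its core, vector search means finding the catalog items whose embeddings are closest to a query vector. The brute-force version below uses toy 3-dimensional vectors (production embeddings typically have hundreds of dimensions, and vector databases index them so this scales far beyond a linear scan):

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

# Toy item embeddings (illustrative values)
catalog = {
    "space documentary": [0.9, 0.1, 0.0],
    "sci-fi thriller":   [0.8, 0.3, 0.1],
    "cooking show":      [0.0, 0.2, 0.9],
}

def nearest(query_vec, k=2):
    """Brute-force nearest-neighbor search over the catalog."""
    scored = sorted(catalog, key=lambda i: cosine(query_vec, catalog[i]),
                    reverse=True)
    return scored[:k]

print(nearest([1.0, 0.0, 0.0]))  # → ['space documentary', 'sci-fi thriller']
```

Items whose vectors point in a similar direction to the query score highest, which is how "semantically similar" gets operationalized without any keyword matching.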

The role of the database is crucial. It needs to store massive amounts of user and item data, including the vector embeddings. It must also provide fast data retrieval to power real-time recommendations. 

Databases like Couchbase Capella are designed for this kind of workload, offering scalable storage, fast key-value access, and integrated capabilities like full-text and vector search to support complex high-performance recommendation systems.

Recommendation engine evaluation metrics

How do you know if your recommendation engine is actually effective? Success is measured using a variety of metrics.

  • Precision, recall, F1 score: Precision measures how many of the recommended items are relevant. Recall measures how many of the relevant items were recommended. The F1 score is a balance between the two.
  • Root mean square error (RMSE) and ranking metrics: RMSE measures the accuracy of predicted ratings. Ranking metrics such as normalized discounted cumulative gain (NDCG) evaluate the quality of a ranked list of recommendations.
  • Click-through rate (CTR) and conversion lift: These business-oriented metrics measure user engagement. CTR tracks how often users click on recommended items, while conversion lift measures the increase in sales or other desired actions resulting from the recommendations.
  • A/B testing: The ultimate test is to run comparative experiments. A/B testing involves showing different versions of recommendations to different user groups to see which one performs better against key business metrics. Continuous feedback loops are essential for refining and improving models over time.
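The accuracy metrics above are straightforward to compute for a single user's top-k list. As a sketch, with a hypothetical set of recommended and truly relevant items:

```python
def precision_recall_f1(recommended, relevant):
    """Precision, recall, and F1 for one user's recommendation list."""
    hits = len(set(recommended) & set(relevant))     # relevant items we surfaced
    precision = hits / len(recommended) if recommended else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# 2 of the 4 recommended items were relevant; 2 of the 3 relevant items surfaced.
p, r, f1 = precision_recall_f1(["a", "b", "c", "d"], ["b", "d", "e"])
print(p, r, f1)  # precision 0.5, recall 2/3, F1 is their harmonic mean (4/7)
```

In practice these are averaged across many users and evaluation windows, and combined with ranking-aware metrics like NDCG, since a hit at position 1 is worth more than a hit at position 10.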

Challenges and best practices

Building and maintaining a recommendation engine comes with its own set of challenges.

  • Data quality and bias: If the training data is skewed, the recommendations will be too. This can create filter bubbles or unfairly favor certain items or user groups. It’s crucial to ensure data is clean, diverse, and representative.
  • Scalability and latency: As the number of users and items grows, the system must scale to handle the load without slowing down. Low latency is critical for real-time personalization, as users expect instant results.
  • Privacy and compliance: Recommendation engines often use personal data, so they must be designed with privacy in mind. Complying with regulations like GDPR and CCPA is non-negotiable.

To overcome these challenges, teams should adopt several best practices. Using hybrid models and embeddings can improve accuracy and address the cold start problem. Incremental learning, where models are updated frequently with small batches of new data, helps keep recommendations fresh. Finally, continuous monitoring for bias, performance, and data drift is essential to maintaining a healthy, effective system.

How to build or integrate an AI recommendation engine

For businesses looking to implement personalization, the process can be broken down into a few key steps:

  1. Data preparation: Consolidate and clean user interaction and item data. This is often the most time-consuming but critical step.
  2. Model selection: Choose the right algorithm (collaborative, content-based, hybrid) based on your data and business goals.
  3. Feature extraction: Convert raw data into features that the model can understand. For content-based models, this involves extracting item attributes. For advanced models, it means creating embeddings.
  4. Deployment: Deploy the trained model behind an API so that your application can request recommendations.

A typical tech stack might include ML libraries like TensorFlow or PyTorch, a distributed processing framework like Apache Spark, and a high-performance database. For example, a solution might use Couchbase for its ability to store JSON documents, serve data with low latency, and run integrated vector searches, making it a powerful backend for a real-time recommendation API.

Key takeaways and related resources

An AI-powered recommendation engine is a powerful tool for driving engagement and personalizing user experiences. By understanding user behavior and item attributes, these systems can deliver relevant suggestions that create value for both the business and the customer.

Key takeaways

  1. AI recommendation engines use machine learning to predict user preferences, offering a dynamic alternative to static, rule-based systems.
  2. They work by collecting data, training models to find patterns, and inferring recommendations in real time.
  3. Core techniques include collaborative filtering (based on similar users) and content-based filtering (based on similar items), with hybrid models offering the best performance.
  4. Use cases span nearly every industry, from e-commerce and media to enterprise software and healthcare.
  5. A robust architecture requires a scalable database that supports real-time processing and advanced features such as vector search.
  6. Success is measured through a combination of accuracy metrics (precision and recall), business metrics (CTR and conversion), and A/B testing.
  7. Key challenges include data bias, scalability, and privacy, which can be addressed through best practices like using hybrid models and continuous monitoring.

If you want to explore more topics around AI and recommendation engines, these resources will keep you on track:

Related resources

FAQs

What types of data are needed to build an effective recommendation engine? You need both user interaction data (e.g., clicks, purchases, ratings, views) and item attribute data (e.g., product category, genre, author, technical specs). The cleaner and more comprehensive the data, the better the recommendations will be.

How do collaborative, content-based, and hybrid recommendation approaches differ? Collaborative filtering recommends based on what similar users like. Content-based filtering recommends based on the attributes of items a user has liked. Hybrid approaches combine both methods to improve accuracy and overcome their individual limitations.

How do recommendation engines handle the cold start problem? The cold start problem occurs when a new user or new item has no data. Engines can handle this by falling back to nonpersonalized recommendations (like “most popular”), using content-based filtering for new items, or asking new users about their preferences during onboarding.

How do you deliver real-time recommendations at scale? This requires a high-performance architecture with a scalable database capable of low-latency reads, efficient data-processing pipelines (often using streaming technology), and features like vector search to quickly find similar items among millions of options.

When should a business consider using AI-powered recommendations? A business should consider using AI for recommendations when it has a large catalog of items (products, articles, videos) and wants to improve user engagement, increase conversion rates, or enhance customer loyalty by providing a more personalized experience.

How are recommendation engines evolving with generative AI and large language models? Generative AI and LLMs are making recommendations more conversational and context-aware. Instead of just showing a list of items, future engines might generate natural language explanations for why an item is recommended or create dynamic, interactive dialogues to help users discover what they want.

The post What Is an AI-Powered Recommendation Engine? appeared first on The Couchbase Blog.


Communication is hard, but sometimes I can fix it.


We used to type code to tell the computer what to do. When that got tedious, we made libraries and functions until the code was more communicative.

Now I type English words to tell the agent what to tell the computer what to do. Sometimes that gets tedious, and then I need to find new ways to make it easier.

Here’s an example.

Iterating could be easier.

The work: I’m getting Claude to build a program that turns Claude conversation logs into a vertical HTML comic.

some conversation log lines in Claude's jsonl format
some Claude conversation history in html

As we iterate on this, I ask it a lot of questions about the output. This way, I learn something about the problem domain (how Claude Code records conversations). And then I get it to tweak the output to my liking.

In the example above, I wondered where the Background command "Start dev server on alternate ports" notification came from, so I asked Claude how I could know. To ask it, I had to cut and paste the text from the HTML, and then Claude had to grep the HTML to see what I was talking about, and also grep the JSONL to find the input. What if later, a very similar message appeared? It wouldn’t be able to tell exactly which one I was talking about.

I can’t just point to the UI.

This wasn’t the first time I struggled to refer to a panel in the comic. This time, my frustration served as an alarm: do something about it, Jess. There has to be a better way to tell it which panel I’m talking about.

When communication gets difficult, that’s a signal. I can change this.

So I made it make a way to point to the UI.

In this case, I asked Claude to add a reference tag to each panel. The reference tag for each panel contains the line number (that was its idea) and filename (that was my idea) of the JSONL line represented by this panel. I push ‘r’ to toggle whether these reference tags show (my idea). When I click one, the value is copied (its idea).

the html comic with references.

Now I can ask the same question more succinctly: How can I find out where episode-8-before:L63 came from?

Claude understood and added a hover effect that highlights the originating bash tool call.

That hover effect is OK; I used it a few times. Those reference tags are gold! I’ve used them a dozen times already, and development is smoother for it. Claude can find the panel I’m talking about quickly both in the input JSONL and the output HTML. Our communication is streamlined.

This was a great idea. Iterating is much easier now!

I am in the loop and on the loop.

There are (at least) two feedback loops running here. One is the development loop, with Claude doing what I ask and then me checking whether that is indeed what I want. Here, I’m a human in the loop with the AI. This works well since we’re prototyping, learning the domain and discovering what output I want.

Then there’s a meta-level feedback loop, the “is this working?” check when I feel resistance. Frustration, tedium, annoyance–these feelings are a signal to me that maybe this work could be easier. I step back and think about how the AI could work more accurately and smoothly. Annie Vella called this the “middle loop,” and Kief Morris renamed it “human on the loop.”

Here, I’m both in the development loop with the AI, and I’m “on the loop” as a thoughtful collaborator, smoothing the development loop when it gets rough.

Resistance will be assimilated.

As developers using software to build software, we have potential to mold our own work environment. With AI making software change superfast, changing our program to make debugging easier pays off immediately. Also, this is fun!
