Release Notes April 16, 2026

OpenClaw 2026.4.15-beta.1: What's Actually in It

OpenClaw 2026.4.15-beta.1 shipped overnight on April 15–16, 2026. It's a pre-release (don't update production yet), but the feature list is substantial, and it fixes several of the most-complained-about 4.14 issues. Here's the practical breakdown.

Pre-release: 4.15-beta.1 is not recommended for production deployments yet. Test on a second machine or wait for 4.15 stable. Production users should stay on 4.14 (or 4.12 if you're hitting the Codex/Cloudflare bug).

New Features

1. OAuth / Model Auth Status Card in Control UI

The new Overview panel shows an OAuth token health card for each provider: at a glance you can see whether tokens are healthy, expiring, or already expired, plus current rate-limit pressure. It calls a new models.authStatus gateway method that strips credentials before surfacing results in the UI (no exposed keys). Results are cached for 60 seconds to avoid hammering provider APIs.

Who needs this: Anyone using ChatGPT OAuth, GitHub Copilot, or the Google provider. These tokens expire silently, and until now the only symptom was cryptic "request failed" errors. With this release you'll see a red attention callout in the UI before the expiry breaks your agent.
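The release notes don't document the models.authStatus response schema, but a status card like this implies a shape roughly along these lines. All field names below are illustrative assumptions, not confirmed API:

```json
// Hypothetical models.authStatus result for one provider (field names
// are illustrative; credentials are already stripped before this point):
{
  "provider": "github-copilot",
  "status": "expiring",
  "expiresAt": "2026-04-18T09:00:00Z",
  "rateLimit": {
    "remaining": 42,
    "resetAt": "2026-04-16T12:30:00Z"
  }
}
```

Whatever the exact shape, remember results are cached for 60 seconds, so a card may lag a just-refreshed token briefly.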

2. LanceDB Cloud Storage for Memory Indexes

The memory-lancedb plugin can now store its vector index on remote object storage (S3-compatible) instead of local disk only. This means durable, shared memory for multi-machine setups โ€” your agent's memory index survives a disk wipe and can be accessed from different nodes.

Who needs this: Anyone running OpenClaw on ephemeral infrastructure (cloud VMs, containers) or wanting to sync memory across multiple agents.
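The notes don't spell out the config surface for remote storage, but an S3-compatible backend for the memory-lancedb plugin would presumably be wired up with something like the sketch below. The key names and nesting are assumptions; check the plugin docs for the actual schema:

```json
// openclaw.json (hypothetical keys for illustration only)
{
  "plugins": {
    "memory-lancedb": {
      "storage": {
        "uri": "s3://my-bucket/openclaw-memory",
        "region": "us-east-1"
      }
    }
  }
}
```

Point multiple nodes at the same bucket and they share one durable index instead of each rebuilding a local copy.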

3. GitHub Copilot Embedding Provider

You can now use GitHub Copilot as the embedding provider for memory search, not just as an inference model. A new dedicated Copilot embedding host helper handles transport, token refresh, and remote overrides. If you're already paying for Copilot, memory embeddings come at no extra cost.
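Selecting Copilot for embeddings would look something like this in config. The exact key path is an assumption on my part (the release notes don't show it), so treat this as a sketch:

```json
// openclaw.json (hypothetical key path for illustration only)
{
  "memory": {
    "embeddings": {
      "provider": "github-copilot"
    }
  }
}
```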

4. localModelLean Experimental Flag

Add agents.defaults.experimental.localModelLean: true to your config and OpenClaw drops the heavyweight tools (browser automation, cron scheduling, message routing) from the default tool set, dramatically reducing prompt size on weaker hardware. This is the official fix for the "my Raspberry Pi / 8GB RAM machine is context-crushing itself" problem.

// openclaw.json
{
  "agents": {
    "defaults": {
      "experimental": {
        "localModelLean": true
      }
    }
  }
}

The normal path is unchanged; lean mode only affects setups that opt in. You can still install and use the browser/cron/message tools manually; they just don't load by default.
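If you want lean mode but still need one of the dropped tools, the opt-in presumably composes with whatever explicit tool configuration you already use. The "tools" key below is hypothetical, purely to show the idea of re-enabling a single tool alongside the flag:

```json
// openclaw.json (the "tools" key is a hypothetical illustration)
{
  "agents": {
    "defaults": {
      "experimental": {
        "localModelLean": true
      },
      "tools": ["cron"]
    }
  }
}
```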

5. Leaner Published Builds

Bundled plugin runtime deps are now localized to their owning extensions (not stuffed into core). The published docs payload is trimmed. Install/package-manager guardrails are tighter. Net effect: smaller installs, fewer stale chunk import errors on upgrade.

Key Bug Fixes

Cloudflare 403 / "DNS Lookup Failed" on openai-codex (4.14 regression)

The 4.14 issue where openai-codex/gpt-5.4 failed with a bogus "DNS lookup failed" error (actually a Cloudflare 403 misreported as a DNS failure) is addressed by the stale-chunk prune fix: cli/update now prunes stale packaged dist chunks after npm upgrades. Those stale chunks were the root cause of the malformed request headers that triggered the 403.

Security: Exec Approval Prompt No Longer Leaks Secrets

Before 4.15-beta.1, exec approval prompts could expose inline credentials (API keys, tokens) in the rendered approval UI. Now secrets are redacted before rendering โ€” you see [REDACTED] instead of the actual value. This was tracked in issues #61077 and #64790.

Config Update Stale-Hash Race

openclaw configure was failing with stale-hash errors after writes because it wasn't re-reading the persisted config hash post-update. Fixed. (Issue #64188)

Onboarding Crash on Globally Installed CLI

Channel selection during onboarding was crashing on some global npm installs. Fixed. (Issue #66736) Relevant if you're setting up fresh installs.

Memory-Core Security: QMD Memory Backend Path Traversal

A security fix prevents the QMD memory backend from being used as a generic workspace file reader that bypasses read tool-policy denials. Memory reads are now restricted to canonical memory files (MEMORY.md, memory/**, etc.) and active indexed QMD documents only. (Issue #66026)

Should You Upgrade Now?

- Production, stable on 4.12: stay on 4.12 until 4.15 stable.
- Broken on 4.14 (Cloudflare/DNS bug): roll back to 4.12 or switch to a direct API key.
- Want LanceDB cloud or Copilot embeddings: test 4.15-beta.1 on a non-production machine.
- Pi / weak hardware hitting context limits: try 4.15-beta.1 with localModelLean: true.
- Building a new setup from scratch: wait for 4.15 stable (days away).

Install 4.15-beta.1 (Test Environments Only)

npm install -g openclaw@2026.4.15-beta.1
openclaw gateway restart

To pin back to 4.14: npm install -g openclaw@2026.4.14

To pin back to 4.12: npm install -g openclaw@2026.4.12

Don't want to track releases yourself?

ClawReady monitors OpenClaw releases, tests upgrades before they hit your setup, and handles rollbacks when betas break things. We watch the changelog so you don't have to.

Book a Free Call →