Release Notes April 15, 2026

OpenClaw 2026.4.14: What's Actually New

OpenClaw 2026.4.14 dropped April 13 as a broad quality release targeting model provider stability, security hardening, and better support for local/weaker hardware setups. Here are the changes that actually matter for most users.

GPT-5 Explicit Turn Improvements

The 4.14 release specifically calls out "explicit turn improvements for the GPT-5 family." If you've been testing GPT-5 models via the OpenAI provider and hitting turn-boundary or multi-turn conversation issues, this patch addresses the provider-level handling.

No config changes needed; if you're already using GPT-5 models, the fix applies automatically after updating.

Local Model Lean Mode (New)

New experimental flag: agents.defaults.experimental.localModelLean: true

This drops heavyweight default tools (browser, cron, and message) from the tool list when a local model is active. The result is a significantly smaller prompt, which matters on weaker hardware (8GB RAM machines, Pi 4/5, older NUCs), where every token saved means faster inference and less context overflow.

To enable in openclaw.json:

{
  "agents": {
    "defaults": {
      "experimental": {
        "localModelLean": true
      }
    }
  }
}

Important: this only affects the normal tool path for local models. Your full tool set is still available when using cloud providers (Claude, GPT, Gemini). The flag is experimental; if a workflow requires browser or message tools from a local model, keep it off.
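Conceptually, the lean-mode behavior described above amounts to a filter over the default tool list. The sketch below is illustrative only; the tool names and the `is_local_model` flag are assumptions for this example, not OpenClaw internals:

```python
# Hypothetical sketch of lean-mode tool filtering. Tool names and the
# is_local_model flag are illustrative, not OpenClaw's actual internals.
LEAN_EXCLUDED = {"browser", "cron", "message"}

def resolve_tools(all_tools, is_local_model, local_model_lean):
    """Drop heavyweight tools from the prompt when a local model is active
    and the experimental lean flag is on; otherwise pass tools through."""
    if is_local_model and local_model_lean:
        return [t for t in all_tools if t not in LEAN_EXCLUDED]
    return list(all_tools)

tools = ["read", "write", "browser", "cron", "message", "exec"]
print(resolve_tools(tools, is_local_model=True, local_model_lean=True))
# ['read', 'write', 'exec']
```

Note that the cloud-provider path never enters the lean branch, which is why your full tool set survives there regardless of the flag.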

Exec Approval Secret Redaction (Security Fix)

Important security fix: exec approval prompts now redact secrets so credential material can't leak in rendered prompt content.

Previously, if a command included an API key or password inline (e.g., curl -H "Authorization: Bearer sk-abc123..."), the full string could appear in the approval review UI. As of 4.14, secrets matching known credential patterns are masked before display.
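The masking step works along these lines. This is a minimal sketch assuming a couple of common credential patterns; OpenClaw's actual pattern set is not documented here:

```python
import re

# Illustrative credential patterns; the real redaction list is an assumption.
SECRET_PATTERNS = [
    re.compile(r"\bsk-[A-Za-z0-9]{8,}\b"),             # OpenAI-style keys
    re.compile(r"(?i)(Bearer\s+)[A-Za-z0-9._\-]{8,}"),  # bearer tokens
]

def redact(command: str) -> str:
    """Mask credential-looking substrings before a command is rendered
    in an approval prompt, keeping any leading marker like 'Bearer '."""
    for pattern in SECRET_PATTERNS:
        command = pattern.sub(
            lambda m: (m.group(1) if m.lastindex else "") + "********",
            command,
        )
    return command

print(redact('curl -H "Authorization: Bearer sk-abc123def456"'))
# curl -H "Authorization: Bearer ********"
```

The point is that redaction happens before rendering, so the raw secret never reaches the approval UI or anything that logs it.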

This was tracked as GitHub issue #61077. If you've been approving exec commands that included tokens or passwords, now's a good time to rotate any credentials that may have been visible in your session history.

LanceDB Cloud Storage for Memory

The memory-lancedb plugin now supports remote object storage backends (S3, GCS, Azure Blob). Previously, vector memory indexes were local-disk only.

For most small operators this doesn't change anything; local disk is fine. This is primarily useful if you're running multiple OpenClaw instances that need to share a memory index, or if you're deploying to a serverless/ephemeral environment where local disk isn't durable.
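If you do want a remote backend, the plugin config would presumably point the index at an object-store URI, roughly like the sketch below. The key names here (plugins, memory-lancedb, storage, uri) are assumptions for illustration; check the plugin's own docs before copying:

```json
{
  "plugins": {
    "memory-lancedb": {
      "storage": {
        "uri": "s3://my-openclaw-memory/indexes"
      }
    }
  }
}
```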

GitHub Copilot as an Embedding Provider

Memory search now supports GitHub Copilot as an embedding provider. If your org already has Copilot enabled, this gives you a free embedding backend for semantic memory search without needing a separate OpenAI or Cohere key.
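Selecting the provider would be a one-line config change, roughly like the sketch below. The key names (memory, embeddings, provider) and the provider identifier are assumptions for illustration, not confirmed OpenClaw config:

```json
{
  "memory": {
    "embeddings": {
      "provider": "github-copilot"
    }
  }
}
```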

OAuth/Model Auth Status Card

A new Model Auth status card in the Control UI shows OAuth token health and provider rate-limit pressure at a glance. If your Anthropic or Google OAuth token is expiring (or expired), you'll see an attention callout instead of silent failures.

Packaging Fix: Stale Chunks After npm Upgrade

4.14 includes a fix for global npm upgrades failing on stale chunk imports. If you've ever seen ERR_MODULE_NOT_FOUND after npm update -g openclaw, the update process now prunes stale dist chunks automatically.
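The pruning fix conceptually amounts to deleting any chunk file on disk that the newly installed build doesn't ship, since leftovers from the previous version are what trigger the stale imports. The sketch below is illustrative; the paths and the idea of an "expected" filename set are assumptions, not OpenClaw's actual layout:

```python
from pathlib import Path

def prune_stale_chunks(dist_dir: Path, expected: set) -> list:
    """Delete .js chunk files in dist_dir that the new build does not ship.

    `expected` is the set of chunk filenames the freshly installed version
    contains; anything else is a leftover from the old version that can
    produce ERR_MODULE_NOT_FOUND when imported by stale references.
    """
    removed = []
    for chunk in dist_dir.glob("*.js"):
        if chunk.name not in expected:
            chunk.unlink()
            removed.append(chunk.name)
    return sorted(removed)
```

Running this right after the package is unpacked means the runtime only ever sees chunks that belong to the current version.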

Should You Update?

Yes. 4.14 fixes the packaging regression from 4.12 (missing subagent-registry.runtime.js), includes the exec secret redaction security fix, and improves stability for GPT-5 and local model users. There's no known breaking change in this release.

npm update -g openclaw
openclaw gateway restart

Not sure if your OpenClaw setup is configured correctly?

ClawReady reviews your full config (models, memory, channels, security) and tells you exactly what to fix. Starting at $49.

Book a Free Call →