OpenClaw v2026.4.22 dropped overnight and it's one of the more feature-dense releases in recent memory. The headline is full xAI provider support — Grok now handles image generation, text-to-speech, and speech-to-text alongside its existing chat capabilities. But there are several other changes worth knowing about, especially if you run voice pipelines or a gateway-less setup.

Here's everything that changed and what it means for your setup.

1. xAI / Grok Gets Full Multimodal Support

Previously, xAI in OpenClaw meant Grok for chat. v2026.4.22 turns it into a full multimodal provider:

- Image generation (grok-imagine-image)
- Text-to-speech for voice replies
- Speech-to-text, including streaming transcription for Voice Call

This makes xAI a serious competitor to OpenAI's full provider stack within OpenClaw. If you have an xAI API key and want to stay on one provider for text + voice + images, you now can.

How to Configure xAI Image Gen

# In openclaw.json → providers → xai
{
  "providers": {
    "xai": {
      "apiKey": "your-xai-key",
      "imageModel": "grok-imagine-image",
      "ttsVoice": "aurora"
    }
  }
}

Voices available: aurora, echo, flash, nimbus, orion, zephyr (exact names may vary — check /models after update).
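If you want Grok handling speech in both directions, the same provider block is the natural place for voice model settings. A minimal sketch, assuming sttModel and ttsModel are the key names — only apiKey, imageModel, and ttsVoice are confirmed above, and the grok-tts / grok-stt ids are placeholders:

```json
{
  "providers": {
    "xai": {
      "apiKey": "your-xai-key",
      "imageModel": "grok-imagine-image",
      "ttsVoice": "aurora",
      "ttsModel": "grok-tts",
      "sttModel": "grok-stt"
    }
  }
}
```

Run /models after updating to see what your key actually exposes before relying on these ids.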

2. Streaming STT for Deepgram, ElevenLabs, and Mistral

Real-time Voice Call transcription now extends beyond OpenAI and xAI. Three more providers gain streaming STT in this release:

- Deepgram
- ElevenLabs
- Mistral

If you're running OpenClaw for voice-first workflows (call summaries, live transcription, voice-driven task entry), this significantly expands your provider options beyond OpenAI's realtime API.

ElevenLabs users: Scribe v2 batch transcription is new — it processes inbound audio media (voice messages on WhatsApp, Telegram, etc.) rather than just live streams. This makes ElevenLabs a solid all-in-one voice stack alongside its TTS capabilities.
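Wiring inbound voice notes through Scribe is presumably a provider-level model setting. A hypothetical sketch — the sttModel key and the scribe-v2 id are assumptions, not confirmed names:

```json
{
  "providers": {
    "elevenlabs": {
      "apiKey": "your-elevenlabs-key",
      "sttModel": "scribe-v2"
    }
  }
}
```

Check /models (or the provider docs) after updating for the exact model id.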

3. Local TUI Mode — Gateway No Longer Required for Terminal Chat

This is a quiet but meaningful change. Previously, OpenClaw's terminal UI (TUI) required a running gateway daemon. v2026.4.22 adds a local embedded mode that lets you run terminal chats without a gateway — while still enforcing plugin approval gates.

What this means: terminal chat now runs standalone, with no gateway process required and plugin approval gates still enforced:

openclaw chat --local   # runs embedded, no gateway needed

Note: Local mode won't have access to channel-specific context (your Telegram/WhatsApp conversations, etc.) since those route through the gateway. It's best for quick tasks, testing, and offline use.

4. /models add — Register Models From Chat

Previously, adding a custom model required editing openclaw.json and restarting the gateway. v2026.4.22 adds a /models add command that registers a model live from chat:

/models add openai gpt-5.5
/models add xai grok-imagine-image-pro
/models add ollama llama3.3:70b

No restart needed. The model is available immediately after the command. The existing /models command (provider browser) is unchanged — /models add is an extension, not a replacement.
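Under the hood, /models add presumably persists the same kind of entry you would otherwise add to openclaw.json by hand. A sketch of the equivalent manual config — the exact key layout is an assumption:

```json
{
  "providers": {
    "ollama": {
      "models": ["llama3.3:70b"]
    }
  }
}
```

The command form is still preferable day to day, since it skips the edit-and-restart cycle entirely.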

5. WhatsApp Native Reply Quoting

OpenClaw can now quote messages in WhatsApp replies natively, controlled by a configurable replyToMode setting. This means when your agent responds to a specific WhatsApp message, it shows up as a threaded reply rather than a standalone message, which is how people actually converse in WhatsApp.
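The setting lives under the WhatsApp channel config. A minimal sketch — the replyToMode key is confirmed, but the value name "all" is an assumption, so check your schema after updating:

```json
{
  "channels": {
    "whatsapp": {
      "replyToMode": "all"
    }
  }
}
```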

Also new for WhatsApp: per-group and per-direct system prompt overrides. You can now configure different agent personalities for different WhatsApp groups or DM conversations:

# openclaw.json example
"channels": {
  "whatsapp": {
    "groups": {
      "Family Chat": { "systemPrompt": "You are a friendly family assistant..." },
      "Work Team": { "systemPrompt": "You are a professional work assistant..." },
      "*": { "systemPrompt": "Default prompt for all other groups" }
    }
  }
}

6. Onboarding Auto-Install

A minor but welcome fix: missing provider and channel plugins now auto-install during first-run setup. Previously, if you skipped a plugin step during onboarding, you'd hit cryptic errors later and need to manually recover. Now setup just handles it.

7. Bug Fix: Plugin Discovery Hang (lossless-claw compat)

The strict-match contract added in v2026.4.14 broke plugins whose internal engine id doesn't match their registered slot id — most notably lossless-claw and similar third-party plugins. This produced repeated info.id must match registered id errors on every turn.

v2026.4.22 fixes this. If you've been seeing that error since mid-April, update and it should clear.

Full Change Summary

- xAI/Grok: image generation, TTS, and STT alongside chat
- Streaming STT for Deepgram, ElevenLabs, and Mistral; ElevenLabs Scribe v2 batch transcription for inbound audio
- Local TUI mode (openclaw chat --local), no gateway required
- /models add: register models live from chat, no restart
- WhatsApp: native reply quoting (replyToMode) and per-group/per-direct system prompt overrides
- Onboarding auto-installs missing provider and channel plugins
- Fix: plugin discovery hang from strict id matching (lossless-claw compat)

How to Update


npm update -g openclaw && openclaw gateway restart

Check your version after: openclaw --version should show 2026.4.22 or later.

If you were affected by the lossless-claw plugin hang: After updating, do a full gateway restart (openclaw gateway stop && openclaw gateway start) rather than just a reload. The plugin registry needs to be rebuilt cleanly.

What This Release Means for Your Setup

If you're running a voice-first setup, this is a significant upgrade — Deepgram, ElevenLabs, and Mistral users can now use streaming STT for Voice Call without depending on OpenAI's realtime API. If you're a Grok user, you now have a complete multimodal stack in one provider.

The local TUI mode and /models add command are quality-of-life wins that reduce friction in daily use — especially for operators who frequently test new models or use OpenClaw from multiple machines.

Running an Older OpenClaw Version?

Staying current matters — especially with active security CVEs in the ecosystem right now. ClawReady's $49 audit covers your version, config, and exposure — and flags anything that needs attention before it becomes a problem.

Get a Security Audit →