OpenRouter periodically makes powerful models free while they're in preview or building adoption. Right now, Qwen 3.6 Plus Preview is one of them, and the r/clawdbot community has been putting it through its paces in OpenClaw.

The short version: it's genuinely capable for everyday tasks. Scheduling, summarizing, drafting emails, managing memory, running heartbeat cycles. It won't match Claude Opus on complex reasoning, but for 80% of what most business operators use OpenClaw for, it holds up.

โš ๏ธ "Free for now" caveat: OpenRouter preview pricing can change without much notice. This works today โ€” set it up, use it, but keep your paid model configured as a fallback.

The Model Landscape Right Now

- Qwen 3.6 Plus Preview: $0 (free tier) via OpenRouter; preview pricing, good for everyday tasks
- Claude Sonnet 4.6: fast, reliable; best for complex work and reasoning
- Qwen 2.5 7B (local): $0 forever via Ollama; fully local, slower but private

How to Configure Qwen 3.6 in OpenClaw

First, get an OpenRouter API key at openrouter.ai. A free account works, and no credit card is needed for free-tier models.

Then in your openclaw.json:

{
  "model": "openrouter/qwen/qwen3.6-plus-preview:free",
  "providers": {
    "openrouter": {
      "apiKey": "sk-or-..."
    }
  }
}

Important: The provider prefix matters. It must be openrouter/qwen/qwen3.6-plus-preview:free, not just the model name alone. A common mistake in the community has been omitting the openrouter/ prefix.
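A quick way to catch that mistake before it bites you is a plain string check. This is just a sanity-check sketch, not an OpenClaw API:

```python
def has_provider_prefix(model_id: str) -> bool:
    """Return True if the model ID carries the required OpenRouter provider prefix."""
    return model_id.startswith("openrouter/")

# The correct form from openclaw.json vs. the common mistake:
has_provider_prefix("openrouter/qwen/qwen3.6-plus-preview:free")  # True
has_provider_prefix("qwen/qwen3.6-plus-preview:free")             # False
```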

Recommended Split: Free vs. Paid

Don't route everything through the free model. Here's a sensible split:

- Qwen 3.6 (free): heartbeats, scheduling, summarizing, drafting emails, memory management, and other routine background work
- Claude Sonnet 4.6 (paid): complex reasoning and anything client-facing

You can configure this split in OpenClaw by setting a default model and overriding per-session or per-skill when you need the heavy model.
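OpenClaw's exact override syntax will depend on your version, so treat this as a sketch of the routing logic rather than real configuration. The model IDs follow the openclaw.json above; the Claude identifier and the task categories are assumptions for illustration:

```python
# Hypothetical routing helper: free model for routine work, paid model
# for complex or client-facing tasks. The task names are illustrative,
# not part of any real OpenClaw API.

FREE_MODEL = "openrouter/qwen/qwen3.6-plus-preview:free"
PAID_MODEL = "anthropic/claude-sonnet-4.6"  # assumed identifier

ROUTINE_TASKS = {"heartbeat", "scheduling", "summarizing", "email_draft", "memory"}

def pick_model(task: str) -> str:
    """Route routine tasks to the free model, everything else to the paid one."""
    return FREE_MODEL if task in ROUTINE_TASKS else PAID_MODEL
```

The point of centralizing the choice in one function (or one config key) is that your fallback stays intact: anything not explicitly marked routine goes to the paid model by default.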

What Qwen 3.6 Struggles With

Be realistic about the limits: complex multi-step reasoning and anything client-facing are where it falls short of Claude, so keep that work on the paid model.

Real numbers from the community: users running Qwen 3.6 for heartbeats + routine tasks report cutting their monthly API spend by 60–80% while keeping Claude for anything client-facing or complex. At $3,000+/mo in API costs, that's meaningful.
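To put that range in concrete dollars, the arithmetic is simple (the $3,000/mo figure and the 60–80% reduction come from the reports above):

```python
# Monthly savings at the community-reported 60-80% reduction,
# applied to a $3,000/mo API bill.
monthly_spend = 3000
low_pct, high_pct = 60, 80

savings_low = monthly_spend * low_pct // 100    # 1800
savings_high = monthly_spend * high_pct // 100  # 2400
print(f"Estimated savings: ${savings_low}-${savings_high}/mo")
```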

Local Alternative: Ollama + Qwen 2.5

If you have a machine with 8GB+ RAM and want zero API dependency, Ollama with Qwen 2.5 7B is a fully local option. It's slower than the cloud model but completely free forever and private: no data leaves your machine.

For a NucBox-class machine (Ryzen 7, 14GB RAM), Qwen 2.5 7B runs at about 8–12 tokens/second: fine for background tasks, a bit slow for real-time conversation.
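At those speeds, wall-clock time per reply is easy to estimate. The 8–12 tok/s range is from the measurement above; the 500-token reply length is an assumption for illustration:

```python
# Rough wall-clock estimates at the reported 8-12 tokens/second.
output_tokens = 500  # assumed typical reply length, not a measured figure

def generation_seconds(tokens: int, tokens_per_second: float) -> float:
    """Time to generate `tokens` of output at a given throughput."""
    return tokens / tokens_per_second

slow = generation_seconds(output_tokens, 8)   # 62.5 seconds
fast = generation_seconds(output_tokens, 12)  # ~41.7 seconds
```

A minute per full reply is why local Qwen suits heartbeats and background summarization better than live chat.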

Don't Skip the Spend Limit

Even with a free model as default, you still have a paid model configured as fallback. Make sure a spend cap is set on your Anthropic/OpenAI account so a misconfigured routing rule can't quietly rack up Claude charges at 3 AM.

Settings → Plans → Usage limits. Set a monthly cap. It takes two minutes and has saved people hundreds of dollars.