OpenRouter periodically makes powerful models free while they're in preview or building adoption. Right now, Qwen 3.6 Plus Preview is one of them, and the r/clawdbot community has been putting it through its paces in OpenClaw.
The short version: it's genuinely capable for everyday tasks. Scheduling, summarizing, drafting emails, managing memory, running heartbeat cycles. It won't match Claude Opus on complex reasoning, but for 80% of what most business operators use OpenClaw for, it holds up.
⚠️ "Free for now" caveat: OpenRouter preview pricing can change without much notice. This works today: set it up, use it, but keep your paid model configured as a fallback.
How to Configure Qwen 3.6 in OpenClaw
First, get an OpenRouter API key at openrouter.ai (free account, no credit card needed for free-tier models).
Then in your openclaw.json:
{
  "model": "openrouter/qwen/qwen3.6-plus-preview:free",
  "providers": {
    "openrouter": {
      "apiKey": "sk-or-..."
    }
  }
}
Important: The provider prefix matters. It must be openrouter/qwen/qwen3.6-plus-preview:free, not just the model name alone. A common mistake in the community has been omitting the openrouter/ prefix.
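If you build the model string anywhere in your own tooling, it's worth normalizing it before it reaches your config. This is a hypothetical helper for illustration — OpenClaw doesn't ship it — that just guards against the missing-prefix mistake:

```python
def ensure_openrouter_prefix(model: str) -> str:
    """Prepend the openrouter/ provider prefix if it's missing.

    Hypothetical helper (not part of OpenClaw) to avoid the common
    mistake of writing the bare model name in openclaw.json.
    """
    if model.startswith("openrouter/"):
        return model
    return f"openrouter/{model}"
```

Calling it on the bare name `qwen/qwen3.6-plus-preview:free` yields the full `openrouter/qwen/qwen3.6-plus-preview:free` identifier, and it leaves an already-prefixed string untouched.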
Recommended Split: Free vs. Paid
Don't route everything through the free model. Here's a sensible split:
- Qwen 3.6 (free) for: Heartbeat cycles, scheduling tasks, simple drafts, memory updates, web searches, summaries, calendar management, routine tool calls
- Claude Sonnet/Opus for: Complex reasoning, financial analysis, anything you're sending to clients, multi-step planning, code generation, anything where quality matters
You can configure this split in OpenClaw by setting a default model and overriding per-session or per-skill when you need the heavy model.
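If you drive OpenClaw from your own scripts, the same split can live in a tiny router. A minimal sketch, assuming you tag each task with a category before dispatching it; the paid model ID and the category names here are illustrative, not OpenClaw API:

```python
# Free model ID is from the OpenRouter config above; the paid model ID
# is an assumption -- substitute whatever Claude model you actually use.
FREE_MODEL = "openrouter/qwen/qwen3.6-plus-preview:free"
PAID_MODEL = "anthropic/claude-sonnet"  # assumption: your configured paid model

# Routine categories that are safe to send to the free model.
ROUTINE_TASKS = {"heartbeat", "scheduling", "draft", "memory", "search", "summary"}

def pick_model(task_type: str) -> str:
    """Route routine tasks to the free model; everything else goes paid."""
    return FREE_MODEL if task_type in ROUTINE_TASKS else PAID_MODEL
```

The design choice is deliberate: the paid model is the default and the free model is the allowlisted exception, so an unrecognized task type fails toward quality rather than toward cheap.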
What Qwen 3.6 Struggles With
Be realistic about the limits:
- Long context memory: Qwen 3.6 loses coherence on very long workspace files; anything over ~40K tokens starts to drift
- Complex multi-step reasoning: It handles 2–3 step tasks well; 5+ step chains with dependencies should go to Claude
- Tool call reliability: Free-tier models occasionally mis-format tool calls. Add a fallback check if you're running mission-critical automations
- Rate limits: Free tier has throughput limits; fine for heartbeat cycles, potentially slow for bulk tasks
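The fallback check for mis-formatted tool calls can be simple. A minimal sketch, assuming your tool calls arrive as JSON with `name` and `arguments` keys (adjust the required keys to whatever shape your OpenClaw version actually emits):

```python
import json

def parse_tool_call(raw: str):
    """Return the parsed tool call, or None if the model mis-formatted it."""
    try:
        call = json.loads(raw)
    except json.JSONDecodeError:
        return None
    # Assumed shape: {"name": ..., "arguments": {...}} -- adapt as needed.
    if not isinstance(call, dict) or "name" not in call or "arguments" not in call:
        return None
    return call

def run_with_fallback(raw: str, retry_with_paid_model):
    """If the free model's tool call doesn't parse, redo the step on the paid model."""
    call = parse_tool_call(raw)
    if call is not None:
        return call
    return retry_with_paid_model()
```

For mission-critical automations, the point is that a malformed call triggers a retry on the paid model instead of silently failing at 3 AM.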
Real numbers from the community: Users running Qwen 3.6 for heartbeats + routine tasks report cutting their monthly API spend by 60–80% while keeping Claude for anything client-facing or complex. At $3,000+/mo in API costs, that's $1,800–$2,400 back every month.
Local Alternative: Ollama + Qwen 2.5
If you have a machine with 8GB+ RAM and want zero API dependency, Ollama with Qwen 2.5 7B is a fully local option. It's slower than the cloud model but completely free forever and private: no data leaves your machine.
For a NucBox-class machine (Ryzen 7, 14GB RAM), Qwen 2.5 7B runs at about 8–12 tokens/second: fine for background tasks, a bit slow for real-time conversation.
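Once you've run `ollama pull qwen2.5:7b`, Ollama exposes a local HTTP API on port 11434. A minimal sketch of hitting its /api/generate endpoint from Python, using only the standard library (the prompt and wiring here are illustrative):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local port

def build_request(prompt: str, model: str = "qwen2.5:7b") -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    # stream=False returns one complete response instead of chunked tokens.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the response text.

    Requires `ollama serve` running and the qwen2.5:7b model pulled.
    """
    body = json.dumps(build_request(prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Because everything stays on localhost, this path keeps the "no data leaves your machine" guarantee intact.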
Don't Skip the Spend Limit
Even with a free model as default, you still have a paid model configured as fallback. Make sure a spend cap is set on your Anthropic/OpenAI account so a misconfigured routing rule can't quietly route everything through Claude at 3 AM.
Settings → Plans → Usage limits. Set a monthly cap. This takes 2 minutes and has saved people hundreds of dollars.