Gemma 4 landed with impressive benchmarks: 85.5% tool-use accuracy on the 27B MoE variant, genuinely competitive with paid models on many tasks. If you're trying to use it through OpenClaw's native Google provider (the same way you'd use gemini-3-flash), you're hitting this:
at GoogleProvider.resolve (/usr/local/lib/node_modules/openclaw/dist/providers/google.js:284:11)
at async LLMRouter.route (/usr/local/lib/node_modules/openclaw/dist/llm/router.js:112:22)
This is a known gap: GitHub issue #61501 was filed requesting native Gemma 4 support in the Google provider. The root cause is that Gemma 4 uses a different endpoint structure than Gemini models, and OpenClaw's Google provider currently only knows how to route to the generativelanguage.googleapis.com Gemini family; Gemma 4 via Google AI Studio uses a separate API path.
Official support is in review but not yet merged. Two workarounds work today.
Option 1: Use the OpenAI-Compatible Gemini Endpoint (Recommended)
Google exposes an OpenAI-compatible endpoint at https://generativelanguage.googleapis.com/v1beta/openai/ that supports both Gemini and Gemma 4 models. Since OpenClaw's openai-completions provider works correctly, you can route Gemma 4 through it:
# In ~/.openclaw/openclaw.json
"models": {
"providers": {
"google-compat": {
"baseUrl": "https://generativelanguage.googleapis.com/v1beta/openai/",
"apiKey": "YOUR_GOOGLE_AI_STUDIO_API_KEY",
"api": "openai-completions",
"models": [
{
"id": "gemma-4-27b-it",
"name": "Gemma 4 27B (via Google AI Studio)",
"input": ["text", "image"],
"contextWindow": 128000,
"maxTokens": 8192
},
{
"id": "gemma-4-9b-it",
"name": "Gemma 4 9B (via Google AI Studio)",
"input": ["text"],
"contextWindow": 128000,
"maxTokens": 8192
}
]
}
}
}
Get your Google AI Studio API key at aistudio.google.com/apikey; it's free up to the rate limits, which are generous for personal use.
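Before pointing OpenClaw at it, you can sanity-check the key and confirm the model is exposed. This uses the compat endpoint's OpenAI-style model listing; the $GOOGLE_AI_STUDIO_API_KEY variable is assumed to hold your key:

curl -s https://generativelanguage.googleapis.com/v1beta/openai/models \
  -H "Authorization: Bearer $GOOGLE_AI_STUDIO_API_KEY" | grep -i gemma-4

If gemma-4-27b-it appears in the output, the config above will route to it.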
Then select the model:
/model google-compat/gemma-4-27b-it
This is the recommended path for most users. You get Gemma 4's full capability via Google's infrastructure without running anything locally, and it's free within the AI Studio rate limits.
Option 2: Run Gemma 4 Locally via Ollama
If you want fully local inference with no API dependency, Ollama supports Gemma 4:
# Pull Gemma 4 (choose based on your RAM)
ollama pull gemma4:27b   # needs ~20GB RAM; high quality, slow on CPU
ollama pull gemma4:12b   # needs ~9GB RAM; good balance
ollama pull gemma4:4b    # needs ~4GB RAM; fast, limited tool calling
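Before touching OpenClaw config, a quick smoke test confirms the model loads and responds (swap in whichever tag you pulled):

ollama run gemma4:12b "Reply with the word OK"

The first run is slow while the weights load into memory; later prompts are faster.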
Then configure OpenClaw to use it via the OpenAI-compatible endpoint (the same Ollama config that avoids the tool-calling stringification bug):
"providers": {
"ollama": {
"baseUrl": "http://127.0.0.1:11434/v1",
"apiKey": "ollama",
"api": "openai-completions",
"injectNumCtxForOpenAICompat": true,
"models": [
{
"id": "gemma4:12b",
"name": "Gemma 4 12B (Local)",
"contextWindow": 32768,
"cost": {"input": 0, "output": 0}
}
]
}
}
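To verify the OpenAI-compatible route end to end before selecting the model in OpenClaw, hit Ollama's standard /v1/chat/completions path directly (model tag as configured above):

curl -s http://127.0.0.1:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "gemma4:12b", "messages": [{"role": "user", "content": "Say hi"}]}'

A JSON response with a choices array means OpenClaw's openai-completions provider will be able to talk to it the same way.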
Hardware note: Gemma 4's MoE architecture means the 27B model only activates ~4B parameters per token, which makes it faster than you'd expect for its size. But on a CPU-only machine (no GPU), even the 12B variant will be slow: expect 3-8 tokens/second on a modern desktop CPU. Fine for batch tasks, frustrating for interactive use.
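That range falls out of memory bandwidth: CPU decoding is bandwidth-bound, since every generated token streams the active weights through RAM. A rough ceiling, assuming the 12B variant reads most of its weights per token at ~4-bit quantization:

tokens/sec ≈ memory bandwidth / weight bytes read per token
           ≈ 40 GB/s (dual-channel DDR5) / ~7 GB per token
           ≈ 5-6 tokens/sec

Faster RAM pushes the ceiling up and inference overhead pulls real throughput down, which is roughly how you land in the 3-8 range.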
Which Variant to Use
Google released several Gemma 4 variants. For OpenClaw use:
- gemma-4-27b-it: Instruction-tuned, best tool calling. Use via Google AI Studio unless you have 20GB+ RAM for local.
- gemma-4-9b-it: Good balance. Free on AI Studio. Works well for most tasks.
- gemma-4-4b-it: Fastest locally. Tool calling reliability drops noticeably vs 9B+.
The -it suffix means instruction-tuned; always use an -it variant for OpenClaw, not the base pretrained models.
When Will Native Google Provider Support Land?
The feature request (GitHub issue #61501) is marked as in review. The OpenClaw team has been responsive on Google provider issues, so native Gemma 4 support will likely land in a 4.x release in the next few weeks. Until then, the OpenAI-compatible endpoint workaround above is stable and works well.
Once native support ships: You'll be able to add Gemma 4 directly under your existing google provider config, the same way you add Gemini models. The OpenAI-compat workaround config above can be removed at that point, though it'll continue to work even after native support lands.
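If it mirrors the existing Gemini entries, the native config will presumably look something like this; treat it as a speculative sketch, since the exact schema depends on what merges for #61501:

# Speculative: shape may change when issue #61501 merges
"providers": {
  "google": {
    "models": [
      {
        "id": "gemma-4-27b-it",
        "name": "Gemma 4 27B",
        "contextWindow": 128000,
        "maxTokens": 8192
      }
    ]
  }
}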