The Raspberry Pi is an appealing always-on OpenClaw host: low power draw (~5W), silent, under $100, and you probably already have one. The official docs say it works, with "rough edges." That's underselling it a bit.
ARM64 OpenClaw installs have several known failure modes that the standard setup guides don't cover. This guide documents all of them, with specific fixes for each.
Hardware scope: This guide covers Raspberry Pi 4 (4GB+ recommended) and Pi 5 running Raspberry Pi OS 64-bit (Bookworm) or Ubuntu 24.04 LTS ARM64. Pi 3 and 32-bit OS installs are not recommended: Node.js v22 requires a 64-bit OS.
Hardware Compatibility at a Glance
| Model | OpenClaw | Ollama Local Models | Notes |
|---|---|---|---|
| Pi 5 (8GB) | ✅ Works | ✅ Works | Best Pi option. Can run Qwen 2.5 3B comfortably. |
| Pi 4 (8GB) | ✅ Works | ⚠️ Limited | Ollama works with small models (≤3B). Slow inference. |
| Pi 4 (4GB) | ✅ Works | ⚠️ Very limited | Only smallest models (1B–2B). API model recommended instead. |
| Pi 4 (2GB) | ⚠️ Tight | ❌ Not practical | OpenClaw gateway alone uses ~400MB. Very little headroom. |
| Pi 3 | ❌ Not supported | ❌ No | 32-bit OS, ARMv7 architecture. Node v22 requires 64-bit. |
Issue 1: The npm Install / Binary Compatibility Problem
The most common ARM failure: you run `npm install -g openclaw` and get cryptic binary errors like:
```
Error: /usr/local/lib/node_modules/openclaw/node_modules/better-sqlite3/build/Release/better_sqlite3.node: invalid ELF header
```
Or:
```
Error: /usr/local/lib/node_modules/openclaw/node_modules/sharp/build/Release/sharp-linux-arm64.node: cannot open shared object file
```
Why it happens: some of OpenClaw's dependencies ship prebuilt native binaries for x86_64. On ARM, npm falls back to those binaries, which are the wrong architecture, or an ARM64 binary was simply never published for that dependency version.
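You can confirm the mismatch directly with `file`; the path below is just the module from the first error above, so adjust it to whatever your error message names:

```shell
# Check what architecture the native module was actually built for.
# On a Pi you want to see "ARM aarch64"; "x86-64" confirms the mismatch.
file /usr/local/lib/node_modules/openclaw/node_modules/better-sqlite3/build/Release/better_sqlite3.node
```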
Fix: force native compilation.
```
# Install build dependencies first
sudo apt-get install -y build-essential python3 python3-pip libvips-dev

# Install openclaw and force rebuild of native modules
npm install -g openclaw --build-from-source
```
This takes 10–20 minutes on a Pi 4 (compilation is slow on ARM) but produces binaries that actually work.
Alternative: Use the git install path instead of npm global. Clone the repo, run npm install locally, then npm run build. This gives you more control over the build process and makes it easier to debug compilation failures.
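A minimal sketch of that flow; the repository URL here is an assumption, so substitute the real one:

```shell
# Build OpenClaw from a local checkout instead of the npm tarball
git clone https://github.com/openclaw/openclaw.git   # URL is an assumption
cd openclaw
npm install      # native deps compile for ARM64 on this machine
npm run build
npm link         # optional: put the local build on your PATH
```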
Issue 2: Node.js Version
Raspberry Pi OS ships with old Node.js versions. OpenClaw requires v20+ (v22 recommended). The system package often gives you v16 or v18.
```
# Check what you have
node --version

# If it's under v20, install nvm and upgrade
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
source ~/.bashrc

# Install and use Node 22
nvm install 22
nvm use 22
nvm alias default 22

# Verify
node --version   # should show v22.x.x
```
Then reinstall openclaw with the correct Node version active.
Issue 3: Skills with External Binary Dependencies
Some ClawHub skills bundle or depend on external binaries that only have x86_64 builds. These silently fail to load on ARM without a clear error message.
Safe skills on ARM64:
- Pure JavaScript/TypeScript skills: always work
- Skills using Node.js `child_process` to call CLI tools you've installed via apt: work if the CLI has an ARM64 build
- WhatsApp via Baileys: pure JS, ARM64 is fine
Potentially broken skills on ARM64:
- Skills that bundle precompiled binaries (check for `.node` files in the skill directory)
- Skills with native npm dependencies that don't have ARM64 prebuilds
Diagnosis: after installing a skill, run `openclaw doctor` and check for skill-related warnings. If a skill shows dependency errors, look at its `package.json` for native deps and check whether they have ARM64 support.
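A quick way to surface bundled binaries before they silently fail is to scan the skill directory for `.node` files and check each one's architecture. The path below is an assumption; point it at wherever your skills are actually installed:

```shell
# List every native module a skill ships, with its target architecture.
# Any "x86-64" result on a Pi is a module that won't load.
find ~/.openclaw/skills -name '*.node' -exec file {} \;
```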
Issue 4: Swap Memory for Gateway Stability
On Pi 4 with 4GB RAM, the OpenClaw gateway plus an active conversation can push close to the memory limit, especially with multi-turn sessions building up context. Without swap, you get OOM kills that look like mysterious crashes.
```
# Check current swap
free -h

# If swap is 0 or very small, increase it
sudo dphys-swapfile swapoff
sudo nano /etc/dphys-swapfile
# Change CONF_SWAPSIZE=100 to CONF_SWAPSIZE=2048
sudo dphys-swapfile setup
sudo dphys-swapfile swapon
```
Note: Heavy swap use on a Pi running off an SD card will wear it out fast. If you're using a Pi as a permanent always-on host, run it from a USB SSD, not an SD card. This makes a significant difference in both speed and longevity.
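Not sure which storage your root filesystem is actually on? `findmnt` tells you in one line:

```shell
# Show the device backing the root filesystem:
# /dev/mmcblk0p2 means the SD card; /dev/sda2 (or similar) means a USB drive.
findmnt -n -o SOURCE /
```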
Issue 5: Gateway Auto-Start on Boot
OpenClaw doesn't auto-start on reboot by default. On a Pi that's meant to be always-on, this means any power cycle or update requires manual intervention. Here's the systemd service setup:
```
# Create the service file
sudo nano /etc/systemd/system/openclaw.service
```
```
[Unit]
Description=OpenClaw Gateway
After=network.target

[Service]
Type=simple
User=YOUR_USERNAME
WorkingDirectory=/home/YOUR_USERNAME
ExecStart=/home/YOUR_USERNAME/.nvm/versions/node/v22.0.0/bin/openclaw gateway start --foreground
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target
```
```
# Enable and start
sudo systemctl daemon-reload
sudo systemctl enable openclaw
sudo systemctl start openclaw

# Check status
sudo systemctl status openclaw
```
Replace YOUR_USERNAME with your actual username, and update the Node path in `ExecStart` to match your nvm installation; `which openclaw` prints the full path.
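With the nvm-installed Node 22 active, resolving that path takes one command:

```shell
# Print the full path to the openclaw binary for ExecStart.
# It will look like /home/<user>/.nvm/versions/node/v22.x.x/bin/openclaw.
which openclaw
```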
Issue 6: Ollama on ARM – Model Selection Matters
Ollama runs on ARM64, but the model size ceiling is much lower than on x86 machines:
- Pi 5 (8GB): Qwen 2.5 3B or Llama 3.2 3B work well. Anything 7B+ will be extremely slow.
- Pi 4 (8GB): Same, but slower. Qwen 2.5 1.5B is more practical for interactive use.
- Pi 4 (4GB): Use Anthropic/OpenAI API instead of local models. Not enough RAM for meaningful local inference.
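Once Ollama is installed, trying a model is two commands. The tags below are the ones Ollama's model library uses for these models, but check `ollama list` and the library page for current names:

```shell
# Pull a Pi-5-sized model and give it a quick smoke test
ollama pull qwen2.5:3b
ollama run qwen2.5:3b "Reply with one short sentence."
```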
For a Pi-hosted OpenClaw that's meant to be practical, the best architecture is: local Pi for the gateway and workspace, API model for inference. You get 24/7 uptime without keeping a power-hungry machine running, while still getting Claude- or GPT-quality responses.
Power tip: Set up OpenClaw with Anthropic API as the default model but with Ollama as a fallback for simple tasks. You get quality responses when it matters and free local inference for heartbeats, file operations, and low-stakes queries.
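What that looks like in configuration depends on your OpenClaw version. As a purely illustrative sketch (the keys and model IDs here are assumptions, not documented OpenClaw config), the idea is a quality default plus a cheap local fallback:

```json
{
  "models": {
    "default": "anthropic/claude-sonnet",
    "fallback": "ollama/qwen2.5:1.5b"
  },
  "routing": {
    "heartbeats": "fallback",
    "file_operations": "fallback"
  }
}
```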