NVIDIA Publishes Official NemoClaw Repo: Run OpenClaw Securely Inside NVIDIA OpenShell
NVIDIA just published NVIDIA/NemoClaw to its official GitHub — an open-source reference stack that makes it simpler to run always-on OpenClaw assistants securely inside the NVIDIA OpenShell runtime.
This is NVIDIA's formal entry into the OpenClaw deployment infrastructure space, part of their broader NVIDIA Agent Toolkit.
What NemoClaw Is
NemoClaw wraps OpenClaw in a hardened deployment blueprint that adds:
- Guided onboarding — `nemoclaw onboard` sets up the full stack
- Hardened security blueprint — sandboxed agent execution via OpenShell runtime
- State management — managed lifecycle for long-running agents
- OpenShell-managed channel messaging — channel access routed through the secure runtime
- Routed inference — managed model routing inside the sandbox
- Layered protection — additional security controls on top of OpenClaw's existing model
The sandbox image is ~2.4 GB compressed and runs via Docker/k3s with the OpenShell gateway. The whole stack is managed through `nemoclaw onboard` rather than raw OpenClaw commands; NVIDIA explicitly warns against mixing OpenShell self-update commands with NemoClaw's managed environment.
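Before handing control to the managed onboarding, it is worth confirming the host is ready. A minimal preflight sketch in Python: `nemoclaw onboard` is the command the repo documents, but the check logic and the `preflight` function here are hypothetical illustrations, not part of NemoClaw:

```python
import shutil
import subprocess

def preflight() -> list[str]:
    """Hypothetical readiness checks before running `nemoclaw onboard`.

    NemoClaw runs its sandbox via Docker/k3s, so Docker must be on PATH,
    and the nemoclaw CLI itself must be installed.
    """
    issues = []
    if shutil.which("docker") is None:
        issues.append("Docker not found on PATH (required for the sandbox image)")
    if shutil.which("nemoclaw") is None:
        issues.append("nemoclaw CLI not found; install the NemoClaw stack first")
    return issues

problems = preflight()
if problems:
    for p in problems:
        print("preflight:", p)
else:
    # Use only the managed entry point; NVIDIA warns against mixing raw
    # OpenShell self-update commands with NemoClaw's managed environment.
    subprocess.run(["nemoclaw", "onboard"], check=True)
```

The key design point is the last comment: once onboarded, the stack expects to own its own lifecycle, so everything should flow through the managed command.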
Supported Platforms
| OS | Runtime | Status |
|---|---|---|
| Linux | Docker | ✅ Tested (primary path) |
| macOS (Apple Silicon) | Colima, Docker Desktop | ✅ Tested with limitations |
| DGX Spark | Docker | ✅ Tested |
| Windows WSL2 | Docker Desktop (WSL backend) | ✅ Tested with limitations |
WSL2 support is notable — NemoClaw is explicitly tested on the same environment many self-hosted Windows users run OpenClaw on today.
Hardware Requirements
| Resource | Minimum | Recommended |
|---|---|---|
| CPU | 4 vCPU | 4+ vCPU |
| RAM | 8 GB | 16 GB |
| Disk | 20 GB free | 40 GB free |
NVIDIA specifically warns that machines with less than 8 GB of RAM may hit OOM during image push, due to the combined memory usage of the Docker daemon, k3s, and the OpenShell gateway. Adding 8 GB of swap works around this, at the cost of slower performance.
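The requirements table translates directly into a self-check. In this sketch the thresholds come straight from the table above, while the probing code (Linux-style `os.sysconf` for RAM) is a generic illustration and not part of NemoClaw:

```python
import os
import shutil

# Documented minimums: 4 vCPU, 8 GB RAM, 20 GB free disk.
MIN_VCPU, MIN_RAM_GB, MIN_DISK_GB = 4, 8, 20

def meets_minimums(path: str = "/") -> dict[str, bool]:
    """Report which documented minimums this machine satisfies (Linux)."""
    ram_gb = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 1024**3
    free_gb = shutil.disk_usage(path).free / 1024**3
    return {
        "cpu": (os.cpu_count() or 0) >= MIN_VCPU,
        # Below 8 GB, Docker daemon + k3s + OpenShell gateway risk OOM.
        "ram": ram_gb >= MIN_RAM_GB,
        "disk": free_gb >= MIN_DISK_GB,
    }

print(meets_minimums())
```

If the RAM check fails, the documented workaround is to add 8 GB of swap before onboarding.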
Why This Matters
NVIDIA publishing an official OpenClaw integration under their own GitHub organization — alongside DGX Spark hardware support — is a significant legitimacy signal. This isn't a community fork or a third-party wrapper; it's NVIDIA's Agent Toolkit making OpenClaw a first-class deployment target.
The timing is notable too. This drops the same week as SecurityScorecard's report on 28K+ exposed OpenClaw instances with weak security configurations. NVIDIA's response: a hardened reference stack that addresses exactly those exposure patterns — sandboxed execution, managed inference routing, layered protection.
Alpha Status — Who Should Use It
NemoClaw is currently in early preview (available since March 16, 2026). NVIDIA is clear: "This software is not production-ready. Interfaces, APIs, and behavior may change without notice."
It's worth experimenting with if you:
- Run OpenClaw on NVIDIA hardware (especially DGX or high-end GPU setups)
- Have security requirements that standard OpenClaw configuration doesn't fully address
- Want an early read on the direction of NVIDIA's Agent Toolkit
- Are evaluating enterprise-grade deployment options
For most individual and small-business operators, standard OpenClaw with proper configuration (hardened gateway, scoped permissions, Cloudflare Tunnel for remote access) addresses the security concerns without the Docker/k3s overhead. NemoClaw is a better fit for team or enterprise deployments where managed sandboxing and audit trails are requirements.
Either way — the fact that NVIDIA is formally investing in OpenClaw deployment infrastructure tells you everything about where the market is heading.
Get a Security-First OpenClaw Setup — $49 Audit or $99 Full →