Claw Planet reference · v0a · first cut
last updated 2026-05-07
§ 2.1 Hardware matrix

What hardware can run an OpenClaw Gateway, what you trade off at each tier, and what to actually buy if you're starting from zero.

Note on verification: Compiled from official runtime requirements (Node 24 recommended, 22.14+ minimum, WSL2 strongly recommended on Windows) plus public hardware specs. Power and thermal numbers for Pi cited from Raspberry Pi Foundation product pages. Not yet validated against Sush's actual boxes.

Read this first

You can run an OpenClaw Gateway on a wide spectrum of hardware — from a Raspberry Pi sitting on your desk to an Azure VM in a datacenter. The trade-offs are real but knowable. This page is the cheat-sheet I wished I had before picking where to start.

TL;DR: if you have a recent-ish Mac (M1 or newer), use it. If you don’t, a Raspberry Pi 5 with 8GB is the strongest “buy it and forget it” pick. Everything else is a variation on those two themes.

The matrix

| Hardware | RAM | OpenClaw fit | Voice features | Cost (USD-ish) | Best for |
|---|---|---|---|---|---|
| Mac M1 / M2 / M3 / M4 | 8GB+ | Excellent — runtime + local model possible | Yes (Voice Wake + Talk Mode native) | $1,000+ (you probably already own one) | Daily driver. Best feature parity, easiest install. |
| Linux desktop (mid-range) | 16GB | Excellent — runtime + local model | Limited (text channels only — no native voice node) | $400–$1,500 | If you live on Linux. WSL2 on Windows works the same way. |
| Intel NUC / mini PC | 16GB | Excellent — silent always-on home server | Limited | $400–$700 | “Set and forget” on your desk, fanless or quiet. |
| Raspberry Pi 5 (8GB) | 8GB | Good — runtime fine; small local model only | None (no voice node) | $80 + power supply + SD card ≈ $130 | Cheap always-on. Most people’s first OpenClaw box. |
| Raspberry Pi 4 (4GB or 8GB) | 4GB / 8GB | OK with caveats — see Pi page | None | $40–$70 | Already-have-one starter. 4GB is tight. |
| Old laptop (recycle) | 4GB+ | Depends on age — anything from ~2018+ likely fine | Maybe (Mac yes, others limited) | Free if you have one | Best value if you’ve got one collecting dust. |
| Azure VM (B-series small) | 2GB+ (B2s = 4GB) | Workable for text channels; no local model | None | ~$40–$100/month | Production-style deployment. See §2.5 Azure. |
| Azure Container Apps | configurable | Workable for stateless restarts; persistence needs care | None | scales with use | Serverless-ish path. See §2.5 Azure. |
| Old gaming desktop with GPU | 16GB+ and a GPU | Excellent if you want to run local models | Limited | already have it | If you’ve already invested in local LLMs (Ollama et al.), the Gateway can drive them locally. |
| NAS (Synology / TrueNAS / Unraid) | depends on model | Possible via Docker — see §2.7 Docker | None | already have it | Quiet 24/7 home server you already pay for in electricity. |

What “fit” actually means

The Gateway runtime runs on Node 24 (22.14+ minimum). It needs:

  • ~200–500MB of RAM at idle (one Gateway, one agent)
  • More if you load skills / MCP servers — each loaded skill or MCP adds context
  • Network connectivity to your model provider — unless you’re running local models, every message round-trip hits Anthropic / OpenAI / etc.
  • Persistent disk for sessions JSONL (~/.openclaw/agents/<agentId>/sessions/) and workspace files
  • The ability to keep a process alive: launchd on macOS, systemd on Linux, your platform’s equivalent on Windows
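The version floor from the first bullet is easy to check before anything else. A minimal POSIX-sh sketch — the `node_ok` helper is ours for illustration, not OpenClaw tooling; the numbers are the floors quoted above:

```shell
# node_ok VERSION -> succeeds if VERSION meets the Gateway floor (22.14+).
# Helper name and logic are illustrative, not part of any OpenClaw CLI.
node_ok() {
  v="${1#v}"                 # strip the leading "v" from e.g. v24.1.0
  major="${v%%.*}"
  rest="${v#*.}"
  minor="${rest%%.*}"
  [ "$major" -gt 22 ] || { [ "$major" -eq 22 ] && [ "$minor" -ge 14 ]; }
}

node_ok "v24.1.0" && echo "meets the floor"   # Node 24 is the recommended line
```

Run it against your real runtime with `node_ok "$(node --version)"`.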

So the runtime itself is light. What grows your hardware needs is what you connect to it:

| Workload | Extra RAM needed | Extra disk needed |
|---|---|---|
| Just the Gateway, text channels, API-based models | Negligible | Negligible |
| Adding 5+ MCP servers (filesystem, GitHub, browser) | +100–500MB | Negligible |
| Long sessions accumulating JSONL transcripts | Negligible | ~10MB / month / active session |
| Local model via Ollama (e.g. Llama 3.1 8B) | +6GB at minimum | +5–10GB per model |
| Larger local model (70B+) | +50GB+, and you need a decent GPU | +40GB+ per model |
| Voice features (TTS, ASR) | +500MB–1GB | +1GB |

So if you want API-only models + text channels, almost any modern computer works. If you want local models, plan for the model’s footprint, not the runtime’s.
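The table rows fold into a back-of-the-envelope RAM planner. A sketch under stated assumptions: the constants are midpoints of the ranges above, and the ~60MB-per-MCP figure is our own rough guess, not a benchmark:

```shell
# Rough RAM budget in MB for a planned Gateway box. All constants are
# midpoints of the ranges in the table above; ~60MB per MCP server is an
# assumption, not a measured number.
ram_budget_mb() {
  mcp_servers="$1"      # count of MCP servers you plan to load
  local_model_gb="$2"   # local model size in GB; 0 for hosted-API-only
  voice="$3"            # 1 if voice features (TTS/ASR), else 0
  base=350              # Gateway idle, midpoint of 200–500MB
  echo $(( base + mcp_servers * 60 + local_model_gb * 1024 + voice * 750 ))
}

ram_budget_mb 0 0 0   # hosted API, text only          -> 350
ram_budget_mb 5 8 0   # five MCPs + an 8B local model  -> 8842
```

A Pi 5’s 8GB clears the first case with room to spare and falls just short of the second — which is why the matrix says “small local model only” for the Pi.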

Why this matters

A common mistake is to over-spec the box because “AI = expensive hardware.” That’s true if you’re training models. It’s not true if you’re running an agent runtime that calls a hosted API. A $130 Pi 5 can do the same agent work as a $3,000 Mac, as long as the model is hosted.

The decision really comes down to:

  1. What’s always-on? (Pi / NUC / NAS / VM are 24/7. A laptop closes.)
  2. Do you want voice? (Mac/iOS/Android only — see §1.4 drawback #6.)
  3. Do you want a local model? (Then RAM and possibly GPU dominate; otherwise hosted API is fine.)
  4. What’s the threat surface? (Always-on internet-facing changes the security calculus — see §6.1 Self-hosting checklist.)

If you don’t have anything yet and want the cheapest credible path to “I have an OpenClaw and it works”:

  • Raspberry Pi 5, 8GB ($80)
  • 64GB or 128GB MicroSD card or NVMe (faster) ($15–$30)
  • Official 27W USB-C power supply ($12)
  • A case with a fan (the Pi 5 will throttle without one) ($15)
  • An ethernet cable to your router (faster + more reliable than wifi)

Total: ~$130. Boot Raspberry Pi OS Lite (or Ubuntu Server). Follow §2.6 Raspberry Pi. Connect the channels you actually use. Done.
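On Raspberry Pi OS Lite or Ubuntu Server, the “keep a process alive” requirement from §2.1 is one systemd unit. A sketch, assuming the Gateway starts with an `openclaw gateway` command installed for a `claw` user — the unit name, user, and ExecStart path are all assumptions about your install layout, not official packaging:

```ini
# /etc/systemd/system/openclaw-gateway.service  (path conventional; unit name ours)
[Unit]
Description=OpenClaw Gateway
After=network-online.target
Wants=network-online.target

[Service]
User=claw
ExecStart=/usr/local/bin/openclaw gateway
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Then `sudo systemctl daemon-reload && sudo systemctl enable --now openclaw-gateway`. The systemctl commands are standard; only the unit contents are guesses at your layout.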

What we are NOT going to claim

We have not benchmarked OpenClaw on every box in the table. RAM/CPU floors are inferred from Node runtime requirements, architectural docs, and community discussion. Specific timing numbers (cold-start latency, memory at rest) need actual runs to confirm. Sush will run the Gateway on his M2 MacBook and on a Pi 5, promoting the Mac and Pi columns to tested-by-sush; those two will get real numbers first.

Sources