OpenClaw + Ollama on Windows
Installation & Configuration Guide — Windows 10 / 11
Covers WSL2 setup, Ollama local model install, OpenClaw gateway, and auto-start configuration
OpenClaw is a local-first AI agent. It runs a gateway process on your machine that connects to a local LLM (via Ollama) or a cloud AI provider (Anthropic, OpenAI), then routes instructions through messaging apps like Telegram, WhatsApp, or Discord.
> **Important: OpenClaw on Windows Requires WSL2**
>
> OpenClaw does not run as a native Windows application. It depends on Unix/POSIX subsystem features (inotify file watchers, Unix sockets, POSIX process management) that are not available in native Windows environments. WSL2 (Windows Subsystem for Linux) solves this by running a genuine Linux kernel inside Windows, and it is the only officially supported path for Windows users.
>
> This guide installs: WSL2 + Ubuntu 24.04 → Node.js 22 → OpenClaw → Ollama (native Windows app) → auto-start configuration.
| Component | Minimum (Cloud API mode) | Recommended (Local LLM via Ollama) |
|---|---|---|
| CPU | Any modern x86-64, 4 cores | 8+ cores, modern Intel/AMD (Zen 3+) |
| RAM (System) | 8 GB (4 GB for WSL2 minimum) | 16–32 GB (assign 8–12 GB to WSL2) |
| Storage | 20 GB free (10 GB for WSL2 + OpenClaw) | 50–100 GB SSD (models are 5–20 GB each) |
| GPU (optional) | Not required — CPU-only Ollama works | NVIDIA GPU with 8+ GB VRAM for 14B models |
| GPU VRAM (if GPU used) | 8 GB VRAM → 7B–8B parameter models | 16 GB VRAM → 13B–14B models at full speed |
| Network | Broadband for API calls + model downloads | Broadband — models are 5–20 GB to download |
| Virtualization | VT-x or AMD-V enabled in BIOS/UEFI | Same — verify in BIOS before starting |
> **GPU Notes for Windows + Ollama**
>
> **NVIDIA GPUs:** Install current NVIDIA drivers before setting up WSL2. Ollama for Windows uses CUDA natively and runs outside WSL2, so no GPU passthrough is needed. NVIDIA compute capability 5.0+ is required (GTX 900 series or newer).
>
> **AMD GPUs:** ROCm support on Windows is limited. AMD Radeon users should install the ROCm package from the Ollama Windows release page. Performance may vary.
>
> **No GPU (CPU-only):** Ollama will use system RAM. A 14B-parameter model (Q4 quantized, ~9 GB) will run, but slowly: expect 2–5 tokens/second instead of 30–45 on a GPU. Functional, not fast.
Open Task Manager → Performance tab → CPU. Look for 'Virtualization: Enabled' in the lower right. If it shows Disabled, restart your PC, enter BIOS/UEFI, and enable Intel VT-x or AMD-V.
Open PowerShell as Administrator (right-click Start → 'Windows PowerShell (Admin)') and run:
```powershell
wsl --install
```
This installs WSL2, the Linux kernel, and Ubuntu 24.04 (the default and recommended distribution). Restart your computer when prompted.
After restarting, Ubuntu will open automatically and ask you to create a Linux username and password. This is separate from your Windows account. Choose a simple username (lowercase, no spaces) and remember the password — you will need it for sudo commands.
OpenClaw and Ollama use systemd for service management. Enable it now:
```bash
sudo tee /etc/wsl.conf > /dev/null << 'EOF'
[boot]
systemd=true

[automount]
enabled=true
options="metadata,umask=22,fmask=11"
EOF
```
Then from a PowerShell window (not inside Ubuntu):
```powershell
wsl --shutdown
```
Reopen Ubuntu from the Start menu. Verify systemd is running:
```bash
systemctl --version
```
By default WSL2 can consume up to 50% of system RAM. Create a config file to set a sensible limit. In Windows Explorer, navigate to your user folder (C:\Users\YourName\) and create a file named .wslconfig with Notepad:
```ini
# File: C:\Users\YourName\.wslconfig
[wsl2]
memory=8GB               # Adjust based on your total RAM
processors=4             # Number of CPU cores for WSL2
swap=4GB                 # Optional swap space
localhostForwarding=true
```
Apply the change:
```powershell
wsl --shutdown
```
Reopen Ubuntu. Recommended memory allocation: assign roughly half your total system RAM to WSL2, keeping at least 4 GB for Windows.
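The "roughly half, keep 4 GB for Windows" rule can be sketched as a tiny helper. This is an illustrative function of our own (not part of WSL or any tool in this guide); it prints a suggested `memory=` value for `.wslconfig` given your total RAM in GB:

```shell
# Hypothetical helper: suggest a .wslconfig memory= value from total system RAM (GB).
# Rule of thumb from above: roughly half of total RAM, keeping at least 4 GB for Windows.
suggest_wsl_memory() {
  local total_gb=$1
  local half=$(( total_gb / 2 ))
  local max=$(( total_gb - 4 ))   # leave at least 4 GB for Windows
  [ "$half" -gt "$max" ] && half=$max
  echo "${half}GB"
}

suggest_wsl_memory 16   # -> 8GB
suggest_wsl_memory 32   # -> 16GB
```

On a low-RAM machine the "keep 4 GB for Windows" cap wins: for 6 GB total it suggests `2GB` rather than `3GB`.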
Inside Ubuntu, run:
```bash
sudo apt update && sudo apt upgrade -y
```
OpenClaw requires Node.js version 22 or later. Run the following inside your Ubuntu (WSL2) terminal:
```bash
# Add the Node.js 22 package source
curl -fsSL https://deb.nodesource.com/setup_22.x | sudo -E bash -

# Install Node.js
sudo apt-get install -y nodejs

# Verify the version (must show v22.x.x or higher)
node --version
npm --version
```
> **If node --version shows below v22**
>
> Do not use the default Ubuntu apt packages for Node.js; they are outdated. The NodeSource setup script above installs the correct version directly from the Node.js distribution. If you already have an older Node.js installed, run `sudo apt-get remove nodejs` first, then repeat the steps above.
Fix npm global install permissions (prevents errors later):
```bash
mkdir -p ~/.npm-global
npm config set prefix '~/.npm-global'
echo 'export PATH=~/.npm-global/bin:$PATH' >> ~/.bashrc
source ~/.bashrc
```
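To confirm the PATH change took effect in your current shell, here is a small check. The `check_npm_path` name is ours (not an npm command); it simply looks for the global bin directory in a PATH string:

```shell
# Hypothetical check: is the npm global bin directory present in the given PATH string?
check_npm_path() {
  case ":$1:" in
    *":$HOME/.npm-global/bin:"*) echo "ok" ;;
    *) echo "missing (run: source ~/.bashrc)" ;;
  esac
}

# Usage: check_npm_path "$PATH"
```

If it prints "missing", the export line was not loaded into this shell; re-run `source ~/.bashrc` or open a new terminal.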
Inside Ubuntu (WSL2):
```bash
# Install OpenClaw globally
npm install -g openclaw@latest

# Verify installation
openclaw --version
```
The onboarding wizard creates your config directory (~/.openclaw/), prompts for your LLM API key, and installs the background daemon:
```bash
openclaw onboard --install-daemon
```
Answer the onboarding prompts as they appear.
> **Keep All Files on the Linux File System**
>
> Critical: store OpenClaw config and working files inside WSL2 (/home/yourusername/.openclaw/), NOT on the Windows drive (/mnt/c/...). File I/O on the Linux filesystem is 5–10x faster. Storing configs on /mnt/c/ causes slow startup and intermittent read errors.
```bash
# Start the gateway in the foreground (for first-run verification)
openclaw start

# You should see: Gateway running on http://127.0.0.1:18789
# Open this URL in your Windows browser to verify the dashboard.

# Stop with Ctrl+C, then start as a daemon:
openclaw start --daemon
```
The OpenClaw dashboard runs inside WSL2 but is accessible from your Windows browser at:
```
http://localhost:18789
```
If localhost does not resolve, WSL2's port forwarding may need to be configured. From PowerShell as Administrator:
```powershell
# Get WSL2's current IP address
$wslIp = (wsl hostname -I).Trim().Split(' ')[0]

# Forward the Windows port to WSL2
netsh interface portproxy add v4tov4 listenport=18789 listenaddress=0.0.0.0 connectport=18789 connectaddress=$wslIp
```
Note: The WSL2 IP address changes on each restart. See Phase 5 for a startup script that refreshes this automatically.
Create a privileged configuration with no cloud API keys. Inside Ubuntu:
```bash
# Copy the base config as a starting point
cp ~/.openclaw/openclaw.json ~/.openclaw/openclaw-privileged.json

# Edit the privileged config
nano ~/.openclaw/openclaw-privileged.json
```
In the privileged config, remove or blank any cloud API keys and set the primary model to your local Ollama model (configured in Phase 4). Switch configs when starting a privileged session:
```bash
openclaw start --config ~/.openclaw/openclaw-privileged.json --daemon
```
Unlike OpenClaw, Ollama installs as a native Windows application, not inside WSL2; this lets it use your Windows GPU drivers directly. Download and run the Windows installer from ollama.com/download.
Verify Ollama is running from a Windows terminal (cmd or PowerShell):
```powershell
curl http://localhost:11434
```
You should receive the response "Ollama is running". Ollama is also accessible from inside WSL2 at the same localhost:11434 address, thanks to WSL2's localhost forwarding.
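That check can be scripted. Below is a minimal sketch; the `check_ollama` function name is ours (not an Ollama or OpenClaw command), and it just greps for the banner text the Ollama server returns on its root endpoint:

```shell
# Sketch: return success if the Ollama root endpoint answers with its banner.
# "Ollama is running" is the plain-text response of GET / on the Ollama server.
check_ollama() {
  curl -s --max-time 3 "${1:-http://localhost:11434}" | grep -q "Ollama is running"
}

# Usage (from Windows or inside WSL2):
#   check_ollama && echo "reachable" || echo "not reachable"
```

Run it from both a Windows terminal and an Ubuntu terminal; if only the WSL2 side fails, revisit the `localhostForwarding=true` setting in `.wslconfig`.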
> **GPU Driver Requirement for Accelerated Inference**
>
> **NVIDIA:** Install the latest Game Ready or Studio driver from nvidia.com before installing Ollama. CUDA is bundled with Ollama; no separate CUDA install is needed.
>
> **AMD Radeon:** Download and extract the ROCm package (ollama-windows-amd64-rocm.zip) from the Ollama releases page on GitHub into the same directory as Ollama's CLI.
>
> **No GPU:** Ollama falls back to CPU inference automatically. Performance will be significantly slower but functional.
Open a Windows terminal (cmd, PowerShell, or Windows Terminal) and pull your preferred models:
```powershell
# Primary recommendation for 16+ GB RAM systems
ollama pull deepseek-r1:14b

# Secondary model (optional, ~9 GB)
ollama pull qwen3:14b

# Lighter option for 8–12 GB RAM systems
ollama pull deepseek-r1:7b
```
| Model | Disk / RAM | GPU VRAM Needed | Best For |
|---|---|---|---|
| deepseek-r1:7b | ~5 GB / 8 GB RAM | 8 GB VRAM (or CPU) | Light tasks, low-RAM systems |
| deepseek-r1:14b | ~9 GB / 16 GB RAM | 12–16 GB VRAM (or CPU) | Recommended: strong reasoning, balanced |
| qwen3:14b | ~9 GB / 16 GB RAM | 12–16 GB VRAM (or CPU) | Alternative to DeepSeek, good at code |
| deepseek-r1:32b | ~20 GB / 32 GB RAM | 24 GB VRAM (RTX 4090) | High-end systems only |
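The table reduces to a simple rule of thumb. Here is a hypothetical picker (ours, not an Ollama feature) that maps system RAM to a model tag from the table:

```shell
# Hypothetical picker based on the table above: choose a model tag by system RAM (GB).
pick_model() {
  local ram_gb=$1
  if   [ "$ram_gb" -ge 32 ]; then echo "deepseek-r1:32b"
  elif [ "$ram_gb" -ge 16 ]; then echo "deepseek-r1:14b"
  else                            echo "deepseek-r1:7b"
  fi
}

pick_model 16   # -> deepseek-r1:14b
```

It pairs naturally with the pull command, e.g. `ollama pull "$(pick_model 16)"`.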
Inside your Ubuntu terminal, confirm OpenClaw can reach Ollama:
```bash
curl http://localhost:11434
# Expected response: Ollama is running

# List downloaded models
curl http://localhost:11434/api/tags | python3 -m json.tool
```
Edit your OpenClaw config to use the local model as primary. Inside Ubuntu:
```bash
nano ~/.openclaw/openclaw.json
```
Set the model field to your chosen Ollama model. Example snippet:
```json
{
  "agents": {
    "defaults": {
      "model": {
        "primary": "ollama/deepseek-r1:14b"
      }
    }
  }
}
```
Save, then restart the gateway: `openclaw stop && openclaw start --daemon`
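Before wiring up messaging apps, you can confirm end-to-end inference by hitting Ollama's /api/generate endpoint directly. The sketch below wraps it in a function of our own (`ollama_generate` is not a real command); with `"stream": false` the API returns a single JSON object instead of a stream of chunks:

```shell
# Sketch: one-shot generation request against the local Ollama REST API.
# "stream": false makes /api/generate return one JSON object with a "response" field.
ollama_generate() {
  local model=$1 prompt=$2
  curl -s http://localhost:11434/api/generate \
    -d "{\"model\": \"$model\", \"prompt\": \"$prompt\", \"stream\": false}"
}

# Usage:
#   ollama_generate "deepseek-r1:14b" "Reply with one word: ready"
```

A slow first response is normal: Ollama loads the model into RAM/VRAM on the first request after startup.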
Ollama installs itself as a Windows startup application automatically. Verify in Task Manager → Startup Apps that 'Ollama' is listed and enabled.
WSL2 does not start automatically when Windows boots. The following Task Scheduler approach starts WSL2 and the OpenClaw daemon at login.
In PowerShell as Administrator, create and run a startup script:
```powershell
# Create the startup script at C:\Users\YourName\wsl-openclaw-start.ps1
$script = @'
wsl -u root -- bash -c 'systemctl start openclaw-gateway 2>/dev/null || true'
Start-Sleep -Seconds 5
$wslIp = (wsl hostname -I).Trim().Split(" ")[0]
netsh interface portproxy delete v4tov4 listenport=18789 listenaddress=0.0.0.0 2>$null
netsh interface portproxy add v4tov4 listenport=18789 listenaddress=0.0.0.0 connectport=18789 connectaddress=$wslIp
'@
$script | Out-File -FilePath "$env:USERPROFILE\wsl-openclaw-start.ps1" -Encoding UTF8
```
Register it in Task Scheduler to run at login:
```powershell
$action = New-ScheduledTaskAction -Execute "PowerShell.exe" `
  -Argument "-NonInteractive -WindowStyle Hidden -File $env:USERPROFILE\wsl-openclaw-start.ps1"
$trigger = New-ScheduledTaskTrigger -AtLogOn
$settings = New-ScheduledTaskSettingsSet -RunOnlyIfNetworkAvailable -StartWhenAvailable

Register-ScheduledTask -TaskName "OpenClaw WSL2 Startup" `
  -Action $action -Trigger $trigger -Settings $settings -RunLevel Highest
```
Never expose the OpenClaw gateway (port 18789) directly to the internet. Use Tailscale to access it securely from other devices.
> **Verify Ollama Stays on Localhost Only**
>
> After setup, confirm Ollama is not accessible from outside your machine. Ollama binds to localhost:11434 by default, but verify it. From another device on your network (not via Tailscale), attempt: `curl http://[your-windows-ip]:11434`. This should fail or time out. If it succeeds, Ollama is exposed; add a Windows Firewall rule to block inbound TCP port 11434 (for example, from an elevated prompt: `netsh advfirewall firewall add rule name="Block Ollama Inbound" dir=in action=block protocol=TCP localport=11434`).
| Problem | Fix |
|---|---|
| wsl --install fails | Ensure Virtualization is enabled in BIOS. Run PowerShell as Administrator. Check Windows version: Settings → System → About → OS Build must be 19041+. |
| WSL version is 1, not 2 | Run: wsl --set-default-version 2 and then: wsl --set-version Ubuntu 2 |
| Node.js shows version below 22 | Remove old Node: sudo apt-get remove nodejs. Then re-run the nodesource setup script from Phase 2. |
| openclaw: command not found | Run: source ~/.bashrc. If still missing: npm config get prefix — verify ~/.npm-global/bin is in your PATH. |
| Dashboard shows 'unauthorized' | Run: openclaw dashboard — this prints a tokenized URL. Use that URL with the ?token= parameter in your browser. |
| localhost:18789 doesn't load in browser | Run the portproxy commands from Step 3.4. Also check Windows Firewall — add an inbound rule allowing TCP 18789. |
| Ollama not reachable from WSL2 | Verify Ollama is running in Windows tray. Test: curl http://localhost:11434 from both Windows and WSL2. If WSL2 fails, check localhostForwarding=true is set in .wslconfig. |
| Model runs slowly / CPU only | Ollama may not be detecting your GPU. Check: ollama ps to see active models. Verify NVIDIA drivers are current. Check Device Manager for GPU errors. |
| WSL2 high memory usage | Adjust .wslconfig memory= setting. Run wsl --shutdown when not in use to release memory. Run: Optimize-VHD on the WSL2 .vhdx file to reclaim disk space. |
| DNS failures inside WSL2 | Run: sudo rm /etc/resolv.conf and then: echo 'nameserver 8.8.8.8' \| sudo tee /etc/resolv.conf. To make the fix survive restarts, also set generateResolvConf=false under [network] in /etc/wsl.conf. |
```powershell
wsl                   # Open Ubuntu terminal
wsl --shutdown        # Stop all WSL2 instances (frees memory)
wsl --list --verbose  # List distributions and WSL versions
wsl --update          # Update the WSL2 kernel
```
```powershell
ollama list                  # List downloaded models
ollama pull deepseek-r1:14b  # Download a model
ollama rm deepseek-r1:7b     # Remove a model
ollama ps                    # Show currently loaded models
ollama serve                 # Start the Ollama server manually (if not running)
```
```bash
openclaw start --daemon   # Start gateway as a background daemon
openclaw stop             # Stop the gateway
openclaw status           # Check gateway status
openclaw dashboard        # Open the dashboard (prints a token URL)
openclaw update           # Update to the latest version

# Local-only session:
openclaw start --config ~/.openclaw/openclaw-privileged.json --daemon
```
Personal reference guide. Verify all commands against current official documentation at docs.openclaw.ai and docs.ollama.com before executing on production systems.