🛡️ Why Local AI?
When you use cloud AI assistants (Claude Desktop, ChatGPT), your documents and prompts are sent to third-party servers. Even though LawTasksAI processes documents locally, your AI assistant is still uploading data to Anthropic, OpenAI, or similar providers.
A local AI setup eliminates this entirely. Your documents, prompts, and AI processing all happen on your computer. No internet connection required (except for initial setup and license verification). Full attorney-client privilege protection.
What You're Building
By the end of this guide, you'll have:
- OpenClaw – A local AI assistant that runs on your computer
- Ollama – Software that runs large language models locally
- Llama 3.1 – A powerful, free, open-source AI model (8B, 70B, or 405B)
- LawTasksAI – Integrated and ready to process confidential documents privately
Time required: 30-60 minutes (depending on download speeds)
Cost: $0 (all software is free and open-source)
Hardware Requirements
⚠️ Important: You Need a Powerful Computer
Running AI models locally requires significant RAM. Check your computer specs before starting.
| Model | RAM Required | Performance | Recommended For |
|---|---|---|---|
| Llama 3.1 8B | 8GB | Basic | Simple tasks, testing |
| Llama 3.1 70B | 32GB+ | Excellent | Most legal work |
| Llama 3.1 405B | 64GB+ | Best-in-class | Complex analysis, large documents |
How to check your RAM:
- Windows: Press Win + Pause/Break, or right-click "This PC" → Properties
- Mac: Apple menu → About This Mac
- Linux: Run free -h in a terminal
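If you prefer a scriptable check, here is a minimal Python sketch (an illustration, not part of LawTasksAI) that reads total physical memory via POSIX `sysconf` on Linux/macOS and flags which model tier it supports:

```python
import os

def total_ram_gb() -> float:
    """Total physical RAM in GB (Linux/macOS, via POSIX sysconf)."""
    page_size = os.sysconf("SC_PAGE_SIZE")    # bytes per memory page
    page_count = os.sysconf("SC_PHYS_PAGES")  # number of physical pages
    return page_size * page_count / (1024 ** 3)

ram = total_ram_gb()
print(f"Total RAM: {ram:.1f} GB")
if ram >= 32:
    print("Enough for Llama 3.1 70B")
elif ram >= 8:
    print("Enough for Llama 3.1 8B (testing only)")
else:
    print("Consider cloud processing for now")
```

On Windows, use the Properties dialog described above instead; `SC_PHYS_PAGES` is not available there.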
💡 Tip: If you don't have enough RAM, you can still use LawTasksAI with cloud AI for non-confidential work, and rent a powerful cloud server (AWS, GCP) when you need local processing for sensitive documents. This is still more private than using Claude/ChatGPT.
Step-by-Step Setup
Install OpenClaw
OpenClaw is a local AI assistant that connects to your local AI models. It's like having Claude Desktop or ChatGPT, but everything runs on your computer.
For Windows:
- Go to openclaw.ai
- Click "Download for Windows"
- Run the installer
- Follow the setup wizard
For Mac:
- Go to openclaw.ai
- Click "Download for Mac"
- Open the .dmg file and drag OpenClaw to Applications
- Launch OpenClaw (you may need to allow it in System Preferences → Security)
For Linux:
# Install via npm (requires Node.js 18+)
npm install -g openclaw
# Or download the .deb/.rpm from openclaw.ai
Verify installation: Open OpenClaw. You should see a chat interface. Type /status to confirm it's running.
Install Ollama
Ollama is the software that runs AI models on your computer. Think of it like a local AI engine.
For Windows & Mac:
- Go to ollama.com
- Click "Download"
- Run the installer
- Ollama will start automatically as a background service
For Linux:
curl -fsSL https://ollama.com/install.sh | sh
Verify installation:
- Windows/Mac: You should see an Ollama icon in your system tray
- All platforms: Open terminal/command prompt and run:
ollama --version
Download the AI Model
Now you'll download the actual AI "brain" that will process your legal documents and research queries. We recommend Llama 3.1 โ it's free, powerful, and designed for professional use.
⚠️ Large Download Warning
Llama 3.1 70B: ~40GB download
Llama 3.1 405B: ~230GB download
This may take several hours on a typical internet connection. Do this overnight or during downtime.
Open your terminal/command prompt and run:
For 32GB+ RAM (Recommended):
ollama pull llama3.1:70b
For 64GB+ RAM (Best Performance):
ollama pull llama3.1:405b
For 8-16GB RAM (Testing Only):
ollama pull llama3.1:8b
You'll see a progress bar. Go get coffee. ☕
Verify the model works:
ollama run llama3.1:70b
# You should see a chat prompt. Type a test question:
>>> What is attorney-client privilege?
# Press Ctrl+D to exit when done
Configure OpenClaw to Use Your Local Model
Now we tell OpenClaw to use the local AI model instead of sending data to the cloud.
In OpenClaw:
- Type /config
- Look for the model setting
- Set it to: ollama/llama3.1:70b (or whichever model you downloaded)
Or edit the config file directly:
- Windows: %USERPROFILE%\.openclaw\config.yml
- Mac/Linux: ~/.openclaw/config.yml
Add or update this section:
model: ollama/llama3.1:70b
providers:
  ollama:
    baseUrl: http://localhost:11434
Restart OpenClaw for changes to take effect.
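To confirm Ollama itself is up before troubleshooting OpenClaw, you can check that something is accepting connections on Ollama's port. A minimal Python sketch, assuming the default localhost:11434 address from the config above:

```python
import socket

def ollama_reachable(host: str = "localhost", port: int = 11434) -> bool:
    """Return True if something accepts TCP connections on Ollama's default port."""
    try:
        with socket.create_connection((host, port), timeout=2):
            return True
    except OSError:
        return False

if ollama_reachable():
    print("Ollama is listening on localhost:11434")
else:
    print("Nothing on port 11434 - start Ollama first")
```

This only verifies that the port is open; it does not confirm which models are loaded.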
Test it: In OpenClaw, type: What is the statute of limitations for fraud in California?
The response should come from your local model (you may notice it's slightly slower than cloud AI, but everything is private).
Install LawTasksAI
Now you're ready to add LawTasksAI skills to your local setup.
In OpenClaw, type:
/skill install lawtasksai
You'll be prompted for your license key (starts with lt_). Enter it.
Don't have a license yet? Purchase credits at lawtasksai.com. You'll receive your license key via email instantly.
Verify installation:
/skill list
You should see lawtasksai in the list.
Test Your Private Setup
Let's make sure everything works and nothing is going to the cloud.
Test 1: Check Your Connection
In OpenClaw, type: /status
Look for the model line. It should show ollama/llama3.1:70b (or your chosen model).
Test 2: Disconnect Your Internet
- Turn off Wi-Fi or unplug ethernet
- In OpenClaw, ask: Summarize the attorney-client privilege doctrine
- You should still get a response (proving it's running locally)
- Reconnect to the internet
Test 3: Run a LawTasksAI Document Task
In OpenClaw, type: What legal document tasks do you have?
The AI should list available tasks. Try one that processes documents locally, like:
- Analyze this deposition transcript (attach a PDF)
- Summarize this contract (attach a Word doc)
- Review this discovery response (attach a document)
The document will be processed entirely on your machine. LawTasksAI will only send a license verification request (not the document contents).
🎉 You're Done!
Congratulations! You now have a fully private, Rule 1.6 compliant AI legal assistant. Your documents never leave your computer. Your prompts never go to the cloud. Everything stays local.
Troubleshooting
Problem: "Model not found" or "Connection refused"
Solution:
- Make sure Ollama is running (check system tray for Ollama icon)
- Verify the model downloaded correctly: ollama list
- Check that the OpenClaw config points to http://localhost:11434
- Restart the Ollama service
Problem: Responses are very slow
Solution:
- You may not have enough RAM. Check system monitor while running.
- Try a smaller model: ollama pull llama3.1:8b
- Close other applications to free up memory
- Consider upgrading your hardware or using a smaller model for routine tasks
Problem: "License key invalid" when using LawTasksAI
Solution:
- You need an internet connection for license verification (just not for AI processing)
- Check that your license key starts with lt_
- Verify your key at lawtasksai.com
- Email support@lawtasksai.com if issues persist
Problem: OpenClaw crashes or freezes
Solution:
- Your model may be too large for your RAM. Use llama3.1:8b instead.
- Check the logs: /logs in OpenClaw, or look in ~/.openclaw/logs/
- Restart OpenClaw and try a simpler prompt first
Performance Tips
Speed Up Responses
- Use GPU acceleration if you have a compatible NVIDIA GPU (requires CUDA setup)
- Quantized models: smaller, faster variants that sacrifice a bit of quality: ollama pull llama3.1:70b-q4
- Adjust the context window: a smaller context means faster responses and lower memory use, at the cost of how much text the model can consider at once
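If you call Ollama's local REST API directly (it exposes POST /api/generate on localhost:11434), the context window is set per request via the num_ctx option. A sketch of building that request body; the model name and prompt are illustrative:

```python
import json

def build_generate_payload(model: str, prompt: str, num_ctx: int = 4096) -> str:
    """JSON body for Ollama's POST /api/generate with a reduced context window."""
    return json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,                   # return one complete response
        "options": {"num_ctx": num_ctx},   # smaller context = faster, less RAM
    })

payload = build_generate_payload("llama3.1:70b", "Summarize attorney-client privilege.")
print(payload)
```

Sending this payload (e.g. with curl or Python's urllib) only works while Ollama is running; OpenClaw normally handles these requests for you.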
Save Disk Space
Models are stored in:
- Windows: C:\Users\YourName\.ollama\models
- Mac: ~/.ollama/models
- Linux: ~/.ollama/models
Delete unused models: ollama rm llama3.1:8b
Batch Processing
For large document review projects, consider:
- Processing documents overnight (schedule with OpenClaw cron)
- Breaking large PDFs into smaller chunks
- Using the 405B model for the most critical analysis, 70B for routine work
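Breaking a large document into chunks can be scripted. A minimal sketch that groups paragraphs into roughly word-limited chunks; the 1,500-word default is an arbitrary illustration you should tune to your model's context window:

```python
def chunk_text(text: str, max_words: int = 1500) -> list[str]:
    """Split text into chunks of at most max_words, breaking on paragraph
    boundaries. A single paragraph longer than max_words stays whole."""
    chunks, current, count = [], [], 0
    for para in text.split("\n\n"):
        words = len(para.split())
        if current and count + words > max_words:
            chunks.append("\n\n".join(current))
            current, count = [], 0
        current.append(para)
        count += words
    if current:
        chunks.append("\n\n".join(current))
    return chunks

doc = "\n\n".join(f"Paragraph {i} with some filler words." for i in range(100))
for i, chunk in enumerate(chunk_text(doc, max_words=50), start=1):
    print(f"Chunk {i}: {len(chunk.split())} words")
```

Each chunk can then be fed to the model as a separate prompt, with a final pass to merge the per-chunk summaries.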
Comparing Cloud vs. Local
| Feature | Cloud AI (Claude/ChatGPT) | Local AI (This Setup) |
|---|---|---|
| Privacy | ⚠️ Data sent to third parties | ✅ Everything stays local |
| Speed | ✅ Very fast | ⚠️ Depends on your hardware |
| Cost | $20-60/month | $0/month (after hardware) |
| Setup | ✅ 5 minutes | ⚠️ 30-60 minutes |
| Hardware Required | ✅ Any computer | ⚠️ 32GB+ RAM recommended |
| Rule 1.6 Compliance | ⚠️ Requires client consent | ✅ Fully compliant |
| Internet Required | ❌ Yes (always) | ⚠️ Only for license check |
| Quality | ✅ Best-in-class | ✅ Excellent (with 70B+) |
Upgrading to GPU Acceleration (Advanced)
If you have an NVIDIA GPU, you can dramatically speed up local AI processing:
- Install NVIDIA CUDA Toolkit: developer.nvidia.com/cuda-downloads
- Verify the GPU is detected: run nvidia-smi
- Ollama will automatically use the GPU if available
- Check GPU usage while a model is running: nvidia-smi in a separate terminal
Performance boost: 5-10x faster responses with a modern GPU.
Need Help?
If you get stuck:
- Email us: support@lawtasksai.com โ we'll walk you through it
- OpenClaw docs: docs.openclaw.ai
- Ollama docs: github.com/ollama/ollama
💼 Want Professional Setup?
We offer white-glove setup services for law firms. We'll remotely configure your systems, train your staff, and ensure everything works perfectly. Email support@lawtasksai.com for pricing.
What's Next?
Now that you have a fully private setup:
- Explore the task catalog: lawtasksai.com/skills
- Read the security documentation: lawtasksai.com/security
- Create a usage policy: Document when to use local vs. cloud AI
- Train your team: Show paralegals and associates how to use the system