✅ Full Rule 1.6 Compliance

Local AI Setup Guide

Run LawTasksAI with complete confidentiality: everything stays on your computer, and nothing goes to the cloud.

๐Ÿ›ก๏ธ Why Local AI?

When you use cloud AI assistants (Claude Desktop, ChatGPT), your documents and prompts are sent to third-party servers. Even though LawTasksAI processes documents locally, your AI assistant is still uploading data to Anthropic, OpenAI, or similar providers.

A local AI setup eliminates this entirely. Your documents, prompts, and AI processing all happen on your computer. No internet connection required (except for initial setup and license verification). Full attorney-client privilege protection.

What You're Building

By the end of this guide, you'll have:

- OpenClaw installed as your local AI chat interface
- Ollama running AI models entirely on your own hardware
- A Llama 3.1 model downloaded and connected to OpenClaw
- LawTasksAI skills installed, licensed, and verified

Time required: 30-60 minutes (depending on download speeds)

Cost: $0 (all software is free and open-source)

Hardware Requirements

โš ๏ธ Important: You Need a Powerful Computer

Running AI models locally requires significant RAM. Check your computer specs before starting.

| Model | RAM Required | Performance | Recommended For |
|-------|--------------|-------------|-----------------|
| Llama 3.1 8B | 8GB | Basic | Simple tasks, testing |
| Llama 3.1 70B | 32GB+ | Excellent | Most legal work |
| Llama 3.1 405B | 64GB+ | Best-in-class | Complex analysis, large documents |

How to check your RAM:

- Windows: Press Ctrl+Shift+Esc to open Task Manager, then check the Performance → Memory tab
- Mac: Apple menu → About This Mac
- Linux: Run free -h in a terminal

💡 Tip: If you don't have enough RAM, you can still use LawTasksAI with cloud AI for non-confidential work, and rent a powerful cloud server (AWS, GCP) when you need local processing for sensitive documents. This is still more private than using Claude/ChatGPT.
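The RAM table above can be turned into a quick self-check. Here is a minimal sketch for Linux only (it reads /proc/meminfo; on a Mac you would query sysctl hw.memsize instead):

```shell
#!/bin/sh
# Suggest a Llama 3.1 tag based on total system RAM (Linux).
# /proc/meminfo reports MemTotal in kB; 1048576 kB = 1 GB.
ram_gb=$(awk '/MemTotal/ {printf "%d", $2/1048576}' /proc/meminfo)
echo "Detected ${ram_gb}GB RAM"
if [ "$ram_gb" -ge 64 ]; then
  echo "llama3.1:405b"   # 64GB+: best performance
elif [ "$ram_gb" -ge 32 ]; then
  echo "llama3.1:70b"    # 32GB+: recommended for most legal work
else
  echo "llama3.1:8b"     # under 32GB: testing only
fi
```

The thresholds mirror the table; adjust them if you run other large applications alongside the model.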

Step-by-Step Setup

Step 1: Install OpenClaw

OpenClaw is a local AI assistant that connects to your local AI models. It's like having Claude Desktop or ChatGPT, but everything runs on your computer.

For Windows:

  1. Go to openclaw.ai
  2. Click "Download for Windows"
  3. Run the installer
  4. Follow the setup wizard

For Mac:

  1. Go to openclaw.ai
  2. Click "Download for Mac"
  3. Open the .dmg file and drag OpenClaw to Applications
  4. Launch OpenClaw (you may need to allow it in System Preferences → Security)

For Linux:

# Install via npm (requires Node.js 18+)
npm install -g openclaw

# Or download the .deb/.rpm from openclaw.ai

Verify installation: Open OpenClaw. You should see a chat interface. Type /status to confirm it's running.

Step 2: Install Ollama

Ollama is the software that runs AI models on your computer. Think of it like a local AI engine.

For Windows & Mac:

  1. Go to ollama.com
  2. Click "Download"
  3. Run the installer
  4. Ollama will start automatically as a background service

For Linux:

curl -fsSL https://ollama.com/install.sh | sh

Verify installation: Open a terminal and run ollama --version. You should see a version number printed.

Step 3: Download the AI Model

Now you'll download the actual AI "brain" that will process your legal documents and research queries. We recommend Llama 3.1: it's free, powerful, and designed for professional use.

โš ๏ธ Large Download Warning

Llama 3.1 70B: ~40GB download
Llama 3.1 405B: ~230GB download

This may take several hours on a typical internet connection. Do this overnight or during downtime.
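Before starting a 40GB-230GB download, it's worth confirming the target drive has room. A GNU coreutils sketch, assuming models will live on the same filesystem as your home directory:

```shell
# Show free space, in whole gigabytes, on the filesystem holding $HOME.
# -BG scales to gigabytes; --output=avail prints only the available column.
df -BG --output=avail "$HOME" | tail -1
```

Compare the number against the model sizes above and leave some headroom for the rest of your system.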

Open your terminal/command prompt and run:

For 32GB+ RAM (Recommended):

ollama pull llama3.1:70b

For 64GB+ RAM (Best Performance):

ollama pull llama3.1:405b

For 8-16GB RAM (Testing Only):

ollama pull llama3.1:8b

You'll see a progress bar. Go get coffee. ☕

Verify the model works:

ollama run llama3.1:70b

# You should see a chat prompt. Type a test question:
>>> What is attorney-client privilege?

# Press Ctrl+D to exit when done
Step 4: Configure OpenClaw to Use Your Local Model

Now we tell OpenClaw to use the local AI model instead of sending data to the cloud.

In OpenClaw:

  1. Type /config
  2. Look for the model setting
  3. Set it to: ollama/llama3.1:70b (or whichever model you downloaded)

Or edit the config file directly:

Add or update this section:

model: ollama/llama3.1:70b
providers:
  ollama:
    baseUrl: http://localhost:11434

Restart OpenClaw for changes to take effect.
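Before testing inside OpenClaw, you can confirm that something is actually listening on Ollama's default port (11434, matching the baseUrl above). This sketch uses bash's /dev/tcp feature, so it needs bash rather than plain sh:

```shell
# Probe localhost:11434; the redirection succeeds only if a server accepts the connection.
if (exec 3<>/dev/tcp/localhost/11434) 2>/dev/null; then
  echo "Ollama is listening on 11434"
else
  echo "Nothing on 11434 - is Ollama running?"
fi
```

If nothing is listening, restart the Ollama service before troubleshooting OpenClaw itself.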

Test it: In OpenClaw, type: What is the statute of limitations for fraud in California?

The response should come from your local model (you may notice it's slightly slower than cloud AI, but everything is private).

Step 5: Install LawTasksAI

Now you're ready to add LawTasksAI skills to your local setup.

In OpenClaw, type:

/skill install lawtasksai

You'll be prompted for your license key (starts with lt_). Enter it.

Don't have a license yet? Purchase credits at lawtasksai.com. You'll receive your license key via email instantly.

Verify installation:

/skill list

You should see lawtasksai in the list.

Step 6: Test Your Private Setup

Let's make sure everything works and nothing is going to the cloud.

Test 1: Check Your Connection

In OpenClaw, type: /status

Look for the model line. It should show ollama/llama3.1:70b (or your chosen model).

Test 2: Disconnect Your Internet

  1. Turn off Wi-Fi or unplug ethernet
  2. In OpenClaw, ask: Summarize the attorney-client privilege doctrine
  3. You should still get a response (proving it's running locally)
  4. Reconnect to the internet
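To confirm you're truly offline before running the test, you can check whether the machine still has a default network route. This is a Linux-only sketch using the ip tool:

```shell
# Report whether this machine has a default route (Linux).
# No default route is a good sign you are genuinely disconnected.
if ip route 2>/dev/null | grep -q '^default'; then
  echo "online (default route present)"
else
  echo "offline (no default route)"
fi
```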

Test 3: Run a LawTasksAI Document Task

In OpenClaw, type: What legal document tasks do you have?

The AI should list available tasks. Pick one that processes documents locally and try it on a non-sensitive sample document first.

The document will be processed entirely on your machine. LawTasksAI will only send a license verification request (not the document contents).

🎉 You're Done!

Congratulations! You now have a fully private, Rule 1.6-compliant AI legal assistant. Your documents never leave your computer. Your prompts never go to the cloud. Everything stays local.

Troubleshooting

Problem: "Model not found" or "Connection refused"

Solution:

- Make sure Ollama is running (it starts automatically on Windows/Mac; on Linux, run ollama serve)
- Run ollama list to confirm your model finished downloading
- Check that the model name in OpenClaw matches exactly (e.g. ollama/llama3.1:70b)
- Confirm the baseUrl is http://localhost:11434

Problem: Responses are very slow

Solution:

- Close memory-hungry applications (browsers, video conferencing)
- Try a smaller model, such as llama3.1:8b
- Consider GPU acceleration (see "Upgrading to GPU Acceleration" below)

Problem: "License key invalid" when using LawTasksAI

Solution:

- Double-check that the key starts with lt_ and has no extra spaces
- License verification needs a brief internet connection; reconnect and retry

Problem: OpenClaw crashes or freezes

Solution:

- Restart OpenClaw
- Make sure you have enough free RAM for your chosen model (see the table above)
- Update OpenClaw to the latest version

Performance Tips

Speed Up Responses

- Use the smallest model that meets your quality needs
- Keep Ollama running so the model stays loaded in memory
- Close other memory-heavy applications before long tasks

Save Disk Space

Models are stored in:

- Mac/Linux: ~/.ollama/models
- Windows: C:\Users\<username>\.ollama\models

Delete unused models: ollama rm llama3.1:8b
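To see how much space your downloaded models are using, you can total up the default Ollama model directory on Mac/Linux (with a fallback message if nothing has been downloaded yet):

```shell
# Total size of all downloaded models; prints a fallback if the directory doesn't exist.
du -sh "$HOME/.ollama/models" 2>/dev/null || echo "No models directory yet"
```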

Batch Processing

For large document review projects, consider:

- Downloading the largest model your hardware can run before the project starts
- Queuing documents and running tasks overnight, when slow responses don't matter
- Keeping source documents in a single folder so tasks can work through them in order

Comparing Cloud vs. Local

| Feature | Cloud AI (Claude/ChatGPT) | Local AI (This Setup) |
|---------|---------------------------|------------------------|
| Privacy | ⚠️ Data sent to third party | ✅ Everything stays local |
| Speed | ✅ Very fast | ⚠️ Depends on your hardware |
| Cost | $20-60/month | $0/month (after hardware) |
| Setup | ✅ 5 minutes | ⚠️ 30-60 minutes |
| Hardware Required | ✅ Any computer | ⚠️ 32GB+ RAM recommended |
| Rule 1.6 Compliance | ⚠️ Requires client consent | ✅ Fully compliant |
| Internet Required | ✅ Yes (always) | ⚠️ Only for license check |
| Quality | ✅ Best-in-class | ✅ Excellent (with 70B+) |

Upgrading to GPU Acceleration (Advanced)

If you have an NVIDIA GPU, you can dramatically speed up local AI processing:

  1. Install NVIDIA CUDA Toolkit: developer.nvidia.com/cuda-downloads
  2. Verify GPU is detected: nvidia-smi
  3. Ollama will automatically use GPU if available
  4. Check GPU usage while running: nvidia-smi in a separate terminal

Performance boost: 5-10x faster responses with a modern GPU.
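Before assuming GPU acceleration is available, it's worth checking for the NVIDIA tooling safely (this won't error out on machines without a GPU):

```shell
# Print the GPU name if nvidia-smi is installed, otherwise a fallback message.
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=name --format=csv,noheader
else
  echo "nvidia-smi not found - running on CPU"
fi
```

If the fallback prints, install the CUDA Toolkit and drivers first; Ollama picks up the GPU automatically once nvidia-smi works.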

Need Help?

If you get stuck:

- Re-run the "Verify installation" step for each stage above to find where things break
- Email support@lawtasksai.com and include the output of /status

💼 Want Professional Setup?

We offer white-glove setup services for law firms. We'll remotely configure your systems, train your staff, and ensure everything works perfectly. Email support@lawtasksai.com for pricing.

What's Next?

Now that you have a fully private setup:

Back to LawTasksAI Home | Security Overview