# CCPA Compliance Analyzer — OPEN HACK 2026
## Solution Overview
This system analyzes natural-language business practice descriptions and determines whether they violate the California Consumer Privacy Act (CCPA). It uses a **RAG (Retrieval-Augmented Generation)** architecture:
1. **Knowledge Base**: The full CCPA statute (key sections 1798.100–1798.135) is pre-encoded as structured text in `ccpa_knowledge.py`, covering all major consumer rights and business obligations.
2. **LLM Inference**: [Llama 3.2 3B](https://ollama.com/library/llama3.2) (via Ollama) receives a system prompt containing the CCPA statute context + the user's business practice prompt and returns a JSON classification.
3. **Rule-Based Fallback**: A deterministic keyword/pattern matcher provides a reliable backup if the LLM is unavailable or returns unparseable output.
4. **FastAPI Server**: Exposes `GET /health` and `POST /analyze` endpoints on port 8000.
**Pipeline**: `POST /analyze` → LLM (Llama 3.2 3B via Ollama) with CCPA context → JSON parse → logic validation → `{"harmful": bool, "articles": [...]}`
---
## Docker Run Command
```bash
docker run --gpus all -p 8000:8000 -e HF_TOKEN=xxx yourusername/ccpa-compliance:latest
```
Without GPU (CPU-only mode, slower):
```bash
docker run -p 8000:8000 yourusername/ccpa-compliance:latest
```
---
## Environment Variables
| Variable | Required | Description |
|---|---|---|
| `HF_TOKEN` | No | Hugging Face access token (not needed for llama3.2 via Ollama) |
| `MODEL_NAME` | No | Ollama model to use (default: `llama3.2:3b`) |
| `OLLAMA_HOST` | No | Ollama server URL (default: `http://localhost:11434`) |
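These variables resolve to the documented defaults when unset; a minimal sketch of how the server might read them (the helper name is illustrative):

```python
import os

def load_config(env=None) -> dict:
    """Resolve runtime configuration from environment variables,
    falling back to the defaults documented above."""
    env = os.environ if env is None else env
    return {
        "model_name": env.get("MODEL_NAME", "llama3.2:3b"),
        "ollama_host": env.get("OLLAMA_HOST", "http://localhost:11434"),
    }
```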
---
## GPU Requirements
- **Recommended**: NVIDIA GPU with ≥4GB VRAM (RTX 3060 or better)
- **CPU-only fallback**: Supported, but inference will be significantly slower (~30-60s per request). The 120s timeout in the test script provides sufficient buffer.
- **Model size**: llama3.2:3b is ~2GB on disk, ~2GB VRAM when loaded
---
## Local Setup Instructions (Fallback — no Docker)
> Use only if Docker fails. Manual deployment incurs a score penalty.
**Requirements**: Linux, Python 3.11+, [Ollama](https://ollama.com)
```bash
# 1. Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
# 2. Start Ollama and pull model
ollama serve &
ollama pull llama3.2:3b
# 3. Install Python dependencies
pip install fastapi uvicorn httpx pydantic
# 4. Run the FastAPI server
cd /path/to/ccpa_project
uvicorn app:app --host 0.0.0.0 --port 8000
# 5. Verify it's running
curl http://localhost:8000/health
```
---
## API Usage Examples
### Health Check
```bash
curl http://localhost:8000/health
# Response: {"status": "ok"}
```
### Analyze — Violation Detected
```bash
curl -X POST http://localhost:8000/analyze \
-H "Content-Type: application/json" \
-d '{"prompt": "We are selling our customers personal information to data brokers without giving them a chance to opt out."}'
# Response:
# {"harmful": true, "articles": ["Section 1798.120", "Section 1798.100"]}
```
### Analyze — No Violation
```bash
curl -X POST http://localhost:8000/analyze \
-H "Content-Type: application/json" \
-d '{"prompt": "We provide a clear privacy policy and allow customers to opt out of data selling at any time."}'
# Response:
# {"harmful": false, "articles": []}
```
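The same calls can be made from Python using only the standard library. This is a hedged sketch: the `opener` parameter is an illustrative injection point for testing, not part of any real client in this repo:

```python
import json
from urllib import request

def analyze(prompt: str, url: str = "http://localhost:8000/analyze",
            opener=request.urlopen) -> dict:
    """Python equivalent of the curl examples above.
    `opener` is injectable so the call can be exercised without a live server."""
    body = json.dumps({"prompt": prompt}).encode()
    req = request.Request(url, data=body,
                          headers={"Content-Type": "application/json"})
    # 120s matches the organizer test script's timeout for CPU-only inference.
    with opener(req, timeout=120) as resp:
        return json.loads(resp.read())
```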
### Using docker-compose (with organizer test script)
```bash
docker compose up -d
python validate_format.py
docker compose down
```
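For reference, a minimal `docker-compose.yml` consistent with the commands above might look like the following. The service name and GPU reservation block are assumptions; the actual file ships in the repo:

```yaml
services:
  ccpa-compliance:
    image: yourusername/ccpa-compliance:latest
    ports:
      - "8000:8000"
    environment:
      MODEL_NAME: llama3.2:3b   # optional; this is already the default
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```

Drop the `deploy` block to run in CPU-only mode.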
---
## Project Structure
```
ccpa_project/
├── app.py # FastAPI server + LLM/rule-based analysis
├── ccpa_knowledge.py # CCPA statute knowledge base (RAG source)
├── requirements.txt # Python dependencies
├── Dockerfile # Container definition (pre-pulls llama3.2:3b)
├── start.sh # Container startup (starts Ollama + uvicorn)
├── docker-compose.yml # Compose config for easy orchestration
└── README.md # This file
```
---
## Notes on Accuracy
- The system cites sections based on CCPA statute analysis, not keyword matching alone.
- The LLM is instructed to identify **all** violated sections, not just the most obvious one.
- The rule-based fallback provides reliable detection for common violation patterns.
- Incorrect article citations result in zero marks per the scoring rubric, so the system is conservative: it only cites a section when there is clear evidence of a violation matching that section's specific requirements.
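To illustrate the fallback's pattern style, here is a small sketch. The real rule table in `app.py` is more extensive; these specific patterns and section mappings are examples only:

```python
import re

# Illustrative keyword patterns mapped to the CCPA sections they evidence:
# 1798.120 (right to opt out of sale), 1798.105 (right to deletion).
FALLBACK_RULES = [
    (re.compile(r"sell\w*\b.*personal information", re.I), "Section 1798.120"),
    (re.compile(r"without .*opt[ -]?out", re.I), "Section 1798.120"),
    (re.compile(r"(refuse|ignore|deny).*(deletion|delete) request", re.I), "Section 1798.105"),
]

def rule_based_analyze(prompt: str) -> dict:
    """Deterministic backup used when the LLM is unavailable or unparseable."""
    articles = []
    for pattern, section in FALLBACK_RULES:
        if pattern.search(prompt) and section not in articles:
            articles.append(section)
    return {"harmful": bool(articles), "articles": articles}
```

Because each pattern maps to exactly one section, the fallback inherits the same conservative behavior: a section is cited only when its trigger actually appears in the prompt.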