# CCPA Compliance Analyzer — OPEN HACK 2026

## Solution Overview

This system analyzes natural-language business practice descriptions and determines whether they violate the California Consumer Privacy Act (CCPA). It uses a **RAG (Retrieval-Augmented Generation)** architecture:

1. **Knowledge Base**: The full CCPA statute (key sections 1798.100–1798.135) is pre-encoded as structured text in `ccpa_knowledge.py`, covering all major consumer rights and business obligations.
2. **LLM Inference**: [Llama 3.2 3B](https://ollama.com/library/llama3.2) (via Ollama) receives a system prompt containing the CCPA statute context plus the user's business practice description and returns a JSON classification.
3. **Rule-Based Fallback**: A deterministic keyword/pattern matcher provides a reliable backup if the LLM is unavailable or returns unparseable output.
4. **FastAPI Server**: Exposes `GET /health` and `POST /analyze` endpoints on port 8000.

**Pipeline**: `POST /analyze` → LLM (Llama 3.2 3B via Ollama) with CCPA context → JSON parse → logic validation → `{"harmful": bool, "articles": [...]}`
---

## Docker Run Command

```bash
docker run --gpus all -p 8000:8000 -e HF_TOKEN=xxx yourusername/ccpa-compliance:latest
```

(`HF_TOKEN` is optional here; see the Environment Variables table below.)

Without GPU (CPU-only mode, slower):

```bash
docker run -p 8000:8000 yourusername/ccpa-compliance:latest
```
---

## Environment Variables

| Variable | Required | Description |
|---|---|---|
| `HF_TOKEN` | No | Hugging Face access token (not needed for llama3.2 via Ollama) |
| `MODEL_NAME` | No | Ollama model to use (default: `llama3.2:3b`) |
| `OLLAMA_HOST` | No | Ollama server URL (default: `http://localhost:11434`) |
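The defaults in the table can be resolved at startup with `os.getenv`. A minimal sketch; the variable names match the table, but `load_settings` is an illustrative helper, not necessarily what `app.py` defines:

```python
import os

def load_settings() -> dict:
    """Resolve runtime configuration from the environment,
    falling back to the documented defaults."""
    return {
        "model_name": os.getenv("MODEL_NAME", "llama3.2:3b"),
        "ollama_host": os.getenv("OLLAMA_HOST", "http://localhost:11434"),
        # Optional: only needed if you swap in a Hugging Face-hosted model.
        "hf_token": os.getenv("HF_TOKEN"),
    }
```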
---

## GPU Requirements

- **Recommended**: NVIDIA GPU with ≥4GB VRAM (RTX 3060 or better)
- **CPU-only fallback**: Supported, but inference is significantly slower (~30–60s per request). The 120s timeout in the organizer test script leaves sufficient headroom.
- **Model size**: llama3.2:3b is ~2GB on disk and uses ~2GB of VRAM when loaded
---

## Local Setup Instructions (Fallback — no Docker)

> Use only if Docker fails. Manual deployment incurs a score penalty.

**Requirements**: Linux, Python 3.11+, [Ollama](https://ollama.com)

```bash
# 1. Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# 2. Start Ollama and pull the model
ollama serve &
ollama pull llama3.2:3b

# 3. Install Python dependencies
pip install fastapi uvicorn httpx pydantic

# 4. Run the FastAPI server
cd /path/to/ccpa_project
uvicorn app:app --host 0.0.0.0 --port 8000

# 5. Verify it's running
curl http://localhost:8000/health
```
---

## API Usage Examples

### Health Check

```bash
curl http://localhost:8000/health
# Response: {"status": "ok"}
```

### Analyze — Violation Detected

```bash
curl -X POST http://localhost:8000/analyze \
  -H "Content-Type: application/json" \
  -d '{"prompt": "We are selling our customers personal information to data brokers without giving them a chance to opt out."}'
# Response:
# {"harmful": true, "articles": ["Section 1798.120", "Section 1798.100"]}
```
### Analyze — No Violation

```bash
curl -X POST http://localhost:8000/analyze \
  -H "Content-Type: application/json" \
  -d '{"prompt": "We provide a clear privacy policy and allow customers to opt out of data selling at any time."}'
# Response:
# {"harmful": false, "articles": []}
```

### Using docker-compose (with organizer test script)

```bash
docker compose up -d
python validate_format.py
docker compose down
```
---

## Project Structure

```
ccpa_project/
├── app.py               # FastAPI server + LLM/rule-based analysis
├── ccpa_knowledge.py    # CCPA statute knowledge base (RAG source)
├── requirements.txt     # Python dependencies
├── Dockerfile           # Container definition (pre-pulls llama3.2:3b)
├── start.sh             # Container startup (starts Ollama + uvicorn)
├── docker-compose.yml   # Compose config for easy orchestration
└── README.md            # This file
```
---

## Notes on Accuracy

- The system cites sections based on CCPA statute analysis, not keyword matching alone.
- The LLM is instructed to identify **all** violated sections, not just the most obvious one.
- The rule-based fallback provides reliable detection of common violation patterns.
- Incorrect article citations score zero per the rubric, so the system is conservative: it cites a section only when there is clear evidence of a violation matching that section's specific requirements.
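The rule-based fallback described above can be sketched as a keyword/pattern matcher. This is a minimal illustration only; the patterns and section mapping below are examples, not the project's actual rule set in `app.py`:

```python
import re

# Illustrative pattern -> section mapping; each rule fires only on
# clear violation language, matching the conservative citation policy.
FALLBACK_RULES = [
    (re.compile(r"sell(ing)?\b.*\b(personal|customer) (information|data)\b"
                r".*\bwithout\b.*\bopt[- ]?out", re.I),
     "Section 1798.120"),
    (re.compile(r"\bno privacy (policy|notice)\b", re.I),
     "Section 1798.100"),
]

def rule_based_analyze(prompt: str) -> dict:
    """Deterministic backup used when the LLM is unavailable or
    returns unparseable output."""
    articles = [section for pattern, section in FALLBACK_RULES
                if pattern.search(prompt)]
    return {"harmful": bool(articles), "articles": articles}
```

Because the matcher is deterministic, it also serves as a sanity check during development: its verdicts on the known test prompts can be compared against the LLM path.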