
# AgentIC — IP Safety, Data Privacy & Expansion Plan

## 1. Is Your IP Secure?

Short answer: yes, by design — with caveats worth understanding.

Your chip designs (RTL, specs, build prompts) pass through external LLM APIs during the build. Here is precisely what leaves the system and what never does.


## 2. What Data Leaves AgentIC

| Data | Where it goes | Can you control it? |
|---|---|---|
| Design description + RTL prompts | NVIDIA NIM API (`integrate.api.nvidia.com`) | Yes — route to local Ollama instead |
| Spec + RTL during agent reasoning | Same NVIDIA endpoint | Yes — the BYOK plan lets you use your own hosted API |
| Build logs | Stays on server only | N/A — never sent externally |
| VCD waveforms | Stays on server only | N/A |
| GDS layout files | Stays on server only | N/A |
| Training JSONL export | Stays on local disk (`training/agentic_sft_data.jsonl`) | N/A — never uploaded |
| API keys | HuggingFace Space Secrets (encrypted at rest by HF) | Yes — rotate at any time |
| User profiles + build counts | Supabase (only when auth is enabled) | Yes — opt-in; use your own Supabase project |

Nothing is ever sold, shared, or used to train third-party models. The training JSONL is written locally on your machine or on HF Space persistent storage — it stays yours.


## 3. The LLM API Risk — And How to Eliminate It

When a user submits "Design a 32-bit RISC-V processor with branch prediction", that prompt goes to NVIDIA's inference endpoint. NVIDIA's API Terms of Service state they do not use API inputs to train their models.

If your design is confidential, use the BYOK plan:

- Set `plan = byok` for your account in Supabase
- Store your own API key (pointing to a self-hosted vLLM/Ollama endpoint)
- The build runs entirely on-premises — nothing leaves your network

Self-hosted LLM options (zero data egress):

```bash
# Ollama — local GPU inference
LLM_BASE_URL=http://localhost:11434
LLM_MODEL=ollama/llama3.3:70b

# vLLM on your own server
LLM_BASE_URL=http://YOUR_SERVER:8000/v1
LLM_MODEL=meta-llama/Llama-3.3-70B-Instruct
```
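The routing decision behind BYOK could be sketched as below. This is a hedged illustration, not AgentIC's actual code: the `resolve_llm_endpoint` helper, the default URLs, and the plan names are assumptions.

```python
import os

NVIDIA_NIM_URL = "https://integrate.api.nvidia.com/v1"  # default cloud endpoint

def resolve_llm_endpoint(plan: str) -> str:
    """Hypothetical helper: BYOK builds route to the self-hosted endpoint."""
    if plan == "byok":
        # LLM_BASE_URL points at your own Ollama/vLLM server (see above)
        return os.environ.get("LLM_BASE_URL", "http://localhost:11434")
    return NVIDIA_NIM_URL
```

With `plan = "byok"` and `LLM_BASE_URL` set, every LLM call targets your own network; all other plans fall through to the NVIDIA endpoint.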

## 4. What Stays Completely Private

These never leave the system under any configuration:

- EDA tool execution — iverilog, verilator, yosys, sby run 100% locally
- VCD simulation waveforms — generated and stored locally
- GDS chip layouts — OpenLane output, stays on disk
- Training JSONL — local fine-tuning data, never uploaded
- Build logs — streamed to your browser via SSE, never stored externally
- Supabase data — your own Supabase project, your data

## 5. Expansion Plan

### Phase 1 — Platform Foundation (Current)

- Multi-agent RTL build pipeline (RTL → Verification → Formal → Coverage → GDSII)
- Human-in-the-loop approval at each stage
- Supabase auth + plan tiers (Free / Starter / Pro / BYOK)
- Razorpay billing with webhook verification
- HuggingFace Spaces deployment (Docker)
- Training data export pipeline (local JSONL)
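Webhook verification in the list above normally means recomputing Razorpay's HMAC-SHA256 signature over the raw request body and comparing it in constant time. The sketch below follows that documented scheme; the function name is an assumption.

```python
import hashlib
import hmac

def verify_razorpay_webhook(body: bytes, signature: str, webhook_secret: str) -> bool:
    """Recompute HMAC-SHA256 of the raw body and compare in constant time."""
    expected = hmac.new(webhook_secret.encode(), body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

Constant-time comparison (`hmac.compare_digest`) matters here: a plain `==` would leak timing information about how many leading characters of a forged signature are correct.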

### Phase 2 — Scale & Monetize (Q1 2026)

- Frontend auth UI — login/signup pages using Supabase Auth JS SDK
- Pricing page — Razorpay checkout integration in React
- User dashboard — build history, plan status, upgrade prompts
- BYOK key management UI — set/update encrypted API key from browser
- Team accounts — shared plan, shared build quota

### Phase 3 — IP Hardening (Q2 2026)

- On-premise mode — single Docker Compose stack with bundled local LLM (Ollama)
- Air-gapped deployment guide — no internet required, all EDA tools + LLM in one stack
- Design vault — encrypted storage for completed RTL/GDS with per-user S3-compatible bucket
- Differential privacy on training export — strip user identifiers from JSONL before fine-tuning
- Audit log — every API call that contains design data is logged with timestamp + user
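The identifier-stripping step in the training export could be sketched as follows. The field names and the `scrub_record`/`scrub_jsonl` helpers are assumptions, and note that dropping identifiers alone is weaker than formal differential privacy:

```python
import json

SENSITIVE_FIELDS = {"user_id", "email", "ip_address"}  # assumed identifier fields

def scrub_record(record: dict) -> dict:
    """Drop identifier fields from one training example before export."""
    return {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}

def scrub_jsonl(lines):
    """Yield scrubbed JSONL lines suitable for fine-tuning."""
    for line in lines:
        yield json.dumps(scrub_record(json.loads(line)))
```

Running the export through `scrub_jsonl` before fine-tuning keeps the prompt/RTL pairs while removing any per-user metadata that was attached at build time.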

### Phase 4 — Enterprise (Q3 2026)

- SSO — SAML/OIDC via Supabase (works with Google Workspace, Okta)
- NDA-grade deployment — dedicated HF Space per enterprise tenant with isolated secrets
- Custom PDK support — bring your own standard cell library without submitting it to any cloud
- Multi-project wafer slot reservation — integration with Efabless / Skywater shuttle APIs
- SLA agreement — 99.9% uptime on HF Pro+ hardware (A10G GPU)

## 6. Security Architecture Summary

```
User Browser
     │
     │  HTTPS (TLS 1.3)
     ▼
HuggingFace Space (Docker container)
     │
     ├── FastAPI (server/api.py)
     │       ├── Supabase JWT verification  ← user never sees DB directly
     │       ├── Plan guard (402 on limit)
     │       └── BYOK key decrypt          ← key never logged
     │
     ├── LLM call  ──────────────────────► NVIDIA NIM API (or your private endpoint)
     │       └── Design prompt goes here   ← only this crosses the boundary
     │
     ├── EDA tools (iverilog, yosys, sby)  ← 100% local, no network calls
     │
     └── Build artifacts → local disk
             training/agentic_sft_data.jsonl  ← yours only

Supabase (your project)
     ├── profiles (plan, build count, encrypted BYOK key)
     ├── builds (job history)
     └── payments (Razorpay records)
```
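The "Plan guard (402 on limit)" step in the diagram can be sketched framework-agnostically. The quota numbers, exception class, and function name below are illustrative assumptions, not AgentIC's real limits:

```python
BUILD_LIMITS = {"free": 3, "starter": 25, "pro": 200, "byok": None}  # assumed quotas

class PaymentRequired(Exception):
    """Maps to an HTTP 402 response in the API layer."""
    status_code = 402

def plan_guard(plan: str, builds_used: int) -> None:
    """Raise before starting a build if the plan's quota is exhausted."""
    limit = BUILD_LIMITS.get(plan, 0)
    if limit is not None and builds_used >= limit:
        raise PaymentRequired(f"Plan '{plan}' build limit ({limit}) reached")
```

A `None` limit models unlimited builds (BYOK), and unknown plan names default to a zero quota so a misconfigured account fails closed rather than open.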

## 7. Secrets You Control

| Secret | Stored where | How to rotate |
|---|---|---|
| `NVIDIA_API_KEY` | HF Space Secrets + local `.env` | NVIDIA dashboard → regenerate |
| `SUPABASE_SERVICE_KEY` | HF Space Secrets only | Supabase → Settings → API |
| `ENCRYPTION_KEY` | HF Space Secrets only | Change, then re-encrypt stored BYOK keys |
| `RAZORPAY_KEY_SECRET` | HF Space Secrets only | Razorpay dashboard |
| `HF_TOKEN` | GitHub Actions Secrets | HuggingFace → Settings → Tokens |

Rotate all keys immediately if:

- A key appears in any public log, PR, or error message
- You suspect unauthorized use
- Any team member with access leaves

To update HF Space secrets programmatically:

```python
from huggingface_hub import HfApi

# Authenticate with your new token, then overwrite the secret on the Space
api = HfApi(token="your_new_hf_token")
api.add_space_secret("vxkyyy/AgentIC", "NVIDIA_API_KEY", "new_value")
```