Apply for a GPU community grant: Academic project
Project: GovOn - Korean Government Civil Complaint AI Assistant
We are a university research team building an AI-powered system to help
Korean government agencies respond to civil complaints more efficiently
and accurately.
Why we need GPU:
- Our base model (EXAONE 4.0-32B-AWQ, ~20GB VRAM) requires at least 24GB of GPU memory for inference
- We serve Multi-LoRA adapters (civil response + legal citation) via vLLM with per-request adapter switching
- This is an academic project: we have no commercial budget for GPU compute
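Per-request adapter switching of this kind can be sketched with vLLM's OpenAI-compatible server; the model id and adapter paths below are illustrative placeholders, not our deployed configuration:

```shell
# Launch vLLM with LoRA support; adapter names/paths are placeholders.
vllm serve LGAI-EXAONE/EXAONE-4.0-32B-AWQ \
  --quantization awq \
  --enable-lora \
  --max-loras 2 \
  --lora-modules civil-response=./adapters/civil legal-citation=./adapters/legal

# Each request then selects an adapter by passing its name as the model:
curl http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "legal-citation", "prompt": "...", "max_tokens": 256}'
```

With `--enable-lora`, vLLM keeps the base weights resident once and swaps lightweight adapters per request, which is what makes two task-specific models fit in a single GPU's memory budget.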
What we built:
- LangGraph approval-gated agent runtime with human-in-the-loop
- 74K civil-response training examples from AI Hub public datasets
- 270K legal-citation training examples (precedents + statutes)
- QLoRA fine-tuned adapters on EXAONE 4.0-32B
- CLI tool with streaming status display
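The approval-gated, human-in-the-loop flow above can be illustrated with a minimal pure-Python sketch. This mirrors the idea (draft, pause for human approval, then act), not the actual LangGraph API; all names are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class AgentState:
    draft: str = ""
    approved: bool = False
    log: list = field(default_factory=list)

def draft_response(state: AgentState) -> AgentState:
    # A model call would go here; we stub it with a fixed draft.
    state.draft = "Draft reply to complaint #123"
    state.log.append("drafted")
    return state

def human_gate(state: AgentState, approve) -> AgentState:
    # The run pauses here until a human reviewer approves or rejects.
    state.approved = approve(state.draft)
    state.log.append("approved" if state.approved else "rejected")
    return state

def send_or_revise(state: AgentState) -> AgentState:
    # Only an approved draft is sent; otherwise it goes back for revision.
    state.log.append("sent" if state.approved else "revise")
    return state

def run(approve) -> AgentState:
    state = AgentState()
    for step in (draft_response, lambda s: human_gate(s, approve), send_or_revise):
        state = step(state)
    return state
```

For example, `run(lambda draft: True).log` yields `["drafted", "approved", "sent"]`, while a rejecting reviewer routes the draft back to revision instead of sending it.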
Hardware requested: A100 (80GB) or L40S (48GB)
- 32B-AWQ model (~20GB) + Multi-LoRA + KV cache needs 40-48GB
- Used for academic demo and user acceptance testing
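The 40-48GB figure can be sanity-checked with back-of-envelope arithmetic. The layer/head counts and token budget below are illustrative assumptions for the estimate, not EXAONE's published specifications:

```python
# Rough VRAM budget behind the 40-48GB request.
GB = 1024 ** 3

weights_awq = 20 * GB          # ~20GB 4-bit AWQ weights (from the application)
lora_adapters = 2 * 0.5 * GB   # two LoRA adapters, assumed ~0.5GB each

# KV cache: 2 (K and V) * layers * kv_heads * head_dim * bytes/elem * tokens.
# Layer/head figures are assumptions for illustration only.
layers, kv_heads, head_dim, fp16_bytes = 64, 8, 128, 2
tokens = 100_000               # total cached tokens across concurrent requests
kv_cache = 2 * layers * kv_heads * head_dim * fp16_bytes * tokens

total_gb = (weights_awq + lora_adapters + kv_cache) / GB
print(f"~{total_gb:.0f} GB")   # lands inside the requested 40-48GB window
```

Under these assumptions the KV cache alone costs about 256KB per token, so serving long contexts concurrently, not the quantized weights, is what pushes the requirement past a 24GB card.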
GitHub: https://github.com/GovOn-Org/GovOn
Datasets: huggingface.co/datasets/umyunsang/govon-civil-response-data
License: MIT (code), EXAONE 1.2-NC (model)