Commit 1f08279 (parent: 4598ede), committed by gnai-creator

Add model weights with Git LFS

Files changed:
- DEPLOY_GUIDE.md +238 -0
- Dockerfile +27 -0
- FIX_GIT_LFS.md +125 -0
- QUICK_START.md +147 -0
- README.md +204 -6
- app.py +158 -0
- model_info.json +76 -0
- requirements.txt +6 -0
- test_local.py +217 -0
DEPLOY_GUIDE.md (ADDED, @@ -0,0 +1,238 @@)
# 🚀 HuggingFace Space Deployment Guide

This guide will help you deploy the AletheionGuard Space to HuggingFace.

## 📋 Prerequisites

1. **HuggingFace Account**: Sign up at https://huggingface.co/join
2. **Git LFS**: Install Git Large File Storage
   ```bash
   # Ubuntu/Debian
   sudo apt-get install git-lfs

   # macOS
   brew install git-lfs

   # Initialize
   git lfs install
   ```
3. **HuggingFace CLI** (optional but recommended):
   ```bash
   pip install huggingface_hub
   huggingface-cli login
   ```

## 🔑 Step 1: Create Access Token

1. Go to https://huggingface.co/settings/tokens
2. Click **New token**
3. Name: `AletheionGuard Deploy`
4. Type: **Write** (needed to push to the Space)
5. Copy the token (it starts with `hf_...`)

## 📦 Step 2: Create New Space

### Option A: Via Web Interface (Easiest)

1. Go to https://huggingface.co/new-space
2. Fill in the details:
   - **Owner**: Your username or organization
   - **Space name**: `aletheionguard` (or your preferred name)
   - **License**: AGPL-3.0
   - **SDK**: Docker
   - **Visibility**: Public or Private (choose Private for BYO-HF mode)
3. Click **Create Space**
4. You'll be redirected to your new Space

### Option B: Via CLI

```bash
huggingface-cli repo create aletheionguard --type space --space_sdk docker
```

## 📤 Step 3: Push Files to Space

Navigate to the Space directory:

```bash
cd /home/sapo/AletheionGuard/hf_space_example/AletheionGuard
```

### First-time Setup (if not already a git repo)

```bash
# Check if already initialized
git status

# If not, initialize
git init
git lfs install
git lfs track "*.pth"
git lfs track "*.ckpt"
```

### Add HuggingFace Remote

Replace `YOUR_USERNAME` with your HuggingFace username:

```bash
git remote add hf https://huggingface.co/spaces/YOUR_USERNAME/aletheionguard
```

### Commit and Push

```bash
# Stage all files
git add .

# Commit
git commit -m "Initial commit: AletheionGuard Space with Trial 012 models"

# Push to HuggingFace
git push hf main
```

If prompted for credentials:
- **Username**: Your HuggingFace username
- **Password**: Your HF token (starts with `hf_...`)

## 🔧 Step 4: Configure Space Settings

1. Go to your Space: `https://huggingface.co/spaces/YOUR_USERNAME/aletheionguard`
2. Click **Settings**
3. Configure:
   - **Hardware**: CPU Basic (free) or upgrade if needed
   - **Persistent Storage**: Optional (0 GB for this Space)
   - **Secrets**: Add `HUGGINGFACE_TOKEN` if needed for authentication
4. Click **Save**

## ✅ Step 5: Verify Deployment

1. Wait for the build to complete (check the **Build logs** tab)
2. Once "Running", test the endpoints:

```bash
# Health check
curl https://YOUR_USERNAME-aletheionguard.hf.space/health

# Predict (requires auth if the Space is private)
curl -X POST https://YOUR_USERNAME-aletheionguard.hf.space/predict \
  -H "Authorization: Bearer hf_YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"text": "Paris is the capital of France"}'
```

Expected response:
```json
{
  "q1": 0.06,
  "q2": 0.08,
  "height": 0.90,
  "message": "Heuristic metrics computed successfully.",
  "verdict": "ACCEPT"
}
```
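When scripting against the endpoint, it helps to validate the payload before trusting it. The helper below is an illustrative sketch based on the expected response shown above; the function name and range checks are ours, not part of AletheionGuard:

```python
import json

REQUIRED_FIELDS = ("q1", "q2", "height", "verdict")

def parse_audit_response(raw: str) -> dict:
    """Parse a /predict response body and sanity-check its fields."""
    data = json.loads(raw)
    missing = [f for f in REQUIRED_FIELDS if f not in data]
    if missing:
        raise ValueError(f"response missing fields: {missing}")
    # q1, q2, and height are all defined on [0, 1]
    for key in ("q1", "q2", "height"):
        if not 0.0 <= data[key] <= 1.0:
            raise ValueError(f"{key}={data[key]} outside [0, 1]")
    return data

sample = '{"q1": 0.06, "q2": 0.08, "height": 0.90, "verdict": "ACCEPT"}'
print(parse_audit_response(sample)["verdict"])  # ACCEPT
```

Pipe the curl output into a script like this to fail fast on malformed or out-of-range responses.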
## 🔄 Step 6: Update Models (After Fine-tuning)

After fine-tuning with `scripts/finetune_real_data.py`:

```bash
# Copy the new models
cp /home/sapo/AletheionGuard/models/real_finetuned/*.pth .
cp /home/sapo/AletheionGuard/models/real_finetuned/q1q2-finetuned-*.ckpt q1q2_best.ckpt

# Update metadata
nano model_info.json  # Update training info and metrics

# Commit and push
git add *.pth *.ckpt model_info.json
git commit -m "Update to fine-tuned models (TruthfulQA + SQuAD)"
git push hf main
```

## 📊 Monitoring & Logs

- **View logs**: Click the **Logs** tab in your Space
- **Restart Space**: Click **Factory reboot** if needed
- **Monitor usage**: Check the **Analytics** tab

## 💰 Costs & Limits

### Free Tier (CPU Basic)
- ✅ Perfect for demos and low traffic
- ✅ Includes: 2 vCPU, 16 GB RAM
- ❌ Limited to ~1000 requests/day
- ❌ May sleep after inactivity

### Paid Tier (Upgrade Options)
- **CPU Upgrade** ($0.03/hour): 4 vCPU, 32 GB RAM
- **GPU T4** ($0.60/hour): For faster inference
- **Persistent**: The Space never sleeps

To upgrade:
1. Go to Space → Settings → Hardware
2. Select a tier → Upgrade

## 🔐 Using with BYO-HF Mode

After deployment, use your Space with AletheionGuard:

```python
from aletheion_guard import EpistemicAuditor

auditor = EpistemicAuditor(
    mode="byo-hf",
    hf_space_url="https://huggingface.co/spaces/YOUR_USERNAME/aletheionguard",
    hf_token="hf_YOUR_TOKEN"  # Only if the Space is private
)

result = auditor.evaluate("The Earth is flat")
print(f"Verdict: {result.verdict}, Height: {result.height}")
```

## 🐛 Troubleshooting

### Build Failed

Check the **Build logs** for errors. Common issues:

1. **LFS bandwidth exceeded**: Upgrade to HuggingFace Pro ($9/month)
2. **File too large**: Compress the models or use a smaller batch size during training
3. **Docker build timeout**: Simplify the Dockerfile or reduce dependencies

### Space Not Starting

1. Check the **Logs** tab for runtime errors
2. Verify `app.py` has no syntax errors
3. Ensure all model files are present
4. Try a **Factory reboot**

### Authentication Errors

1. Ensure the HF token has **Write** permissions
2. For private Spaces, include the token in requests
3. Check token expiration

### Slow Inference

1. Upgrade to GPU hardware
2. Optimize the model (quantization, pruning)
3. Use caching for repeated requests

## 📚 Resources

- **HuggingFace Spaces Docs**: https://huggingface.co/docs/hub/spaces
- **Git LFS**: https://git-lfs.github.com/
- **Docker for Spaces**: https://huggingface.co/docs/hub/spaces-sdks-docker
- **AletheionGuard Docs**: https://docs.aletheion.com

## 💬 Support

- **Email**: support@aletheion.com
- **Discord**: https://discord.gg/aletheion
- **GitHub Issues**: https://github.com/AletheionAGI/AletheionGuard/issues

---

**Ready to deploy?** Follow the steps above and you'll have your Space running in minutes! 🚀
Dockerfile (ADDED, @@ -0,0 +1,27 @@)
```dockerfile
FROM python:3.10-slim

WORKDIR /app

# Install system dependencies
RUN apt-get update && apt-get install -y \
    build-essential \
    && rm -rf /var/lib/apt/lists/*

# Copy requirements and install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code and model files
COPY app.py .
COPY model_info.json .
COPY *.pth .
COPY *.ckpt .

# Expose port 7860 (HuggingFace Spaces default)
EXPOSE 7860

# Set environment variables
ENV PYTHONUNBUFFERED=1

# Run the application
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "7860"]
```
FIX_GIT_LFS.md (ADDED, @@ -0,0 +1,125 @@)
# 🔧 Fix Git LFS for HuggingFace Push

The push failed because Git LFS is not configured. Follow these steps:

## Step 1: Install Git LFS

```bash
# For Ubuntu/Debian/WSL
curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
sudo apt-get install git-lfs

# Verify the installation
git lfs version
```

## Step 2: Configure Git LFS in the Repository

```bash
cd /home/sapo/AletheionGuard/hf_space_example/AletheionGuard

# Initialize Git LFS
git lfs install

# Verify that .gitattributes exists and has the correct settings
cat .gitattributes
```
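Checking `.gitattributes` by eye is error-prone; a small script can confirm which patterns are routed through LFS. This is an illustrative sketch (the parsing logic is ours, not part of Git LFS or the AletheionGuard tooling):

```python
def lfs_tracked_patterns(gitattributes_text: str) -> list[str]:
    """Return the glob patterns that .gitattributes routes through Git LFS."""
    patterns = []
    for line in gitattributes_text.splitlines():
        parts = line.split()
        # A Git LFS tracking rule sets the lfs filter on the pattern,
        # e.g. '*.pth filter=lfs diff=lfs merge=lfs -text'
        if parts and "filter=lfs" in parts[1:]:
            patterns.append(parts[0])
    return patterns

sample = """\
*.pth filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.md text
"""
print(lfs_tracked_patterns(sample))  # ['*.pth', '*.ckpt']
```

Both `*.pth` and `*.ckpt` must appear in the output before you re-add the model files.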
## Step 3: Reset the Previous Commit and Re-add with LFS

```bash
# Go back one commit (undo the commit whose push failed)
git reset --soft HEAD~1

# Check which files need LFS
ls -lh *.pth *.ckpt

# Re-add the files (now tracked by LFS)
git add .gitattributes
git add *.pth
git add *.ckpt
git add *.py
git add *.md
git add *.txt
git add Dockerfile

# Verify that the large files are in LFS
git lfs ls-files

# Commit again
git commit -m "Initial commit: AletheionGuard Space with Trial 012 models"
```

## Step 4: Push to HuggingFace

```bash
# Push
git push

# If prompted for credentials:
# Username: gnai-creator
# Password: your HF token (hf_...)
```

## Verify LFS Files

After adding the files, you should see something like this:

```bash
$ git lfs ls-files
c9a5b2e1f3 * base_forces.pth
d4f8c7a2b1 * height_gate.pth
e3d9b4f8c2 * q1_gate.pth
f2e8a3d7c1 * q2_gate.pth
a1b2c3d4e5 * q1q2_best.ckpt
```

## Alternative Solution (If Git LFS Doesn't Work)

If you can't install Git LFS, you can:

1. **Remove the large models** and upload them manually later:
   ```bash
   git rm --cached *.pth *.ckpt
   git commit -m "Remove large files temporarily"
   git push
   ```

2. **Upload manually via the website**:
   - Go to https://huggingface.co/spaces/gnai-creator/AletheionGuard
   - Click "Files" → "Upload files"
   - Upload the `.pth` and `.ckpt` files manually

3. **Or use smaller/mock models** for demonstration:
   - Edit `app.py` to use only heuristics (already implemented as a fallback)
   - This removes the need for the `.pth` files

## Useful Commands

```bash
# Show LFS status
git lfs status

# Show files tracked by LFS
git lfs ls-files

# Show file sizes
du -sh *.pth *.ckpt

# Clean the LFS cache if needed
git lfs prune
```

## Common Problem: "last.ckpt"

The `last.ckpt` file should not be in the repository. To remove it:

```bash
# If it was added by mistake
git rm last.ckpt
git commit -m "Remove last.ckpt"
```

---

**After following these steps, try the push again!** 🚀
QUICK_START.md (ADDED, @@ -0,0 +1,147 @@)
# 🚀 Quick Start - Deploy to HuggingFace in 5 Minutes

## ✅ Pre-Deploy Checklist

All files are ready! Here's what you have:

```
AletheionGuard/
├── app.py              (4.9 KB)  - FastAPI application
├── requirements.txt    (122 B)   - Python dependencies
├── Dockerfile          (602 B)   - Docker configuration
├── README.md           (6.1 KB)  - Space documentation
├── model_info.json     (2.5 KB)  - Model metadata
├── DEPLOY_GUIDE.md     (5.9 KB)  - Detailed deploy guide
├── test_local.py       (6.9 KB)  - Local testing script
├── .gitattributes      (auto)    - Git LFS configuration
│
└── Model Files (9 MB total):
    ├── q1_gate.pth     (904 KB)  - Aleatoric gate
    ├── q2_gate.pth     (905 KB)  - Epistemic gate
    ├── height_gate.pth (230 KB)  - Height gate
    ├── base_forces.pth (198 KB)  - Base forces
    └── q1q2_best.ckpt  (6.6 MB)  - Full checkpoint
```

**Total Size**: 9 MB (lightweight!)

## 🔥 Quick Deploy (3 Steps)

### 1. Create HuggingFace Space

Go to: https://huggingface.co/new-space

Fill in:
- **Space name**: `aletheionguard`
- **SDK**: Docker
- **License**: AGPL-3.0
- **Visibility**: Public or Private

### 2. Push Files

```bash
cd /home/sapo/AletheionGuard/hf_space_example/AletheionGuard

# Add the HuggingFace remote (replace YOUR_USERNAME)
git remote add hf https://huggingface.co/spaces/YOUR_USERNAME/aletheionguard

# Push
git push hf main
```

When prompted:
- **Username**: Your HuggingFace username
- **Password**: Your HF token (get one from https://huggingface.co/settings/tokens)

### 3. Wait for Build

1. Go to your Space: `https://huggingface.co/spaces/YOUR_USERNAME/aletheionguard`
2. Click the **Build logs** tab
3. Wait ~2-3 minutes for the build to complete
4. Once "Running", test it!

## 🧪 Test Your Space

```bash
# Health check
curl https://YOUR_USERNAME-aletheionguard.hf.space/health

# Predict
curl -X POST https://YOUR_USERNAME-aletheionguard.hf.space/predict \
  -H "Authorization: Bearer hf_YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"text": "Paris is the capital of France"}'
```

Expected:
```json
{
  "q1": 0.06,
  "q2": 0.08,
  "height": 0.90,
  "verdict": "ACCEPT",
  "message": "Heuristic metrics computed successfully."
}
```

## 🔧 Test Locally First (Optional)

Before deploying, test locally:

```bash
# Install dependencies
pip install -r requirements.txt

# Run the API
python -m uvicorn app:app --port 7860

# In another terminal, run the tests
python test_local.py
```

## 📚 What's Next?

1. **Use BYO-HF Mode**:
   ```python
   from aletheion_guard import EpistemicAuditor

   auditor = EpistemicAuditor(
       mode="byo-hf",
       hf_space_url="https://huggingface.co/spaces/YOUR_USERNAME/aletheionguard",
       hf_token="hf_YOUR_TOKEN"
   )

   result = auditor.evaluate("Text to audit")
   ```

2. **Fine-tune Models** (see `docs/PRE_DEPLOY_TRAINING.md`):
   ```bash
   python scripts/finetune_real_data.py
   ```

3. **Update the Space** with the fine-tuned models:
   ```bash
   cp models/real_finetuned/*.pth .
   git add *.pth
   git commit -m "Update to fine-tuned models"
   git push hf main
   ```

## 🆘 Need Help?

- **Detailed Guide**: See `DEPLOY_GUIDE.md`
- **Model Info**: See `model_info.json`
- **API Docs**: See `README.md`
- **Issues**: https://github.com/AletheionAGI/AletheionGuard/issues
- **Email**: support@aletheion.com

## 💡 Pro Tips

- **Private Space**: Set to Private for BYO-HF mode (requests then require a token)
- **Public Space**: Set to Public for demo/showcase (free access)
- **Upgrade Hardware**: Go to Settings → Hardware for GPU or more CPU
- **Monitor Usage**: Check the Analytics tab for request stats

---

**Ready?** Go to https://huggingface.co/new-space and start deploying! 🚀
README.md (CHANGED, @@ -1,12 +1,210 @@)
(Previous version removed: frontmatter with empty `emoji:`, `colorFrom:`, `colorTo:`, and `license:` fields. New version below.)
---
title: AletheionGuard
emoji: 🛡️
colorFrom: purple
colorTo: blue
sdk: docker
sdk_version: "4.36.0"
app_file: app.py
pinned: false
license: agpl-3.0
tags:
- uncertainty-quantification
- epistemic-uncertainty
- llm-auditing
- ai-safety
- pytorch
language:
- en
library_name: pytorch
---

# 🛡️ AletheionGuard - Epistemic Auditor for LLMs

**AletheionGuard** is an epistemic uncertainty quantification system for Large Language Models (LLMs). It detects when AI models are confident versus uncertain, reducing hallucinations and improving reliability.

## 🎯 What is AletheionGuard?

AletheionGuard audits LLM outputs by quantifying two types of uncertainty:

- **Q1 (Aleatoric)**: Inherent randomness in the data
- **Q2 (Epistemic)**: The model's lack of knowledge or confidence

From Q1 and Q2, we compute a **Height** (confidence) score using the pyramidal formula:

```
height = 1 - √(Q1² + Q2²)
```

Based on the height, we provide a **Verdict**:
- **ACCEPT** (height ≥ 0.85): High confidence
- **MAYBE** (0.70 ≤ height < 0.85): Moderate confidence
- **REFUSED** (height < 0.70): Low confidence, likely hallucination
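The formula and thresholds above can be sketched in a few lines (a minimal illustration; the function names are ours, and the real gates predict height with a learned network rather than applying the closed form directly):

```python
import math

def height(q1: float, q2: float) -> float:
    """Pyramidal confidence: 1 - sqrt(Q1^2 + Q2^2)."""
    return 1.0 - math.sqrt(q1 ** 2 + q2 ** 2)

def verdict(q1: float, q2: float) -> str:
    """Map the height score to the three verdict bands listed above."""
    h = height(q1, q2)
    if h >= 0.85:
        return "ACCEPT"
    if h >= 0.70:
        return "MAYBE"
    return "REFUSED"

print(verdict(0.06, 0.08))  # ACCEPT  (height = 1 - sqrt(0.01) = 0.90)
```

Note that the q1 = 0.06, q2 = 0.08 example reproduces the height of 0.90 and the ACCEPT verdict shown in the sample API response.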
## 🚀 Quick Start

### Option 1: Use Our Public API

```python
import requests

response = requests.post(
    "https://api.aletheion.com/v1/audit",
    headers={"Authorization": "Bearer ag_your_api_key"},
    json={"text": "The Earth is flat"}
)

result = response.json()
print(f"Verdict: {result['verdict']}")
print(f"Q1: {result['q1']}, Q2: {result['q2']}, Height: {result['height']}")
```

Get your API key at: https://aletheion.com/signup

### Option 2: Deploy Your Own HF Space (BYO-HF Mode)

1. Click "Duplicate this Space" in HuggingFace
2. Set your Space to **PRIVATE**
3. Note your Space URL and HF token
4. Use it with the AletheionGuard client:

```python
from aletheion_guard import EpistemicAuditor

auditor = EpistemicAuditor(
    mode="byo-hf",
    hf_space_url="https://huggingface.co/spaces/YOUR_USERNAME/YOUR_SPACE",
    hf_token="hf_your_token"
)

result = auditor.evaluate("Paris is the capital of France")
print(f"Verdict: {result.verdict}, Height: {result.height}")
```

### Option 3: Self-Hosted

Download the models and run locally:

```python
from aletheion_guard import EpistemicAuditor

auditor = EpistemicAuditor(
    mode="local",
    model_path="./models"
)

result = auditor.evaluate("Quantum entanglement is spooky")
```

## 📦 Model Files

This Space includes pre-trained models from **Trial 012**:

| File | Size | Description |
|------|------|-------------|
| `q1_gate.pth` | 904 KB | Aleatoric Uncertainty (Q1) gate |
| `q2_gate.pth` | 905 KB | Epistemic Uncertainty (Q2) gate |
| `height_gate.pth` | 230 KB | Pyramidal Height gate |
| `base_forces.pth` | 198 KB | Base force embeddings |
| `q1q2_best.ckpt` | 6.6 MB | Full PyTorch Lightning checkpoint |

**Total size**: ~9 MB (very lightweight!)

See `model_info.json` for detailed metadata.

## 🏗️ Architecture

AletheionGuard uses a **Pyramidal Multi-Gate Architecture**:

1. **Embedding**: Text → `all-MiniLM-L6-v2` (384 dims)
2. **Q1 Gate**: MLP (384 → 256 → 128 → 64 → 1) for aleatoric uncertainty
3. **Q2 Gate**: MLP (384 → 256 → 128 → 64 → 1) for epistemic uncertainty
4. **Height Gate**: MLP (386 → 128 → 64 → 1) for combined confidence
5. **Base Forces**: Learnable embeddings for calibration

**Total parameters**: ~580,000 (2.3 MB)
**Inference time**: <50 ms per request (CPU)
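As a sanity check on the layer sizes above, the parameter count of one gate can be computed by hand. This sketch assumes plain fully-connected layers with biases and no normalization or dropout, so the numbers are a lower bound on the published ~580K total:

```python
def mlp_param_count(sizes: list[int]) -> int:
    """Weights + biases for a plain fully-connected stack."""
    return sum(n_in * n_out + n_out for n_in, n_out in zip(sizes, sizes[1:]))

q_gate = mlp_param_count([384, 256, 128, 64, 1])      # Q1 or Q2 gate
height_gate = mlp_param_count([386, 128, 64, 1])      # Height gate

print(q_gate)       # 139777
print(height_gate)  # 57857
```

Two Q gates plus the height gate already account for roughly 337K parameters; the base forces and any extra layers make up the remainder.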
|
| 127 |
+
|
| 128 |
+
## 📊 Performance Metrics
|
| 129 |
+
|
| 130 |
+
**On Synthetic Test Set** (1,590 samples):
|
| 131 |
+
- Q1 MSE: 0.0501
|
| 132 |
+
- Q2 MSE: 0.0499
|
| 133 |
+
- RCE (Relative Calibration Error): 0.0415
|
| 134 |
+
- Height MSE: 0.0521
|
| 135 |
+
|
| 136 |
+
**Expected after fine-tuning** on real data (TruthfulQA + SQuAD v2):
|
| 137 |
+
- Q1 MSE: ~0.042-0.045 (10-15% improvement)
|
| 138 |
+
- Q2 MSE: ~0.040-0.043
|
| 139 |
+
- RCE: ~0.035-0.038
|
| 140 |
+
|
| 141 |
+
## 🔬 Research & Academic Use
|
| 142 |
+
|
| 143 |
+
AletheionGuard implements techniques from:
|
| 144 |
+
- **Epistemic Uncertainty Quantification** in deep learning
|
| 145 |
+
- **Pyramidal Framework** for multi-dimensional uncertainty
|
| 146 |
+
- **Calibration Theory** for reliable confidence scores
|
| 147 |
+
|
| 148 |
+
Citation:
|
| 149 |
+
```bibtex
|
| 150 |
+
@software{aletheionguard2025,
|
| 151 |
+
title={AletheionGuard: Epistemic Auditor for Large Language Models},
|
| 152 |
+
author={Muniz, Felipe Maya},
|
| 153 |
+
year={2025},
|
| 154 |
+
url={https://github.com/AletheionAGI/AletheionGuard},
|
| 155 |
+
license={AGPL-3.0-or-later}
|
| 156 |
+
}
|
| 157 |
+
```
|
| 158 |
+
|
| 159 |
+
## 🛠️ API Endpoints
|
| 160 |
+
|
| 161 |
+
This Space provides a minimal FastAPI endpoint for BYO-HF mode:
|
| 162 |
+
|
| 163 |
+
- `GET /`: Health check
|
| 164 |
+
- `POST /predict`: Predict uncertainties for text
|
| 165 |
+
- Input: `{"text": "...", "context": "..."}`
|
| 166 |
+
- Output: `{"q1": 0.xx, "q2": 0.xx, "height": 0.xx, "verdict": "..."}`
|
| 167 |
+
- `GET /health`: Health check

See `app.py` for implementation details.

## 📝 Notes

1. **MVP Status**: These models were trained on **synthetic data** and are suitable for demos and MVPs.
2. **Production Readiness**: For production use, fine-tune on real datasets like TruthfulQA and SQuAD v2.
3. **Fine-tuning Script**: Use `scripts/finetune_real_data.py` in the main repository.
4. **HuggingFace Pro**: For high-volume production use, consider HuggingFace Pro ($9/month) or dedicated inference endpoints.

## 🔐 Security & Privacy

- **BYO-HF Mode**: Run in your own private HF Space - we never see your data
- **Self-Hosted**: Run completely on-premises for maximum privacy
- **Public API**: Uses industry-standard encryption and data retention policies

## 📚 Documentation

- **Docs**: https://docs.aletheion.com
- **GitHub**: https://github.com/AletheionAGI/AletheionGuard
- **Website**: https://aletheion.com
- **API Reference**: https://docs.aletheion.com/api

## 📧 Support

- **Email**: support@aletheion.com
- **Discord**: https://discord.gg/aletheion
- **Issues**: https://github.com/AletheionAGI/AletheionGuard/issues

## 📄 License

**AGPL-3.0-or-later**

Copyright (c) 2024-2025 Felipe Maya Muniz / AletheionAGI

This program is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

For commercial licensing, contact: licensing@aletheion.com

---

**Built with ❤️ by the AletheionGuard team**

*Making AI safer, one uncertainty at a time.*
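The endpoints above can be exercised from any HTTP client. Below is a minimal client sketch; `SPACE_URL` and `HF_TOKEN` are placeholders you must replace with your own private Space URL and HF token.

```python
import json

# Placeholders - substitute your own private Space URL and HF token.
SPACE_URL = "https://your-username-aletheionguard.hf.space"
HF_TOKEN = "hf_xxx"

def build_predict_request(text, context=None):
    """Assemble the URL, headers, and JSON body for a POST to /predict."""
    headers = {
        "Authorization": f"Bearer {HF_TOKEN}",
        "Content-Type": "application/json",
    }
    payload = {"text": text}
    if context is not None:
        payload["context"] = context
    return f"{SPACE_URL}/predict", headers, json.dumps(payload)

url, headers, body = build_predict_request(
    "Paris is the capital of France", context="geography"
)
print(url)  # https://your-username-aletheionguard.hf.space/predict
# Send with any HTTP client, e.g.:
#   requests.post(url, headers=headers, data=body, timeout=10)
```

The actual network call is left commented out so the sketch runs offline.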
app.py
ADDED
@@ -0,0 +1,158 @@
# SPDX-License-Identifier: AGPL-3.0-or-later
# Copyright (c) 2024-2025 Felipe Maya Muniz

"""
Reference Hugging Face Space for AletheionGuard BYO-HF mode.

This is a minimal FastAPI endpoint that clients can deploy on Hugging Face Spaces
to use with AletheionGuard's BYO-HF mode.

Deploy this Space as PRIVATE and use your HF token + Space URL with AletheionGuard.
"""

from fastapi import FastAPI, HTTPException, Header
from pydantic import BaseModel
from typing import Optional
import logging
import math

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

app = FastAPI(
    title="AletheionGuard HF Space",
    description="Reference endpoint for BYO-HF mode",
    version="1.0.0"
)


class PredictRequest(BaseModel):
    """Request model for /predict endpoint."""
    text: str
    context: Optional[str] = None


class PredictResponse(BaseModel):
    """Response model for /predict endpoint."""
    q1: float
    q2: float
    height: float
    message: str
    verdict: Optional[str] = None  # Optional debug field - NOT used by API


def get_verdict(q1: float, q2: float, height: float) -> str:
    """
    Calculate verdict for debug purposes only.

    NOTE: This is NOT the official verdict. The official verdict is always
    calculated by the AletheionGuard API using the same rule.

    Official epistemic rule:
    - u = 1.0 - height (total uncertainty)
    - If q2 >= 0.35 OR u >= 0.60 -> REFUSED
    - If q1 >= 0.35 OR (0.30 <= u < 0.60) -> MAYBE
    - Otherwise -> ACCEPT
    """
    u = 1.0 - height  # Total uncertainty

    if q2 >= 0.35 or u >= 0.60:
        return "REFUSED"
    if q1 >= 0.35 or (0.30 <= u < 0.60):
        return "MAYBE"
    return "ACCEPT"


@app.get("/")
def root():
    """Root endpoint."""
    return {
        "name": "AletheionGuard HF Space",
        "version": "1.0.0",
        "status": "operational"
    }


@app.post("/predict", response_model=PredictResponse)
def predict(
    request: PredictRequest,
    authorization: str = Header(...)
):
    """
    Predict endpoint for text analysis.

    Returns heuristic uncertainty metrics (q1, q2, height) and an optional verdict.

    NOTE: This is an MVP implementation using heuristics. For production:
    1. Load a sentence-transformer model
    2. Use trained Q1/Q2 gates to compute actual metrics
    3. Return embeddings/logits for calibration

    Args:
        request: Text and optional context
        authorization: Bearer token (verified by HF automatically)

    Returns:
        Heuristic metrics with optional debug verdict

    Example:
        >>> POST /predict
        >>> Headers: Authorization: Bearer hf_...
        >>> Body: {"text": "Paris is the capital of France", "context": "geography"}
        >>> Response: {"q1": 0.06, "q2": 0.18, "height": 0.81, "verdict": "ACCEPT"}
    """
    try:
        logger.info(f"Received prediction request - text_length={len(request.text)}")

        # MVP: Compute heuristic metrics (replace with actual model in production)
        # Simple heuristics based on text characteristics:
        text_len = len(request.text)
        word_count = len(request.text.split())
        has_context = request.context is not None

        # Heuristic Q1 (aleatoric): based on text ambiguity indicators.
        # Lower for factual statements, higher for opinion/uncertain language.
        q1 = min(0.30, 0.05 + (word_count / 200))  # Increases with verbosity
        if any(word in request.text.lower() for word in ["maybe", "possibly", "might", "could"]):
            q1 += 0.15

        # Heuristic Q2 (epistemic): based on model confidence indicators.
        # Lower for common topics, higher for rare/complex topics.
        q2 = 0.10 if text_len > 20 else 0.20  # More text = more context
        if has_context:
            q2 -= 0.05  # Context helps reduce epistemic uncertainty
        if any(word in request.text.lower() for word in ["quantum", "theoretical", "hypothetical"]):
            q2 += 0.20

        # Ensure bounds [0, 1]
        q1 = max(0.0, min(1.0, q1))
        q2 = max(0.0, min(1.0, q2))

        # Compute height from the pyramidal formula
        height = max(0.0, min(1.0, 1.0 - math.sqrt(q1**2 + q2**2)))

        # Compute verdict (optional debug field)
        verdict = get_verdict(q1, q2, height)

        return PredictResponse(
            q1=round(q1, 3),
            q2=round(q2, 3),
            height=round(height, 3),
            message="Heuristic metrics computed successfully.",
            verdict=verdict  # Debug only - API ignores this
        )

    except Exception as e:
        logger.error(f"Prediction failed: {str(e)}")
        raise HTTPException(status_code=500, detail=str(e))


@app.get("/health")
def health():
    """Health check endpoint."""
    return {"status": "healthy"}


if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=7860)  # HF Spaces use port 7860
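The verdict rule and pyramidal height formula in `app.py` can be reproduced standalone. The sketch below mirrors `get_verdict()` and the height computation; the sample (q1, q2) pairs are illustrative.

```python
import math

def height_from_q(q1, q2):
    """Pyramidal height: 1 minus the Euclidean norm of (q1, q2), clamped to [0, 1]."""
    return max(0.0, min(1.0, 1.0 - math.sqrt(q1**2 + q2**2)))

def verdict(q1, q2):
    """Apply the same epistemic rule as get_verdict() in app.py."""
    u = 1.0 - height_from_q(q1, q2)  # total uncertainty
    if q2 >= 0.35 or u >= 0.60:
        return "REFUSED"
    if q1 >= 0.35 or 0.30 <= u < 0.60:
        return "MAYBE"
    return "ACCEPT"

print(verdict(0.06, 0.18))  # ACCEPT  (low uncertainty on both axes)
print(verdict(0.10, 0.40))  # REFUSED (epistemic uncertainty over the 0.35 cap)
print(verdict(0.36, 0.10))  # MAYBE   (aleatoric uncertainty over the 0.35 cap)
```

Note that a high q1 or q2 alone can trigger MAYBE/REFUSED even when the combined height is still moderate.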
model_info.json
ADDED
@@ -0,0 +1,76 @@
{
  "model_name": "AletheionGuard Trial 012",
  "version": "1.0.0",
  "training_date": "2025-11-11",
  "architecture": "Pyramidal Q1Q2 with Gates",
  "framework": "PyTorch Lightning",
  "embedding_model": "sentence-transformers/all-MiniLM-L6-v2",
  "embedding_dim": 384,

  "files": {
    "q1_gate.pth": {
      "description": "Aleatoric Uncertainty (Q1) Gate - MLP for inherent randomness",
      "size_kb": 904,
      "parameters": "~235k"
    },
    "q2_gate.pth": {
      "description": "Epistemic Uncertainty (Q2) Gate - MLP for model confidence",
      "size_kb": 905,
      "parameters": "~235k"
    },
    "height_gate.pth": {
      "description": "Pyramidal Height Gate - MLP for combined confidence score",
      "size_kb": 230,
      "parameters": "~60k"
    },
    "base_forces.pth": {
      "description": "Base Force Embeddings - Learnable base representation",
      "size_kb": 198,
      "parameters": "~51k"
    },
    "q1q2_best.ckpt": {
      "description": "Full PyTorch Lightning checkpoint (epoch 24, val_loss=0.2944)",
      "size_mb": 6.6,
      "parameters": "~580k total"
    }
  },

  "training_info": {
    "dataset": "Synthetic dataset with epistemic labels",
    "num_samples": 1590,
    "train_split": 0.7,
    "val_split": 0.15,
    "test_split": 0.15,
    "epochs_trained": 33,
    "best_epoch": 24,
    "best_val_loss": 0.2944,
    "learning_rate": 0.001,
    "optimizer": "Adam",
    "batch_size": 32
  },

  "metrics": {
    "q1_mse": 0.0501,
    "q2_mse": 0.0499,
    "rce": 0.0415,
    "height_mse": 0.0521,
    "description": "Metrics on synthetic test set. Fine-tuning on real data (TruthfulQA + SQuAD) expected to improve by 10-15%."
  },

  "usage": {
    "python_sdk": "from aletheion_guard import EpistemicAuditor; auditor = EpistemicAuditor(model_path='AletheionAGI/aletheionguard-models')",
    "rest_api": "POST https://api.aletheion.com/v1/audit with { text: '...', api_key: 'ag_...' }",
    "byo_hf": "Deploy this Space as PRIVATE and use with AletheionGuard BYO-HF mode"
  },

  "license": "AGPL-3.0-or-later",
  "author": "Felipe Maya Muniz",
  "copyright": "2024-2025 AletheionAGI",

  "notes": [
    "These models were trained on synthetic data and are suitable for MVP/demo purposes.",
    "For production use, fine-tune on real datasets (TruthfulQA, SQuAD v2, etc.).",
    "Models are small (~2.3MB total) and optimized for fast inference.",
    "Expected inference time: <50ms per request on CPU."
  ]
}
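As a quick sanity check on the manifest, the per-gate parameter counts should roughly add up to the checkpoint total. The figures below are the approximate counts taken from `model_info.json`.

```python
# Approximate per-file parameter counts from model_info.json.
gate_params = {
    "q1_gate.pth": 235_000,
    "q2_gate.pth": 235_000,
    "height_gate.pth": 60_000,
    "base_forces.pth": 51_000,
}
total = sum(gate_params.values())
print(total)  # 581000 - consistent with the "~580k total" listed for q1q2_best.ckpt
```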
requirements.txt
ADDED
@@ -0,0 +1,6 @@
fastapi==0.104.1
uvicorn[standard]==0.24.0
pydantic==2.5.0
transformers==4.35.0
sentence-transformers==2.2.2
torch==2.1.0
test_local.py
ADDED
@@ -0,0 +1,217 @@
#!/usr/bin/env python3
"""
Test script to verify the HuggingFace Space locally before deployment.

Usage:
    python test_local.py
"""

import requests
import time
import sys
from pathlib import Path

# Colors for terminal output
GREEN = '\033[92m'
RED = '\033[91m'
YELLOW = '\033[93m'
BLUE = '\033[94m'
RESET = '\033[0m'

def print_success(msg):
    print(f"{GREEN}✓{RESET} {msg}")

def print_error(msg):
    print(f"{RED}✗{RESET} {msg}")

def print_info(msg):
    print(f"{BLUE}ℹ{RESET} {msg}")

def print_warning(msg):
    print(f"{YELLOW}⚠{RESET} {msg}")

def check_files():
    """Check if all required files exist."""
    print_info("Checking required files...")

    required_files = [
        "app.py",
        "requirements.txt",
        "Dockerfile",
        "README.md",
        "model_info.json",
        "q1_gate.pth",
        "q2_gate.pth",
        "height_gate.pth",
        "base_forces.pth",
        "q1q2_best.ckpt",
    ]

    all_exist = True
    for file in required_files:
        path = Path(file)
        if path.exists():
            size = path.stat().st_size / 1024  # KB
            print_success(f"{file} ({size:.1f} KB)")
        else:
            print_error(f"{file} NOT FOUND")
            all_exist = False

    return all_exist

def test_api():
    """Test the FastAPI endpoints."""
    print_info("\nStarting API tests...")
    print_warning("Make sure the API is running: python -m uvicorn app:app --port 7860\n")

    base_url = "http://localhost:7860"

    # Test 1: Root endpoint
    print_info("Test 1: Root endpoint (GET /)")
    try:
        response = requests.get(f"{base_url}/", timeout=5)
        if response.status_code == 200:
            data = response.json()
            print_success(f"Status: {response.status_code}")
            print_success(f"Response: {data}")
        else:
            print_error(f"Status: {response.status_code}")
            return False
    except requests.exceptions.ConnectionError:
        print_error("Cannot connect to API. Is it running?")
        print_info("Start it with: python -m uvicorn app:app --port 7860")
        return False
    except Exception as e:
        print_error(f"Error: {str(e)}")
        return False

    # Test 2: Health endpoint
    print_info("\nTest 2: Health check (GET /health)")
    try:
        response = requests.get(f"{base_url}/health", timeout=5)
        if response.status_code == 200:
            data = response.json()
            print_success(f"Status: {response.status_code}")
            print_success(f"Response: {data}")
        else:
            print_error(f"Status: {response.status_code}")
            return False
    except Exception as e:
        print_error(f"Error: {str(e)}")
        return False

    # Test 3: Predict endpoint - factual statement
    print_info("\nTest 3: Predict endpoint - Factual statement")
    try:
        response = requests.post(
            f"{base_url}/predict",
            headers={"Authorization": "Bearer test_token"},
            json={"text": "Paris is the capital of France"},
            timeout=5
        )
        if response.status_code == 200:
            data = response.json()
            print_success(f"Status: {response.status_code}")
            print_success(f"Q1: {data['q1']}, Q2: {data['q2']}, Height: {data['height']}")
            print_success(f"Verdict: {data['verdict']}")

            # Validate values
            if data['q1'] < 0 or data['q1'] > 1:
                print_error(f"Q1 out of range: {data['q1']}")
                return False
            if data['q2'] < 0 or data['q2'] > 1:
                print_error(f"Q2 out of range: {data['q2']}")
                return False
            if data['height'] < 0 or data['height'] > 1:
                print_error(f"Height out of range: {data['height']}")
                return False
        else:
            print_error(f"Status: {response.status_code}")
            print_error(f"Response: {response.text}")
            return False
    except Exception as e:
        print_error(f"Error: {str(e)}")
        return False

    # Test 4: Predict endpoint - uncertain statement
    print_info("\nTest 4: Predict endpoint - Uncertain statement")
    try:
        response = requests.post(
            f"{base_url}/predict",
            headers={"Authorization": "Bearer test_token"},
            json={"text": "Maybe quantum computing will solve all problems"},
            timeout=5
        )
        if response.status_code == 200:
            data = response.json()
            print_success(f"Status: {response.status_code}")
            print_success(f"Q1: {data['q1']}, Q2: {data['q2']}, Height: {data['height']}")
            print_success(f"Verdict: {data['verdict']}")

            # This should have higher uncertainty
            if data['q1'] < 0.15 and data['q2'] < 0.15:
                print_warning("Uncertainties seem low for an uncertain statement")
        else:
            print_error(f"Status: {response.status_code}")
            return False
    except Exception as e:
        print_error(f"Error: {str(e)}")
        return False

    # Test 5: Predict endpoint - with context
    print_info("\nTest 5: Predict endpoint - With context")
    try:
        response = requests.post(
            f"{base_url}/predict",
            headers={"Authorization": "Bearer test_token"},
            json={
                "text": "The Eiffel Tower is 324 meters tall",
                "context": "geography"
            },
            timeout=5
        )
        if response.status_code == 200:
            data = response.json()
            print_success(f"Status: {response.status_code}")
            print_success(f"Q1: {data['q1']}, Q2: {data['q2']}, Height: {data['height']}")
            print_success(f"Verdict: {data['verdict']}")
        else:
            print_error(f"Status: {response.status_code}")
            return False
    except Exception as e:
        print_error(f"Error: {str(e)}")
        return False

    return True

def main():
    print(f"\n{BLUE}{'='*60}{RESET}")
    print(f"{BLUE} AletheionGuard HuggingFace Space - Local Test Suite{RESET}")
    print(f"{BLUE}{'='*60}{RESET}\n")

    # Check files
    if not check_files():
        print_error("\nSome required files are missing!")
        print_info("Make sure all model files are copied to this directory")
        sys.exit(1)

    print_success("\nAll required files present ✓\n")

    # Test API
    print_info("="*60)
    if test_api():
        print(f"\n{GREEN}{'='*60}{RESET}")
        print(f"{GREEN} All tests passed! ✓{RESET}")
        print(f"{GREEN}{'='*60}{RESET}\n")
        print_info("Your Space is ready for deployment!")
        print_info("Follow DEPLOY_GUIDE.md to push to HuggingFace\n")
        return 0
    else:
        print(f"\n{RED}{'='*60}{RESET}")
        print(f"{RED} Some tests failed ✗{RESET}")
        print(f"{RED}{'='*60}{RESET}\n")
        print_info("Fix the errors above before deploying\n")
        return 1

if __name__ == "__main__":
    sys.exit(main())