# ViT Auditing Toolkit - Quick Reference
## One-Liner Commands

```bash
# Quick start
python app.py

# Download sample images
python examples/download_samples.py

# Run tests
pytest tests/ -v

# Run with Docker
docker-compose up

# Check code style
black --check src/ tests/ app.py

# Generate coverage report
pytest --cov=src --cov-report=html tests/
```
## Project Structure Quick Map

```text
ViT-XAI-Dashboard/
├── app.py                        # Main application - START HERE
├── requirements.txt              # Dependencies
│
├── src/                          # Core functionality
│   ├── model_loader.py           # Load ViT models from HF
│   ├── predictor.py              # Make predictions
│   ├── explainer.py              # XAI methods (Attention, GradCAM, SHAP)
│   ├── auditor.py                # Advanced auditing tools
│   └── utils.py                  # Helper functions
│
├── examples/                     # Test images (20 images)
│   ├── basic_explainability/     # For Tab 1
│   ├── counterfactual/           # For Tab 2
│   ├── calibration/              # For Tab 3
│   ├── bias_detection/           # For Tab 4
│   └── general/                  # Misc testing
│
├── tests/                        # Unit tests
│   ├── test_phase1_complete.py   # Basic tests
│   └── test_advanced_features.py # Advanced tests
│
└── Documentation/                # All docs
    ├── README.md                 # Main documentation
    ├── QUICKSTART.md             # 5-minute setup
    ├── TESTING.md                # Testing guide
    ├── CONTRIBUTING.md           # Dev guidelines
    └── PROJECT_SUMMARY.md        # This file
```
## Common Tasks

### Start the Dashboard

```bash
python app.py
# Opens at http://localhost:7860
```

### Test a Single Tab

```bash
# 1. Start the app: python app.py
# 2. Go to http://localhost:7860
# 3. Load the ViT-Base model
# 4. Tab 1: upload examples/basic_explainability/cat_portrait.jpg
# 5. Click "Analyze Image"
```

### Add New Test Image

```bash
# Option 1: copy a local file
cp /path/to/image.jpg examples/basic_explainability/

# Option 2: download from a URL
curl -L "https://example.com/image.jpg" -o examples/general/my_image.jpg
```

### Run Quick Test

```bash
# Smoke test (verify the app comes up)
python app.py &
sleep 10
curl http://localhost:7860
# If curl returns a response, you're good!
```
## Tab Reference
### Tab 1: Basic Explainability

- **Purpose:** Understand predictions
- **Methods:** Attention, GradCAM, GradientSHAP
- **Best images:** `examples/basic_explainability/`
- **Use when:** You want to see what the model focuses on
### Tab 2: Counterfactual Analysis

- **Purpose:** Test robustness
- **Methods:** Patch perturbation (blur/blackout/gray/noise)
- **Best images:** `examples/counterfactual/`
- **Use when:** Testing prediction stability
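The patch-perturbation idea behind this tab can be sketched in a few lines of plain Python (no toolkit dependency; `blackout_patch` is an illustrative name, not the dashboard's API): zero out a square patch of an image and count how many pixels changed.

```python
def blackout_patch(image, top, left, size):
    """Return a copy of a 2-D grayscale image (list of lists)
    with a size x size patch set to zero (blackout perturbation)."""
    out = [row[:] for row in image]  # copy each row
    for r in range(top, min(top + size, len(out))):
        for c in range(left, min(left + size, len(out[0]))):
            out[r][c] = 0
    return out

# 4x4 toy "image": occlude the top-left 2x2 patch
img = [[9] * 4 for _ in range(4)]
occluded = blackout_patch(img, 0, 0, 2)
changed = sum(occluded[r][c] != img[r][c] for r in range(4) for c in range(4))
print(changed)  # 4
```

In the real tab, each perturbed image is re-run through the model and the prediction shift per patch is visualized.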
### Tab 3: Confidence Calibration

- **Purpose:** Validate confidence scores
- **Methods:** Calibration curves, reliability diagrams
- **Best images:** `examples/calibration/`
- **Use when:** Checking whether confidence matches accuracy
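The calibration idea can be illustrated with a minimal expected calibration error (ECE) sketch in plain Python; the function and toy data below are illustrative, not the toolkit's implementation.

```python
def expected_calibration_error(confidences, correct, n_bins=5):
    """ECE: weighted average of |accuracy - confidence| over confidence bins."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)  # clamp conf == 1.0
        bins[idx].append((conf, ok))
    total = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(ok for _, ok in b) / len(b)
        ece += (len(b) / total) * abs(accuracy - avg_conf)
    return ece

# Perfectly calibrated toy example: 80% confidence, 80% accurate
confs = [0.8] * 5
hits = [1, 1, 1, 1, 0]
print(expected_calibration_error(confs, hits))  # 0.0
```

ECE is 0 when average confidence matches accuracy in every bin; larger values indicate over- or under-confidence.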
### Tab 4: Bias Detection

- **Purpose:** Find performance disparities
- **Methods:** Subgroup analysis
- **Best images:** `examples/bias_detection/`
- **Use when:** Testing fairness across conditions
## Customization Quick Tips

### Change Port

```python
# app.py, last line:
demo.launch(server_port=7860)  # change 7860 to your port
```
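As an alternative to editing the file per environment, the port can be read from an environment variable. This is a sketch: `APP_PORT` is an arbitrary name chosen here (Gradio also honors its own `GRADIO_SERVER_PORT` variable).

```python
import os

def get_port(default=7860):
    """Read the serving port from APP_PORT, falling back safely
    when the variable is unset or not a number."""
    try:
        return int(os.environ.get("APP_PORT", default))
    except (TypeError, ValueError):
        return default

# In app.py: demo.launch(server_port=get_port())
print(get_port())  # 7860 unless APP_PORT is set
```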
### Add New Model

```python
# src/model_loader.py:
SUPPORTED_MODELS = {
    "ViT-Base": "google/vit-base-patch16-224",
    "ViT-Large": "google/vit-large-patch16-224",
    # New additions
    "ResNet-50": "microsoft/resnet-50",
    "Swin Transformer": "microsoft/swin-base-patch4-window7-224",
    "DeiT": "facebook/deit-base-patch16-224",
    "EfficientNet": "google/efficientnet-b7",
}
```
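A small resolver with a clear error message makes typos in the model dropdown easier to diagnose (a sketch; `resolve_model_id` is a hypothetical helper, not an existing function in `model_loader.py`). Note also that attention-based explanations assume a ViT-style backbone, so non-ViT entries such as ResNet-50 may need separate handling in `explainer.py`.

```python
SUPPORTED_MODELS = {
    "ViT-Base": "google/vit-base-patch16-224",
    "ViT-Large": "google/vit-large-patch16-224",
}

def resolve_model_id(name):
    """Map a UI display name to its Hugging Face checkpoint id,
    failing loudly on unknown names (hypothetical helper)."""
    try:
        return SUPPORTED_MODELS[name]
    except KeyError:
        known = ", ".join(sorted(SUPPORTED_MODELS))
        raise ValueError(f"Unknown model {name!r}; choose one of: {known}")

print(resolve_model_id("ViT-Base"))  # google/vit-base-patch16-224
```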
### Modify Colors

```python
# app.py, custom_css variable:
# change gradient colors, backgrounds, etc.
```
## Troubleshooting Quick Fixes

### Port Already in Use

```bash
# Linux/macOS:
lsof -ti:7860 | xargs kill -9

# Windows:
netstat -ano | findstr :7860
taskkill /PID <PID> /F
```
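A cross-platform way to check the port before killing anything is a quick socket probe from Python (stdlib only; a sketch, not part of the toolkit):

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1)
        return s.connect_ex((host, port)) == 0  # 0 means connect succeeded

print(port_in_use(7860))
```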
### Out of Memory

```python
# Use the smaller model
model_choice = "ViT-Base"  # instead of "ViT-Large"

# Or clear the GPU cache
import torch
torch.cuda.empty_cache()
```
### Model Download Fails

```bash
# Point the cache at a writable directory
export HF_HOME="/path/to/writable/dir"
export TRANSFORMERS_CACHE="/path/to/writable/dir"
```
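The same can be done from Python, as long as it runs before `transformers` is imported (a sketch; the path is an example, and newer `transformers` releases prefer `HF_HOME` over the deprecated `TRANSFORMERS_CACHE`):

```python
import os

# Set the cache location *before* importing transformers,
# which reads HF_HOME when it is first imported.
cache_dir = "/tmp/hf_cache"  # example path; use any writable location
os.environ["HF_HOME"] = cache_dir  # use setdefault() to respect an existing value

print(os.environ["HF_HOME"])  # /tmp/hf_cache
```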
### Slow Inference

```bash
# Check GPU availability
python -c "import torch; print(torch.cuda.is_available())"

# If it prints False, install the CUDA build
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118
```
## Model Comparison
| Feature | ViT-Base | ViT-Large |
|---|---|---|
| Parameters | 86M | 304M |
| Memory | ~2GB | ~4GB |
| Speed | Faster | Slower |
| Accuracy | ~81% | ~83% |
| Best For | Quick tests | Production |
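The Memory row can be sanity-checked from the parameter counts: fp32 weights alone take parameters x 4 bytes, and the remainder of the footprint is activations and framework overhead. A back-of-the-envelope helper (illustrative, not part of the toolkit):

```python
def weight_memory_gb(params_millions, bytes_per_param=4):
    """Raw weight storage in GB for fp32 (4 B/param) or fp16 (2 B/param)."""
    return params_millions * 1e6 * bytes_per_param / 1024**3

print(round(weight_memory_gb(86), 2))   # ViT-Base fp32  -> 0.32
print(round(weight_memory_gb(304), 2))  # ViT-Large fp32 -> 1.13
```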
## Testing Shortcuts

### Minimal Test (30 seconds)

```bash
python app.py &
# Load model -> upload cat_portrait.jpg -> click "Analyze Image"
```

### Full Test (5 minutes)

One image per tab:

- Tab 1: cat_portrait.jpg
- Tab 2: flower.jpg
- Tab 3: clear_panda.jpg
- Tab 4: dog_daylight.jpg

### Comprehensive Test (30 minutes)

Follow TESTING.md for all 22 tests.
## Documentation Quick Links
- Setup: QUICKSTART.md
- Testing: TESTING.md
- Contributing: CONTRIBUTING.md
- Full Docs: README.md
- This Guide: PROJECT_SUMMARY.md
## Useful URLs

```text
# Local
http://localhost:7860          # Main app
http://localhost:7860/docs     # API docs (if enabled)

# Hugging Face (after deployment)
https://huggingface.co/spaces/YOUR-USERNAME/vit-auditing-toolkit

# GitHub (your repo)
https://github.com/dyra-12/ViT-XAI-Dashboard
```
## Keyboard Shortcuts (Browser)

- `Ctrl/Cmd + R`: Reload the interface
- `Ctrl/Cmd + Shift + I`: Open dev tools
- `Ctrl/Cmd + K`: Clear the console
## File Sizes Reference

```text
Total Project: ~1.6 MB
├── Code:   ~200 KB
├── Images: ~1.3 MB
├── Docs:   ~100 KB
└── Config: ~10 KB
```
## Performance Benchmarks

Typical response times:

- Model loading: 5-15 s (first time)
- Prediction: 0.5-2 s
- Attention viz: 1-3 s
- GradCAM: 2-4 s
- GradientSHAP: 8-15 s
- Counterfactual: 10-30 s
- Calibration: 5-10 s
- Bias detection: 5-10 s
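To reproduce these numbers on your own hardware, a stdlib-only timing decorator is enough (a sketch; `timed` is not part of the toolkit):

```python
import time
from functools import wraps

def timed(fn):
    """Print the wall-clock duration of each call (local benchmarking sketch)."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        print(f"{fn.__name__}: {time.perf_counter() - start:.2f}s")
        return result
    return wrapper

@timed
def fake_inference():
    time.sleep(0.1)  # stand-in for a real model call
    return "ok"

fake_inference()  # prints something like "fake_inference: 0.10s"
```

Wrap the toolkit's prediction or explanation entry points the same way to compare against the table above.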
## Pro Tips
- Use ViT-Base for quick testing
- Use ViT-Large for production/demos
- Cache results if analyzing same image repeatedly
- Start with Tab 1 to understand predictions
- Use examples/ images for consistent testing
- Check TESTING.md for detailed test cases
- Read CONTRIBUTING.md before making changes
## Getting Help
- Check this file first
- Read relevant documentation
- Search GitHub issues
- Open new issue with details
- Join discussions
## Pre-Demo Checklist

Before showing to others:

- [ ] App runs without errors
- [ ] All tabs functional
- [ ] Sample images loaded
- [ ] Model loads quickly
- [ ] UI looks professional
- [ ] No console errors
- [ ] README updated with your info
Keep this file handy for quick reference!

*Last updated: October 26, 2024*