ViT-Auditing-Toolkit / CHEATSHEET.md
Dyuti Dasmahapatra

🚀 ViT Auditing Toolkit - Quick Reference

One-Liner Commands

# Quick start
python app.py

# Download sample images
python examples/download_samples.py

# Run tests
pytest tests/ -v

# Run with Docker
docker-compose up

# Check code style
black --check src/ tests/ app.py

# Generate coverage report
pytest --cov=src --cov-report=html tests/

📂 Project Structure Quick Map

ViT-XAI-Dashboard/
├── app.py                          # 🎯 Main application - START HERE
├── requirements.txt                # 📦 Dependencies
│
├── src/                            # 🧠 Core functionality
│   ├── model_loader.py            # Load ViT models from HF
│   ├── predictor.py               # Make predictions
│   ├── explainer.py               # XAI methods (Attention, GradCAM, SHAP)
│   ├── auditor.py                 # Advanced auditing tools
│   └── utils.py                   # Helper functions
│
├── examples/                       # 🖼️ Test images (20 images)
│   ├── basic_explainability/      # For Tab 1
│   ├── counterfactual/            # For Tab 2
│   ├── calibration/               # For Tab 3
│   ├── bias_detection/            # For Tab 4
│   └── general/                   # Misc testing
│
├── tests/                          # 🧪 Unit tests
│   ├── test_phase1_complete.py    # Basic tests
│   └── test_advanced_features.py  # Advanced tests
│
└── Documentation/                  # 📚 All docs
    ├── README.md                  # Main documentation
    ├── QUICKSTART.md              # 5-minute setup
    ├── TESTING.md                 # Testing guide
    ├── CONTRIBUTING.md            # Dev guidelines
    └── PROJECT_SUMMARY.md         # Project overview

🎯 Common Tasks

Start the Dashboard

python app.py
# Opens at http://localhost:7860

Test a Single Tab

# 1. Start app: python app.py
# 2. Go to http://localhost:7860
# 3. Load ViT-Base model
# 4. Tab 1: Upload examples/basic_explainability/cat_portrait.jpg
# 5. Click "Analyze Image"

Add New Test Image

# Option 1: Manual
cp /path/to/image.jpg examples/basic_explainability/

# Option 2: Download from URL
curl -L "https://example.com/image.jpg" -o examples/general/my_image.jpg

Run Quick Test

# Smoke test (verify everything works)
python app.py &
sleep 10
curl http://localhost:7860
# If no error, you're good!
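The smoke test above can also be scripted. Below is a minimal Python sketch of a polling helper; `wait_for` and the `probe` lambda are illustrative, not part of the toolkit:

```python
import time

def wait_for(probe, timeout=30.0, interval=1.0):
    """Poll probe() until it returns True or timeout seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if probe():
            return True
        time.sleep(interval)
    return False

# Example probe for the dashboard (assumes the default port):
# import urllib.request
# probe = lambda: urllib.request.urlopen("http://localhost:7860").status == 200
```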

πŸ” Tab Reference

Tab 1: Basic Explainability (🔍)

Purpose: Understand predictions
Methods: Attention, GradCAM, GradientSHAP
Best Images: examples/basic_explainability/
Use When: Want to see what model focuses on

Tab 2: Counterfactual Analysis (🔄)

Purpose: Test robustness
Methods: Patch perturbation (blur/blackout/gray/noise)
Best Images: examples/counterfactual/
Use When: Testing prediction stability
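For intuition, patch perturbation can be sketched in plain Python on a grayscale image stored as nested lists. `blackout_patch` is a hypothetical helper for illustration only; the toolkit's explainer presumably works on tensors and also supports the blur/gray/noise modes:

```python
def blackout_patch(image, top, left, size, fill=0):
    """Return a copy of image (a list of rows) with a size x size patch set to fill."""
    out = [row[:] for row in image]  # deep-enough copy; original stays intact
    for r in range(top, min(top + size, len(out))):
        for c in range(left, min(left + size, len(out[r]))):
            out[r][c] = fill
    return out

img = [[255] * 4 for _ in range(4)]          # 4x4 all-white toy image
perturbed = blackout_patch(img, top=1, left=1, size=2)
# The 2x2 center patch is now 0; re-running prediction on `perturbed`
# shows how much the model relied on that region.
```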

Tab 3: Confidence Calibration (📊)

Purpose: Validate confidence scores
Methods: Calibration curves, reliability diagrams
Best Images: examples/calibration/
Use When: Checking if confidence matches accuracy
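A common calibration metric is Expected Calibration Error (ECE): bin predictions by confidence, then take the count-weighted gap between each bin's average confidence and its accuracy. A minimal sketch, which may differ from the toolkit's own implementation:

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: weighted average |mean confidence - accuracy| over confidence bins."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)  # clamp conf == 1.0 into last bin
        bins[idx].append((conf, ok))
    total, ece = len(confidences), 0.0
    for bucket in bins:
        if not bucket:
            continue
        avg_conf = sum(c for c, _ in bucket) / len(bucket)
        accuracy = sum(1 for _, ok in bucket if ok) / len(bucket)
        ece += (len(bucket) / total) * abs(avg_conf - accuracy)
    return ece

# Toy check: 90% confidence with 90% accuracy is (almost) perfectly calibrated
ece = expected_calibration_error([0.9] * 10, [True] * 9 + [False])  # ~0.0
```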

Tab 4: Bias Detection (⚖️)

Purpose: Find performance disparities
Methods: Subgroup analysis
Best Images: examples/bias_detection/
Use When: Testing fairness across conditions
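Subgroup analysis boils down to computing the same metric per group and comparing. A minimal accuracy-per-subgroup sketch (the group names and records are illustrative, not the toolkit's API):

```python
from collections import defaultdict

def subgroup_accuracy(records):
    """records: iterable of (subgroup, predicted, actual) -> accuracy per subgroup."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, pred, actual in records:
        totals[group] += 1
        hits[group] += int(pred == actual)
    return {g: hits[g] / totals[g] for g in totals}

results = subgroup_accuracy([
    ("daylight", "dog", "dog"),
    ("daylight", "dog", "dog"),
    ("night", "cat", "dog"),
    ("night", "dog", "dog"),
])
# {'daylight': 1.0, 'night': 0.5} — a gap like this is what Tab 4 surfaces
```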


🎨 Customization Quick Tips

Change Port

# app.py, last line:
demo.launch(server_port=7860)  # Change 7860 to your port
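On hosts like Hugging Face Spaces the port is usually dictated by a PORT environment variable, so it helps to read that before falling back to a hardcoded value. A sketch (`resolve_port` is illustrative, not a function from app.py):

```python
import os

def resolve_port(env=None, default=7860):
    """Prefer a PORT environment variable (set by many hosts), else the default."""
    env = os.environ if env is None else env
    return int(env.get("PORT", default))

# In app.py this would look roughly like:
# demo.launch(server_name="0.0.0.0", server_port=resolve_port())
```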

Add New Model

# src/model_loader.py:
SUPPORTED_MODELS = {
    "ViT-Base": "google/vit-base-patch16-224",
    "ViT-Large": "google/vit-large-patch16-224",
    # New additions
    "ResNet-50": "microsoft/resnet-50",
    "Swin Transformer": "microsoft/swin-base-patch4-window7-224",
    "DeiT": "facebook/deit-base-patch16-224",
    "EfficientNet": "google/efficientnet-b7",
}
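When wiring a new entry into the loader, it helps to fail loudly on unknown names rather than pass a typo straight to Hugging Face. A sketch of a lookup helper (`resolve_model_id` is hypothetical; the real logic in src/model_loader.py may differ):

```python
SUPPORTED_MODELS = {
    "ViT-Base": "google/vit-base-patch16-224",
    "ViT-Large": "google/vit-large-patch16-224",
}

def resolve_model_id(name):
    """Map a display name to its Hugging Face model id, with a helpful error."""
    try:
        return SUPPORTED_MODELS[name]
    except KeyError:
        known = ", ".join(sorted(SUPPORTED_MODELS))
        raise ValueError(f"Unknown model {name!r}; choose one of: {known}") from None

print(resolve_model_id("ViT-Base"))  # google/vit-base-patch16-224
```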

Modify Colors

# app.py, custom_css variable:
# Change gradient colors, backgrounds, etc.

πŸ› Troubleshooting Quick Fixes

Port Already in Use

# Linux/Mac:
lsof -ti:7860 | xargs kill -9
# Windows:
netstat -ano | findstr :7860
taskkill /PID <PID> /F
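A cross-platform alternative is to check whether the port can be bound before launching. A small Python sketch (not part of the toolkit):

```python
import socket

def port_is_free(port, host="127.0.0.1"):
    """True if nothing is listening on (host, port), i.e. we can bind it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

if not port_is_free(7860):
    print("Port 7860 is taken; pass a different server_port to demo.launch()")
```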

Out of Memory

# Use smaller model
model_choice = "ViT-Base"  # instead of ViT-Large

# Or clear the GPU cache (guarded so it is safe on CPU-only machines)
import torch
if torch.cuda.is_available():
    torch.cuda.empty_cache()

Model Download Fails

# Set cache directory
export HF_HOME="/path/to/writable/dir"
export TRANSFORMERS_CACHE="/path/to/writable/dir"

Slow Inference

# Check GPU availability
python -c "import torch; print(torch.cuda.is_available())"

# Install CUDA version if False
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118

📊 Model Comparison

| Feature    | ViT-Base    | ViT-Large  |
|------------|-------------|------------|
| Parameters | 86M         | 304M       |
| Memory     | ~2GB        | ~4GB       |
| Speed      | Faster      | Slower     |
| Accuracy   | ~81%        | ~83%       |
| Best For   | Quick tests | Production |

🧪 Testing Shortcuts

Minimal Test (30 seconds)

python app.py &
# Load model → Upload cat_portrait.jpg → Analyze

Full Test (5 minutes)

# One image per tab
Tab 1: cat_portrait.jpg
Tab 2: flower.jpg
Tab 3: clear_panda.jpg
Tab 4: dog_daylight.jpg

Comprehensive Test (30 minutes)

# Follow TESTING.md for all 22 tests

📚 Documentation Quick Links

  • Setup: QUICKSTART.md
  • Testing: TESTING.md
  • Contributing: CONTRIBUTING.md
  • Full Docs: README.md
  • Project Overview: PROJECT_SUMMARY.md

🔗 Useful URLs

# Local
http://localhost:7860              # Main app
http://localhost:7860/docs         # API docs (if enabled)

# Hugging Face (after deployment)
https://huggingface.co/spaces/YOUR-USERNAME/vit-auditing-toolkit

# GitHub (your repo)
https://github.com/dyra-12/ViT-XAI-Dashboard

⌨️ Keyboard Shortcuts (Browser)

  • Ctrl/Cmd + R: Reload interface
  • Ctrl/Cmd + Shift + I: Open dev tools
  • Ctrl/Cmd + K: Clear console

📦 File Sizes Reference

Total Project: ~1.6 MB
├── Code: ~200 KB
├── Images: ~1.3 MB
├── Docs: ~100 KB
└── Config: ~10 KB

🎯 Performance Benchmarks

Typical Response Times:

  • Model Loading: 5-15s (first time)
  • Prediction: 0.5-2s
  • Attention Viz: 1-3s
  • GradCAM: 2-4s
  • GradientSHAP: 8-15s
  • Counterfactual: 10-30s
  • Calibration: 5-10s
  • Bias Detection: 5-10s
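To reproduce numbers like these on your own hardware, a tiny timing context manager is enough. A sketch; the `predictor.predict` call in the usage comment is an assumption, not a verified API:

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label):
    """Print how long the wrapped block took, for quick benchmarking."""
    start = time.perf_counter()
    yield
    print(f"{label}: {time.perf_counter() - start:.2f}s")

# Usage (hypothetical call):
# with timed("Prediction"):
#     result = predictor.predict(image)
```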

💡 Pro Tips

  1. Use ViT-Base for quick testing
  2. Use ViT-Large for production/demos
  3. Cache results if analyzing same image repeatedly
  4. Start with Tab 1 to understand predictions
  5. Use examples/ images for consistent testing
  6. Check TESTING.md for detailed test cases
  7. Read CONTRIBUTING.md before making changes

🆘 Getting Help

  1. Check this file first
  2. Read relevant documentation
  3. Search GitHub issues
  4. Open new issue with details
  5. Join discussions

✅ Pre-Demo Checklist

Before showing to others:

  • App runs without errors
  • All tabs functional
  • Sample images loaded
  • Model loads quickly
  • UI looks professional
  • No console errors
  • README updated with your info

Keep this file handy for quick reference! 📌

Last updated: October 26, 2024