Generative_Inference_Faces / TROUBLESHOOTING.md
Tahereh

Troubleshooting: Why It Works on Hugging Face Spaces But Not Locally

Common Issues and Solutions

1. Missing Dependencies ⚠️ (Most Common)

Problem: The required Python packages are not installed locally.

Solution: Install all dependencies:

cd /home/tahereh/engram/users/Tahereh/Codes/Public_Codes/Generative_Inference_Faces
pip install -r requirements.txt

Required packages:

  • torch and torchvision (PyTorch)
  • gradio (for the web interface)
  • numpy, pillow (PIL), matplotlib
  • requests, tqdm, huggingface_hub

2. GPU Decorator ✅ (Fixed)

Problem: The @spaces.GPU decorator from Hugging Face Spaces is not available locally.

Solution: The code now automatically handles this:

  • On Hugging Face Spaces: Uses the spaces.GPU decorator
  • Locally: Uses a no-op decorator (GPU detection is automatic via PyTorch)

Status: ✅ Fixed in the code
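
The fallback pattern can be sketched as follows (the decorator and function names here are illustrative, not necessarily the ones used in app.py):

```python
# Sketch of the try/except fallback; names are illustrative.
try:
    import spaces  # only present on Hugging Face Spaces
    gpu_decorator = spaces.GPU
except ImportError:
    def gpu_decorator(fn):
        # No Spaces runtime locally: the decorator does nothing, and
        # PyTorch selects the device on its own (see section 6).
        return fn

@gpu_decorator
def run_inference(x):
    # Stand-in for the demo's actual inference function.
    return x * 2
```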

3. Port Configuration ✅ (Fixed)

Problem: Port configuration was inconsistent between local and Spaces environments.

Solution: The code now:

  • Uses port 7860 by default (same as Spaces)
  • Allows custom port via --port argument
  • Automatically detects Hugging Face Spaces environment

Status: ✅ Fixed in the code
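
The port resolution described above can be sketched like this (the helper name is hypothetical; check app.py for the actual implementation):

```python
import argparse
import os

def resolve_port(argv=None):
    # Hypothetical helper mirroring the behavior above: on Spaces
    # (SPACE_ID set) honor the PORT env var; locally default to 7860
    # unless --port is given.
    parser = argparse.ArgumentParser()
    parser.add_argument("--port", type=int, default=7860)
    args = parser.parse_args(argv)
    if os.environ.get("SPACE_ID"):
        return int(os.environ.get("PORT", 7860))
    return args.port
```

A Gradio app would then pass the result to `demo.launch(server_port=resolve_port())`.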

4. Model Files Not Downloaded

Problem: Model checkpoint files may not be downloaded yet.

Solution: The code downloads the models automatically on first run, but you can verify they are present:

ls models/

Expected files:

  • resnet50_robust.pt
  • standard_resnet50.pt (optional)
  • resnet50_robust_face_100_checkpoint.pt (optional)
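
A minimal sketch of the download-if-missing logic, assuming the checkpoints are fetched from the Hugging Face Hub (the repo id below is a placeholder; app.py defines the actual source):

```python
from pathlib import Path

def ensure_checkpoint(filename, repo_id="your-org/your-model-repo", models_dir="models"):
    # repo_id is a placeholder -- app.py defines the real download source.
    target = Path(models_dir) / filename
    if target.exists():
        return target  # already downloaded, skip the network call
    from huggingface_hub import hf_hub_download  # lazy import
    return Path(hf_hub_download(repo_id=repo_id, filename=filename, local_dir=models_dir))
```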

5. Missing Stimuli Images

Problem: Example images may be missing.

Solution: Verify stimuli directory exists:

ls stimuli/

All example images should be present for the demo to work fully.
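
A quick way to spot missing files is a small check like the following (the `expected` tuple is a placeholder; substitute the demo's actual stimulus file names):

```python
from pathlib import Path

def missing_stimuli(stimuli_dir="stimuli", expected=("example_face.png",)):
    # `expected` is a placeholder list, not the repo's real stimulus set.
    d = Path(stimuli_dir)
    return [name for name in expected if not (d / name).exists()]
```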

6. CUDA/GPU Issues

Problem: GPU may not be available or configured correctly.

Solution: The code automatically detects available hardware:

  • CUDA (NVIDIA GPUs)
  • MPS (Apple Silicon)
  • CPU (fallback)

Check your setup:

import torch
print("CUDA available:", torch.cuda.is_available())
print("MPS available:", torch.backends.mps.is_available())
device = "cuda" if torch.cuda.is_available() else "mps" if torch.backends.mps.is_available() else "cpu"
print("Device:", torch.device(device))

7. Python Version

Problem: Incompatible Python version.

Solution: Use Python 3.8+ (tested with 3.11.5):

python --version
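
The same check can be done programmatically (helper name is illustrative):

```python
import sys

def check_python(min_version=(3, 8)):
    # Returns the running version string, or raises if the interpreter is too old.
    if sys.version_info < min_version:
        raise RuntimeError(f"Python {'.'.join(map(str, min_version))}+ required")
    return sys.version.split()[0]
```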

Quick Start Guide

  1. Install dependencies:

    pip install -r requirements.txt
    
  2. Run the app:

    python app.py
    

    Or with a custom port:

    python app.py --port 8080
    
  3. Access the web interface:

    • Open your browser to http://localhost:7860
    • Or the port you specified

Differences Between Hugging Face Spaces and Local

| Feature | Hugging Face Spaces | Local |
| --- | --- | --- |
| GPU decorator | `@spaces.GPU` available | No-op decorator (automatic GPU detection) |
| Port | Set via `PORT` env var | Default 7860, or `--port` argument |
| Dependencies | Pre-installed | Must install manually |
| Environment | `SPACE_ID` env var set | Not set |
| Model storage | Persistent storage | Local `models/` directory |

Testing the Fixes

After applying the fixes, test with:

# Check imports work
python -c "import gradio, torch, numpy, PIL; print('All imports OK')"

# Run the app
python app.py --port 7860

Still Having Issues?

  1. Check error messages: Look for specific import errors or file-not-found errors
  2. Verify Python environment: Make sure you're using the correct virtual environment
  3. Check file permissions: Ensure the models/ and stimuli/ directories are writable
  4. Review logs: Check the logs/ directory for model loading issues