# Troubleshooting: Why It Works on Hugging Face Spaces But Not Locally

## Common Issues and Solutions

### 1. **Missing Dependencies** ⚠️ (Most Common)

**Problem**: The required Python packages are not installed locally.

**Solution**: Install all dependencies:
```bash
cd /home/tahereh/engram/users/Tahereh/Codes/Public_Codes/Generative_Inference_Faces
pip install -r requirements.txt
```

**Required packages**:
- `torch` and `torchvision` (PyTorch)
- `gradio` (for the web interface)
- `numpy`, `pillow` (PIL), `matplotlib`
- `requests`, `tqdm`, `huggingface_hub`

### 2. **GPU Decorator** ✅ (Fixed)

**Problem**: The `@GPU` decorator from Hugging Face Spaces is not available locally.

**Solution**: The code now automatically handles this:
- On Hugging Face Spaces: Uses the `spaces.GPU` decorator
- Locally: Uses a no-op decorator (GPU detection is automatic via PyTorch)

**Status**: ✅ Fixed in the code
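The fallback pattern can be sketched as follows (a minimal illustration of the idea, not the exact code in `app.py`; the `generate` function is a hypothetical placeholder):

```python
# On Hugging Face Spaces the `spaces` package provides the real GPU
# decorator; locally we fall back to a no-op so the same code runs in
# both environments.
try:
    import spaces
    GPU = spaces.GPU
except ImportError:
    def GPU(func=None, **kwargs):
        # No-op decorator: supports both @GPU and @GPU(duration=...)
        if func is None:
            return lambda f: f
        return func

@GPU
def generate(prompt):
    # Placeholder for the actual inference function.
    return f"generated: {prompt}"
```

Because the fallback is a no-op, decorated functions behave identically in both environments; only Spaces adds GPU scheduling on top.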

### 3. **Port Configuration** ✅ (Fixed)

**Problem**: Port configuration was inconsistent between local and Spaces environments.

**Solution**: The code now:
- Uses port 7860 by default (same as Spaces)
- Allows custom port via `--port` argument
- Automatically detects Hugging Face Spaces environment

**Status**: ✅ Fixed in the code
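The detection and port logic might look roughly like this (a sketch assuming `argparse` and the `SPACE_ID`/`PORT` environment variables mentioned below; the helper name is hypothetical):

```python
import argparse
import os

def get_launch_config(argv=None):
    """Resolve the port and bind address for gradio's launch()."""
    # Hugging Face Spaces sets SPACE_ID; its presence identifies the environment.
    on_spaces = "SPACE_ID" in os.environ
    parser = argparse.ArgumentParser()
    # Spaces provides PORT via the environment; locally default to 7860.
    parser.add_argument("--port", type=int,
                        default=int(os.environ.get("PORT", 7860)))
    args = parser.parse_args(argv)
    return {
        "server_name": "0.0.0.0" if on_spaces else "127.0.0.1",
        "server_port": args.port,
    }
```

The returned dict can be splatted into `demo.launch(**get_launch_config())`.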

### 4. **Model Files Not Downloaded**

**Problem**: Model checkpoint files may not be downloaded yet.

**Solution**: The code will automatically download models on first run, but you can verify:
```bash
ls models/
```

Expected files:
- `resnet50_robust.pt`
- `standard_resnet50.pt` (optional)
- `resnet50_robust_face_100_checkpoint.pt` (optional)
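A quick check for missing required checkpoints can be scripted (a hedged sketch; it only verifies local files and does not replicate the app's actual download logic):

```python
from pathlib import Path

# Expected checkpoint files; True marks the one that is required.
EXPECTED = {
    "resnet50_robust.pt": True,
    "standard_resnet50.pt": False,
    "resnet50_robust_face_100_checkpoint.pt": False,
}

def missing_required_models(models_dir="models"):
    """Return the required checkpoint files absent from models_dir."""
    root = Path(models_dir)
    return [name for name, required in EXPECTED.items()
            if required and not (root / name).exists()]
```

If this returns a non-empty list, run the app once to trigger the automatic download.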

### 5. **Missing Stimuli Images**

**Problem**: Example images may be missing.

**Solution**: Verify stimuli directory exists:
```bash
ls stimuli/
```

All example images should be present for the demo to work fully.

### 6. **CUDA/GPU Issues**

**Problem**: GPU may not be available or configured correctly.

**Solution**: The code automatically detects available hardware:
- CUDA (NVIDIA GPUs)
- MPS (Apple Silicon)
- CPU (fallback)

Check your setup:
```python
import torch
print("CUDA available:", torch.cuda.is_available())
print("MPS available:", torch.backends.mps.is_available())
device = ("cuda" if torch.cuda.is_available()
          else "mps" if torch.backends.mps.is_available()
          else "cpu")
print("Device:", device)
```

### 7. **Python Version**

**Problem**: Incompatible Python version.

**Solution**: Use Python 3.8+ (tested with 3.11.5):
```bash
python --version
```
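The same check can be done programmatically from Python (a small illustrative helper, not part of the app):

```python
import sys

def check_python_version(minimum=(3, 8)):
    """Return True when the running interpreter meets the minimum version."""
    return sys.version_info[:2] >= minimum
```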

## Quick Start Guide

1. **Install dependencies**:
   ```bash
   pip install -r requirements.txt
   ```

2. **Run the app**:
   ```bash
   python app.py
   ```
   
   Or with a custom port:
   ```bash
   python app.py --port 8080
   ```

3. **Access the web interface**:
   - Open your browser to `http://localhost:7860`
   - Or the port you specified

## Differences Between Hugging Face Spaces and Local

| Feature | Hugging Face Spaces | Local |
|---------|-------------------|-------|
| GPU Decorator | `@spaces.GPU` available | No-op decorator (automatic GPU) |
| Port | Set via `PORT` env var | Default 7860, or `--port` arg |
| Dependencies | Pre-installed | Must install manually |
| Environment | `SPACE_ID` env var set | Not set |
| Model Storage | Persistent storage | Local `models/` directory |

## Testing the Fixes

After applying the fixes, test with:
```bash
# Check imports work
python -c "import gradio, torch, numpy, PIL; print('All imports OK')"

# Run the app
python app.py --port 7860
```

## Still Having Issues?

1. **Check error messages**: Look for specific import errors or file not found errors
2. **Verify Python environment**: Make sure you're using the correct virtual environment
3. **Check file permissions**: Ensure the `models/` and `stimuli/` directories are writable
4. **Review logs**: Check the `logs/` directory for model loading issues
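The directory and permission checks above can be combined into one hypothetical pre-flight helper (a sketch; adjust paths to your setup):

```python
import os
from pathlib import Path

def preflight(root="."):
    """Verify the models/ and stimuli/ directories exist and are writable."""
    issues = []
    for d in ("models", "stimuli"):
        p = Path(root) / d
        if not p.is_dir():
            issues.append(f"missing directory: {d}/")
        elif not os.access(p, os.W_OK):
            issues.append(f"not writable: {d}/")
    return issues
```

An empty return value means both directories are present and writable; otherwise each entry names a problem to fix before running the app.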