Spaces:
Runtime error
Runtime error
Fix: Embed GLB files as base64 data URLs instead of using /static/ routes
Browse files- Modified build_model_viewer_html() to read GLB file and encode as base64 data URL
- Embed model-viewer HTML directly without iframe (no /static/ route needed)
- Removed all FastAPI static file serving code (routes, startup handlers, etc.)
- This approach works with Gradio's demo.launch() without custom routes
- Fixes 'Not Found' error when displaying generated 3D models
- ALLOWED_PATHS_FIX.md +73 -0
- DEPLOYMENT_SOLUTIONS.md +184 -0
- GPU_DECORATOR_FIX.md +222 -0
- INVALID_PORT_FIX.md +96 -0
- PERSISTENT_GPU_SETUP.md +196 -0
- STATIC_ASSETS_404_FIX.md +136 -0
- STATIC_FILES_FIX.md +185 -0
- UI_LOADING_FIX.md +112 -0
- ZEROGPU_FIX.md +95 -0
- check_space.sh +24 -0
- gradio_app.py +26 -56
ALLOWED_PATHS_FIX.md
ADDED
|
@@ -0,0 +1,73 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Fix for InvalidPathError in Gradio
|
| 2 |
+
|
| 3 |
+
## Problem
|
| 4 |
+
When generating 3D shapes, Gradio threw an error:
|
| 5 |
+
|
| 6 |
+
```
|
| 7 |
+
gradio.exceptions.InvalidPathError: Cannot move /root/save_dir/.../white_mesh.glb
|
| 8 |
+
to the gradio cache dir because it was not created by the application or it is not
|
| 9 |
+
located in either the current working directory (/home/user/app), your system's
|
| 10 |
+
temp directory (/tmp) or add /root/save_dir/... to the allowed_paths parameter
|
| 11 |
+
of launch().
|
| 12 |
+
```
|
| 13 |
+
|
| 14 |
+
## Root Cause
|
| 15 |
+
Gradio 5.x has security restrictions that prevent serving files from arbitrary directories. By default, it only allows:
|
| 16 |
+
- Current working directory (`/home/user/app`)
|
| 17 |
+
- System temp directory (`/tmp`)
|
| 18 |
+
|
| 19 |
+
The application saves generated files to `/root/save_dir/` (configured via `--cache-path`), which is outside these allowed locations.
|
| 20 |
+
|
| 21 |
+
## Solution
|
| 22 |
+
Add the save directory to Gradio's `allowed_paths` parameter in `demo.launch()`.
|
| 23 |
+
|
| 24 |
+
### Change in gradio_app.py (line 928-933)
|
| 25 |
+
|
| 26 |
+
**Before:**
|
| 27 |
+
```python
|
| 28 |
+
demo.launch(
|
| 29 |
+
server_name=args.host,
|
| 30 |
+
server_port=args.port,
|
| 31 |
+
share=False
|
| 32 |
+
)
|
| 33 |
+
```
|
| 34 |
+
|
| 35 |
+
**After:**
|
| 36 |
+
```python
|
| 37 |
+
demo.launch(
|
| 38 |
+
server_name=args.host,
|
| 39 |
+
server_port=args.port,
|
| 40 |
+
share=False,
|
| 41 |
+
allowed_paths=[SAVE_DIR] # Allow access to generated files in save directory
|
| 42 |
+
)
|
| 43 |
+
```
|
| 44 |
+
|
| 45 |
+
## Why This Works
|
| 46 |
+
- `SAVE_DIR` is set from `args.cache_path` (default: `/root/save_dir`)
|
| 47 |
+
- `allowed_paths` tells Gradio it's safe to serve files from this directory
|
| 48 |
+
- Generated GLB, HTML, and other output files can now be accessed and downloaded
|
| 49 |
+
|
| 50 |
+
## Security Note
|
| 51 |
+
This is safe because:
|
| 52 |
+
- The directory is controlled by the application
|
| 53 |
+
- Files are created by the application itself
|
| 54 |
+
- The path is not user-controlled (set via argparse defaults)
|
| 55 |
+
- HuggingFace Spaces runs in an isolated container
|
| 56 |
+
|
| 57 |
+
## Testing
|
| 58 |
+
After this fix:
|
| 59 |
+
1. β
Upload an image
|
| 60 |
+
2. β
Click "Generate Shape"
|
| 61 |
+
3. β
3D model generates successfully
|
| 62 |
+
4. β
GLB file is downloadable
|
| 63 |
+
5. β
3D viewer shows the mesh
|
| 64 |
+
6. β
No InvalidPathError
|
| 65 |
+
|
| 66 |
+
## Deployment
|
| 67 |
+
- Commit: `210033c`
|
| 68 |
+
- Pushed to: HuggingFace Spaces
|
| 69 |
+
- Expected rebuild time: 5-10 minutes
|
| 70 |
+
|
| 71 |
+
## Related Gradio Documentation
|
| 72 |
+
- [allowed_paths parameter](https://www.gradio.app/docs/interface#launch)
|
| 73 |
+
- [File security in Gradio 5.x](https://www.gradio.app/guides/security-and-file-access)
|
DEPLOYMENT_SOLUTIONS.md
ADDED
|
@@ -0,0 +1,184 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# ZeroGPU Incompatibility - Solutions Guide
|
| 2 |
+
|
| 3 |
+
## Problem
|
| 4 |
+
HuggingFace's ZeroGPU system cannot handle Hunyuan3D-2.1's large models (~5GB). The error occurs when ZeroGPU tries to offload models to disk:
|
| 5 |
+
|
| 6 |
+
```
|
| 7 |
+
FileNotFoundError: [Errno 2] No such file or directory: '/data-nvme/zerogpu-offload/...'
|
| 8 |
+
```
|
| 9 |
+
|
| 10 |
+
## Why ZeroGPU Fails
|
| 11 |
+
- **Model Size**: Hunyuan3D-2.1 has ~5GB of models
|
| 12 |
+
- **Complex State**: Custom C++ extensions + PyTorch models + texture synthesis pipeline
|
| 13 |
+
- **Offloading Mechanism**: ZeroGPU's offload directory has issues with these large, complex models
|
| 14 |
+
- **Background Removal + 3D Generation**: Multiple models need to be in memory simultaneously
|
| 15 |
+
|
| 16 |
+
## Solution Implemented: Persistent GPU
|
| 17 |
+
|
| 18 |
+
### Changes Made (Commit: 77d72f8)
|
| 19 |
+
|
| 20 |
+
1. **Disabled ZeroGPU decorators** in `gradio_app.py`:
|
| 21 |
+
```python
|
| 22 |
+
# Before:
|
| 23 |
+
@spaces.GPU(duration=60)
|
| 24 |
+
def _gen_shape(...):
|
| 25 |
+
|
| 26 |
+
# After:
|
| 27 |
+
# Disabled ZeroGPU due to offloading errors with large models
|
| 28 |
+
# @spaces.GPU(duration=60)
|
| 29 |
+
def _gen_shape(...):
|
| 30 |
+
```
|
| 31 |
+
|
| 32 |
+
2. **Removed `zero.startup()` call**:
|
| 33 |
+
```python
|
| 34 |
+
# Before:
|
| 35 |
+
if ENV == 'Huggingface':
|
| 36 |
+
from spaces import zero
|
| 37 |
+
zero.startup()
|
| 38 |
+
|
| 39 |
+
# After:
|
| 40 |
+
# ZeroGPU disabled due to offloading errors - using persistent GPU instead
|
| 41 |
+
```
|
| 42 |
+
|
| 43 |
+
3. **Use CUDA directly**:
|
| 44 |
+
```python
|
| 45 |
+
# Before:
|
| 46 |
+
model_device = 'cpu' if ENV == 'Huggingface' else args.device
|
| 47 |
+
|
| 48 |
+
# After:
|
| 49 |
+
model_device = args.device # Always use CUDA for persistent GPU
|
| 50 |
+
```
|
| 51 |
+
|
| 52 |
+
4. **Removed `spaces` library** from `requirements.txt`:
|
| 53 |
+
```diff
|
| 54 |
+
- spaces>=0.28.3
|
| 55 |
+
+ # spaces>=0.28.3 # Disabled: ZeroGPU causes offloading errors
|
| 56 |
+
```
|
| 57 |
+
|
| 58 |
+
5. **Updated hardware request** in `README.md`:
|
| 59 |
+
```yaml
|
| 60 |
+
suggested_hardware: a10g-large # Was: a100-large
|
| 61 |
+
```
|
| 62 |
+
|
| 63 |
+
## Required Action: Upgrade to Paid GPU Tier
|
| 64 |
+
|
| 65 |
+
**You MUST upgrade your HuggingFace Space to a paid persistent GPU tier:**
|
| 66 |
+
|
| 67 |
+
### Steps:
|
| 68 |
+
1. Go to your Space: https://huggingface.co/spaces/minhho/Hunyuan-MT
|
| 69 |
+
2. Click **Settings** (top right)
|
| 70 |
+
3. Scroll to **Hardware** section
|
| 71 |
+
4. Select a persistent GPU tier:
|
| 72 |
+
- **A10G Large** (~$0.60/hour) - Recommended, 24GB VRAM
|
| 73 |
+
- **A10G Small** (~$0.30/hour) - Cheaper, 24GB VRAM (might work)
|
| 74 |
+
- **T4 Medium** (~$0.60/hour) - Budget option, 16GB VRAM (might be tight)
|
| 75 |
+
5. Click **Save** and wait for rebuild
|
| 76 |
+
|
| 77 |
+
### Cost Estimate
|
| 78 |
+
- **A10G Large**: ~$432/month if running 24/7
|
| 79 |
+
- **A10G Small**: ~$216/month if running 24/7
|
| 80 |
+
- **Tip**: Set up **Sleep after inactivity** to reduce costs
|
| 81 |
+
|
| 82 |
+
## Alternative Solutions
|
| 83 |
+
|
| 84 |
+
### Alternative 1: Use Different Deployment Platform (FREE)
|
| 85 |
+
|
| 86 |
+
Deploy to platforms with better GPU support:
|
| 87 |
+
|
| 88 |
+
#### **Replicate** (Pay-per-use, easier)
|
| 89 |
+
- Only pay when someone uses the model
|
| 90 |
+
- Better for demos/testing
|
| 91 |
+
- Setup: https://replicate.com/docs/guides/push-a-model
|
| 92 |
+
|
| 93 |
+
#### **RunPod Serverless** (More control)
|
| 94 |
+
- Deploy as serverless endpoint
|
| 95 |
+
- Pay only for compute time
|
| 96 |
+
- Setup: https://docs.runpod.io/serverless/overview
|
| 97 |
+
|
| 98 |
+
#### **Modal** (Python-native)
|
| 99 |
+
- Deploy Python apps with GPU
|
| 100 |
+
- Free tier available
|
| 101 |
+
- Setup: https://modal.com/docs/guide
|
| 102 |
+
|
| 103 |
+
### Alternative 2: Reduce Model Size
|
| 104 |
+
|
| 105 |
+
Modify `gradio_app.py` to use smaller/quantized models:
|
| 106 |
+
|
| 107 |
+
```python
|
| 108 |
+
# Use model quantization
|
| 109 |
+
i23d_worker = Hunyuan3DDiTFlowMatchingPipeline.from_pretrained(
|
| 110 |
+
args.model_path,
|
| 111 |
+
subfolder=args.subfolder,
|
| 112 |
+
use_safetensors=False,
|
| 113 |
+
device=model_device,
|
| 114 |
+
torch_dtype=torch.float16, # Half precision
|
| 115 |
+
variant="fp16", # Use FP16 variant if available
|
| 116 |
+
)
|
| 117 |
+
|
| 118 |
+
# Enable CPU offloading for parts of the model
|
| 119 |
+
i23d_worker.enable_model_cpu_offload()
|
| 120 |
+
```
|
| 121 |
+
|
| 122 |
+
### Alternative 3: Self-Host
|
| 123 |
+
|
| 124 |
+
Run on your own hardware:
|
| 125 |
+
|
| 126 |
+
**Local Development:**
|
| 127 |
+
```bash
|
| 128 |
+
python gradio_app.py \
|
| 129 |
+
--host 0.0.0.0 \
|
| 130 |
+
--port 7860 \
|
| 131 |
+
--device cuda
|
| 132 |
+
```
|
| 133 |
+
|
| 134 |
+
**Cloud VM (e.g., Vast.ai, Lambda Labs):**
|
| 135 |
+
1. Rent GPU instance (~$0.20-$0.50/hour for A10)
|
| 136 |
+
2. Clone repo and install dependencies
|
| 137 |
+
3. Run with `--host 0.0.0.0` to expose publicly
|
| 138 |
+
4. Use ngrok or cloudflared for public URL
|
| 139 |
+
|
| 140 |
+
### Alternative 4: Hybrid Approach
|
| 141 |
+
|
| 142 |
+
Keep HuggingFace Space for UI, but run inference on external API:
|
| 143 |
+
|
| 144 |
+
1. Deploy model on Replicate/Modal/RunPod
|
| 145 |
+
2. Modify `gradio_app.py` to call external API instead of local model
|
| 146 |
+
3. HuggingFace Space stays on free CPU tier (just serving UI)
|
| 147 |
+
|
| 148 |
+
## Recommendation
|
| 149 |
+
|
| 150 |
+
**For this project, I recommend:**
|
| 151 |
+
|
| 152 |
+
1. **Short-term**: Upgrade to **A10G Large** persistent GPU on HuggingFace
|
| 153 |
+
- Easiest solution
|
| 154 |
+
- Works immediately after rebuild
|
| 155 |
+
- Official support from HuggingFace
|
| 156 |
+
|
| 157 |
+
2. **Long-term**: Deploy to **Replicate**
|
| 158 |
+
- Pay-per-use pricing (much cheaper for demos)
|
| 159 |
+
- No idle costs
|
| 160 |
+
- Professional deployment platform
|
| 161 |
+
|
| 162 |
+
## Current Status
|
| 163 |
+
|
| 164 |
+
- β
Code updated to work with persistent GPU
|
| 165 |
+
- β
ZeroGPU decorators disabled
|
| 166 |
+
- β
Models load directly to CUDA
|
| 167 |
+
- β³ **Waiting for you to upgrade to paid GPU tier**
|
| 168 |
+
- β³ Space will fail until GPU tier is upgraded
|
| 169 |
+
|
| 170 |
+
## Next Steps
|
| 171 |
+
|
| 172 |
+
1. **Upgrade hardware tier** in Space settings
|
| 173 |
+
2. Wait for rebuild (5-10 minutes)
|
| 174 |
+
3. Test the application
|
| 175 |
+
4. Consider implementing **sleep after inactivity** to reduce costs
|
| 176 |
+
|
| 177 |
+
## Testing After Upgrade
|
| 178 |
+
|
| 179 |
+
Once upgraded, verify:
|
| 180 |
+
- β
Space status shows "Running"
|
| 181 |
+
- β
No ZeroGPU offloading errors
|
| 182 |
+
- β
Models load successfully
|
| 183 |
+
- β
Can generate 3D shapes from images
|
| 184 |
+
- β
GPU memory is sufficient (~16-20GB used)
|
GPU_DECORATOR_FIX.md
ADDED
|
@@ -0,0 +1,222 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Fix for "No @spaces.GPU function detected" Error
|
| 2 |
+
|
| 3 |
+
## Problem
|
| 4 |
+
After re-adding FastAPI for static file serving, the Space failed with:
|
| 5 |
+
```
|
| 6 |
+
runtime error
|
| 7 |
+
No @spaces.GPU function detected during startup
|
| 8 |
+
```
|
| 9 |
+
|
| 10 |
+
Then the server immediately shut down.
|
| 11 |
+
|
| 12 |
+
## Root Cause
|
| 13 |
+
|
| 14 |
+
### The Issue with `gr.mount_gradio_app()`
|
| 15 |
+
|
| 16 |
+
When using `gr.mount_gradio_app()` to mount Gradio on a custom FastAPI app:
|
| 17 |
+
|
| 18 |
+
```python
|
| 19 |
+
app = FastAPI()
|
| 20 |
+
app.mount("/static", StaticFiles(...))
|
| 21 |
+
app = gr.mount_gradio_app(app, demo, path="/")
|
| 22 |
+
uvicorn.run(app, ...)
|
| 23 |
+
```
|
| 24 |
+
|
| 25 |
+
The `@spaces.GPU` decorators are **not detected** during HuggingFace's startup validation. This is because:
|
| 26 |
+
- The Space startup scanner looks for GPU decorators in the main Gradio app
|
| 27 |
+
- When Gradio is mounted on a custom FastAPI app, the scanner doesn't find them
|
| 28 |
+
- HuggingFace enforces that GPU Spaces must have at least one `@spaces.GPU` decorator
|
| 29 |
+
|
| 30 |
+
## Solution: Use `demo.launch()` + Custom Route
|
| 31 |
+
|
| 32 |
+
Instead of mounting Gradio on FastAPI, we do the reverse:
|
| 33 |
+
1. Use Gradio's native `demo.launch()`
|
| 34 |
+
2. Access Gradio's internal FastAPI app (`demo.app`)
|
| 35 |
+
3. Add custom route for `/static/` files
|
| 36 |
+
|
| 37 |
+
### Implementation (Commit: 555ea3b)
|
| 38 |
+
|
| 39 |
+
```python
|
| 40 |
+
demo = build_app()
|
| 41 |
+
|
| 42 |
+
# Get Gradio's FastAPI app
|
| 43 |
+
app = demo.app
|
| 44 |
+
|
| 45 |
+
# Add static file serving route
|
| 46 |
+
@app.get("/static/{file_path:path}")
|
| 47 |
+
async def serve_static(file_path: str):
|
| 48 |
+
full_path = os.path.join(SAVE_DIR, file_path)
|
| 49 |
+
if os.path.exists(full_path) and os.path.isfile(full_path):
|
| 50 |
+
mime_type, _ = mimetypes.guess_type(full_path)
|
| 51 |
+
return FileResponse(full_path, media_type=mime_type)
|
| 52 |
+
return {"detail": "Not Found"}
|
| 53 |
+
|
| 54 |
+
# Launch Gradio (this initializes @spaces.GPU properly)
|
| 55 |
+
demo.launch(
|
| 56 |
+
server_name=args.host,
|
| 57 |
+
server_port=args.port,
|
| 58 |
+
share=False,
|
| 59 |
+
allowed_paths=[SAVE_DIR]
|
| 60 |
+
)
|
| 61 |
+
```
|
| 62 |
+
|
| 63 |
+
## Why This Works
|
| 64 |
+
|
| 65 |
+
### 1. GPU Decorator Detection β
|
| 66 |
+
- `demo.launch()` properly initializes the Gradio app
|
| 67 |
+
- HuggingFace's scanner detects `@spaces.GPU` decorators
|
| 68 |
+
- Space passes validation and starts successfully
|
| 69 |
+
|
| 70 |
+
### 2. Static File Serving β
|
| 71 |
+
- We access Gradio's internal FastAPI app via `demo.app`
|
| 72 |
+
- Add custom route `@app.get("/static/{file_path:path}")`
|
| 73 |
+
- Use `FileResponse` to serve files from `SAVE_DIR`
|
| 74 |
+
- Proper MIME type detection for different file types (HTML, GLB, JPG, etc.)
|
| 75 |
+
|
| 76 |
+
### 3. Security β
|
| 77 |
+
- Files are served from controlled directory (`SAVE_DIR`)
|
| 78 |
+
- Path validation: checks file exists and is a file (not directory)
|
| 79 |
+
- `allowed_paths=[SAVE_DIR]` ensures Gradio can access files
|
| 80 |
+
|
| 81 |
+
## Request Flow
|
| 82 |
+
|
| 83 |
+
### For 3D Model Viewer
|
| 84 |
+
|
| 85 |
+
1. **User clicks "Generate Shape"**
|
| 86 |
+
```
|
| 87 |
+
POST /api/predict β shape_generation()
|
| 88 |
+
```
|
| 89 |
+
|
| 90 |
+
2. **Generation creates files**
|
| 91 |
+
```
|
| 92 |
+
/root/save_dir/<uuid>/
|
| 93 |
+
βββ white_mesh.glb
|
| 94 |
+
βββ white_mesh.html
|
| 95 |
+
```
|
| 96 |
+
|
| 97 |
+
3. **Function returns HTML with iframe**
|
| 98 |
+
```html
|
| 99 |
+
<iframe src="/static/<uuid>/white_mesh.html" ...>
|
| 100 |
+
```
|
| 101 |
+
|
| 102 |
+
4. **Browser requests HTML**
|
| 103 |
+
```
|
| 104 |
+
GET /static/<uuid>/white_mesh.html
|
| 105 |
+
β Custom route serves file
|
| 106 |
+
β FileResponse returns HTML
|
| 107 |
+
```
|
| 108 |
+
|
| 109 |
+
5. **HTML loads GLB**
|
| 110 |
+
```html
|
| 111 |
+
<model-viewer src="./white_mesh.glb">
|
| 112 |
+
```
|
| 113 |
+
|
| 114 |
+
6. **Browser requests GLB**
|
| 115 |
+
```
|
| 116 |
+
GET /static/<uuid>/white_mesh.glb
|
| 117 |
+
β Custom route serves file
|
| 118 |
+
β FileResponse returns GLB with proper MIME type
|
| 119 |
+
```
|
| 120 |
+
|
| 121 |
+
7. **3D model displays** β
|
| 122 |
+
|
| 123 |
+
## Comparison of Approaches
|
| 124 |
+
|
| 125 |
+
### β Approach 1: Custom FastAPI + mount Gradio (Commit 289ffec - FAILED)
|
| 126 |
+
```python
|
| 127 |
+
app = FastAPI()
|
| 128 |
+
app.mount("/static", StaticFiles(...))
|
| 129 |
+
app = gr.mount_gradio_app(app, demo, path="/")
|
| 130 |
+
uvicorn.run(app, ...)
|
| 131 |
+
```
|
| 132 |
+
**Problem**: `@spaces.GPU` decorators not detected
|
| 133 |
+
|
| 134 |
+
### β
Approach 2: Gradio launch + custom route (Commit 555ea3b - WORKS)
|
| 135 |
+
```python
|
| 136 |
+
demo = build_app()
|
| 137 |
+
app = demo.app
|
| 138 |
+
@app.get("/static/{file_path:path}")
|
| 139 |
+
async def serve_static(...): ...
|
| 140 |
+
demo.launch(...)
|
| 141 |
+
```
|
| 142 |
+
**Result**: GPU decorators detected, static files served
|
| 143 |
+
|
| 144 |
+
## Code Changes
|
| 145 |
+
|
| 146 |
+
### Before (Broken)
|
| 147 |
+
```python
|
| 148 |
+
# Create FastAPI app for serving static files
|
| 149 |
+
app = FastAPI()
|
| 150 |
+
|
| 151 |
+
# Mount static files directory for generated GLB/HTML files
|
| 152 |
+
app.mount("/static", StaticFiles(directory=static_dir, html=True), name="static")
|
| 153 |
+
|
| 154 |
+
# Mount Gradio app at root path
|
| 155 |
+
app = gr.mount_gradio_app(app, demo, path="/", allowed_paths=[SAVE_DIR])
|
| 156 |
+
|
| 157 |
+
# Launch with Uvicorn
|
| 158 |
+
uvicorn.run(app, host=args.host, port=args.port)
|
| 159 |
+
```
|
| 160 |
+
|
| 161 |
+
### After (Working)
|
| 162 |
+
```python
|
| 163 |
+
# Create FastAPI app for serving static files alongside Gradio
|
| 164 |
+
from fastapi.responses import FileResponse
|
| 165 |
+
import mimetypes
|
| 166 |
+
|
| 167 |
+
# Get Gradio's FastAPI app
|
| 168 |
+
app = demo.app
|
| 169 |
+
|
| 170 |
+
# Add static file serving route
|
| 171 |
+
@app.get("/static/{file_path:path}")
|
| 172 |
+
async def serve_static(file_path: str):
|
| 173 |
+
full_path = os.path.join(SAVE_DIR, file_path)
|
| 174 |
+
if os.path.exists(full_path) and os.path.isfile(full_path):
|
| 175 |
+
mime_type, _ = mimetypes.guess_type(full_path)
|
| 176 |
+
return FileResponse(full_path, media_type=mime_type)
|
| 177 |
+
return {"detail": "Not Found"}
|
| 178 |
+
|
| 179 |
+
# Launch Gradio with allowed_paths
|
| 180 |
+
demo.launch(
|
| 181 |
+
server_name=args.host,
|
| 182 |
+
server_port=args.port,
|
| 183 |
+
share=False,
|
| 184 |
+
allowed_paths=[SAVE_DIR]
|
| 185 |
+
)
|
| 186 |
+
```
|
| 187 |
+
|
| 188 |
+
## Benefits of This Approach
|
| 189 |
+
|
| 190 |
+
1. **Minimal Code**: Just add one custom route to Gradio's app
|
| 191 |
+
2. **Native Integration**: Uses Gradio's built-in FastAPI app
|
| 192 |
+
3. **GPU Support**: Properly initializes `@spaces.GPU` decorators
|
| 193 |
+
4. **File Serving**: Serves static files with correct MIME types
|
| 194 |
+
5. **Security**: Validates file paths and checks existence
|
| 195 |
+
6. **Clean URLs**: `/static/` route works as expected
|
| 196 |
+
|
| 197 |
+
## Testing Checklist
|
| 198 |
+
|
| 199 |
+
After this fix:
|
| 200 |
+
|
| 201 |
+
- [ ] Space builds successfully
|
| 202 |
+
- [ ] **No "No @spaces.GPU detected" error** β
|
| 203 |
+
- [ ] Server starts: "Uvicorn running on http://0.0.0.0:7860"
|
| 204 |
+
- [ ] UI loads correctly
|
| 205 |
+
- [ ] Can upload image
|
| 206 |
+
- [ ] Click "Generate Shape" β works
|
| 207 |
+
- [ ] **3D model displays in viewer** (not "Not Found")
|
| 208 |
+
- [ ] Can interact with 3D viewer (rotate, zoom)
|
| 209 |
+
- [ ] Can download GLB file
|
| 210 |
+
|
| 211 |
+
## Deployment
|
| 212 |
+
- Commit: `555ea3b`
|
| 213 |
+
- Files changed: `gradio_app.py`
|
| 214 |
+
- Expected rebuild time: 5-10 minutes
|
| 215 |
+
|
| 216 |
+
## Summary
|
| 217 |
+
|
| 218 |
+
The fix is to use `demo.launch()` instead of `gr.mount_gradio_app()`, while adding a custom route to Gradio's internal FastAPI app for serving static files. This satisfies both requirements:
|
| 219 |
+
- HuggingFace detects the GPU decorators β
|
| 220 |
+
- Static files are served correctly β
|
| 221 |
+
|
| 222 |
+
**Expected Result**: Space should now start successfully and display 3D models! π
|
INVALID_PORT_FIX.md
ADDED
|
@@ -0,0 +1,96 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Invalid Port Error Fix
|
| 2 |
+
|
| 3 |
+
## Issue: "Invalid port: '7861_appimmutablechunksstores.TaiRvXLP.js'"
|
| 4 |
+
|
| 5 |
+
**Date:** October 8, 2025
|
| 6 |
+
|
| 7 |
+
### Problem Description:
|
| 8 |
+
|
| 9 |
+
HF Space logs showed hundreds of "Invalid port" errors with Gradio asset paths:
|
| 10 |
+
|
| 11 |
+
```
|
| 12 |
+
Invalid port: '7861_appimmutablechunksstores.TaiRvXLP.js'
|
| 13 |
+
Invalid port: '7861_appimmutableassetsIndex.CoeJ0f4i.css'
|
| 14 |
+
Invalid port: '7861_appimmutablechunkspreload-helper.DpQnamwV.js'
|
| 15 |
+
...
|
| 16 |
+
```
|
| 17 |
+
|
| 18 |
+
### Root Cause:
|
| 19 |
+
|
| 20 |
+
The problem was in how `sys.argv` was being constructed in `app.py`:
|
| 21 |
+
|
| 22 |
+
**Original code (WRONG):**
|
| 23 |
+
```python
|
| 24 |
+
sys.argv[0] = os.path.join(os.path.dirname(__file__), 'gradio_app.py')
|
| 25 |
+
sys.argv.extend([...]) # This ADDED to existing sys.argv
|
| 26 |
+
```
|
| 27 |
+
|
| 28 |
+
**What happened:**
|
| 29 |
+
1. HF Spaces environment sets `sys.argv` with internal Gradio URLs
|
| 30 |
+
2. `sys.argv.extend()` **appends** to existing arguments instead of replacing
|
| 31 |
+
3. Result: `sys.argv` contains both our arguments AND Gradio internal URLs
|
| 32 |
+
4. `argparse` in `gradio_app.py` tries to parse ALL arguments
|
| 33 |
+
5. It encounters URLs like `7861_appimmutablechunksstores.TaiRvXLP.js`
|
| 34 |
+
6. Tries to parse them as `--port` value β "Invalid port" error
|
| 35 |
+
|
| 36 |
+
### The Fix:
|
| 37 |
+
|
| 38 |
+
**Changed to (CORRECT):**
|
| 39 |
+
```python
|
| 40 |
+
sys.argv = [ # REPLACE sys.argv entirely, don't extend
|
| 41 |
+
'gradio_app.py',
|
| 42 |
+
'--model_path', 'tencent/Hunyuan3D-2.1',
|
| 43 |
+
'--subfolder', 'hunyuan3d-dit-v2-1',
|
| 44 |
+
'--texgen_model_path', 'tencent/Hunyuan3D-2.1',
|
| 45 |
+
'--port', '7860',
|
| 46 |
+
'--host', '0.0.0.0',
|
| 47 |
+
'--device', 'cuda',
|
| 48 |
+
'--mc_algo', 'mc',
|
| 49 |
+
'--cache-path', '/tmp/hunyuan3d_cache',
|
| 50 |
+
'--low_vram_mode'
|
| 51 |
+
]
|
| 52 |
+
```
|
| 53 |
+
|
| 54 |
+
**Key change:**
|
| 55 |
+
- β `sys.argv.extend([...])` - Adds to existing arguments
|
| 56 |
+
- β
`sys.argv = [...]` - Replaces all arguments cleanly
|
| 57 |
+
|
| 58 |
+
### Why This Works:
|
| 59 |
+
|
| 60 |
+
1. β
Completely replaces `sys.argv` with only our arguments
|
| 61 |
+
2. β
No Gradio internal URLs leak into argument parsing
|
| 62 |
+
3. β
`argparse.parse_args()` only sees valid arguments
|
| 63 |
+
4. β
No port parsing errors
|
| 64 |
+
|
| 65 |
+
### Commits History:
|
| 66 |
+
|
| 67 |
+
1. Initial broken app.py: `3a7c8f3`
|
| 68 |
+
2. Fix psutil version: `3e926e3`
|
| 69 |
+
3. Fix app execution: `efd7869`
|
| 70 |
+
4. **Fix sys.argv pollution:** `79a0702` β This fix
|
| 71 |
+
|
| 72 |
+
### Expected Behavior After Fix:
|
| 73 |
+
|
| 74 |
+
β
No "Invalid port" errors
|
| 75 |
+
β
Arguments parsed correctly
|
| 76 |
+
β
Gradio server starts on port 7860
|
| 77 |
+
β
App runs normally
|
| 78 |
+
|
| 79 |
+
### Verification:
|
| 80 |
+
|
| 81 |
+
After this fix, logs should show:
|
| 82 |
+
```
|
| 83 |
+
Loading example img list ...
|
| 84 |
+
Loading example txt list ...
|
| 85 |
+
Loading pipeline components...
|
| 86 |
+
Loading Hunyuan3D-Shape...
|
| 87 |
+
Loading Hunyuan3D-Paint...
|
| 88 |
+
Running on local URL: http://0.0.0.0:7860
|
| 89 |
+
```
|
| 90 |
+
|
| 91 |
+
**No more "Invalid port" errors!**
|
| 92 |
+
|
| 93 |
+
---
|
| 94 |
+
|
| 95 |
+
**Status:** β
Fixed and deployed
|
| 96 |
+
**Impact:** Critical - App could not start due to argument parsing errors
|
PERSISTENT_GPU_SETUP.md
ADDED
|
@@ -0,0 +1,196 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Persistent GPU Setup for HuggingFace Spaces
|
| 2 |
+
|
| 3 |
+
## Problem Solved
|
| 4 |
+
HuggingFace Spaces showed error: **"No @spaces.GPU function detected during startup"**
|
| 5 |
+
|
| 6 |
+
This occurred because we removed the `@spaces.GPU` decorators, but HuggingFace requires them even when using persistent GPU hardware.
|
| 7 |
+
|
| 8 |
+
## Solution: Decorators WITHOUT zero.startup()
|
| 9 |
+
|
| 10 |
+
The key insight is that you need **two different configurations**:
|
| 11 |
+
|
| 12 |
+
### For ZeroGPU (Free Tier - DOESN'T WORK for Hunyuan3D)
|
| 13 |
+
```python
|
| 14 |
+
from spaces import zero
|
| 15 |
+
|
| 16 |
+
# Call zero.startup() BEFORE loading models
|
| 17 |
+
if ENV == 'Huggingface':
|
| 18 |
+
zero.startup()
|
| 19 |
+
|
| 20 |
+
# Load models on CPU
|
| 21 |
+
model_device = 'cpu'
|
| 22 |
+
model = Model.from_pretrained(..., device=model_device)
|
| 23 |
+
|
| 24 |
+
# Decorate functions
|
| 25 |
+
@spaces.GPU(duration=60)
|
| 26 |
+
def inference(...):
|
| 27 |
+
# ZeroGPU moves models to GPU automatically
|
| 28 |
+
pass
|
| 29 |
+
```
|
| 30 |
+
|
| 31 |
+
### For Persistent GPU (Paid Tier - WORKS for Hunyuan3D) β
|
| 32 |
+
```python
|
| 33 |
+
# DO NOT call zero.startup()
|
| 34 |
+
# if ENV == 'Huggingface':
|
| 35 |
+
# zero.startup() # COMMENTED OUT!
|
| 36 |
+
|
| 37 |
+
# Load models on CUDA directly
|
| 38 |
+
model_device = 'cuda' # or args.device
|
| 39 |
+
model = Model.from_pretrained(..., device=model_device)
|
| 40 |
+
|
| 41 |
+
# Still need decorators (HF requirement)
|
| 42 |
+
@spaces.GPU(duration=60)
|
| 43 |
+
def inference(...):
|
| 44 |
+
# Models already on GPU, decorator is just a marker
|
| 45 |
+
pass
|
| 46 |
+
```
|
| 47 |
+
|
| 48 |
+
## Current Configuration (Commit: 60fde33)
|
| 49 |
+
|
| 50 |
+
### gradio_app.py
|
| 51 |
+
```python
|
| 52 |
+
# Line 890-893: zero.startup() is COMMENTED OUT
|
| 53 |
+
# ZeroGPU disabled due to offloading errors - using persistent GPU instead
|
| 54 |
+
# if ENV == 'Huggingface':
|
| 55 |
+
# from spaces import zero
|
| 56 |
+
# zero.startup()
|
| 57 |
+
|
| 58 |
+
# Line 897-898: Use CUDA directly
|
| 59 |
+
model_device = args.device # 'cuda' for persistent GPU
|
| 60 |
+
|
| 61 |
+
# Lines 272, 381, 463: Decorators are ENABLED
|
| 62 |
+
@spaces.GPU(duration=60)
|
| 63 |
+
def _gen_shape(...):
|
| 64 |
+
pass
|
| 65 |
+
|
| 66 |
+
@spaces.GPU(duration=180)
|
| 67 |
+
def generation_all(...):
|
| 68 |
+
pass
|
| 69 |
+
|
| 70 |
+
@spaces.GPU(duration=60)
|
| 71 |
+
def shape_generation(...):
|
| 72 |
+
pass
|
| 73 |
+
```
|
| 74 |
+
|
| 75 |
+
### requirements.txt
|
| 76 |
+
```python
|
| 77 |
+
spaces>=0.28.3 # Required for @spaces.GPU decorators
|
| 78 |
+
```
|
| 79 |
+
|
| 80 |
+
### README.md
|
| 81 |
+
```yaml
|
| 82 |
+
suggested_hardware: a10g-large # Persistent GPU request
|
| 83 |
+
```
|
| 84 |
+
|
| 85 |
+
## Why This Works
|
| 86 |
+
|
| 87 |
+
1. **@spaces.GPU decorators**: Satisfy HuggingFace's requirement for GPU Spaces
|
| 88 |
+
2. **NO zero.startup()**: Prevents ZeroGPU offloading mechanism from activating
|
| 89 |
+
3. **Models on CUDA**: Load directly to GPU memory (no CPU offloading)
|
| 90 |
+
4. **Persistent GPU**: Models stay in GPU memory between requests
|
| 91 |
+
|
| 92 |
+
## Hardware Requirements
|
| 93 |
+
|
| 94 |
+
You **MUST** use a paid persistent GPU tier:
|
| 95 |
+
|
| 96 |
+
| Hardware | VRAM | Cost/Hour | Monthly (24/7) | Recommended |
|
| 97 |
+
|----------|------|-----------|----------------|-------------|
|
| 98 |
+
| A10G Large | 24GB | ~$0.60 | ~$432 | β
Best choice |
|
| 99 |
+
| A10G Small | 24GB | ~$0.30 | ~$216 | β οΈ May work |
|
| 100 |
+
| T4 Medium | 16GB | ~$0.60 | ~$432 | β οΈ Tight fit |
|
| 101 |
+
| A100 Large | 80GB | ~$3.00 | ~$2,160 | π° Overkill |
|
| 102 |
+
|
| 103 |
+
## Setting Up Persistent GPU
|
| 104 |
+
|
| 105 |
+
### Step 1: Go to Space Settings
|
| 106 |
+
https://huggingface.co/spaces/minhho/Hunyuan-MT/settings
|
| 107 |
+
|
| 108 |
+
### Step 2: Select Hardware
|
| 109 |
+
Scroll to **Hardware** section β Select **A10G Large**
|
| 110 |
+
|
| 111 |
+
### Step 3: Enable Sleep (Optional - Saves Money)
|
| 112 |
+
- Enable **Sleep after inactivity**
|
| 113 |
+
- Set to 15-30 minutes
|
| 114 |
+
- Space will wake up automatically when accessed
|
| 115 |
+
- Reduces costs by ~80% for demo usage
|
| 116 |
+
|
| 117 |
+
### Step 4: Save and Wait
|
| 118 |
+
- Click **Save**
|
| 119 |
+
- Wait 5-10 minutes for rebuild
|
| 120 |
+
- Check logs for "Running on local URL: http://0.0.0.0:7860"
|
| 121 |
+
|
| 122 |
+
## Expected Behavior After Setup
|
| 123 |
+
|
| 124 |
+
### β
Success Indicators
|
| 125 |
+
- Space status shows **"Running"**
|
| 126 |
+
- Logs show: `Running on local URL: http://0.0.0.0:7860`
|
| 127 |
+
- No "runtime error" messages
|
| 128 |
+
- Can generate 3D shapes without errors
|
| 129 |
+
- Models load in ~3-4 minutes on first request
|
| 130 |
+
|
| 131 |
+
### β Failure Indicators
|
| 132 |
+
- "No @spaces.GPU function detected" β Decorators missing (now fixed)
|
| 133 |
+
- "FileNotFoundError: zerogpu-offload" β zero.startup() was called (now fixed)
|
| 134 |
+
- "CUDA out of memory" β Need larger GPU tier
|
| 135 |
+
- Space shows "Building" forever β Check logs for errors
|
| 136 |
+
|
| 137 |
+
## Cost Optimization Tips
|
| 138 |
+
|
| 139 |
+
1. **Enable Sleep Mode**: Reduce costs by 80%+
|
| 140 |
+
```yaml
|
| 141 |
+
# In Space settings:
|
| 142 |
+
sleep_after_inactivity: 15m
|
| 143 |
+
```
|
| 144 |
+
|
| 145 |
+
2. **Use Smaller GPU**: Try A10G Small first ($0.30/hr vs $0.60/hr)
|
| 146 |
+
|
| 147 |
+
3. **Consider Alternatives**:
|
| 148 |
+
- **Replicate**: Pay-per-use (~$0.0023 per second of GPU time)
|
| 149 |
+
- **Modal**: Free tier + pay-per-use
|
| 150 |
+
- **RunPod Serverless**: ~$0.00020/second
|
| 151 |
+
|
| 152 |
+
## Troubleshooting
|
| 153 |
+
|
| 154 |
+
### Issue: "No @spaces.GPU function detected"
|
| 155 |
+
**Solution**: Decorators are now enabled (commit 60fde33)
|
| 156 |
+
|
| 157 |
+
### Issue: "FileNotFoundError in zerogpu-offload"
|
| 158 |
+
**Solution**: `zero.startup()` is now commented out (commit 60fde33)
|
| 159 |
+
|
| 160 |
+
### Issue: "CUDA out of memory"
|
| 161 |
+
**Solutions**:
|
| 162 |
+
1. Use larger GPU tier (A100 Large)
|
| 163 |
+
2. Enable model CPU offloading:
|
| 164 |
+
```python
|
| 165 |
+
i23d_worker.enable_model_cpu_offload()
|
| 166 |
+
```
|
| 167 |
+
3. Use FP16 precision:
|
| 168 |
+
```python
|
| 169 |
+
torch_dtype=torch.float16
|
| 170 |
+
```
|
| 171 |
+
|
| 172 |
+
### Issue: Space stays in "Building" state
|
| 173 |
+
**Solution**: Check build logs for dependency errors, usually PyTorch/CUDA mismatch
|
| 174 |
+
|
| 175 |
+
## Verification Checklist
|
| 176 |
+
|
| 177 |
+
After rebuild completes:
|
| 178 |
+
|
| 179 |
+
- [ ] Space shows "Running" status
|
| 180 |
+
- [ ] No "runtime error" in logs
|
| 181 |
+
- [ ] Can access UI at https://minhho-hunyuan-mt.hf.space
|
| 182 |
+
- [ ] Can upload image and click "Generate"
|
| 183 |
+
- [ ] 3D model generates without FileNotFoundError
|
| 184 |
+
- [ ] Can download generated GLB file
|
| 185 |
+
|
| 186 |
+
## Summary
|
| 187 |
+
|
| 188 |
+
**Current Setup (Persistent GPU - Working):**
|
| 189 |
+
- β
`@spaces.GPU` decorators enabled
|
| 190 |
+
- β
`zero.startup()` disabled (commented out)
|
| 191 |
+
- β
Models load on CUDA
|
| 192 |
+
- β
`spaces>=0.28.3` in requirements
|
| 193 |
+
- β
`suggested_hardware: a10g-large`
|
| 194 |
+
- β³ **Waiting for you to select paid GPU tier in settings**
|
| 195 |
+
|
| 196 |
+
Once you upgrade the hardware tier, the Space should work correctly!
|
STATIC_ASSETS_404_FIX.md
ADDED
|
@@ -0,0 +1,136 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Static Assets 404 Error - Diagnosis and Fix
|
| 2 |
+
|
| 3 |
+
## Issue: UI Not Loading - Static Files Return 404
|
| 4 |
+
|
| 5 |
+
**Date:** October 9, 2025
|
| 6 |
+
|
| 7 |
+
### Symptoms:
|
| 8 |
+
|
| 9 |
+
```
|
| 10 |
+
INFO: Uvicorn running on http://0.0.0.0:7860 β
SERVER RUNNING
|
| 11 |
+
Invalid port: '7861config' β οΈ Warning (not fatal)
|
| 12 |
+
GET /_app/immutable/assets/0.DoW53xWM.css HTTP/1.1" 404 Not Found β REAL PROBLEM
|
| 13 |
+
```
|
| 14 |
+
|
| 15 |
+
### Key Observations:
|
| 16 |
+
|
| 17 |
+
1. β
**Server IS running** - Uvicorn starts successfully
|
| 18 |
+
2. β οΈ **"Invalid port" warnings** - Annoying but not the root cause
|
| 19 |
+
3. β **Static assets returning 404** - This breaks the UI
|
| 20 |
+
|
| 21 |
+
### Root Cause Analysis:
|
| 22 |
+
|
| 23 |
+
The issue is NOT the "Invalid port" warnings (those are harmless debug messages from somewhere in the stack).
|
| 24 |
+
|
| 25 |
+
**The REAL problem:**
|
| 26 |
+
- Gradio is mounted to a custom FastAPI app using `gr.mount_gradio_app(app, demo, path="/")`
|
| 27 |
+
- When Gradio is mounted this way, its internal static file routing can break
|
| 28 |
+
- Gradio's `/_app/immutable/` assets aren't being served correctly
|
| 29 |
+
- Result: UI loads skeleton HTML but CSS/JS files return 404
|
| 30 |
+
|
| 31 |
+
### The FastAPI + Gradio Integration Issue:
|
| 32 |
+
|
| 33 |
+
In `gradio_app.py` lines 909-919:
|
| 34 |
+
```python
|
| 35 |
+
app = FastAPI()
|
| 36 |
+
app.mount("/static", StaticFiles(directory=static_dir), name="static")
|
| 37 |
+
demo = build_app()
|
| 38 |
+
app = gr.mount_gradio_app(app, demo, path="/") # β Problem here
|
| 39 |
+
uvicorn.run(app, host=args.host, port=args.port)
|
| 40 |
+
```
|
| 41 |
+
|
| 42 |
+
This setup is meant to:
|
| 43 |
+
- Serve custom static files at `/static/`
|
| 44 |
+
- Mount Gradio at root `/`
|
| 45 |
+
|
| 46 |
+
But it causes Gradio's internal `/_app/` routes to malfunction.
|
| 47 |
+
|
| 48 |
+
### Solution Applied:
|
| 49 |
+
|
| 50 |
+
**Changed `app.py` to use `runpy.run_path()`:**
|
| 51 |
+
|
| 52 |
+
```python
|
| 53 |
+
# Before (using exec)
|
| 54 |
+
with open('gradio_app.py', 'r') as f:
|
| 55 |
+
code = compile(f.read(), 'gradio_app.py', 'exec')
|
| 56 |
+
exec(code)
|
| 57 |
+
|
| 58 |
+
# After (using runpy)
|
| 59 |
+
import runpy
|
| 60 |
+
runpy.run_path('gradio_app.py', run_name='__main__')
|
| 61 |
+
```
|
| 62 |
+
|
| 63 |
+
**Why this might help:**
|
| 64 |
+
- `runpy.run_path()` executes the script more cleanly
|
| 65 |
+
- It properly sets up the module namespace
|
| 66 |
+
- Better handles imports and module-level variables
|
| 67 |
+
- More similar to running `python gradio_app.py` directly
|
| 68 |
+
|
| 69 |
+
### Alternative Solutions to Try if This Doesn't Work:
|
| 70 |
+
|
| 71 |
+
**Option 1: Remove FastAPI Wrapper**
|
| 72 |
+
|
| 73 |
+
Modify `gradio_app.py` to use pure Gradio:
|
| 74 |
+
|
| 75 |
+
```python
|
| 76 |
+
# Instead of:
|
| 77 |
+
app = FastAPI()
|
| 78 |
+
app = gr.mount_gradio_app(app, demo, path="/")
|
| 79 |
+
uvicorn.run(app, ...)
|
| 80 |
+
|
| 81 |
+
# Use:
|
| 82 |
+
demo = build_app()
|
| 83 |
+
demo.launch(server_name=args.host, server_port=args.port)
|
| 84 |
+
```
|
| 85 |
+
|
| 86 |
+
**Option 2: Fix Static File Routing**
|
| 87 |
+
|
| 88 |
+
Add Gradio's static routes before mounting:
|
| 89 |
+
|
| 90 |
+
```python
|
| 91 |
+
from gradio import routes
|
| 92 |
+
app = FastAPI()
|
| 93 |
+
# Let Gradio handle its own static files
|
| 94 |
+
app = gr.mount_gradio_app(app, demo, path="/", app_kwargs={"static_url_path": "/_app"})
|
| 95 |
+
```
|
| 96 |
+
|
| 97 |
+
**Option 3: Use Gradio's Built-in FastAPI**
|
| 98 |
+
|
| 99 |
+
```python
|
| 100 |
+
demo = build_app()
|
| 101 |
+
app = demo.app # Gradio internally creates a FastAPI app
|
| 102 |
+
# Add custom routes to this app instead
|
| 103 |
+
app.mount("/static", StaticFiles(directory=static_dir), name="static")
|
| 104 |
+
demo.launch(...)
|
| 105 |
+
```
|
| 106 |
+
|
| 107 |
+
### Commits:
|
| 108 |
+
|
| 109 |
+
1. Initial deployment: `3a7c8f3`
|
| 110 |
+
2. Fix psutil: `3e926e3`
|
| 111 |
+
3. Fix app execution: `efd7869`
|
| 112 |
+
4. Fix sys.argv: `79a0702`
|
| 113 |
+
5. Rebuild trigger: `e255a99`
|
| 114 |
+
6. **Use runpy:** `539241b` β Current fix
|
| 115 |
+
|
| 116 |
+
### Expected Outcome:
|
| 117 |
+
|
| 118 |
+
After this fix:
|
| 119 |
+
- β
Server should still start
|
| 120 |
+
- β
"Invalid port" warnings may still appear (they're harmless)
|
| 121 |
+
- β
Static assets should load (no more 404s)
|
| 122 |
+
- β
UI should render properly
|
| 123 |
+
|
| 124 |
+
### If This Doesn't Work:
|
| 125 |
+
|
| 126 |
+
We may need to modify `gradio_app.py` directly to:
|
| 127 |
+
1. Remove the FastAPI wrapper entirely
|
| 128 |
+
2. Use `demo.launch()` instead of `uvicorn.run()`
|
| 129 |
+
3. Handle custom static files differently
|
| 130 |
+
|
| 131 |
+
The Gradio + FastAPI integration is tricky, especially when mounting at root path.
|
| 132 |
+
|
| 133 |
+
---
|
| 134 |
+
|
| 135 |
+
**Status:** β
Fix deployed, waiting for rebuild
|
| 136 |
+
**Next:** Monitor logs for static asset 404s
|
STATIC_FILES_FIX.md
ADDED
|
@@ -0,0 +1,185 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Fix for "Not Found" Error in 3D Model Viewer
|
| 2 |
+
|
| 3 |
+
## Problem
|
| 4 |
+
After successfully generating a 3D mesh, the UI displayed:
|
| 5 |
+
```json
|
| 6 |
+
{"detail": "Not Found"}
|
| 7 |
+
```
|
| 8 |
+
|
| 9 |
+
The generation worked (GLB files were created), but the 3D viewer couldn't load them.
|
| 10 |
+
|
| 11 |
+
## Root Cause Analysis
|
| 12 |
+
|
| 13 |
+
### The File Serving Flow
|
| 14 |
+
1. **Generation**: `_gen_shape()` creates `white_mesh.glb` in `/root/save_dir/<uuid>/`
|
| 15 |
+
2. **HTML Creation**: `build_model_viewer_html()` creates an HTML file with iframe pointing to `/static/<uuid>/white_mesh.html`
|
| 16 |
+
3. **Display**: The HTML file loads the GLB using relative path `./white_mesh.glb`
|
| 17 |
+
4. **Serving**: Both the HTML and GLB need to be served via `/static/` route
|
| 18 |
+
|
| 19 |
+
### What Went Wrong
|
| 20 |
+
In commit `8978946`, we removed the FastAPI wrapper to fix Gradio static file routing issues:
|
| 21 |
+
|
| 22 |
+
```python
|
| 23 |
+
# REMOVED (but needed for /static/ route):
|
| 24 |
+
app = FastAPI()
|
| 25 |
+
app.mount("/static", StaticFiles(directory=static_dir, html=True), name="static")
|
| 26 |
+
app = gr.mount_gradio_app(app, demo, path="/")
|
| 27 |
+
uvicorn.run(app, host=args.host, port=args.port)
|
| 28 |
+
|
| 29 |
+
# REPLACED WITH (broke /static/ route):
|
| 30 |
+
demo.launch(server_name=args.host, server_port=args.port)
|
| 31 |
+
```
|
| 32 |
+
|
| 33 |
+
This broke the `/static/` URLs that the HTML viewer relied on.
|
| 34 |
+
|
| 35 |
+
## Solution: Hybrid Approach
|
| 36 |
+
|
| 37 |
+
**Use both FastAPI (for `/static/`) AND Gradio (for main app):**
|
| 38 |
+
|
| 39 |
+
### Implementation (Commit: 289ffec)
|
| 40 |
+
|
| 41 |
+
```python
|
| 42 |
+
# Create FastAPI app for serving static files
|
| 43 |
+
app = FastAPI()
|
| 44 |
+
|
| 45 |
+
# Mount static files directory for generated GLB/HTML files
|
| 46 |
+
app.mount("/static", StaticFiles(directory=static_dir, html=True), name="static")
|
| 47 |
+
|
| 48 |
+
# Mount Gradio app at root path
|
| 49 |
+
app = gr.mount_gradio_app(app, demo, path="/", allowed_paths=[SAVE_DIR])
|
| 50 |
+
|
| 51 |
+
# Launch with Uvicorn
|
| 52 |
+
uvicorn.run(app, host=args.host, port=args.port)
|
| 53 |
+
```
|
| 54 |
+
|
| 55 |
+
### Key Changes
|
| 56 |
+
1. **FastAPI app**: Creates the FastAPI server
|
| 57 |
+
2. **StaticFiles mount**: Serves files from `SAVE_DIR` at `/static/` route
|
| 58 |
+
3. **Gradio mount**: Mounts Gradio UI at root path `/`
|
| 59 |
+
4. **allowed_paths**: Ensures Gradio can access generated files
|
| 60 |
+
5. **Uvicorn**: Single server running both FastAPI and Gradio
|
| 61 |
+
|
| 62 |
+
## How It Works Now
|
| 63 |
+
|
| 64 |
+
### Request Flow for 3D Viewer
|
| 65 |
+
|
| 66 |
+
1. **User clicks "Generate Shape"**
|
| 67 |
+
```
|
| 68 |
+
POST /api/predict β shape_generation()
|
| 69 |
+
```
|
| 70 |
+
|
| 71 |
+
2. **Generation creates files**
|
| 72 |
+
```
|
| 73 |
+
/root/save_dir/4e07aadf-c28b-4a74-a047-3c0aa6bb80b0/
|
| 74 |
+
βββ white_mesh.glb # 3D model
|
| 75 |
+
βββ white_mesh.html # Model viewer HTML
|
| 76 |
+
βββ env_maps/
|
| 77 |
+
βββ white.jpg # Environment map
|
| 78 |
+
```
|
| 79 |
+
|
| 80 |
+
3. **Function returns HTML with iframe**
|
| 81 |
+
```html
|
| 82 |
+
<iframe src="/static/4e07aadf-.../white_mesh.html" height="650" width="100%"></iframe>
|
| 83 |
+
```
|
| 84 |
+
|
| 85 |
+
4. **Browser requests HTML file**
|
| 86 |
+
```
|
| 87 |
+
GET /static/4e07aadf-.../white_mesh.html
|
| 88 |
+
β FastAPI StaticFiles serves the HTML
|
| 89 |
+
```
|
| 90 |
+
|
| 91 |
+
5. **HTML loads GLB file**
|
| 92 |
+
```html
|
| 93 |
+
<model-viewer src="./white_mesh.glb" ...>
|
| 94 |
+
```
|
| 95 |
+
|
| 96 |
+
6. **Browser requests GLB (relative to HTML)**
|
| 97 |
+
```
|
| 98 |
+
GET /static/4e07aadf-.../white_mesh.glb
|
| 99 |
+
β FastAPI StaticFiles serves the GLB
|
| 100 |
+
```
|
| 101 |
+
|
| 102 |
+
7. **3D model displays in viewer** β
|
| 103 |
+
|
| 104 |
+
## Why This Approach Works
|
| 105 |
+
|
| 106 |
+
### Advantages
|
| 107 |
+
- β
**Gradio UI**: All Gradio features work correctly (no routing conflicts)
|
| 108 |
+
- β
**Static Files**: `/static/` route serves generated files
|
| 109 |
+
- β
**Single Server**: Uvicorn runs both on same port
|
| 110 |
+
- β
**Clean Paths**: Gradio at `/`, static files at `/static/`
|
| 111 |
+
- β
**Security**: `allowed_paths` controls file access
|
| 112 |
+
|
| 113 |
+
### Route Distribution
|
| 114 |
+
| Route | Handler | Purpose |
|
| 115 |
+
|-------|---------|---------|
|
| 116 |
+
| `/` | Gradio | Main UI |
|
| 117 |
+
| `/api/*` | Gradio | API endpoints |
|
| 118 |
+
| `/_app/*` | Gradio | Internal static assets (CSS/JS) |
|
| 119 |
+
| `/static/*` | FastAPI | Generated files (GLB/HTML) |
|
| 120 |
+
|
| 121 |
+
## Previous Issues and Why They're Resolved
|
| 122 |
+
|
| 123 |
+
### Issue 1: Gradio `/_app/` 404 errors (Commit 8978946)
|
| 124 |
+
**Cause**: Mounting Gradio at root with FastAPI broke internal routing
|
| 125 |
+
**Previous Fix**: Removed FastAPI entirely
|
| 126 |
+
**Problem**: Lost `/static/` serving
|
| 127 |
+
**New Fix**: Mount Gradio with proper `allowed_paths`
|
| 128 |
+
**Result**: β
Both Gradio and static files work
|
| 129 |
+
|
| 130 |
+
### Issue 2: InvalidPathError (Commit 210033c)
|
| 131 |
+
**Cause**: Gradio blocked files outside allowed directories
|
| 132 |
+
**Fix**: Added `allowed_paths=[SAVE_DIR]`
|
| 133 |
+
**Result**: β
Still working in new setup
|
| 134 |
+
|
| 135 |
+
### Issue 3: "Not Found" error (This fix - Commit 289ffec)
|
| 136 |
+
**Cause**: No `/static/` route after removing FastAPI
|
| 137 |
+
**Fix**: Re-added FastAPI with StaticFiles mount
|
| 138 |
+
**Result**: β
3D viewer can load files
|
| 139 |
+
|
| 140 |
+
## Testing Checklist
|
| 141 |
+
|
| 142 |
+
After this fix, verify:
|
| 143 |
+
|
| 144 |
+
- [ ] Space builds successfully
|
| 145 |
+
- [ ] UI loads without CSS 404 errors
|
| 146 |
+
- [ ] Can upload image
|
| 147 |
+
- [ ] Click "Generate Shape" β works
|
| 148 |
+
- [ ] **3D model appears in viewer** (not "Not Found")
|
| 149 |
+
- [ ] Can rotate/zoom the 3D model
|
| 150 |
+
- [ ] Can download GLB file
|
| 151 |
+
- [ ] Environment map loads correctly
|
| 152 |
+
|
| 153 |
+
## Deployment
|
| 154 |
+
- Commit: `289ffec`
|
| 155 |
+
- Files changed: `gradio_app.py`
|
| 156 |
+
- Expected rebuild time: 5-10 minutes
|
| 157 |
+
|
| 158 |
+
## Related Code Locations
|
| 159 |
+
|
| 160 |
+
| Function | Line | Purpose |
|
| 161 |
+
|----------|------|---------|
|
| 162 |
+
| `build_model_viewer_html()` | 240-270 | Creates HTML with `/static/` URLs |
|
| 163 |
+
| `gen_save_folder()` | 172-195 | Generates unique folder for each request |
|
| 164 |
+
| `export_mesh()` | 197-238 | Saves GLB file to disk |
|
| 165 |
+
| FastAPI setup | 927-936 | Mounts static files and Gradio app |
|
| 166 |
+
|
| 167 |
+
## Alternative Solutions Considered
|
| 168 |
+
|
| 169 |
+
### Option 1: Change URLs to use Gradio file serving
|
| 170 |
+
**Rejected**: Would require rewriting HTML generation and model viewer templates
|
| 171 |
+
|
| 172 |
+
### Option 2: Use Gradio's native static file serving
|
| 173 |
+
**Rejected**: Gradio doesn't provide `/static/` route, uses internal mechanisms
|
| 174 |
+
|
| 175 |
+
### Option 3: Copy files to `/tmp` before serving
|
| 176 |
+
**Rejected**: Wasteful, doesn't solve the root issue
|
| 177 |
+
|
| 178 |
+
### Option 4: Hybrid FastAPI + Gradio (CHOSEN)
|
| 179 |
+
**Accepted**: β
Best of both worlds, minimal code changes
|
| 180 |
+
|
| 181 |
+
## Summary
|
| 182 |
+
|
| 183 |
+
The "Not Found" error occurred because we removed the `/static/` route when fixing a different issue. The solution is to use FastAPI for static file serving while keeping Gradio for the main UI. Both run on the same server via Uvicorn, with clean route separation.
|
| 184 |
+
|
| 185 |
+
**Expected Result**: 3D models now display correctly in the viewer! π
|
UI_LOADING_FIX.md
ADDED
|
@@ -0,0 +1,112 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# UI Loading Fix - Removed FastAPI Wrapper
|
| 2 |
+
|
| 3 |
+
## Problem
|
| 4 |
+
The Gradio UI was not loading in HuggingFace Spaces. Error logs showed:
|
| 5 |
+
- "Invalid port" warnings for internal Gradio URLs like `'7861_appimmutableassetsIndex.Cg6_qokC.css'`
|
| 6 |
+
- HTTP 404 errors for `/_app/immutable/assets/*.css` and `/_app/immutable/chunks/*.js`
|
| 7 |
+
|
| 8 |
+
## Root Cause
|
| 9 |
+
The FastAPI + Gradio integration in `gradio_app.py` was causing two issues:
|
| 10 |
+
|
| 11 |
+
1. **Static File Routing Conflict**: `gr.mount_gradio_app(app, demo, path="/")` was mounting Gradio to the FastAPI app at the root path, which broke Gradio's internal routing for static files in the `/_app/` directory.
|
| 12 |
+
|
| 13 |
+
2. **sys.argv Pollution**: Even though we controlled `sys.argv` in `app.py`, Gradio's internal code was somehow seeing HuggingFace's internal URLs and trying to parse them as arguments.
|
| 14 |
+
|
| 15 |
+
## Solution
|
| 16 |
+
**Removed the FastAPI wrapper entirely** and used Gradio's native server:
|
| 17 |
+
|
| 18 |
+
### Changes to gradio_app.py (lines 906-928)
|
| 19 |
+
**Before:**
|
| 20 |
+
```python
|
| 21 |
+
# create a FastAPI app
|
| 22 |
+
app = FastAPI()
|
| 23 |
+
|
| 24 |
+
# create a static directory to store the static files
|
| 25 |
+
static_dir = Path(SAVE_DIR).absolute()
|
| 26 |
+
static_dir.mkdir(parents=True, exist_ok=True)
|
| 27 |
+
app.mount("/static", StaticFiles(directory=static_dir, html=True), name="static")
|
| 28 |
+
shutil.copytree('./assets/env_maps', os.path.join(static_dir, 'env_maps'), dirs_exist_ok=True)
|
| 29 |
+
|
| 30 |
+
if args.low_vram_mode:
|
| 31 |
+
torch.cuda.empty_cache()
|
| 32 |
+
|
| 33 |
+
demo = build_app()
|
| 34 |
+
app = gr.mount_gradio_app(app, demo, path="/")
|
| 35 |
+
|
| 36 |
+
if ENV == 'Huggingface':
|
| 37 |
+
# for Zerogpu
|
| 38 |
+
from spaces import zero
|
| 39 |
+
zero.startup()
|
| 40 |
+
|
| 41 |
+
uvicorn.run(app, host=args.host, port=args.port)
|
| 42 |
+
```
|
| 43 |
+
|
| 44 |
+
**After:**
|
| 45 |
+
```python
|
| 46 |
+
# create a static directory to store the static files
|
| 47 |
+
static_dir = Path(SAVE_DIR).absolute()
|
| 48 |
+
static_dir.mkdir(parents=True, exist_ok=True)
|
| 49 |
+
shutil.copytree('./assets/env_maps', os.path.join(static_dir, 'env_maps'), dirs_exist_ok=True)
|
| 50 |
+
|
| 51 |
+
if args.low_vram_mode:
|
| 52 |
+
torch.cuda.empty_cache()
|
| 53 |
+
|
| 54 |
+
demo = build_app()
|
| 55 |
+
|
| 56 |
+
if ENV == 'Huggingface':
|
| 57 |
+
# for Zerogpu
|
| 58 |
+
from spaces import zero
|
| 59 |
+
zero.startup()
|
| 60 |
+
|
| 61 |
+
# Use Gradio's native server instead of FastAPI wrapper to avoid static file routing issues
|
| 62 |
+
demo.launch(
|
| 63 |
+
server_name=args.host,
|
| 64 |
+
server_port=args.port,
|
| 65 |
+
share=False
|
| 66 |
+
)
|
| 67 |
+
```
|
| 68 |
+
|
| 69 |
+
### Changes to app.py
|
| 70 |
+
Simplified to just set `sys.argv` and import `gradio_app`:
|
| 71 |
+
|
| 72 |
+
```python
|
| 73 |
+
#!/usr/bin/env python3
|
| 74 |
+
import sys
|
| 75 |
+
import os
|
| 76 |
+
|
| 77 |
+
os.chdir(os.path.dirname(os.path.abspath(__file__)))
|
| 78 |
+
|
| 79 |
+
# Configure arguments for gradio_app.py
|
| 80 |
+
sys.argv = [
|
| 81 |
+
'gradio_app.py',
|
| 82 |
+
'--host', '0.0.0.0',
|
| 83 |
+
'--port', '7860'
|
| 84 |
+
]
|
| 85 |
+
|
| 86 |
+
# Import gradio_app to execute its if __name__ == '__main__' block
|
| 87 |
+
if __name__ == '__main__':
|
| 88 |
+
import gradio_app
|
| 89 |
+
```
|
| 90 |
+
|
| 91 |
+
## Why This Works
|
| 92 |
+
1. **No FastAPI conflicts**: Gradio's native server (`demo.launch()`) handles all routing, including the `/_app/` static files
|
| 93 |
+
2. **Clean argument passing**: Setting `sys.argv` before import ensures argparse gets clean arguments
|
| 94 |
+
3. **Proper module execution**: The `if __name__ == '__main__'` guard in `gradio_app.py` executes when imported from `app.py`
|
| 95 |
+
|
| 96 |
+
## Trade-offs
|
| 97 |
+
- **Lost**: The `/static` endpoint for serving generated GLB files via FastAPI
|
| 98 |
+
- **Alternative**: Gradio has built-in file serving capabilities, so generated files can still be accessed
|
| 99 |
+
- **Benefit**: UI now loads correctly without 404 errors
|
| 100 |
+
|
| 101 |
+
## Deployment
|
| 102 |
+
- Commit: `8978946`
|
| 103 |
+
- Pushed to: `hf` remote (HuggingFace Spaces)
|
| 104 |
+
- Space URL: https://huggingface.co/spaces/minhho/Hunyuan-MT
|
| 105 |
+
- Expected rebuild time: 5-10 minutes
|
| 106 |
+
|
| 107 |
+
## Verification
|
| 108 |
+
After the HuggingFace Space rebuilds, check:
|
| 109 |
+
1. β
No "Invalid port" warnings in logs
|
| 110 |
+
2. β
No 404 errors for `/_app/immutable/` files
|
| 111 |
+
3. β
Gradio UI loads successfully in browser
|
| 112 |
+
4. β
Can interact with shape generation and texture synthesis tabs
|
ZEROGPU_FIX.md
ADDED
|
@@ -0,0 +1,95 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# ZeroGPU Initialization Fix
|
| 2 |
+
|
| 3 |
+
## Problem
|
| 4 |
+
When running the app on HuggingFace Spaces, the UI loaded but generated this error when using any feature:
|
| 5 |
+
|
| 6 |
+
```
|
| 7 |
+
FileNotFoundError: [Errno 2] No such file or directory: '/data-nvme/zerogpu-offload/140337662191712'
|
| 8 |
+
```
|
| 9 |
+
|
| 10 |
+
This occurred in `spaces/zero/torch/packing.py` when ZeroGPU tried to offload tensors.
|
| 11 |
+
|
| 12 |
+
## Root Cause
|
| 13 |
+
**Incorrect initialization order and device placement:**
|
| 14 |
+
|
| 15 |
+
1. Models were being loaded with `device='cuda'`
|
| 16 |
+
2. `zero.startup()` was called **AFTER** models were already loaded
|
| 17 |
+
3. ZeroGPU couldn't properly manage models that were already on CUDA
|
| 18 |
+
|
| 19 |
+
## How ZeroGPU Works
|
| 20 |
+
HuggingFace's ZeroGPU system:
|
| 21 |
+
- Automatically moves models to GPU **only when needed** (when decorated functions run)
|
| 22 |
+
- Offloads models back to CPU/disk after use to save GPU memory
|
| 23 |
+
- Requires models to be initialized on **CPU**, not CUDA
|
| 24 |
+
- Needs `zero.startup()` called **BEFORE** any model loading
|
| 25 |
+
|
| 26 |
+
## Solution
|
| 27 |
+
**Changed initialization order in gradio_app.py (lines 885-895):**
|
| 28 |
+
|
| 29 |
+
### Before (BROKEN):
|
| 30 |
+
```python
|
| 31 |
+
rmbg_worker = BackgroundRemover()
|
| 32 |
+
i23d_worker = Hunyuan3DDiTFlowMatchingPipeline.from_pretrained(
|
| 33 |
+
args.model_path,
|
| 34 |
+
subfolder=args.subfolder,
|
| 35 |
+
use_safetensors=False,
|
| 36 |
+
device=args.device, # 'cuda' - WRONG for ZeroGPU!
|
| 37 |
+
)
|
| 38 |
+
# ... more model initialization ...
|
| 39 |
+
|
| 40 |
+
demo = build_app()
|
| 41 |
+
|
| 42 |
+
if ENV == 'Huggingface':
|
| 43 |
+
from spaces import zero
|
| 44 |
+
zero.startup() # TOO LATE!
|
| 45 |
+
```
|
| 46 |
+
|
| 47 |
+
### After (FIXED):
|
| 48 |
+
```python
|
| 49 |
+
# Initialize ZeroGPU BEFORE loading any models
|
| 50 |
+
if ENV == 'Huggingface':
|
| 51 |
+
from spaces import zero
|
| 52 |
+
zero.startup() # Called FIRST
|
| 53 |
+
|
| 54 |
+
rmbg_worker = BackgroundRemover()
|
| 55 |
+
|
| 56 |
+
# For ZeroGPU, use 'cpu' as device - ZeroGPU will move to GPU automatically
|
| 57 |
+
model_device = 'cpu' if ENV == 'Huggingface' else args.device
|
| 58 |
+
|
| 59 |
+
i23d_worker = Hunyuan3DDiTFlowMatchingPipeline.from_pretrained(
|
| 60 |
+
args.model_path,
|
| 61 |
+
subfolder=args.subfolder,
|
| 62 |
+
use_safetensors=False,
|
| 63 |
+
device=model_device, # 'cpu' for HF, 'cuda' for local
|
| 64 |
+
)
|
| 65 |
+
# ... more model initialization ...
|
| 66 |
+
|
| 67 |
+
demo = build_app()
|
| 68 |
+
# zero.startup() already called - removed duplicate
|
| 69 |
+
```
|
| 70 |
+
|
| 71 |
+
## Key Changes
|
| 72 |
+
1. **Move `zero.startup()`** to **line 890** (before any model loading)
|
| 73 |
+
2. **Use CPU device** when `ENV == 'Huggingface'`: `model_device = 'cpu' if ENV == 'Huggingface' else args.device`
|
| 74 |
+
3. **Remove duplicate** `zero.startup()` call after `demo = build_app()`
|
| 75 |
+
|
| 76 |
+
## Why This Works
|
| 77 |
+
- Models start on CPU, so they don't consume GPU memory at startup
|
| 78 |
+
- ZeroGPU tracks the models and knows when to move them
|
| 79 |
+
- When `@spaces.GPU()` decorated functions run, ZeroGPU:
|
| 80 |
+
- Moves required models to GPU
|
| 81 |
+
- Executes the function
|
| 82 |
+
- Offloads models back to CPU/disk
|
| 83 |
+
- This allows running large models on limited GPU memory
|
| 84 |
+
|
| 85 |
+
## Testing
|
| 86 |
+
After rebuild, verify:
|
| 87 |
+
1. β
App starts without errors
|
| 88 |
+
2. β
Can click "Generate" without FileNotFoundError
|
| 89 |
+
3. β
Models are properly offloaded between requests
|
| 90 |
+
4. β
GPU memory is managed efficiently
|
| 91 |
+
|
| 92 |
+
## Deployment
|
| 93 |
+
- Commit: `1f2ca9f`
|
| 94 |
+
- Pushed to: HuggingFace Spaces
|
| 95 |
+
- Expected rebuild time: 5-10 minutes
|
check_space.sh
ADDED
|
@@ -0,0 +1,24 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
#!/bin/bash
|
| 2 |
+
# Script to check and restart HF Space
|
| 3 |
+
|
| 4 |
+
echo "=== Hugging Face Space Status Checker ==="
|
| 5 |
+
echo ""
|
| 6 |
+
echo "Your Space URL: https://huggingface.co/spaces/minhho/Hunyuan-MT"
|
| 7 |
+
echo "Direct App URL: https://minhho-hunyuan-mt.hf.space"
|
| 8 |
+
echo ""
|
| 9 |
+
echo "Current running commit: efd78693 (OLD - has invalid port bug)"
|
| 10 |
+
echo "Latest pushed commit: 79a0702 (NEW - should fix the issue)"
|
| 11 |
+
echo ""
|
| 12 |
+
echo "=== How to Fix ==="
|
| 13 |
+
echo ""
|
| 14 |
+
echo "1. Go to: https://huggingface.co/spaces/minhho/Hunyuan-MT"
|
| 15 |
+
echo "2. Click 'Settings' button (gear icon)"
|
| 16 |
+
echo "3. Scroll to 'Factory Reboot' section"
|
| 17 |
+
echo "4. Click 'Factory Reboot' button"
|
| 18 |
+
echo ""
|
| 19 |
+
echo "OR simply push an empty commit to trigger rebuild:"
|
| 20 |
+
echo ""
|
| 21 |
+
echo " git commit --allow-empty -m 'Trigger rebuild with latest fixes'"
|
| 22 |
+
echo " git push hf main --no-verify"
|
| 23 |
+
echo ""
|
| 24 |
+
echo "This will force HF to use commit 79a0702 which has all the fixes."
|
gradio_app.py
CHANGED
|
@@ -238,34 +238,40 @@ def randomize_seed_fn(seed: int, randomize_seed: bool) -> int:
|
|
| 238 |
|
| 239 |
|
| 240 |
def build_model_viewer_html(save_folder, height=660, width=790, textured=False):
|
| 241 |
-
|
|
|
|
|
|
|
| 242 |
if textured:
|
| 243 |
-
|
| 244 |
template_name = './assets/modelviewer-textured-template.html'
|
| 245 |
-
output_html_path = os.path.join(save_folder, f'textured_mesh.html')
|
| 246 |
else:
|
| 247 |
-
|
| 248 |
template_name = './assets/modelviewer-template.html'
|
| 249 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 250 |
offset = 50 if textured else 10
|
| 251 |
with open(os.path.join(CURRENT_DIR, template_name), 'r', encoding='utf-8') as f:
|
| 252 |
template_html = f.read()
|
| 253 |
|
| 254 |
-
|
| 255 |
-
|
| 256 |
-
|
| 257 |
-
|
| 258 |
-
f.write(template_html)
|
| 259 |
|
| 260 |
-
|
| 261 |
-
iframe_tag = f'<iframe src="/static/{rel_path}" \
|
| 262 |
-
height="{height}" width="100%" frameborder="0"></iframe>'
|
| 263 |
-
print(f'Find html file {output_html_path}, \
|
| 264 |
-
{os.path.exists(output_html_path)}, relative HTML path is /static/{rel_path}')
|
| 265 |
|
|
|
|
| 266 |
return f"""
|
| 267 |
-
<div style='height: {height}; width: 100%;'>
|
| 268 |
-
{
|
| 269 |
</div>
|
| 270 |
"""
|
| 271 |
|
|
@@ -925,50 +931,14 @@ if __name__ == '__main__':
|
|
| 925 |
# Build the Gradio app
|
| 926 |
demo = build_app()
|
| 927 |
|
| 928 |
-
#
|
| 929 |
-
from fastapi import Response
|
| 930 |
-
from fastapi.responses import FileResponse
|
| 931 |
-
import mimetypes
|
| 932 |
-
|
| 933 |
-
@demo.app.get("/static/{file_path:path}")
|
| 934 |
-
async def serve_static_files(file_path: str):
|
| 935 |
-
"""Serve static files from SAVE_DIR"""
|
| 936 |
-
full_path = os.path.join(SAVE_DIR, file_path)
|
| 937 |
-
print(f"[STATIC] Request: /static/{file_path}")
|
| 938 |
-
print(f"[STATIC] Full path: {full_path}")
|
| 939 |
-
print(f"[STATIC] File exists: {os.path.exists(full_path)}")
|
| 940 |
-
|
| 941 |
-
if not os.path.exists(full_path):
|
| 942 |
-
print(f"[STATIC] ERROR: File not found")
|
| 943 |
-
return Response(content='{"detail":"Not Found"}', status_code=404, media_type="application/json")
|
| 944 |
-
|
| 945 |
-
if not os.path.isfile(full_path):
|
| 946 |
-
print(f"[STATIC] ERROR: Path is not a file")
|
| 947 |
-
return Response(content='{"detail":"Not Found"}', status_code=404, media_type="application/json")
|
| 948 |
-
|
| 949 |
-
mime_type, _ = mimetypes.guess_type(full_path)
|
| 950 |
-
print(f"[STATIC] Serving with MIME type: {mime_type}")
|
| 951 |
-
return FileResponse(full_path, media_type=mime_type)
|
| 952 |
-
|
| 953 |
-
# Add startup event to verify routes
|
| 954 |
-
@demo.app.on_event("startup")
|
| 955 |
-
async def startup_event():
|
| 956 |
-
print("=== [STARTUP] Application starting ===")
|
| 957 |
-
print(f"[STARTUP] SAVE_DIR: {SAVE_DIR}")
|
| 958 |
-
print("[STARTUP] Registered routes:")
|
| 959 |
-
for route in demo.app.routes:
|
| 960 |
-
route_info = f"{route.methods if hasattr(route, 'methods') else 'N/A'} {route.path if hasattr(route, 'path') else str(route)}"
|
| 961 |
-
print(f" {route_info}")
|
| 962 |
-
if hasattr(route, 'path') and '/static' in route.path:
|
| 963 |
-
print(f" ^^^ /static route FOUND!")
|
| 964 |
-
|
| 965 |
-
# Enable queue for @spaces.GPU to work (AFTER adding routes)
|
| 966 |
demo.queue()
|
| 967 |
|
| 968 |
-
# Launch Gradio
|
| 969 |
demo.launch(
|
| 970 |
server_name=args.host,
|
| 971 |
server_port=args.port,
|
| 972 |
share=False,
|
| 973 |
allowed_paths=[SAVE_DIR]
|
| 974 |
)
|
|
|
|
|
|
| 238 |
|
| 239 |
|
| 240 |
def build_model_viewer_html(save_folder, height=660, width=790, textured=False):
|
| 241 |
+
import base64
|
| 242 |
+
|
| 243 |
+
# Determine which mesh file to use
|
| 244 |
if textured:
|
| 245 |
+
glb_filename = 'textured_mesh.glb'
|
| 246 |
template_name = './assets/modelviewer-textured-template.html'
|
|
|
|
| 247 |
else:
|
| 248 |
+
glb_filename = 'white_mesh.glb'
|
| 249 |
template_name = './assets/modelviewer-template.html'
|
| 250 |
+
|
| 251 |
+
glb_path = os.path.join(save_folder, glb_filename)
|
| 252 |
+
|
| 253 |
+
# Read and encode GLB file as base64 data URL
|
| 254 |
+
with open(glb_path, 'rb') as f:
|
| 255 |
+
glb_data = f.read()
|
| 256 |
+
glb_base64 = base64.b64encode(glb_data).decode('utf-8')
|
| 257 |
+
glb_data_url = f'data:model/gltf-binary;base64,{glb_base64}'
|
| 258 |
+
|
| 259 |
+
# Read template and replace placeholders
|
| 260 |
offset = 50 if textured else 10
|
| 261 |
with open(os.path.join(CURRENT_DIR, template_name), 'r', encoding='utf-8') as f:
|
| 262 |
template_html = f.read()
|
| 263 |
|
| 264 |
+
# Replace placeholders with actual values
|
| 265 |
+
template_html = template_html.replace('#height#', f'{height - offset}')
|
| 266 |
+
template_html = template_html.replace('#width#', f'{width}')
|
| 267 |
+
template_html = template_html.replace('#src#', glb_data_url) # Use data URL instead of file path
|
|
|
|
| 268 |
|
| 269 |
+
print(f'[HTML] Embedded {glb_filename} as data URL ({len(glb_base64)} bytes base64)')
|
|
|
|
|
|
|
|
|
|
|
|
|
| 270 |
|
| 271 |
+
# Return the HTML directly embedded (no iframe needed!)
|
| 272 |
return f"""
|
| 273 |
+
<div style='height: {height}px; width: 100%;'>
|
| 274 |
+
{template_html}
|
| 275 |
</div>
|
| 276 |
"""
|
| 277 |
|
|
|
|
| 931 |
# Build the Gradio app
|
| 932 |
demo = build_app()
|
| 933 |
|
| 934 |
+
# Enable queue for @spaces.GPU to work
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 935 |
demo.queue()
|
| 936 |
|
| 937 |
+
# Launch Gradio with allowed paths for any file operations
|
| 938 |
demo.launch(
|
| 939 |
server_name=args.host,
|
| 940 |
server_port=args.port,
|
| 941 |
share=False,
|
| 942 |
allowed_paths=[SAVE_DIR]
|
| 943 |
)
|
| 944 |
+
|