Spaces: Running on Zero
## Additional Environment Configuration for ZeroGPU

Add this to your Hugging Face Space's Settings → Variables:

### Environment Variables

Required:

```
ZEROGPU_OFFLOAD_DIR=/tmp/zerogpu-offload
```

Recommended:

```
PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
CUDA_LAUNCH_BLOCKING=0
HF_HUB_ENABLE_HF_TRANSFER=1
```
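If you prefer not to rely on the Settings UI, the same variables can be set at the top of `app.py` (a minimal sketch; the names and values come from the list above, and `os.environ.setdefault` preserves anything already configured in the Space's settings):

```python
import os

# PYTORCH_CUDA_ALLOC_CONF must be set before torch initializes CUDA,
# so this belongs at the very top of app.py, before `import torch`.
os.environ.setdefault("ZEROGPU_OFFLOAD_DIR", "/tmp/zerogpu-offload")
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "expandable_segments:True")
os.environ.setdefault("CUDA_LAUNCH_BLOCKING", "0")
os.environ.setdefault("HF_HUB_ENABLE_HF_TRANSFER", "1")
```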
## Alternative: Direct Folder Creation

If the settings above don't work, you can also create the offload directory with a startup step in your Space.
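One way to do this from within `app.py` is a small helper that implements the fallback logic this document describes: try the NVMe path first, then fall back to `/tmp` if it isn't writable. The function name `ensure_offload_dir` is illustrative, not part of any library:

```python
import os
import tempfile

def ensure_offload_dir(preferred="/data-nvme/zerogpu-offload",
                       fallback="/tmp/zerogpu-offload"):
    """Create the offload directory, falling back if the NVMe volume isn't writable."""
    for path in (preferred, fallback):
        try:
            os.makedirs(path, exist_ok=True)
            # Verify the directory is actually writable before committing to it.
            with tempfile.TemporaryFile(dir=path):
                pass
            os.environ["ZEROGPU_OFFLOAD_DIR"] = path
            return path
        except OSError:
            continue
    raise RuntimeError("No writable offload directory available")
```

Call `ensure_offload_dir()` once at import time, before any model loading.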
## Space Configuration File

Create or modify your Space's README.md to include this front matter:

```yaml
---
title: Wan2.2-Fast-I2I
emoji: 💻
colorFrom: purple
colorTo: gray
sdk: gradio
sdk_version: 5.44.1
app_file: app.py
pinned: false
hardware: a10g-large
---
```
The `hardware: a10g-large` field requests an A10G instance with enough memory for the model.
## Dockerfile Alternative

If you need more control, create a `Dockerfile`:

```dockerfile
FROM python:3.10

# Create the offload directory with read/execute access for other users
RUN mkdir -p /data-nvme/zerogpu-offload && chmod 755 /data-nvme/zerogpu-offload

# Set environment variables
ENV ZEROGPU_OFFLOAD_DIR=/data-nvme/zerogpu-offload
ENV PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True

# Install your requirements
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy your app
COPY . /app
WORKDIR /app

# Gradio serves on port 7860 by default, which Spaces expects
EXPOSE 7860
CMD ["python", "app.py"]
```
## Testing the Fix

The modifications I made to `app.py` should handle:

- ✅ **Automatic directory creation**: creates `/data-nvme/zerogpu-offload` or falls back to `/tmp/zerogpu-offload`
- ✅ **Permission handling**: gracefully handles cases where the NVMe volume isn't writable
- ✅ **Environment variables**: sets the proper PyTorch memory configuration
- ✅ **ZeroGPU decorators restored**: keeps `@spaces.GPU()` for proper GPU allocation
- ✅ **Memory optimization**: adds garbage collection and CUDA cache clearing
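The decorator and cleanup steps described here can be sketched as follows. The guarded imports are only so the sketch also runs outside a Space (where `spaces` and CUDA may be absent); the model call itself is a placeholder:

```python
import gc

try:
    import spaces                      # ZeroGPU helper, available on HF Spaces
    gpu = spaces.GPU                   # real decorator that requests a GPU per call
except ImportError:                    # local fallback: a no-op stand-in decorator
    def gpu(fn=None, **kwargs):
        return fn if callable(fn) else (lambda f: f)

try:
    import torch
except ImportError:
    torch = None

@gpu(duration=120)                     # allocate a GPU for up to 120 s per call
def generate(prompt: str) -> str:
    # ... run the actual pipeline here (placeholder output below) ...
    output = f"result for {prompt!r}"
    # Memory optimization: free Python objects and cached CUDA memory per call.
    gc.collect()
    if torch is not None and torch.cuda.is_available():
        torch.cuda.empty_cache()
    return output
```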
With these changes, the error should be resolved and your Space should run properly on ZeroGPU infrastructure.