Deploy LoRA Studio To Your Own HF Space
This guide deploys the full LoRA Studio UI to your own Hugging Face Space.
For the dedicated Qwen captioning UI, see docs/deploy/QWEN_SPACE.md.
Prerequisites
- Hugging Face account
- `HF_TOKEN` with repo write access
- Python environment with `requirements.txt` installed
Fast Path (Recommended)
python scripts/hf_clone.py space --repo-id YOUR_USERNAME/YOUR_SPACE_NAME
Optional private Space:
python scripts/hf_clone.py space --repo-id YOUR_USERNAME/YOUR_SPACE_NAME --private
Manual Path
- Create a new Space on Hugging Face:
  - SDK: Gradio
- Push this repo's content (excluding local artifacts) to that Space repo.
- Ensure the README front matter has `sdk: gradio` and `app_file: app.py`.
- In Space settings:
- select GPU hardware (A10G/A100/etc.) if needed
- add secrets (`HF_TOKEN`) if your flow requires private Hub access
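For the "excluding local artifacts" step, `huggingface_hub`'s `HfApi.upload_folder` accepts an `ignore_patterns` argument of glob patterns. The patterns below are assumptions about what counts as a local artifact in your checkout; a stdlib sketch of how such globs filter paths:

```python
from fnmatch import fnmatch

# Assumed local-artifact patterns -- adjust for your checkout.
IGNORE_PATTERNS = ["*.ckpt", "*.safetensors", "lora_output/*", "__pycache__/*", ".git/*"]

def should_upload(path: str) -> bool:
    """Return True if a repo-relative path is safe to push to the Space."""
    return not any(fnmatch(path, pat) for pat in IGNORE_PATTERNS)

files = ["app.py", "requirements.txt", "lora_output/step_100.safetensors"]
to_push = [f for f in files if should_upload(f)]
# to_push -> ["app.py", "requirements.txt"]
```

In practice you would pass the same list directly, e.g. `HfApi().upload_folder(repo_id=..., repo_type="space", folder_path=".", ignore_patterns=IGNORE_PATTERNS)`.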
Runtime Notes
- Space output defaults to `/data/lora_output` on Hugging Face Spaces.
- Enable persistent storage if you need checkpoint retention across restarts.
- For long-running non-interactive training, HF Jobs may be more cost-efficient than keeping a Space running.
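The output-directory default above can be approximated as follows. `SPACE_ID` is an environment variable Hugging Face sets inside Spaces, `/data` is the persistent-storage mount point, and the local fallback path is an assumption rather than this repo's exact logic:

```python
import os
from pathlib import Path

def default_output_dir(env=os.environ) -> Path:
    """Pick the training output directory based on where we are running."""
    # Hugging Face Spaces set SPACE_ID in the container environment;
    # /data is the mount point for persistent storage on Spaces.
    if "SPACE_ID" in env:
        return Path("/data/lora_output")
    # Local fallback (assumed default -- adjust to taste).
    return Path("./lora_output")

print(default_output_dir({"SPACE_ID": "user/space"}))  # /data/lora_output
```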