RTX 5080 (Blackwell) GPU Support ✅
Good News!
The NVIDIA GeForce RTX 5080 uses the Blackwell architecture with compute capability sm_120 (12.0). PyTorch nightly builds with CUDA 12.8+ now support RTX 5080!
Current Status
- GPU Model: NVIDIA GeForce RTX 5080
- Compute Capability: sm_120 (12.0)
- Required CUDA Version: 12.8+
- Required PyTorch: Nightly builds with CUDA 12.8
- Support Status: ✅ Supported (via nightly builds)
Automatic Installation
Our setup.py script automatically detects RTX 5080 and installs the correct PyTorch version:
# Create and activate virtual environment
python -m venv venv
source venv/bin/activate # Windows: venv\Scripts\activate
# Run smart installer (automatically installs PyTorch nightly for RTX 5080)
python setup.py
The script will:
- 🔍 Detect your RTX 5080 GPU
- 📦 Install PyTorch nightly with CUDA 12.8 support
- ✅ Verify GPU compatibility
- 🚀 Enable full GPU acceleration
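The detection step can be sketched as follows. This is an illustrative example of how a setup script might query `nvidia-smi` and decide whether the nightly wheel is needed; the `needs_nightly` helper and the sm_90 cutoff are assumptions for the sketch, not the actual `setup.py` code:

```python
import subprocess

# Assumption for this sketch: stable PyTorch wheels cover compute
# capabilities up to sm_90; anything newer (Blackwell sm_120) needs nightly.
STABLE_WHEEL_MAX_CAPABILITY = (9, 0)

def needs_nightly(compute_cap: str) -> bool:
    """Return True if a GPU with this compute capability string
    (e.g. '12.0') requires a PyTorch nightly build."""
    major, minor = (int(part) for part in compute_cap.split("."))
    return (major, minor) > STABLE_WHEEL_MAX_CAPABILITY

def detect_gpu() -> tuple[str, str]:
    """Query the name and compute capability of GPU 0 via nvidia-smi.
    Requires a driver recent enough to support the compute_cap field."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=name,compute_cap",
         "--format=csv,noheader"],
        text=True,
    )
    name, cap = (field.strip() for field in out.splitlines()[0].split(","))
    return name, cap
```

With an RTX 5080, `detect_gpu()` would return `("NVIDIA GeForce RTX 5080", "12.0")`, and `needs_nightly("12.0")` is `True`.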
Running the Application
After installation, just run:
python app.py
You'll see:
✅ Detected Blackwell GPU (NVIDIA GeForce RTX 5080)
Installing PyTorch nightly with CUDA 12.8 support (sm_120 compatible)
🖥️ Local - GPU (NVIDIA GeForce RTX 5080)
🚀 Using device: cuda
Manual Installation (Alternative)
If you prefer manual installation:
# Uninstall existing PyTorch
pip uninstall torch torchvision torchaudio -y
# Install PyTorch nightly with CUDA 12.8
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu128
Verification
Check if your RTX 5080 is working:
import torch
print(f"PyTorch: {torch.__version__}")
print(f"CUDA available: {torch.cuda.is_available()}")
print(f"GPU name: {torch.cuda.get_device_name(0)}")
print(f"Compute capability: {torch.cuda.get_device_capability(0)}")
Expected output (the exact nightly version string will vary):
PyTorch: 2.7.0.dev20250310+cu128
CUDA available: True
GPU name: NVIDIA GeForce RTX 5080
Compute capability: (12, 0)
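If you want to assert this programmatically, a small check like the following (an illustrative helper, not part of the project) confirms that the capability tuple reported by `torch.cuda.get_device_capability(0)` meets the sm_120 requirement:

```python
def is_sm120_or_newer(capability: tuple[int, int]) -> bool:
    """True if a (major, minor) compute capability is at least sm_120,
    i.e. a Blackwell-class GPU such as the RTX 5080."""
    return capability >= (12, 0)

# An RTX 5080 reports (12, 0); Ada (RTX 40xx) reports (8, 9).
print(is_sm120_or_newer((12, 0)))  # Blackwell
print(is_sm120_or_newer((8, 9)))   # Ada, older stable-wheel territory
```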
Alternative Solutions
1. Build PyTorch from Source (Advanced)
# Clone PyTorch
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch
# Set CUDA architecture flags
export TORCH_CUDA_ARCH_LIST="12.0"
export CUDA_HOME=/usr/local/cuda
# Build (takes 1-2 hours)
python setup.py develop
Note: Building from source is time-consuming and requires a local CUDA 12.8 toolkit; for most users the nightly wheels above are the easier route.
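TORCH_CUDA_ARCH_LIST takes compute capabilities in "major.minor" form, separated by semicolons when you target several GPUs. A tiny helper (illustrative only) shows the mapping from capability tuples to that flag value:

```python
def arch_list(capabilities: list[tuple[int, int]]) -> str:
    """Format compute capability tuples as a TORCH_CUDA_ARCH_LIST value,
    e.g. [(8, 9), (12, 0)] -> '8.9;12.0'."""
    return ";".join(f"{major}.{minor}" for major, minor in capabilities)

# RTX 5080 reports (12, 0), so the build flag for it alone is "12.0":
print(arch_list([(12, 0)]))          # 12.0
print(arch_list([(8, 9), (12, 0)]))  # 8.9;12.0
```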
2. Use Older GPU (Temporary)
If available, use an older GPU (RTX 40xx, 30xx, etc.) with compute capability ≤ 9.0.
3. Wait for a Stable Release
If you prefer not to run nightly builds, CPU mode works until sm_120 support lands in a stable PyTorch release.
Performance Notes
CPU Mode Performance:
- Inference is slower but functional
- Small models (< 1B parameters): Acceptable
- Large models (> 7B parameters): Very slow
- Consider using smaller models for now
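A common pattern for tolerating the CPU-only case is an explicit device fallback. The sketch below also guards the `torch` import so it degrades gracefully when PyTorch is not installed yet (the `pick_device` helper is illustrative, not the app's actual code):

```python
def pick_device() -> str:
    """Prefer CUDA when PyTorch sees a usable GPU, else fall back to CPU."""
    try:
        import torch
    except ImportError:
        return "cpu"  # PyTorch not installed yet
    return "cuda" if torch.cuda.is_available() else "cpu"

device = pick_device()
print(f"Using device: {device}")
```

Models and tensors can then be moved with `.to(device)`, so the same code path runs in both GPU and CPU mode.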
Questions?
Check PyTorch compatibility:
python -c "import torch; print(f'CUDA available: {torch.cuda.is_available()}'); print(f'Compute capability: {torch.cuda.get_device_capability(0) if torch.cuda.is_available() else \"N/A\"}')"