Installs Torch without CUDA, which causes an xformers error

#1
by Redtash1 - opened

Thank you for this Gradio version.
I got this working on my local computer

Windows 11
RTX 4060Ti 16GB
RAM 64GB

with 2 modifications.

1st: I replaced the standard ffmpeg with the ffmpeg shared build.

2nd: I installed Torch with CUDA, then pip-installed requirements.txt, but that uninstalled the CUDA builds of torch, torchaudio, and torchvision and replaced them with the CPU-only torch 2.9.1 (plus matching torchaudio and torchvision), which caused xformers errors when launching app.py. So I pip-uninstalled torch, torchaudio, and torchvision, pip-installed torch 2.9.1 with CUDA (plus torchaudio and torchvision), and everything worked correctly.
But I wanted to get it working without having to uninstall the wrong torch and reinstall the correct CUDA build. So I removed torch, torchaudio, and torchvision from requirements.txt, deleted the python_embeded virtual environment, and installed everything again, but it still uninstalled the CUDA torch I had installed and replaced it with the CPU-only torch. So it must be pulling the torch requirements from the GitHub link in the requirements.txt file. Is it possible for you to change it so this doesn't happen? Thank you.

Hi, thanks for the detailed feedback!

The issue you encountered is a known limitation of the Python/PyTorch ecosystem:

The sam-audio package declares torch, torchaudio, and torchvision as dependencies in its pyproject.toml. When pip installs this package, it automatically pulls these dependencies from the default PyPI index, which on Windows only hosts the CPU-only build of PyTorch.
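As an illustration of why this happens, a declaration of roughly this shape in the package's pyproject.toml is enough to trigger the CPU reinstall. This is a sketch, not the actual sam-audio file:

```toml
# Illustrative sketch only, not the real sam-audio pyproject.toml
[project]
name = "sam-audio"
dependencies = [
    "torch",       # resolved from the default PyPI index -> CPU-only wheel on Windows
    "torchaudio",
    "torchvision",
]
```

Because these live inside the installed package's own metadata, deleting torch lines from an outer requirements.txt has no effect on them.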

Note: My fork of sam-audio only modified the Python version requirements to fix compatibility issues. The torch dependencies are exactly the same as in the official Meta sam-audio repository. This is a limitation of the upstream package, not something I can change.

Unfortunately, there's no way to prevent this behavior from the requirements.txt side, because:

  1. The torch dependencies are declared inside the sam-audio package itself, not in our requirements.txt
  2. pip doesn't let you pin a custom index URL to one specific (transitive) dependency; index options always apply to the whole install
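For what it's worth, one mitigation that is sometimes suggested (untested for this project, so treat it as an assumption): pip accepts index options inside a requirements file, and those options apply to the entire install, transitive dependencies included. Whether the CUDA wheel actually wins resolution over the PyPI one depends on how pip ranks the candidate versions:

```
# Hypothetical requirements.txt header (untested); cu121 is an example CUDA tag
--extra-index-url https://download.pytorch.org/whl/cu121
torch
torchaudio
torchvision
# ...remaining project requirements unchanged...
```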

The solution you found is actually the correct approach:

# Step 1: Install all dependencies (this will install CPU torch)
pip install -r requirements.txt

# Step 2: Reinstall PyTorch with CUDA support (this will override the CPU version)
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
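After reinstalling, one way to sanity-check which build ended up in the environment: CUDA wheels from the PyTorch download index carry a local version tag such as +cu121, while PyPI wheels do not. A minimal sketch of that check (the version strings below are illustrative):

```python
# Minimal sketch: tell a CPU-only PyTorch wheel apart from a CUDA build
# by its version string. Wheels from https://download.pytorch.org/whl/cu121
# carry a "+cu121" local version tag; default PyPI wheels do not.
def is_cuda_build(version: str) -> bool:
    """Return True if the version string denotes a CUDA-enabled wheel."""
    return "+cu" in version

print(is_cuda_build("2.9.1"))        # False: CPU wheel from PyPI
print(is_cuda_build("2.9.1+cu121"))  # True: CUDA wheel from the PyTorch index
```

Running `python -c "import torch; print(torch.__version__)"` in the environment shows the actual installed version string to feed into a check like this.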

Thank you for your fast & detailed explanation. Would it work if you removed Torch, Torchaudio, and Torchvision from your pyproject.toml file, since it is the first one used to install those and the other dependencies? Thank you.
