# llm_topic_modelling / requirements_gpu.txt
gradio==6.0.2
transformers==4.57.2
spaces==0.42.1
boto3>=1.42.1
pandas>=2.3.3
pyarrow>=21.0.0
openpyxl>=3.1.5
markdown>=3.7
tabulate>=0.9.0
lxml>=5.3.0
google-genai>=1.52.0
openai>=2.8.1
html5lib>=1.1
beautifulsoup4>=4.12.3
rapidfuzz>=3.13.0
python-dotenv>=1.1.0
# Torch/Unsloth
# Latest torch compatible with the CUDA 12.8 (cu128) wheel index below
--extra-index-url https://download.pytorch.org/whl/cu128
torch<=2.9.1
unsloth[cu128-torch280]<=2025.11.6 # Refer here for more details on installation: https://pypi.org/project/unsloth
unsloth_zoo<=2025.11.6
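# Example install commands for the pins above (a sketch; assumes a GPU driver
# new enough for the cu128 wheels):
#   pip install "torch<=2.9.1" --extra-index-url https://download.pytorch.org/whl/cu128
#   pip install "unsloth[cu128-torch280]<=2025.11.6" "unsloth_zoo<=2025.11.6"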
# Additional for Windows and older CUDA 12.4 GPUs (RTX 3x or similar):
#triton-windows<3.3
timm
# Llama CPP Python
llama-cpp-python>=0.3.16 --config-settings cmake.args="-DGGML_CUDA=on"
# If the above doesn't work, try prebuilt wheels specific to your system; see the files at https://github.com/JamePeng/llama-cpp-python/releases for different Python versions
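# Alternatively, build llama-cpp-python from source with the CUDA backend enabled
# via the CMAKE_ARGS environment variable (a sketch; assumes the CUDA toolkit and
# CMake are installed and on PATH):
#   CMAKE_ARGS="-DGGML_CUDA=on" pip install --no-cache-dir "llama-cpp-python>=0.3.16"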