Fix issue where Qwen model_name was passed as model_path for loading 06ef027 jena-shreyas committed 10 days ago
Add BF16/INT8/INT4 quantization support for LLaVA-Video to fit within the 23GB VRAM HF Spaces limit 5644567 jena-shreyas committed 10 days ago
Remove s3fs version pin since it causes an fsspec version conflict with other packages b648ad9 jena-shreyas committed 10 days ago
1. Add flash-attn 2.8.3 release link with ABI=False since it works on HF Spaces f5837a1 jena-shreyas committed 10 days ago
Fix just the flash-attn error by adding it to requirements.txt 1c5650b jena-shreyas committed 12 days ago
Delete local LLaVA-NeXT copy, fork it, and update requirements.txt 6645aaf jena-shreyas committed 12 days ago
Install LLaVA-NeXT without deps to avoid version conflicts 8728571 jena-shreyas committed 12 days ago
Drop python_version to 3.10, fix flash-attn .whl file for Python 3.10, torch 2.6 f9a14b4 jena-shreyas committed 12 days ago
Fix flash-attn Python version to 3.13 for demo compatibility 988ebb2 jena-shreyas committed 12 days ago