Fix issue where Qwen model_name was passed as model_path for loading 06ef027 jena-shreyas committed on Feb 10
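Commit 06ef027 fixes a swapped-argument bug: the model's short name was passed where the loader expected a path. A minimal sketch of that bug class, with a hypothetical loader signature and an illustrative Qwen repo id (the real loader and model live in the Space's code):

```python
# Hypothetical loader: first argument is a hub/filesystem path,
# second is the short model name used to select the architecture.
def load_pretrained(model_path: str, model_name: str) -> dict:
    return {"path": model_path, "name": model_name}

# Before the fix (illustrative): the short name landed in the path slot,
# so path-based loading could not resolve the weights.
buggy = load_pretrained("qwen", "Qwen/Qwen2.5-VL-7B-Instruct")

# After the fix: path and name in their intended slots.
fixed = load_pretrained("Qwen/Qwen2.5-VL-7B-Instruct", "qwen")
```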
Add BF16/INT8/INT4 quantization support for LLaVA-Video to fit within 23GB VRAM HF Spaces limit 5644567 jena-shreyas committed on Feb 10
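Commit 5644567 adds lower-precision loading so the model fits in the 23GB Spaces GPU. A back-of-envelope check of why INT8/INT4 help, counting only weight memory; the 7B parameter count is an illustrative assumption, not the actual LLaVA-Video size:

```python
# Rough weight-only VRAM estimate: 1e9 params * (bits/8) bytes ~= GB.
# Ignores activations, KV cache, and framework overhead.
def weight_vram_gb(params_billions: float, bits_per_param: int) -> float:
    return params_billions * bits_per_param / 8

for bits, label in [(16, "BF16"), (8, "INT8"), (4, "INT4")]:
    # For an assumed 7B model: BF16 ~14 GB, INT8 ~7 GB, INT4 ~3.5 GB,
    # leaving progressively more of the 23GB budget for activations.
    print(f"{label}: ~{weight_vram_gb(7, bits):.1f} GB weights")
```

In practice this kind of selectable precision is typically wired up via `torch_dtype=torch.bfloat16` or bitsandbytes-style `load_in_8bit`/`load_in_4bit` options; which mechanism this Space uses is not stated in the commit message.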
Remove s3fs version pin since it causes an fsspec version conflict with other packages b648ad9 jena-shreyas commited on Feb 10
1. Add flash-attn 2.8.3 release link with ABI=False since it works on HF Spaces f5837a1 jena-shreyas committed on Feb 10
Delete llava-next local copy, fork and update requirements.txt 6645aaf jena-shreyas committed on Feb 8
Drop python_version to 3.10, fix flash-attn .whl file for Python=3.10, torch=2.6 f9a14b4 jena-shreyas committed on Feb 8
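Commits f5837a1 and f9a14b4 pin a prebuilt flash-attn wheel matching the Space's environment (CPython 3.10, torch 2.6, CXX11 ABI off). A sketch of what such a requirements.txt pin might look like; the exact wheel URL is an assumption based on flash-attention's release naming scheme, not copied from this repo:

```
# requirements.txt fragment (illustrative; verify the wheel URL against the
# flash-attention 2.8.3 release assets: abiFALSE, cp310, torch2.6 build)
torch==2.6.0
flash-attn @ https://github.com/Dao-AILab/flash-attention/releases/download/v2.8.3/flash_attn-2.8.3+cu12torch2.6cxx11abiFALSE-cp310-cp310-linux_x86_64.whl
```

Pinning a prebuilt wheel avoids compiling flash-attn from source at build time, which commonly exceeds the build limits of HF Spaces; the ABI and Python-tag in the filename must match the Space's base image exactly or pip will reject the wheel.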