LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture
Paper • arXiv:2409.02889
Paper • Demo • Github • 🤗 LongLLaVA-53B-A13B • 🤗 LongLLaVA-9B
Get the model inference code from GitHub:
git clone https://github.com/FreedomIntelligence/LongLLaVA.git
cd LongLLaVA
pip install -r requirements.txt
python cli.py --model_dir path-to-longllava
from cli import Chatbot

query = 'What does the picture show?'
image_paths = ['image_path1']  # image or video path(s)
bot = Chatbot('path-to-longllava')
output = bot.chat(query, image_paths)
print(output)  # prints the model's response
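Since LongLLaVA is built to process many images in one query, the same interface can take a list of paths. A minimal sketch, assuming bot.chat accepts an arbitrary number of image paths exactly as in the example above (the file names below are placeholders):

query = 'Summarize the main differences between these frames.'
image_paths = ['frame_001.jpg', 'frame_002.jpg', 'frame_003.jpg']  # placeholder paths; pass as many as needed
output = bot.chat(query, image_paths)
print(output)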
@misc{wang2024longllavascalingmultimodalllms,
title={LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture},
author={Xidong Wang and Dingjie Song and Shunian Chen and Chen Zhang and Benyou Wang},
year={2024},
eprint={2409.02889},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2409.02889},
}
Install vLLM from pip and serve the model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "FreedomIntelligence/Jamba-9B-Instruct"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "FreedomIntelligence/Jamba-9B-Instruct",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
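Because vLLM exposes an OpenAI-compatible API, the same completion request can be made from Python with the official openai client instead of curl. A minimal sketch, assuming the server above is running on localhost:8000; the api_key value is a placeholder (vLLM does not check it unless configured to):

from openai import OpenAI

# Point the client at the local vLLM server; the key is a placeholder
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
completion = client.completions.create(
    model="FreedomIntelligence/Jamba-9B-Instruct",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)  # generated continuation of the prompt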