How to use from vLLM
Install from pip and serve the model
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "IVGSZ/Flash-VStream-7b"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
	-H "Content-Type: application/json" \
	--data '{
		"model": "IVGSZ/Flash-VStream-7b",
		"prompt": "Once upon a time,",
		"max_tokens": 512,
		"temperature": 0.5
	}'
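The same endpoint can also be called from Python. The sketch below uses the OpenAI Python client against the OpenAI-compatible server started above; the base_url, placeholder API key, and request parameters mirror the curl example and are assumptions rather than part of this model card.

# Minimal sketch: query the vLLM server with the OpenAI Python client (pip install openai).
from openai import OpenAI

# vLLM does not require a real API key by default; "EMPTY" is a placeholder.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="IVGSZ/Flash-VStream-7b",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)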
Use Docker
docker model run hf.co/IVGSZ/Flash-VStream-7b
Flash-VStream Model Card

Model details

We propose Flash-VStream, a video-language model that simulates the memory mechanism of humans. The model can process extremely long video streams in real time while simultaneously responding to user queries.

Training data

This model is trained on image data from the LLaVA-1.5 dataset and video data from the WebVid and ActivityNet datasets, following LLaMA-VID, including:

  • 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
  • 158K GPT-generated multimodal instruction-following data.
  • 450K academic-task-oriented VQA data mixture.
  • 40K ShareGPT data.
  • 232K video-caption pairs sampled from the WebVid 2.5M dataset.
  • 98K videos from ActivityNet with QA pairs from Video-ChatGPT.

License

This project is licensed under the Llama 2 license.
