---
inference: false
license: apache-2.0
---

<br>
# LLaVA-Next-Video Model Card

## Model details
**Model type:**
<br>
LLaVA-Next-Video is an open-source chatbot trained by fine-tuning an LLM on multimodal instruction-following data.
<br>
Base LLM: [NousResearch/Nous-Hermes-2-Yi-34B](https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B)
**Model date:**
<br>
LLaVA-Next-Video-34B was trained in April 2024.

**Paper or resources for more information:**
<br>
https://github.com/LLaVA-VL/LLaVA-NeXT
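For quick experimentation, the sketch below shows one way to run video inference through the `transformers` library. It assumes the community-converted checkpoint `llava-hf/LLaVA-NeXT-Video-34B-hf` and the `LlavaNextVideoProcessor` / `LlavaNextVideoForConditionalGeneration` classes (available in transformers >= 4.42); neither is stated in this card, and the original weights in this repository are intended to be loaded through the LLaVA-NeXT codebase linked above.

```python
# Minimal video-inference sketch; assumes the community HF port
# llava-hf/LLaVA-NeXT-Video-34B-hf (not the raw weights in this repo).
import numpy as np
import torch
from transformers import (
    LlavaNextVideoForConditionalGeneration,
    LlavaNextVideoProcessor,
)

model_id = "llava-hf/LLaVA-NeXT-Video-34B-hf"  # assumed checkpoint name
processor = LlavaNextVideoProcessor.from_pretrained(model_id)
model = LlavaNextVideoForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Stand-in clip: 8 blank RGB frames; replace with frames decoded
# from a real video (e.g. via PyAV or decord).
video = np.zeros((8, 336, 336, 3), dtype=np.uint8)

# Build the prompt with the processor's chat template so the video
# placeholder token lands in the right position.
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "What is happening in this video?"},
            {"type": "video"},
        ],
    }
]
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
inputs = processor(text=prompt, videos=video, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output[0], skip_special_tokens=True))
```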

## License
[NousResearch/Nous-Hermes-2-Yi-34B](https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B) license.
## Where to send questions or comments about the model
https://github.com/LLaVA-VL/LLaVA-NeXT/issues
## Intended use
**Primary intended uses:**
<br>
The primary use of LLaVA-Next-Video is research on large multimodal models and chatbots.

**Primary intended users:**
<br>
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
## Training dataset

### Image
- 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
- 158K GPT-generated multimodal instruction-following data.
- 500K academic-task-oriented VQA data mixture.
- 50K GPT-4V data mixture.
- 40K ShareGPT data.

### Video
- 100K VideoChatGPT-Instruct data.
|
| | ## Evaluation dataset |
| | A collection of 4 benchmarks, including 3 academic VQA benchmarks and 1 captioning benchmark. |