Update README.md

README.md (changed)

@@ -207,6 +207,17 @@ SmolVLM2 is built upon [the shape-optimized SigLIP](https://huggingface.co/googl
|
We release the SmolVLM2 checkpoints under the Apache 2.0 license.

+## Citation information
+
+You can cite us in the following way:
+
+```bibtex
+@misc{smolvlm2,
+    title = {SmolVLM2: Bringing Video Understanding to Every Device},
+    author = {Orr Zohar and Miquel Farré and Andi Marafioti and Merve Noyan and Pedro Cuenca and Cyril Zakka and Joshua Lochner},
+    year = {2025},
+    url = {https://huggingface.co/blog/smolvlm2}
+}
+```
+

## Training Data

SmolVLM2 was trained on 3.3M samples drawn from ten datasets: [LLaVA-OneVision](https://huggingface.co/datasets/lmms-lab/LLaVA-OneVision-Data), [M4-Instruct](https://huggingface.co/datasets/lmms-lab/M4-Instruct-Data), [MAmmoTH-VL](https://huggingface.co/datasets/MAmmoTH-VL/MAmmoTH-VL-Instruct-12M), [LLaVA-Video-178K](https://huggingface.co/datasets/lmms-lab/LLaVA-Video-178K), [FineVideo](https://huggingface.co/datasets/HuggingFaceFV/finevideo), [Video-STaR](https://huggingface.co/datasets/orrzohar/Video-STaR), [Vript](https://huggingface.co/datasets/Mutonix/Vript), [VISTA-400K](https://huggingface.co/datasets/TIGER-Lab/VISTA-400K), [MovieChat](https://huggingface.co/datasets/Enxin/MovieChat-1K_train), and [ShareGPT4Video](https://huggingface.co/datasets/ShareGPT4Video/ShareGPT4Video).

In the following plots, we give an overview of the samples across modalities and their sources.
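For convenience, the Hub repo ids behind the ten dataset links above can be collected in one place. This is a minimal sketch, not part of the release: the `TRAINING_DATASETS` name is ours, and we only note (without calling it, to avoid network access) that each id could be passed to `datasets.load_dataset`.

```python
# Hub repo ids of the ten SmolVLM2 training datasets, taken verbatim
# from the links in this README. (The constant name is ours, for illustration.)
TRAINING_DATASETS = [
    "lmms-lab/LLaVA-OneVision-Data",
    "lmms-lab/M4-Instruct-Data",
    "MAmmoTH-VL/MAmmoTH-VL-Instruct-12M",
    "lmms-lab/LLaVA-Video-178K",
    "HuggingFaceFV/finevideo",
    "orrzohar/Video-STaR",
    "Mutonix/Vript",
    "TIGER-Lab/VISTA-400K",
    "Enxin/MovieChat-1K_train",
    "ShareGPT4Video/ShareGPT4Video",
]

# Each id can be loaded with the `datasets` library, e.g.:
#   from datasets import load_dataset
#   ds = load_dataset("HuggingFaceFV/finevideo", split="train", streaming=True)
```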