mfarre committed · verified
Commit 7b54b5e · Parent(s): f279f5a

Update README.md

Files changed (1):
  1. README.md +11 -0

README.md CHANGED
@@ -194,6 +194,17 @@ SmolVLM2 is built upon [SigLIP](https://huggingface.co/google/siglip-base-patch1
 
 We release the SmolVLM2 checkpoints under the Apache 2.0 license.
 
+## Citation information
+You can cite us as follows:
+```bibtex
+@misc{smolvlm2,
+  title = {SmolVLM2: Bringing Video Understanding to Every Device},
+  author = {Orr Zohar and Miquel Farré and Andi Marafioti and Merve Noyan and Pedro Cuenca and Cyril Zakka and Joshua Lochner},
+  year = {2025},
+  url = {https://huggingface.co/blog/smolvlm2}
+}
+```
+
 ## Training Data
 SmolVLM2 used 3.3M samples for training, originally sourced from ten different datasets: [LlaVa Onevision](https://huggingface.co/datasets/lmms-lab/LLaVA-OneVision-Data), [M4-Instruct](https://huggingface.co/datasets/lmms-lab/M4-Instruct-Data), [Mammoth](https://huggingface.co/datasets/MAmmoTH-VL/MAmmoTH-VL-Instruct-12M), [LlaVa Video 178K](https://huggingface.co/datasets/lmms-lab/LLaVA-Video-178K), [FineVideo](https://huggingface.co/datasets/HuggingFaceFV/finevideo), [VideoStar](https://huggingface.co/datasets/orrzohar/Video-STaR), [VRipt](https://huggingface.co/datasets/Mutonix/Vript), [Vista-400K](https://huggingface.co/datasets/TIGER-Lab/VISTA-400K), [MovieChat](https://huggingface.co/datasets/Enxin/MovieChat-1K_train) and [ShareGPT4Video](https://huggingface.co/datasets/ShareGPT4Video/ShareGPT4Video).
 The following plots give a general overview of the samples across modalities and their sources.