Add model card for ModernVBERT

#1
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +40 -0
README.md ADDED
@@ -0,0 +1,40 @@
---
pipeline_tag: visual-document-retrieval
library_name: transformers
license: apache-2.0
---

# ModernVBERT: Towards Smaller Visual Document Retrievers 👁️

[![Paper](https://img.shields.io/badge/Paper-2510.01149-red?style=for-the-badge&logo=arxiv&labelColor=black)](https://huggingface.co/papers/2510.01149)
[![HuggingFace Org](https://img.shields.io/badge/HuggingFace-yellow?style=for-the-badge&logo=huggingface&labelColor=black)](https://huggingface.co/ModernVBERT)
[![GitHub](https://img.shields.io/badge/GitHub-code-keygen.svg?logo=github&style=for-the-badge)](https://github.com/illuin-tech/modernvbert)
[![Blog Post](https://img.shields.io/badge/Blog_Post-018EF5?logo=readme&logoColor=fff&labelColor=black&style=for-the-badge)](https://huggingface.co/blog/paultltc/modernvbert)

This repository contains the **ModernVBERT** model, a compact 250M-parameter vision-language encoder designed for efficient Visual Document Retrieval (VDR). As presented in the paper "[ModernVBERT: Towards Smaller Visual Document Retrievers](https://huggingface.co/papers/2510.01149)", the model follows a principled recipe for improving VDR models that revisits the entire training pipeline, measuring the impact of attention masking, image resolution, modality-alignment data regimes, and late-interaction-centered contrastive objectives. It outperforms models up to 10 times larger while enabling efficient inference on CPU hardware, significantly reducing latency and cost.

<div align="center">
  <img src="https://github.com/illuin-tech/modernvbert/raw/main/assets/imgs/architecture.png" alt="ModernVBERT Architecture" width="700">
</div>
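
The late-interaction objective mentioned above scores a query against a document page by matching each query-token embedding to its most similar page-token embedding and summing the matches (ColBERT-style MaxSim). Below is a minimal pure-Python sketch of that scoring rule; the function and variable names are illustrative only, not part of the ModernVBERT API, and real pipelines use batched tensor operations instead:

```python
# ColBERT-style late-interaction (MaxSim) scoring, sketched in plain Python.
# Embeddings are plain lists of floats and are assumed to be L2-normalized,
# so the dot product is the cosine similarity.

def dot(u, v):
    """Dot product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

def maxsim_score(query_emb, page_emb):
    """Sum, over query tokens, of the best-matching page-token similarity."""
    return sum(max(dot(q, d) for d in page_emb) for q in query_emb)

def rank_pages(query_emb, pages):
    """Return page indices sorted by descending MaxSim score."""
    scores = [maxsim_score(query_emb, p) for p in pages]
    return sorted(range(len(pages)), key=lambda i: -scores[i])
```

Because each query token independently picks its best match on the page, this objective rewards fine-grained token-level alignment rather than a single pooled similarity, which is what makes it well suited to dense, text-heavy document images.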

## Usage

A detailed tutorial on fine-tuning and using ModernVBERT, including everything required to launch a model post-training, is available as a Google Colab notebook:

[Go to Tutorial](https://colab.research.google.com/drive/1bT5LWeO1gPL83GKUZsFeFEleHmEDEQRy)

## Citation

If you use ModernVBERT in your research, please cite the paper as follows:

```bibtex
@misc{teiletche2025modernvbertsmallervisualdocument,
      title={ModernVBERT: Towards Smaller Visual Document Retrievers},
      author={Paul Teiletche and Quentin Macé and Max Conti and Antonio Loison and Gautier Viaud and Pierre Colombo and Manuel Faysse},
      year={2025},
      eprint={2510.01149},
      archivePrefix={arXiv},
      primaryClass={cs.IR},
      url={https://arxiv.org/abs/2510.01149},
}
```