--- |
|
|
library_name: transformers |
|
|
license: mit |
|
|
datasets: |
|
|
- lmms-lab/DocVQA |
|
|
--- |
|
|
|
|
|
## 1 Introduction |
|
|
DIVE-Doc is a vision-language model (VLM) architecture built as a trade-off between end-to-end lightweight architectures and large vision-language models (LVLMs) for the DocVQA task.
|
|
It processes its inputs end-to-end, without relying on external tools such as OCR.
|
|
It takes a document image and a question as input and returns an answer. <br>
|
|
- **Repository:** [GitHub](https://github.com/JayRay5/DIVE-Doc) |
|
|
- **Paper:** [More Information Needed]
|
|
|
|
|
|
|
|
## 2 Model Summary |
|
|
DIVE-Doc is built as a trade-off between end-to-end lightweight architectures and LVLMs. |
|
|
Whereas the former pairs a lightweight visual encoder with a lightweight language decoder, and LVLMs pair a large visual encoder with a large decoder, DIVE-Doc combines a small visual encoder with a large decoder to balance model size and performance.
|
|
It is built by distilling the [SigLIP-400m](https://arxiv.org/abs/2303.15343) visual encoder of [PaliGemma](https://arxiv.org/abs/2407.07726) into a small hierarchical [Swin transformer](https://openaccess.thecvf.com/content/ICCV2021/html/Liu_Swin_Transformer_Hierarchical_Vision_Transformer_Using_Shifted_Windows_ICCV_2021_paper.html) initialized with the weights of [Donut](https://link.springer.com/chapter/10.1007/978-3-031-19815-1_29), while reusing the original [Gemma](https://arxiv.org/abs/2403.08295) decoder.
|
|
This enables DIVE-Doc to reduce its visual encoder's parameter count by 80% (from roughly 400M parameters to roughly 80M).
|
|
Moreover, the model is finetuned using LoRA adapters, which have been merged into the base model using [merge_and_unload](https://huggingface.co/docs/peft/main/en/package_reference/lora#peft.LoraModel.merge_and_unload).
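For reference, this merge step typically looks like the following [PEFT](https://huggingface.co/docs/peft) snippet; the base-model class and paths below are placeholders, not the exact training artifacts.

```python
# Minimal sketch of merging LoRA adapters with PEFT (paths are placeholders).
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("path/to/base-model")
model = PeftModel.from_pretrained(base, "path/to/lora-adapters")
model = model.merge_and_unload()  # folds the LoRA weights into the base weights
model.save_pretrained("path/to/merged-model")
```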
|
|
Both the distillation and finetuning steps are trained on the [DocVQA dataset](https://openaccess.thecvf.com/content/WACV2021/html/Mathew_DocVQA_A_Dataset_for_VQA_on_Document_Images_WACV_2021_paper.html); this strategy allows DIVE-Doc to be competitive with LVLMs while outperforming lightweight architectures.
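For intuition, a feature-distillation step of this kind can be sketched as below; the MSE objective and all dimensions are illustrative assumptions, not DIVE-Doc's exact recipe.

```python
import torch
import torch.nn as nn

# Illustrative feature-distillation step (assumed MSE objective; the actual
# DIVE-Doc training recipe may differ). All dimensions are placeholders.
B, N = 2, 256                         # batch size, number of visual tokens
student_dim, teacher_dim = 768, 1152  # Swin student width vs. SigLIP teacher width

proj = nn.Linear(student_dim, teacher_dim)  # align student features to teacher width

# Stand-ins for encoder outputs: trainable Swin student vs. frozen SigLIP teacher.
student_feats = torch.randn(B, N, student_dim, requires_grad=True)
with torch.no_grad():
    teacher_feats = torch.randn(B, N, teacher_dim)

loss = nn.functional.mse_loss(proj(student_feats), teacher_feats)
loss.backward()  # gradients reach the student features and the projection head
```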
|
|
|
|
|
|
|
|
## 3 Quick Start |
|
|
|
|
|
### Installation |
|
|
```bash |
|
|
git clone https://github.com/JayRay5/DIVE-Doc.git |
|
|
cd DIVE-Doc |
|
|
conda create -n dive-doc-env python=3.11.5 |
|
|
conda activate dive-doc-env |
|
|
pip install -r requirements.txt |
|
|
``` |
|
|
### Inference example using the model repository and Gradio
|
|
In `app.py`, set the `path` variable to `"JayRay5/DIVE-Doc-ARD-LRes"`:
|
|
```python
if __name__ == "__main__":
    path = "JayRay5/DIVE-Doc-ARD-LRes"
    app(path)
```
|
|
Then run: |
|
|
```bash |
|
|
python app.py |
|
|
``` |
|
|
This starts a [Gradio](https://www.gradio.app/) web interface where you can query the model.
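For context, a minimal Gradio app of this shape looks roughly like the sketch below; `answer_question` is a hypothetical stand-in for the model call that the repository's `app.py` wires up.

```python
import gradio as gr

# Hypothetical stand-in for the DIVE-Doc call inside the repository's app.py.
def answer_question(image, question):
    return "model answer goes here"

demo = gr.Interface(
    fn=answer_question,
    inputs=[gr.Image(type="pil"), gr.Textbox(label="Question")],
    outputs=gr.Textbox(label="Answer"),
)
demo.launch()  # serves the web interface locally
```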
|
|
## 4 Uses
|
|
|
|
|
|
|
|
### Direct Use |
|
|
|
|
|
This model is designed to answer a question about a single-page document image. It was trained mostly on industrial documents from the [DocVQA dataset](https://openaccess.thecvf.com/content/WACV2021/html/Mathew_DocVQA_A_Dataset_for_VQA_on_Document_Images_WACV_2021_paper.html).
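To see the kind of inputs it expects, you can browse the dataset tagged on this card with the `datasets` library; the config and split names below are assumptions to verify against the dataset card.

```python
from datasets import load_dataset

# Config/split names are assumptions; check the lmms-lab/DocVQA dataset card.
ds = load_dataset("lmms-lab/DocVQA", "DocVQA", split="validation")
sample = ds[0]
print(sample["question"])  # a question about the document page
sample["image"].show()     # the single-page document image
```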
|
|
|
|
|
|
|
|
### Downstream Use |
|
|
|
|
|
This model can be finetuned on other DocVQA datasets, such as [InfographicVQA](https://openaccess.thecvf.com/content/WACV2022/html/Mathew_InfographicVQA_WACV_2022_paper.html), to improve its performance on infographic documents.
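As a starting point, attaching fresh LoRA adapters for such finetuning typically looks like the PEFT sketch below; the rank, alpha, target modules, and model path are illustrative assumptions, not DIVE-Doc's published recipe.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Hyperparameters, target modules, and the path are illustrative assumptions.
config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # common choice for attention projections
)
base = AutoModelForCausalLM.from_pretrained("path/to/merged-dive-doc")
model = get_peft_model(base, config)
model.print_trainable_parameters()  # sanity-check how few weights are trainable
```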
|
|
|
|
|
|
|
|
|
|
|