tonywu71 committed · Commit 5b6a0ba · verified · 1 Parent(s): 96fc583

Update README.md

Files changed (1): README.md (+24 −30)

README.md CHANGED
<img src="https://cdn-uploads.huggingface.co/production/uploads/66211794ae2f58da4f00d317/9hcMZ0KiikrSz4TYlCZzT.png" width="700">

## Description

This organization contains all artefacts released with the paper [ColPali: Efficient Document Retrieval with Vision Language Models]() [TODO add link],
including the [ViDoRe](https://huggingface.co/collections/vidore/vidore-benchmark-667173f98e70a1c0fa4db00d) benchmark and our SOTA document retrieval model [*ColPali*](https://huggingface.co/vidore/colpali).

A repository with **training** scripts is available on [GitHub](https://github.com/ManuelFay/colpali).
A repository with **evaluation** scripts is available on [GitHub](https://github.com/tonywu71/vidore-benchmark).

### Abstract
 
 
 
 
 
Documents are visually rich structures that convey information through text, as well as tables, figures, page layouts, or fonts.
While modern document retrieval systems exhibit strong performance on query-to-text matching, they struggle to exploit visual cues efficiently, hindering their performance on practical document retrieval applications such as Retrieval Augmented Generation.
To benchmark current systems on visually rich document retrieval, we introduce the Visual Document Retrieval Benchmark *ViDoRe*, composed of various page-level retrieving tasks spanning multiple domains, languages, and settings.
The inherent shortcomings of modern systems motivate the introduction of a new retrieval model architecture, *ColPali*, which leverages the document understanding capabilities of recent Vision Language Models to produce high-quality contextualized embeddings solely from images of document pages.
Combined with a late interaction matching mechanism, *ColPali* largely outperforms modern document retrieval pipelines while being drastically faster and end-to-end trainable.
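The late interaction matching mechanism mentioned above can be sketched as a ColBERT-style MaxSim score (a minimal NumPy sketch; the function name and toy vectors are illustrative, not the actual ColPali implementation):

```python
import numpy as np

def maxsim_score(query_embs: np.ndarray, doc_embs: np.ndarray) -> float:
    """Late interaction: for each query token embedding, take its maximum
    similarity over all document patch embeddings, then sum the maxima."""
    sims = query_embs @ doc_embs.T  # (n_query_tokens, n_doc_patches)
    return float(sims.max(axis=1).sum())

# Toy example: 2 query token vectors and 3 document patch vectors (dim 4).
q = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])
d = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 0.5, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
score = maxsim_score(q, d)  # 1.0 + 0.5 = 1.5
```

Because the document patch embeddings do not depend on the query, they can be indexed offline; only the cheap MaxSim reduction runs at query time.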
 
## Models

- [*ColPali*](https://huggingface.co/vidore/colpali): our main contribution, a model with a novel architecture and training strategy based on Vision Language Models (VLMs) that efficiently indexes documents from their visual features.
It is a [PaliGemma-3B](https://huggingface.co/google/paligemma-3b-mix-448) extension that generates [ColBERT](https://arxiv.org/abs/2004.12832)-style multi-vector representations of text and images.

- [*BiPali*](https://huggingface.co/vidore/bipali): an extension of the original SigLIP architecture in which the SigLIP-generated patch embeddings are fed to a text language model, PaliGemma-3B, to obtain LLM-contextualized output patch embeddings.
These representations are average-pooled into a single vector, yielding a PaliGemma bi-encoder, *BiPali*.

- [*BiSigLIP*](https://huggingface.co/vidore/bisiglip): a fine-tuned version of the original [SigLIP](https://huggingface.co/google/siglip-so400m-patch14-384), a strong vision-language bi-encoder model.
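The pooling step that turns BiPali's patch embeddings into one page-level vector can be illustrated as follows (a toy NumPy sketch; the shapes and random values are illustrative only):

```python
import numpy as np

# Toy stand-in for LLM-contextualized patch embeddings: 16 patches, dim 8.
patch_embeddings = np.random.default_rng(0).normal(size=(16, 8))

# Average pooling over the patch axis yields a single page-level vector,
# which is what a bi-encoder such as BiPali indexes and compares with a dot product.
page_embedding = patch_embeddings.mean(axis=0)
```

A bi-encoder trades the fine-grained token-to-patch matching of late interaction for a much smaller index: one vector per page instead of one per patch.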
 
## Datasets

We organized the datasets into collections that constitute our benchmark ViDoRe and its derivatives (OCR and Captioning). Below is a brief description of each.

- [*ViDoRe Benchmark*](https://huggingface.co/collections/vidore/vidore-benchmark-667173f98e70a1c0fa4db00d): collection regrouping all datasets constituting the ViDoRe benchmark. It includes the test sets from academic datasets ([ArXiVQA](https://huggingface.co/datasets/vidore/arxivqa_test_subsampled), [DocVQA](https://huggingface.co/datasets/vidore/docvqa_test_subsampled), [InfoVQA](https://huggingface.co/datasets/vidore/infovqa_test_subsampled), [TATDQA](https://huggingface.co/datasets/vidore/tatdqa_test), [TabFQuAD](https://huggingface.co/datasets/vidore/tabfquad_test_subsampled)) and from synthetically generated datasets spanning various themes and industrial applications:
 

- [*OCR Baseline*](https://huggingface.co/collections/vidore/vidore-chunk-ocr-baseline-666acce88c294ef415548a56): the same datasets as in ViDoRe, preprocessed for textual retrieval. Each page of the original ViDoRe benchmark was partitioned into chunks with Unstructured; visual chunks are OCRized with Tesseract.

- [*Captioning Baseline*](https://huggingface.co/collections/vidore/vidore-captioning-baseline-6658a2a62d857c7a345195fd): the same datasets as in ViDoRe, preprocessed for textual retrieval. Each page of the original ViDoRe benchmark was partitioned into chunks with Unstructured; visual chunks are captioned using Claude Sonnet.

## Intended use

You can either load a specific dataset using the standard `load_dataset` function from Hugging Face, or load every dataset in the collection to run the whole benchmark:

```python
from datasets import load_dataset
...
datasets.append(dataset)
```
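The truncated snippet above can be expanded into a full loop over the benchmark's test sets (a sketch: the five dataset names are the test sets listed in the ViDoRe collection above, and the `split="test"` argument is an assumption about how the splits are named):

```python
# Test splits of the ViDoRe benchmark, as listed in the collection above.
VIDORE_TEST_SETS = [
    "vidore/arxivqa_test_subsampled",
    "vidore/docvqa_test_subsampled",
    "vidore/infovqa_test_subsampled",
    "vidore/tatdqa_test",
    "vidore/tabfquad_test_subsampled",
]

def load_benchmark(names=VIDORE_TEST_SETS):
    """Download each test split and return the resulting datasets."""
    from datasets import load_dataset  # deferred import: only needed when downloading

    datasets = []
    for name in names:
        dataset = load_dataset(name, split="test")
        datasets.append(dataset)
    return datasets
```

Calling `load_benchmark()` downloads all five test splits, so expect the first run to take a while.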
 

## Contact

- Manuel Faysse: manuel.faysse@illuin.tech
- Hugues Sibille: hugues.sibille@illuin.tech
- Tony Wu: tony.wu@illuin.tech

## Citation

If you use any datasets or models from this organization in your research, please cite the original dataset as follows:

```latex
[include BibTeX]
```