> This version should not be used: it is solely the base version useful for deterministic LoRA initialization.
## Model Training
### Dataset

Our training dataset of 127,460 query-page pairs comprises the train sets of openly available academic datasets (63%) and a synthetic dataset of pages from web-crawled PDF documents, augmented with VLM-generated (Claude-3 Sonnet) pseudo-questions (37%).

Our training set is fully English by design, enabling us to study zero-shot generalization to non-English languages. We explicitly verify that no multi-page PDF document is used in both [*ViDoRe*](https://huggingface.co/collections/vidore/vidore-benchmark-667173f98e70a1c0fa4db00d) and the train set, to prevent evaluation contamination.

A validation set is created with 2% of the samples to tune hyperparameters.

*Note: Multilingual data is present in the pretraining corpus of the language model, and most probably in its multimodal training data as well.*
### Parameters

Unless specified otherwise, we train models in `bfloat16` format, use low-rank adapters ([LoRA](https://arxiv.org/abs/2106.09685)) with `alpha=32` and `r=32` on the transformer layers of the language model, as well as the final, randomly initialized projection layer, and use a `paged_adamw_8bit` optimizer. We train on a 4-GPU setup with data parallelism, a learning rate of 5e-4 with linear decay and 2.5% warmup steps, and a batch size of 8.
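The training recipe above could be expressed with the `peft` and `transformers` libraries roughly as follows. This is a minimal sketch, not the authors' training script: the target module names and the projection-layer name are illustrative assumptions, not taken from this card.

```python
# Sketch of the training configuration described above (assumed module names).
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=32,
    lora_alpha=32,
    # Attention projections of the language model (assumed names):
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    # The final randomly initialized projection layer is trained fully
    # rather than adapted (assumed layer name):
    modules_to_save=["custom_text_proj"],
)

training_args = TrainingArguments(
    output_dir="./checkpoints",
    bf16=True,                       # train in bfloat16
    optim="paged_adamw_8bit",
    learning_rate=5e-4,
    lr_scheduler_type="linear",      # linear decay
    warmup_ratio=0.025,              # 2.5% warmup steps
    per_device_train_batch_size=8,
)
```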
## Usage

This model should not be used directly: it is the base model, intended only for the initialization of the linear head weights.
## Limitations

- **Focus**: The model primarily focuses on PDF-type documents and high-resource languages, potentially limiting its generalization to other document types or less represented languages.
- **Support**: The model relies on multi-vector retrieval derived from the ColBERT late interaction mechanism, which may require engineering effort to adapt to widely used vector retrieval frameworks that lack native multi-vector support.

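For context, the late interaction (MaxSim) scoring mentioned in the last bullet can be sketched in a few lines of PyTorch: each query token is matched against its best document token, and the per-token maxima are summed into a single relevance score.

```python
import torch

def late_interaction_score(query_embs: torch.Tensor, doc_embs: torch.Tensor) -> torch.Tensor:
    """ColBERT-style MaxSim scoring (sketch).

    query_embs: (num_query_tokens, dim) normalized query token embeddings
    doc_embs:   (num_doc_tokens, dim)   normalized document token embeddings
    Returns a scalar relevance score.
    """
    sim = query_embs @ doc_embs.T          # (q, d) token-token similarities
    return sim.max(dim=1).values.sum()     # best doc token per query token, summed

# Toy usage with orthonormal token embeddings:
q = torch.eye(2, 4)   # 2 query tokens, dim 4
d = torch.eye(3, 4)   # 3 document tokens, dim 4
score = late_interaction_score(q, d)
```

Because each document is represented by many token-level vectors rather than a single embedding, single-vector indexes cannot apply this scoring directly, which is the adaptation effort the bullet refers to.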
## License

ColQwen2's vision-language backbone model (Qwen2-VL) is under the `apache-2.0` license. The adapters attached to the model are under the MIT license.