---
library_name: peft
license: other
base_model: Qwen/Qwen2-VL-2B
tags:
- llama-factory
- lora
model-index:
- name: qwen2_2b_lora_expert_generalv2-102400
  results: []
task_categories:
- visual-question-answering
language:
- en
pretty_name: Domain Expert Datasets
size_categories:
- 100K<n<1M
---
<h2 align="center">
Linear Model Merging Unlocks Simple and Scalable Multimodal Data Mixture Optimization
<br>

[Paper](https://www.arxiv.org/pdf/2602.04937)
[Models Collection](https://huggingface.co/collections/daviBera/mllms-merging-4-dmo)
[Dataset](https://huggingface.co/datasets/daviBera/experts_datasets-102400)
[Code](https://github.com/BerasiDavide/mLLMs_merging_4_DMO)
</h2>
|
These are the domain-specific datasets from the paper "Linear Model Merging Unlocks Simple and Scalable Multimodal Data Mixture Optimization" ([link](https://www.arxiv.org/pdf/2602.04937)).
|
Each dataset contains 102,400 VQA samples from a single domain: General VQA, OCR, Counting & Visual Perception, or Chart Understanding.
|
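As a minimal sketch, one of these datasets could be loaded with the Hugging Face `datasets` library. The repo id below is taken from this card's dataset link; the helper name `load_expert_datasets` is hypothetical, and whether this repo exposes per-domain configurations is an assumption, not something the card specifies.

```python
REPO_ID = "daviBera/experts_datasets-102400"  # from this card's dataset link


def load_expert_datasets(repo_id: str = REPO_ID):
    """Fetch the expert VQA datasets from the Hub.

    Requires `pip install datasets` and network access. The call itself is
    standard `datasets` usage; whether per-domain subsets are exposed as
    separate configs of this repo is an assumption of this sketch.
    """
    from datasets import load_dataset

    return load_dataset(repo_id)
```

If per-domain subsets are published as configurations, a specific one would be selected with the usual `load_dataset(repo_id, name=...)` form.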
You can find many models trained on mixtures of these datasets in [this Hugging Face Collection](https://huggingface.co/collections/daviBera/mllms-merging-4-dmo).
|
### Composition
<img src="https://cdn-uploads.huggingface.co/production/uploads/6597e62f929cd840d808b8c9/c7X6YXUDUcXIRFsnjHv-w.png" width="800">