---
library_name: peft
license: other
base_model: Qwen/Qwen2-VL-2B
tags:
- llama-factory
- lora
model-index:
- name: qwen2_2b_lora_expert_generalv2-102400
results: []
task_categories:
- visual-question-answering
language:
- en
pretty_name: Domain Expert Datasets
size_categories:
- 100K<n<1M
---
<h2 align="center">
Linear Model Merging Unlocks Simple and Scalable Multimodal Data Mixture Optimization
<br>
[![arXiv](https://img.shields.io/badge/arXiv-2602.04937-b31b1b.svg)](https://www.arxiv.org/pdf/2602.04937)
[![🤗 Model (HuggingFace)](https://img.shields.io/badge/Models-HuggingFace-FFD21E.svg?logo=huggingface&logoColor=yellow)](https://huggingface.co/collections/daviBera/mllms-merging-4-dmo)
[![🤗 Dataset (HuggingFace)](https://img.shields.io/badge/Datasets-HuggingFace-FFD21E.svg?logo=huggingface&logoColor=yellow)](https://huggingface.co/datasets/daviBera/experts_datasets-102400)
[![github](https://img.shields.io/badge/github-repo-blue?logo=github)](https://github.com/BerasiDavide/mLLMs_merging_4_DMO)
</h2>
These are the domain-specific datasets from the paper "Linear Model Merging Unlocks Simple and Scalable Multimodal Data Mixture Optimization" ([link](https://www.arxiv.org/pdf/2602.04937)).
Each dataset contains 102,400 VQA samples from a specific domain: General VQA, OCR, Counting & Visual Perception, and Chart Understanding.
You can find many models trained on mixtures of these datasets in [this Hugging Face collection](https://huggingface.co/collections/daviBera/mllms-merging-4-dmo).
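Since the paper optimizes mixture weights over these four domain pools, here is a minimal sketch of turning a weight vector into per-domain sample counts for a fixed training budget. The domain keys and the weight values below are illustrative placeholders, not values from the paper:

```python
# Four domain pools, 102,400 samples each (per this card).
domain_sizes = {
    "general_vqa": 102_400,
    "ocr": 102_400,
    "counting_perception": 102_400,
    "chart_understanding": 102_400,
}

# Placeholder mixture weights -- NOT the paper's optimized values.
weights = {
    "general_vqa": 0.4,
    "ocr": 0.2,
    "counting_perception": 0.2,
    "chart_understanding": 0.2,
}

def mixture_counts(total: int, weights: dict[str, float]) -> dict[str, int]:
    """Split a total sample budget across domains according to weights."""
    counts = {d: int(total * w) for d, w in weights.items()}
    # Give any rounding remainder to the highest-weight domain.
    remainder = total - sum(counts.values())
    counts[max(weights, key=weights.get)] += remainder
    return counts

counts = mixture_counts(102_400, weights)
print(counts)
```

Each count can then be drawn (without replacement) from the corresponding domain dataset to build one mixed training set.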
### Composition
<img src="https://cdn-uploads.huggingface.co/production/uploads/6597e62f929cd840d808b8c9/c7X6YXUDUcXIRFsnjHv-w.png" width="800">