---
language:
- en
license: apache-2.0
size_categories:
- 1M<n<10M
---

### Samples

| Field | Type | Description |
|-------|------|-------------|
| `qry_text` | string | Query text (with `<|image_1|>` placeholder) |
| `qry_image_id` | string | Query image path (empty if text-only) |
| `pos_text` | string | Positive sample text |
| `pos_image_id` | string | Positive sample image path |
| `neg_text` | string | Negative sample text (optional) |
| `neg_image_id` | string | Negative sample image path (optional) |

### Images (`images/{dataset}.lance`)

| Field | Type | Description |
|-------|------|-------------|
| `image_id` | string | Image path identifier |
| `data` | binary | Image binary data (JPEG) |

## Dataset Statistics

| Dataset | Samples | Images |
|---------|---------|--------|
| A-OKVQA | 17,056 | 17,056 |
| ChartQA | 28,299 | 28,299 |
| CIRR | 26,116 | 16,640 |
| DocVQA | 39,463 | 39,463 |
| HatefulMemes | 8,500 | 8,500 |
| ImageNet_1K | 100,000 | 100,000 |
| InfographicsVQA | 23,946 | 4,406 |
| MSCOCO | 100,000 | 59,969 |
| MSCOCO_i2t | 113,287 | 113,287 |
| MSCOCO_t2i | 100,000 | 70,414 |
| N24News | 48,988 | 48,988 |
| NIGHTS | 15,941 | 31,882 |
| OK-VQA | 9,009 | 9,009 |
| SUN397 | 19,850 | 19,850 |
| VisDial | 123,287 | 123,287 |
| Visual7W | 69,817 | 14,366 |
| VisualNews_i2t | 100,000 | 100,000 |
| VisualNews_t2i | 99,903 | 99,903 |
| VOC2007 | 7,844 | 7,844 |
| WebQA | 17,166 | 12,873 |

Each dataset has three variants: `train`, `original`, and `diverse_instruction` (same sample count, different instruction templates).

## Original Dataset

This dataset is derived from [TIGER-Lab/MMEB-train](https://huggingface.co/datasets/TIGER-Lab/MMEB-train). For evaluation, please refer to [TIGER-Lab/MMEB-eval](https://huggingface.co/datasets/TIGER-Lab/MMEB-eval).
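## Usage Example

The image records above store raw JPEG bytes in the `data` column. A minimal sketch of decoding such a record with Pillow (the `image_id` value and the `record` dict below are synthetic stand-ins for one row of an `images/{dataset}.lance` table, not taken from the dataset itself):

```python
import io

from PIL import Image


def decode_image(data: bytes) -> Image.Image:
    """Decode the raw JPEG bytes from the `data` field into an RGB PIL image."""
    return Image.open(io.BytesIO(data)).convert("RGB")


# Synthetic stand-in for one record from images/{dataset}.lance:
# encode a small solid-color image as JPEG bytes.
buf = io.BytesIO()
Image.new("RGB", (32, 32), color=(255, 0, 0)).save(buf, format="JPEG")
record = {"image_id": "example/0001.jpg", "data": buf.getvalue()}

img = decode_image(record["data"])
print(record["image_id"], img.size)  # example/0001.jpg (32, 32)
```

With the `lance` package installed, `lance.dataset("images/MSCOCO.lance").to_table()` (path illustrative) returns a pyarrow table whose rows have the same `image_id`/`data` shape as the synthetic record above.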
## Citation

```bibtex
@article{jiang2024vlm2vec,
  title={VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks},
  author={Jiang, Ziyan and Meng, Rui and Yang, Xinyi and Yavuz, Semih and Zhou, Yingbo and Chen, Wenhu},
  journal={arXiv preprint arXiv:2410.05160},
  year={2024}
}
```

## License

Apache-2.0 (same as the original dataset)