---
language:
  - en
license: apache-2.0
size_categories:
  - 1M<n<10M
pretty_name: MMEB-train-lance
tags:
  - embedding
  - lance
  - multimodal
configs:
  - config_name: A-OKVQA
    data_files:
      - split: train
        path: data/A-OKVQA/train.lance/**
      - split: original
        path: data/A-OKVQA/original.lance/**
      - split: diverse_instruction
        path: data/A-OKVQA/diverse.lance/**
  - config_name: ChartQA
    data_files:
      - split: train
        path: data/ChartQA/train.lance/**
      - split: original
        path: data/ChartQA/original.lance/**
      - split: diverse_instruction
        path: data/ChartQA/diverse.lance/**
  - config_name: CIRR
    data_files:
      - split: train
        path: data/CIRR/train.lance/**
      - split: original
        path: data/CIRR/original.lance/**
      - split: diverse_instruction
        path: data/CIRR/diverse.lance/**
  - config_name: DocVQA
    data_files:
      - split: train
        path: data/DocVQA/train.lance/**
      - split: original
        path: data/DocVQA/original.lance/**
      - split: diverse_instruction
        path: data/DocVQA/diverse.lance/**
  - config_name: HatefulMemes
    data_files:
      - split: train
        path: data/HatefulMemes/train.lance/**
      - split: original
        path: data/HatefulMemes/original.lance/**
      - split: diverse_instruction
        path: data/HatefulMemes/diverse.lance/**
  - config_name: ImageNet_1K
    data_files:
      - split: train
        path: data/ImageNet_1K/train.lance/**
      - split: original
        path: data/ImageNet_1K/original.lance/**
      - split: diverse_instruction
        path: data/ImageNet_1K/diverse.lance/**
  - config_name: InfographicsVQA
    data_files:
      - split: train
        path: data/InfographicsVQA/train.lance/**
      - split: original
        path: data/InfographicsVQA/original.lance/**
      - split: diverse_instruction
        path: data/InfographicsVQA/diverse.lance/**
  - config_name: MSCOCO
    data_files:
      - split: train
        path: data/MSCOCO/train.lance/**
      - split: original
        path: data/MSCOCO/original.lance/**
      - split: diverse_instruction
        path: data/MSCOCO/diverse.lance/**
  - config_name: MSCOCO_i2t
    data_files:
      - split: train
        path: data/MSCOCO_i2t/train.lance/**
      - split: original
        path: data/MSCOCO_i2t/original.lance/**
      - split: diverse_instruction
        path: data/MSCOCO_i2t/diverse.lance/**
  - config_name: MSCOCO_t2i
    data_files:
      - split: train
        path: data/MSCOCO_t2i/train.lance/**
      - split: original
        path: data/MSCOCO_t2i/original.lance/**
      - split: diverse_instruction
        path: data/MSCOCO_t2i/diverse.lance/**
  - config_name: N24News
    data_files:
      - split: train
        path: data/N24News/train.lance/**
      - split: original
        path: data/N24News/original.lance/**
      - split: diverse_instruction
        path: data/N24News/diverse.lance/**
  - config_name: NIGHTS
    data_files:
      - split: train
        path: data/NIGHTS/train.lance/**
      - split: original
        path: data/NIGHTS/original.lance/**
      - split: diverse_instruction
        path: data/NIGHTS/diverse.lance/**
  - config_name: OK-VQA
    data_files:
      - split: train
        path: data/OK-VQA/train.lance/**
      - split: original
        path: data/OK-VQA/original.lance/**
      - split: diverse_instruction
        path: data/OK-VQA/diverse.lance/**
  - config_name: SUN397
    data_files:
      - split: train
        path: data/SUN397/train.lance/**
      - split: original
        path: data/SUN397/original.lance/**
      - split: diverse_instruction
        path: data/SUN397/diverse.lance/**
  - config_name: VOC2007
    data_files:
      - split: train
        path: data/VOC2007/train.lance/**
      - split: original
        path: data/VOC2007/original.lance/**
      - split: diverse_instruction
        path: data/VOC2007/diverse.lance/**
  - config_name: VisDial
    data_files:
      - split: train
        path: data/VisDial/train.lance/**
      - split: original
        path: data/VisDial/original.lance/**
      - split: diverse_instruction
        path: data/VisDial/diverse.lance/**
  - config_name: Visual7W
    data_files:
      - split: train
        path: data/Visual7W/train.lance/**
      - split: original
        path: data/Visual7W/original.lance/**
      - split: diverse_instruction
        path: data/Visual7W/diverse.lance/**
  - config_name: VisualNews_i2t
    data_files:
      - split: train
        path: data/VisualNews_i2t/train.lance/**
      - split: original
        path: data/VisualNews_i2t/original.lance/**
      - split: diverse_instruction
        path: data/VisualNews_i2t/diverse.lance/**
  - config_name: VisualNews_t2i
    data_files:
      - split: train
        path: data/VisualNews_t2i/train.lance/**
      - split: original
        path: data/VisualNews_t2i/original.lance/**
      - split: diverse_instruction
        path: data/VisualNews_t2i/diverse.lance/**
  - config_name: WebQA
    data_files:
      - split: train
        path: data/WebQA/train.lance/**
      - split: original
        path: data/WebQA/original.lance/**
      - split: diverse_instruction
        path: data/WebQA/diverse.lance/**
  - config_name: images
    data_files: data/images/**
---

# MMEB Training Dataset (Lance Format)

This is a Lance-format version of the [TIGER-Lab/MMEB-train](https://huggingface.co/datasets/TIGER-Lab/MMEB-train) dataset, optimized for efficient storage and fast random access.

The original dataset is used to train the VLM2Vec models from the paper *VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks* (ICLR 2025).

## Directory Structure

```
TIGER-Lab_MMEB-train/
└── data/
    ├── A-OKVQA/
    │   ├── train.lance
    │   ├── original.lance
    │   └── diverse.lance
    ├── MSCOCO/
    │   └── ...
    └── images/
        ├── A-OKVQA.lance
        ├── MSCOCO.lance
        └── ...
```
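For illustration, the layout above can be mapped to concrete Lance dataset paths with a small stdlib helper. This is only a sketch: `metadata_path` and `images_path` are hypothetical names, not part of this dataset or any library API.

```python
from pathlib import Path

# Hypothetical helpers mapping a task name and variant to the
# Lance dataset directories in the layout shown above.
def metadata_path(root: str, dataset: str, variant: str) -> Path:
    # e.g. data/A-OKVQA/train.lance or data/A-OKVQA/diverse.lance
    return Path(root) / "data" / dataset / f"{variant}.lance"

def images_path(root: str, dataset: str) -> Path:
    # All images for a task live in one per-task Lance dataset.
    return Path(root) / "data" / "images" / f"{dataset}.lance"

print(metadata_path("TIGER-Lab_MMEB-train", "A-OKVQA", "train"))
print(images_path("TIGER-Lab_MMEB-train", "A-OKVQA"))
```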

## Schema

### Metadata (`{dataset}/{variant}.lance`)

| Field | Type | Description |
|---|---|---|
| `qry` | string | Query text (may contain the `<\|image_1\|>` placeholder) |
| `qry_image_id` | string | Query image path (empty if text-only) |
| `pos_text` | string | Positive sample text |
| `pos_image_id` | string | Positive sample image path |
| `neg_text` | string | Negative sample text (optional) |
| `neg_image_id` | string | Negative sample image path (optional) |

### Images (`images/{dataset}.lance`)

| Field | Type | Description |
|---|---|---|
| `image_id` | string | Image path identifier |
| `data` | binary | Image binary data (JPEG) |
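The `*_image_id` fields in the metadata tables key into the per-task images table, with an empty string marking a text-only side. A minimal sketch of that join, using plain dicts to stand in for Lance table scans (the row values below are illustrative, not taken from the dataset):

```python
# Toy stand-ins for the images table and one metadata row.
images = {
    "A-OKVQA/img_001.jpg": b"\xff\xd8...jpeg bytes...",  # illustrative entry
}

sample = {
    "qry": "<|image_1|> What is shown in the image?",
    "qry_image_id": "A-OKVQA/img_001.jpg",
    "pos_text": "a dog",
    "pos_image_id": "",  # empty string => text-only positive
}

def resolve_image(sample: dict, side: str, images: dict):
    """Return JPEG bytes for the given side ('qry'/'pos'/'neg'), or None if text-only."""
    image_id = sample.get(f"{side}_image_id", "")
    return images[image_id] if image_id else None

qry_bytes = resolve_image(sample, "qry", images)  # bytes for the query image
pos_bytes = resolve_image(sample, "pos", images)  # None: positive is text-only
```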

## Dataset Statistics

| Dataset | Samples | Images |
|---|---:|---:|
| A-OKVQA | 17,056 | 17,056 |
| ChartQA | 28,299 | 28,299 |
| CIRR | 26,116 | 16,640 |
| DocVQA | 39,463 | 39,463 |
| HatefulMemes | 8,500 | 8,500 |
| ImageNet_1K | 100,000 | 100,000 |
| InfographicsVQA | 23,946 | 4,406 |
| MSCOCO | 100,000 | 59,969 |
| MSCOCO_i2t | 113,287 | 113,287 |
| MSCOCO_t2i | 100,000 | 70,414 |
| N24News | 48,988 | 48,988 |
| NIGHTS | 15,941 | 31,882 |
| OK-VQA | 9,009 | 9,009 |
| SUN397 | 19,850 | 19,850 |
| VisDial | 123,287 | 123,287 |
| Visual7W | 69,817 | 14,366 |
| VisualNews_i2t | 100,000 | 100,000 |
| VisualNews_t2i | 99,903 | 99,903 |
| VOC2007 | 7,844 | 7,844 |
| WebQA | 17,166 | 12,873 |

Each dataset has three variants: `train`, `original`, and `diverse_instruction` (same sample count, different instruction templates).

## Original Dataset

This dataset is derived from [TIGER-Lab/MMEB-train](https://huggingface.co/datasets/TIGER-Lab/MMEB-train). For evaluation, please refer to [TIGER-Lab/MMEB-eval](https://huggingface.co/datasets/TIGER-Lab/MMEB-eval).

## Citation

```bibtex
@article{jiang2024vlm2vec,
  title={VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks},
  author={Jiang, Ziyan and Meng, Rui and Yang, Xinyi and Yavuz, Semih and Zhou, Yingbo and Chen, Wenhu},
  journal={arXiv preprint arXiv:2410.05160},
  year={2024}
}
```

## License

Apache-2.0 (same as the original dataset)