---
language:
- en
license: cc-by-nc-4.0
task_categories:
- visual-document-retrieval
- video-retrieval
- temporal-grounding
- video-classification
- video-question-answering
- visual-question-answering
tags:
- multimodal
- embedding
- benchmark
- video
- image
- document
- temporal-grounding
- moment-retrieval
viewer: false
configs:
- config_name: splits
  data_files:
  - split: eval
    path:
    - video_tasks
    - image_tasks
---

# MMEB-V2 (Massive Multimodal Embedding Benchmark)

## Paper Abstract

Multimodal embedding models have been crucial in enabling various downstream tasks such as semantic similarity, information retrieval, and clustering over different modalities. However, existing multimodal embedding models such as VLM2Vec, E5-V, and GME are predominantly focused on natural images, with limited support for other visual forms such as videos and visual documents. This restricts their applicability in real-world scenarios, including AI agents, multi-modal search and recommendation, and retrieval-augmented generation (RAG). To close this gap, we propose VLM2Vec-V2, a unified framework for learning embeddings across diverse visual forms. First, we introduce MMEB-V2, a comprehensive benchmark that extends MMEB with five new task types: visual document retrieval, video retrieval, temporal grounding, video classification, and video question answering, spanning text, image, video, and visual document inputs. Next, we train VLM2Vec-V2, a general-purpose embedding model that supports text, image, video, and visual document inputs. Extensive experiments show that VLM2Vec-V2 achieves strong performance not only on the newly introduced video and document retrieval tasks, but also improves over prior baselines on the original image benchmarks. Through extensive evaluation, our study offers insights into the generalizability of various multimodal embedding models and highlights effective strategies for unified embedding learning, laying the groundwork for more scalable and adaptable representation learning in both research and real-world settings.

Building upon our original [**MMEB**](https://arxiv.org/abs/2410.05160), **MMEB-V2** expands the evaluation scope with five new tasks: four video-based tasks (Video Retrieval, Moment Retrieval, Video Classification, and Video Question Answering) and one task focused on visual documents, Visual Document Retrieval. This comprehensive suite enables robust evaluation of multimodal embedding models across static, temporal, and structured visual data settings.

**This Hugging Face repository contains only the raw image and video files used in MMEB-V2, which need to be downloaded in advance.**
The test data for each task in MMEB-V2 is available [here](https://huggingface.co/VLM2Vec) and will be automatically downloaded and used by our code. More details on how to set it up are provided in the following sections.

[**Website**](https://tiger-ai-lab.github.io/VLM2Vec/) | [**Github**](https://github.com/TIGER-AI-Lab/VLM2Vec) | [**🏆Leaderboard**](https://huggingface.co/spaces/TIGER-Lab/MMEB) | [**📖MMEB-V2/VLM2Vec-V2 Paper**](https://arxiv.org/abs/2507.04590) | [**📖MMEB-V1/VLM2Vec-V1 Paper**](https://arxiv.org/abs/2410.05160)


## 🚀 What's New
- **\[2025.07\]** Released the [tech report](https://arxiv.org/abs/2507.04590).
- **\[2025.05\]** Initial release of MMEB-V2/VLM2Vec-V2.


## Dataset Overview

We present an overview of the MMEB-V2 dataset below:
<img width="900" alt="abs" src="overview.png">


## Dataset Structure

The directory structure of this Hugging Face repository is shown below.
For video tasks, we provide both sampled frames and raw videos (the latter will be released later); for image tasks, we provide the raw images.
The files for each meta-task are zipped together, resulting in six archives. For example, ``video_cls.tar.gz`` contains the sampled frames for the video classification task.

```
→ video-tasks/
├── frames/
│   ├── video_cls.tar.gz
│   ├── video_qa.tar.gz
│   ├── video_ret.tar.gz
│   └── video_mret.tar.gz
└── raw videos/ (to be released)

→ image-tasks/
├── mmeb_v1.tar.gz
└── visdoc.tar.gz
```
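
As one way to fetch these archives with ``wget``, their download URLs can be built from the layout above. A minimal sketch, assuming the dataset lives at the hypothetical repo id `TIGER-Lab/MMEB-V2` (substitute the actual id shown on this page) and that Hugging Face serves raw dataset files under `resolve/<revision>/<path>`:

```python
# Sketch: build wget-able URLs for the six archives listed above.
REPO_ID = "TIGER-Lab/MMEB-V2"  # hypothetical id -- check the repo page

ARCHIVES = [
    "video-tasks/frames/video_cls.tar.gz",
    "video-tasks/frames/video_qa.tar.gz",
    "video-tasks/frames/video_ret.tar.gz",
    "video-tasks/frames/video_mret.tar.gz",
    "image-tasks/mmeb_v1.tar.gz",
    "image-tasks/visdoc.tar.gz",
]

# Raw files in a HF dataset repo resolve under .../resolve/<revision>/<path>.
urls = [
    f"https://huggingface.co/datasets/{REPO_ID}/resolve/main/{path}"
    for path in ARCHIVES
]
for url in urls:
    print(url)
```

Each printed URL can then be passed to ``wget``; alternatively, the whole repository can be pulled at once with ``Git LFS``.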

After downloading and unzipping these files locally, you can organize them as shown below. (You may choose to use ``Git LFS`` or ``wget`` for downloading.)
Then, simply specify the correct file path in the configuration file used by your code.

```
→ MMEB/
├── video-tasks/
│   └── frames/
│       ├── video_cls/
│       │   ├── UCF101/
│       │   │   └── video_1/              # video ID
│       │   │       ├── frame1.png        # frame from video_1
│       │   │       ├── frame2.png
│       │   │       └── ...
│       │   ├── HMDB51/
│       │   ├── Breakfast/
│       │   └── ...                       # other datasets from video classification category
│       ├── video_qa/
│       │   └── ...                       # video QA datasets
│       ├── video_ret/
│       │   └── ...                       # video retrieval datasets
│       └── video_mret/
│           └── ...                       # moment retrieval datasets
└── image-tasks/
    ├── mmeb_v1/
    │   ├── OK-VQA/
    │   │   ├── image1.png
    │   │   ├── image2.png
    │   │   └── ...
    │   ├── ImageNet-1K/
    │   └── ...                           # other datasets from MMEB-V1 category
    └── visdoc/
        └── ...                           # visual document retrieval datasets
```
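
The unzip step can be sketched as below. Because the real archives are large, this demo first builds a tiny stand-in ``video_cls.tar.gz`` with one dummy frame; with the real archives, only the `mkdir` and `tar -xzf` lines are needed (the `demo_src` directory and dummy frame are placeholders, not part of the dataset):

```shell
set -e

# Stand-in for the real archive: one dummy frame, same internal layout.
mkdir -p demo_src/video_cls/UCF101/video_1
touch demo_src/video_cls/UCF101/video_1/frame1.png
tar -czf video_cls.tar.gz -C demo_src video_cls

# The actual step: unpack each meta-task archive into its place under MMEB/.
mkdir -p MMEB/video-tasks/frames
tar -xzf video_cls.tar.gz -C MMEB/video-tasks/frames

ls MMEB/video-tasks/frames/video_cls/UCF101/video_1
```

The same `tar -xzf <archive> -C <target dir>` pattern applies to the other five archives, each extracted under its meta-task directory shown in the tree above.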