---
license: mit
language:
- en
tags:
- embedding
- multimodal
pretty_name: mmE5 labeled data
size_categories:
- 1M<n<10M
configs:
- config_name: TAT-DQA
  data_files:
    - split: train
      path: "TAT-DQA/TAT-DQA.parquet"
- config_name: ArxivQA
  data_files:
    - split: train
      path: "ArxivQA/ArxivQA.parquet"
- config_name: InfoSeek_it2t
  data_files:
    - split: train
      path: "InfoSeek_it2t/InfoSeek_it2t.parquet"
- config_name: InfoSeek_it2it
  data_files:
    - split: train
      path: "InfoSeek_it2it/InfoSeek_it2it.parquet"
- config_name: ImageNet_1K
  data_files:
    - split: train
      path: "ImageNet_1K/ImageNet_1K.parquet"
- config_name: N24News
  data_files:
    - split: train
      path: "N24News/N24News.parquet"
- config_name: HatefulMemes
  data_files:
    - split: train
      path: "HatefulMemes/HatefulMemes.parquet"
- config_name: SUN397
  data_files:
    - split: train
      path: "SUN397/SUN397.parquet"
- config_name: VOC2007
  data_files:
    - split: train
      path: "VOC2007/VOC2007.parquet"
- config_name: InfographicsVQA
  data_files:
    - split: train
      path: "InfographicsVQA/InfographicsVQA.parquet"
- config_name: ChartQA
  data_files:
    - split: train
      path: "ChartQA/ChartQA.parquet"
- config_name: A-OKVQA
  data_files:
    - split: train
      path: "A-OKVQA/A-OKVQA.parquet"
- config_name: DocVQA
  data_files:
    - split: train
      path: "DocVQA/DocVQA.parquet"
- config_name: OK-VQA
  data_files:
    - split: train
      path: "OK-VQA/OK-VQA.parquet"
- config_name: Visual7W
  data_files:
    - split: train
      path: "Visual7W/Visual7W.parquet"
- config_name: VisDial
  data_files:
    - split: train
      path: "VisDial/VisDial.parquet"
- config_name: CIRR
  data_files:
    - split: train
      path: "CIRR/CIRR.parquet"
- config_name: NIGHTS
  data_files:
    - split: train
      path: "NIGHTS/NIGHTS.parquet"
- config_name: WebQA
  data_files:
    - split: train
      path: "WebQA/WebQA.parquet"
- config_name: VisualNews_i2t
  data_files:
    - split: train
      path: "VisualNews_i2t/VisualNews_i2t.parquet"
- config_name: VisualNews_t2i
  data_files:
    - split: train
      path: "VisualNews_t2i/VisualNews_t2i.parquet"
- config_name: MSCOCO_i2t
  data_files:
    - split: train
      path: "MSCOCO_i2t/MSCOCO_i2t.parquet"
- config_name: MSCOCO_t2i
  data_files:
    - split: train
      path: "MSCOCO_t2i/MSCOCO_t2i.parquet"
- config_name: MSCOCO
  data_files:
    - split: train
      path: "MSCOCO/MSCOCO.parquet"
---
# mmE5 Labeled Data




This repository contains the labeled datasets used for the supervised fine-tuning of mmE5 ([mmE5: Improving Multimodal Multilingual Embeddings via High-quality Synthetic Data](https://arxiv.org/abs/2502.08468)):
- **MMEB** (with hard negatives)
- **InfoSeek** (from M-BEIR)
- **TAT-DQA**
- **ArxivQA**

[GitHub](https://github.com/haon-chen/mmE5)
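Each subset in the `configs` list above is stored as a single `train` split at `<config_name>/<config_name>.parquet`. If you work with a local clone of the repository, the per-subset file paths can be resolved directly from the config names. A minimal sketch (the subset names below are taken from the config list; the helper itself is illustrative):

```python
def parquet_path(config_name: str) -> str:
    """Relative path of a subset's train parquet file.

    Mirrors the layout declared in this card's `configs` list:
    each subset lives at <config_name>/<config_name>.parquet.
    """
    return f"{config_name}/{config_name}.parquet"

# A few subsets from the config list above.
for name in ("TAT-DQA", "ArxivQA", "MSCOCO_t2i"):
    print(parquet_path(name))
```

With the `datasets` library you can instead pass a config name directly, e.g. `load_dataset(<this repo's Hub ID>, "TAT-DQA", split="train")`.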

## Image Preparation

Before training, prepare the images referenced by the datasets:

### Image Downloads

- **Download All Images Used in mmE5**:

You can use the script provided in our [source code](https://github.com/haon-chen/mmE5) to download all images used in mmE5.
```bash
git clone https://github.com/haon-chen/mmE5.git
cd mmE5
bash scripts/prepare_images.sh
```

### Image Organization

```
  images/
  ├── mbeir_images/
  │     └── oven_images/
  │           └── ... .jpg (InfoSeek)
  ├── ArxivQA/
  │     └── images/
  │           └── ... .jpg (ArxivQA)
  ├── TAT-DQA/
  │     └── ... .png (TAT-DQA)
  ├── A-OKVQA/
  │     └── Train/
  │           └── ... .jpg (A-OKVQA)
  └── ... (MMEB training images)
```

You can refer to the `image_path` field in each subset to see how the images are organized.

You can also customize your image locations by altering the `image_path` fields.
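For example, if your images live outside the default `images/` directory, the `image_path` fields can be rewritten before training. A minimal sketch, assuming the stored paths are relative to `images/` (the helper name and the `/data/mmE5_images` root below are illustrative):

```python
def rebase_image_path(image_path: str, new_root: str, old_root: str = "images/") -> str:
    """Point an image_path field at a custom image directory.

    Assumes the stored path starts with `old_root`; other paths are
    joined to `new_root` unchanged.
    """
    relative = image_path[len(old_root):] if image_path.startswith(old_root) else image_path
    return new_root.rstrip("/") + "/" + relative

# e.g. relocate a TAT-DQA page image to a shared data volume
print(rebase_image_path("images/TAT-DQA/page_1.png", "/data/mmE5_images"))
```

Apply this over each subset's `image_path` column (e.g. with `datasets.Dataset.map`) before pointing the training scripts at your custom root.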
## Citation
If you use this dataset in your research, please cite the mmE5 paper:
```
@article{chen2025mmE5,
  title={mmE5: Improving Multimodal Multilingual Embeddings via High-quality Synthetic Data},
  author={Chen, Haonan and Wang, Liang and Yang, Nan and Zhu, Yutao and Zhao, Ziliang and Wei, Furu and Dou, Zhicheng},
  journal={arXiv preprint arXiv:2502.08468},
  year={2025}
}
```