---
license: mit
language:
- en
tags:
- embedding
- multimodal
pretty_name: MMEB with hard negative
size_categories:
- 1M<n<10M
configs:
- config_name: TAT-DQA
  data_files:
  - split: train
    path: "TAT-DQA/TAT-DQA.parquet"
- config_name: ArxivQA
  data_files:
  - split: train
    path: "ArxivQA/ArxivQA.parquet"
- config_name: InfoSeek_it2t
  data_files:
  - split: train
    path: "InfoSeek_it2t/InfoSeek_it2t.parquet"
- config_name: InfoSeek_it2it
  data_files:
  - split: train
    path: "InfoSeek_it2it/InfoSeek_it2it.parquet"
- config_name: ImageNet_1K
  data_files:
  - split: train
    path: "ImageNet_1K/ImageNet_1K.parquet"
- config_name: N24News
  data_files:
  - split: train
    path: "N24News/N24News.parquet"
- config_name: HatefulMemes
  data_files:
  - split: train
    path: "HatefulMemes/HatefulMemes.parquet"
- config_name: SUN397
  data_files:
  - split: train
    path: "SUN397/SUN397.parquet"
- config_name: VOC2007
  data_files:
  - split: train
    path: "VOC2007/VOC2007.parquet"
- config_name: InfographicsVQA
  data_files:
  - split: train
    path: "InfographicsVQA/InfographicsVQA.parquet"
- config_name: ChartQA
  data_files:
  - split: train
    path: "ChartQA/ChartQA.parquet"
- config_name: A-OKVQA
  data_files:
  - split: train
    path: "A-OKVQA/A-OKVQA.parquet"
- config_name: DocVQA
  data_files:
  - split: train
    path: "DocVQA/DocVQA.parquet"
- config_name: OK-VQA
  data_files:
  - split: train
    path: "OK-VQA/OK-VQA.parquet"
- config_name: Visual7W
  data_files:
  - split: train
    path: "Visual7W/Visual7W.parquet"
- config_name: VisDial
  data_files:
  - split: train
    path: "VisDial/VisDial.parquet"
- config_name: CIRR
  data_files:
  - split: train
    path: "CIRR/CIRR.parquet"
- config_name: NIGHTS
  data_files:
  - split: train
    path: "NIGHTS/NIGHTS.parquet"
- config_name: WebQA
  data_files:
  - split: train
    path: "WebQA/WebQA.parquet"
- config_name: VisualNews_i2t
  data_files:
  - split: train
    path: "VisualNews_i2t/VisualNews_i2t.parquet"
- config_name: VisualNews_t2i
  data_files:
  - split: train
    path: "VisualNews_t2i/VisualNews_t2i.parquet"
- config_name: MSCOCO_i2t
  data_files:
  - split: train
    path: "MSCOCO_i2t/MSCOCO_i2t.parquet"
- config_name: MSCOCO_t2i
  data_files:
  - split: train
    path: "MSCOCO_t2i/MSCOCO_t2i.parquet"
- config_name: MSCOCO
  data_files:
  - split: train
    path: "MSCOCO/MSCOCO.parquet"
---
# mmE5 Labeled Data

This dataset collects the labeled data used for the supervised fine-tuning of mmE5 ([mmE5: Improving Multimodal Multilingual Embeddings via High-quality Synthetic Data](https://arxiv.org/abs/2502.08468)):

- **MMEB** (with hard negatives)
- **InfoSeek** (from M-BEIR)
- **TAT-DQA**
- **ArxivQA**

[GitHub](https://github.com/haon-chen/mmE5)

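Each subset is exposed as a separate config that follows the `<name>/<name>.parquet` layout declared in the YAML header, so an individual subset can be loaded by its config name. A minimal sketch (the repo ID in the comment is a placeholder for this dataset's Hub ID, which you should substitute):

```python
def subset_parquet_path(config_name: str) -> str:
    """Relative parquet path for a subset, following the
    <name>/<name>.parquet layout declared in the YAML configs."""
    return f"{config_name}/{config_name}.parquet"


# Requires: pip install datasets
# from datasets import load_dataset
# tatdqa = load_dataset("<this-dataset-repo>", "TAT-DQA", split="train")
```

For example, `subset_parquet_path("ArxivQA")` yields `"ArxivQA/ArxivQA.parquet"`, matching the `path` entry of the `ArxivQA` config.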
## Image Preparation

First, prepare the images used for training.

### Image Downloads

- **Download Links**: download the image resources for each dataset from the following links:
  - [**MMEB**](https://huggingface.co/datasets/TIGER-Lab/MMEB-train)
  - [**InfoSeek**](https://huggingface.co/datasets/TIGER-Lab/M-BEIR)
  - [**ArxivQA**](https://huggingface.co/datasets/MMInstruction/ArxivQA)
  - [**TAT-DQA**](https://huggingface.co/datasets/vidore/tatdqa_train/tree/main)

For TAT-DQA, first save its images into the overall image folder so that its paths align with the other datasets:

```python
import os

from datasets import load_dataset

dataset = load_dataset(
    "vidore/tatdqa_train",
    split="train"
)

image_out_dir = "images/TAT-DQA"
os.makedirs(image_out_dir, exist_ok=True)
for i, sample in enumerate(dataset):
    save_path = os.path.join(image_out_dir, f"tatdqa_{i}.png")
    if os.path.exists(save_path):
        continue  # skip images that were already exported
    image = sample["image"]
    image.save(save_path, format="PNG")
```

### Image Organization

```
images/
├── mbeir_images/
│   └── oven_images/
│       └── ... .jpg (InfoSeek)
├── ArxivQA/
│   └── images/
│       └── ... .jpg (ArxivQA)
├── TAT-DQA/
│   └── ... .png (TAT-DQA)
├── A-OKVQA/
│   └── Train/
│       └── ... .jpg (A-OKVQA)
└── ... (MMEB training images)
```

You can refer to the image paths in each subset to see how the images are organized.

You can also customize your image locations by altering the `image_path` fields.

## Citation

If you use this dataset in your research, please cite the associated paper:

```
@article{chen2025mmE5,
  title={mmE5: Improving Multimodal Multilingual Embeddings via High-quality Synthetic Data},
  author={Chen, Haonan and Wang, Liang and Yang, Nan and Zhu, Yutao and Zhao, Ziliang and Wei, Furu and Dou, Zhicheng},
  journal={arXiv preprint arXiv:2502.08468},
  year={2025}
}
```