This dataset is part of the MIEB (Massive Image Embedding Benchmark, multilingual) collection, a comprehensive image-embedding benchmark spanning 10 task types, 130 tasks, and 39 languages.
Each row of the dataset contains two rendered sentence images, `sentence1` and `sentence2` (448 px wide), and a `score` (float, 0 to 5) giving the gold semantic similarity of the pair. (The interactive dataset-viewer preview is omitted here, since the sentence columns are images.)
This dataset renders the sentence pairs of STS-13 into images. We envision the need to assess vision encoders' ability to understand text; a natural way to do so is to follow the STS evaluation protocol, but with the texts rendered as images.
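To make the rendering step concrete, here is a minimal sketch of turning a sentence into an image with Pillow. The font, canvas size, and layout are illustrative assumptions only; the exact rendering pipeline used to build this dataset may differ.

```python
from PIL import Image, ImageDraw

def render_text(text, width=448, height=64):
    # Draw a sentence in black on a white canvas, mimicking the idea of
    # rendered-STS. Font, size, and wrapping here are assumptions, not
    # the dataset's actual rendering configuration.
    img = Image.new("RGB", (width, height), "white")
    draw = ImageDraw.Draw(img)
    draw.text((4, 4), text, fill="black")  # Pillow's default bitmap font
    return img

img = render_text("A man is playing a guitar.")
print(img.size)  # (448, 64)
```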
Examples of Use
Load test split:
from datasets import load_dataset
dataset = load_dataset("Pixel-Linguist/rendered-sts13", split="test")
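Once embeddings for both image columns are computed, the standard STS protocol scores each pair by cosine similarity and reports Spearman correlation against the gold scores. A minimal sketch follows, with random placeholder embeddings standing in where a vision encoder's outputs would go (the `model.encode` call mentioned in the comment is an assumption, not an API of this dataset); the rank computation ignores tie handling for brevity.

```python
import numpy as np

def cosine_sim(a, b):
    # Row-wise cosine similarity between two matrices of embeddings.
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return (a * b).sum(axis=1)

def spearman(x, y):
    # Spearman rank correlation as Pearson correlation of the ranks
    # (no tie correction -- fine for a sketch).
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / (np.linalg.norm(rx) * np.linalg.norm(ry)))

# Placeholder embeddings; in practice these would come from a vision
# encoder, e.g. emb1 = model.encode(dataset["sentence1"]).
rng = np.random.default_rng(0)
emb1 = rng.normal(size=(100, 64))
emb2 = rng.normal(size=(100, 64))
gold = rng.uniform(0, 5, size=100)  # stands in for dataset["score"]

preds = cosine_sim(emb1, emb2)
print(spearman(preds, gold))
```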
This dataset is English-only; for multilingual and cross-lingual versions, see Pixel-Linguist/rendered-stsb and Pixel-Linguist/rendered-sts17.
Citation
@article{xiao2024pixel,
title={Pixel Sentence Representation Learning},
author={Xiao, Chenghao and Huang, Zhuoxu and Chen, Danlu and Hudson, G Thomas and Li, Yizhi and Duan, Haoran and Lin, Chenghua and Fu, Jie and Han, Jungong and Moubayed, Noura Al},
journal={arXiv preprint arXiv:2402.08183},
year={2024}
}