Pixel Sentence Representation Learning
Paper: arXiv:2402.08183
Columns: `sentence1` (image, 448 px wide), `sentence2` (image, 448 px wide), `score` (float64, ranging from 0 to 5).
This dataset contains sentence pairs from STS-16 rendered as images. We envision the need to assess vision encoders' ability to understand text; a natural way to do so is with the STS protocols, using sentences rendered into images.
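For intuition, here is a minimal sketch of rendering a sentence into a fixed-width image with Pillow. The 448 px width matches this dataset, but the font, image height, and layout below are assumptions for illustration, not the authors' exact rendering pipeline.

```python
# Sketch: render a sentence as a fixed-width image (assumed font/layout).
from PIL import Image, ImageDraw, ImageFont

def render_sentence(text: str, width: int = 448, height: int = 64) -> Image.Image:
    # White canvas at the dataset's image width (height is an assumption here).
    img = Image.new("RGB", (width, height), "white")
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()  # assumption: any readable font works
    draw.text((4, 4), text, fill="black", font=font)
    return img

img = render_sentence("A man is playing a guitar.")
print(img.size)
```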
Examples of Use
Load the test split:
from datasets import load_dataset
dataset = load_dataset("Pixel-Linguist/rendered-sts16", split="test")
This dataset is English-only; for multilingual and cross-lingual versions, see Pixel-Linguist/rendered-stsb and Pixel-Linguist/rendered-sts17.
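The intended STS evaluation can be sketched as follows. Here `encode` is a hypothetical stand-in for a vision encoder (a dummy returning random vectors so the script runs standalone), and Spearman correlation between predicted similarities and gold scores is computed by hand (ties ignored for this sketch); with the real dataset you would feed in the `sentence1`/`sentence2` images and `score` column instead.

```python
import math
import random

random.seed(0)

def encode(image):
    # Placeholder for a real vision encoder mapping a rendered-sentence
    # image to a feature vector (dummy random features for this sketch).
    return [random.gauss(0.0, 1.0) for _ in range(64)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def spearman(xs, ys):
    # Rank-based Pearson correlation (ties ignored in this sketch).
    def rank(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for pos, i in enumerate(order):
            r[i] = float(pos)
        return r
    rx, ry = rank(xs), rank(ys)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = math.sqrt(sum((a - mx) ** 2 for a in rx))
    sy = math.sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)

# With the real dataset:
# pairs = [(ex["sentence1"], ex["sentence2"], ex["score"]) for ex in dataset]
pairs = [(None, None, s) for s in [0.0, 1.5, 3.0, 4.2, 5.0]]  # dummy gold scores

sims = [cosine(encode(s1), encode(s2)) for s1, s2, _ in pairs]
gold = [s for _, _, s in pairs]
print(f"Spearman correlation: {spearman(sims, gold):.3f}")
```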
Citation
@article{xiao2024pixel,
title={Pixel Sentence Representation Learning},
author={Xiao, Chenghao and Huang, Zhuoxu and Chen, Danlu and Hudson, G Thomas and Li, Yizhi and Duan, Haoran and Lin, Chenghua and Fu, Jie and Han, Jungong and Moubayed, Noura Al},
journal={arXiv preprint arXiv:2402.08183},
year={2024}
}