arXiv:2302.06555v2 [cs.CL] 6 Jul 2024
Do Vision and Language Models Share Concepts? A Vector Space Alignment Study
Jiaang Li † Yova Kementchedjhieva ‡ Constanza Fierro † Anders Søgaard †
† University of Copenhagen
‡ Mohamed bin Zayed University of Artificial Intelligence
{jili,c.fierro,soegaard}@di.ku.dk, yova.kementchedjhieva@mbzuai.ac.ae
Abstract
Large-scale pretrained language models (LMs) are said to "lack the ability to connect utterances to the world" (Bender and Koller, 2020), because they do not have "mental models of the world" (Mitchell and Krakauer, 2023). If so, one would expect LM representations to be unrelated to representations induced by vision m...
1 Introduction
The debate around whether LMs can be said to understand is often portrayed as a back-and-forth between two opposing sides (Mitchell and Krakauer, 2023), but in reality, there are many positions. Some researchers have argued that LMs are 'all syntax, no semantics', i.e., that they learn form, but not meaning (Searle, 19...
$^{1}$Code and dataset: https://github.com/jiaangli/VLCA.
$^{2}$The idea that computers are 'all syntax, no semantics' can be traced back to German 17th century philosopher Leibniz's Mill Argument (Lodge and Bobro, 1998). The Mill Argument states that mental states cannot be reduced to physical states, so if the capacity to understand language requires mental states, this cap...
have inferential semantics, but not referential semantics (Rapaport, 2002; Sahlgren and Carlsson, 2021; Piantadosi and Hill, 2022), 3 whereas some have posited that a form of externalist referential semantics is possible, at least for chatbots engaged in direct conversation (Cappelen and Dever, 2021; Butlin, 2021; Moll...
This study provides evidence to the contrary: Language models and computer vision models (VMs) are trained on independent data sources (at least for unsupervised computer vision models). The only common source of bias is the world. If LMs and VMs exhibit similarities, it must be because they both model the world. We ex...
Contributions. We present a series of evaluations of the vector spaces induced by three families of VMs and four families of LMs, i.e., a total of fourteen VMs and fourteen LMs. We show that within each family, the larger the LMs, the more their vector spaces become structurally similar to those of computer vision models. This enables retrieval of language representations of images (referential semantics) with minimal supervision. Retrieval precision depends on dispersion of image and language, polysemy, and frequency, but consistently improves with language model size. We discuss the implications of the...

receives text messages in this language and follows a rule book to reply to the messages. The interlocutor is Searle's caricature of artificial intelligence, and is obviously, Searle claims, not endowed with meaning or understanding, but merely performs symbol manipulation.

$^{3}$See Marconi (1997) for this distinction.
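The retrieval evaluation mentioned in the Contributions can be sketched as a nearest-neighbour search: map each image embedding into the language space, then check whether the gold word is among the k closest language embeddings. A minimal sketch, assuming cosine similarity and row-aligned gold pairs (the function name and these choices are ours, not necessarily the paper's exact implementation):

```python
import numpy as np

def precision_at_k(mapped_images, word_embeddings, k=1):
    """P@k: fraction of images whose gold word (same row index)
    is among the k nearest neighbours by cosine similarity."""
    a = mapped_images / np.linalg.norm(mapped_images, axis=1, keepdims=True)
    b = word_embeddings / np.linalg.norm(word_embeddings, axis=1, keepdims=True)
    sims = a @ b.T                            # (n, n) similarity matrix
    topk = np.argsort(-sims, axis=1)[:, :k]   # indices of the k best words
    gold = np.arange(len(a))[:, None]         # gold word = same row index
    return float((topk == gold).any(axis=1).mean())
```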
2 Related Work
Inspiration from cognitive science. Computational modeling is a cornerstone of cognitive science in the pursuit of a better understanding of how representations in the brain come about. As such, the field has shown a growing interest in computational representations induced with self-supervised learning (Orhan et al.,...
Studies have looked at the alignability of neural language representations and human brain activations, with more promising results as language models grow better at modeling language (Sassenhagen and Fiebach, 2020; Schrimpf et al., 2021). In these studies, the partial alignability of brain and model representations is...
Cross-modal alignment. The idea of cross-modal retrieval is not new (Lazaridou et al., 2014), but previously it has mostly been studied with practical considerations in mind. Recently, Merullo et al. (2023) showed that language representations in LMs are functionally similar to image representations in VMs, in that a li...
Figure 1: Mapping from MAE$_{Huge}$ (images) to OPT$_{30B}$ (text). Gold labels are in green.
Huh et al. (2024) propose a similar hypothesis, although studying it from a different perspective, and our findings corroborate theirs.
3 Methodology
Our primary objective is to compare the representations derived from VMs and LMs and assess their alignability, i.e. the extent to which LMs converge toward VMs' geometries. In the following sections, we introduce the procedures for obtaining the representations and aligning them, with an illustration of our methodolog...
Vision models. We include fourteen VMs in our experiments, representing three model families: SegFormer (Xie et al., 2021), MAE (He et al., 2022), and ResNet (He et al., 2016). For all three types of VMs, we only employ the encoder component as a visual feature extractor. 4
SegFormer models consist of a Transformer-based encoder and a light-weight feed-forward decoder. They are pretrained on object classification data and finetuned on scene parsing data for scene segmentation and object classification. We hypothesize that the reasoning necessary to perform segmentation in context promotes representations that are more similar to those of LMs, which also operate in a discrete space (a vocabulary). The SegFormer models we use are pretrained with ImageNet-1K (Russakovsky et al., 2015) and finetuned with ADE20K (Zhou et al., 2017).

$^{4}$We ran experiments with CLIP (Radford et al., 2021), but report on these separately, since CLIP does not meet the criteria of our study, being trained on a mixture of text and images. CLIP results are presented in Appendix C.

Figure 2: Experiment stages: During our experiments, words, sentences, and images are selected from the aliases list (wordlist and ImageNet-21K aliases), Wikipedia and ImageNet-21K, respectively. The source and target spaces are constructed utilizing image and word embeddings which are extracted by specialized vision ...
MAE models rely on a Transformer-based encoder-decoder architecture, with the Vision Transformer (ViT) (Dosovitskiy et al., 2021) as the encoder backbone. MAE models are trained to reconstruct masked patches in images, i.e., a fully unsupervised training objective, similar to masked language modeling. The encoder take...
ResNet models for object classification consist of a bottleneck convolutional neural network with residual blocks as an encoder, with a classification head. They are pretrained on ImageNet-1K.
Language models. We include fourteen Transformer-based LMs in our experiments, representing four model families: BERT (Devlin et al., 2019), GPT-2 (Radford et al., 2019), OPT (Zhang et al., 2022) and LLaMA-2 (Touvron et al., 2023). We use six different sizes of BERT (all uncased): BERT$_{Base}$ and BERT$_{Large}$, whic...
GPT-2, an auto-regressive decoder-only LM, comes in three sizes, pretrained on the WebText dataset (Radford et al., 2019). OPT also comes in three sizes, pretrained on the union of five datasets (Zhang et al., 2022). LLaMA-2 was pretrained on two trillion tokens.
Vision representations. The visual representation of a concept is obtained by embedding the images available for the concept with a given VM encoder and then averaging these representations. When applying SegFormer, we average the patches' representations from the last hidden state as the basis for every image, whereas...
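Concretely, the per-concept visual embedding amounts to two rounds of mean pooling. A minimal sketch (function and argument names are ours; the actual feature extraction depends on each VM's interface):

```python
import numpy as np

def visual_concept_embedding(image_features):
    """Pool a list of per-image feature arrays into one concept vector.
    `image_features`: list of (n_patches, d) arrays from a VM encoder."""
    per_image = [feats.mean(axis=0) for feats in image_features]  # pool patches per image
    return np.mean(per_image, axis=0)                             # pool across images
```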
Language representations. The LMs included here were trained on text segments, so applying them to words in isolation could result in unpredictable behavior. We therefore represent words by embedding English Wikipedia sentences, using the token representations that form the concept, decontextualizing these representati...
$^{5}$We also experimented with utilizing the representations from the last hidden state; however, the results were not as promising as those obtained from the penultimate hidden state. Caron et al. (2021) demonstrate that the penultimate-layer features in ViTs trained with DINO exhibit strong correlations with saliency inf...
an averaging approach on the token representations forming the concept; otherwise, we choose the last token within the concept (Zou et al., 2023).
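Under this procedure, a decontextualized word embedding might be computed as follows (a sketch; the two pooling options mirror the description above, but the names are ours):

```python
import numpy as np

def word_embedding(mention_vectors, multi_token_pool="mean"):
    """Decontextualize a word: pool the subword vectors of each mention,
    then average over all sentence contexts.
    `mention_vectors`: list of (n_subwords, d) arrays, one per sentence."""
    if multi_token_pool == "mean":
        mentions = [m.mean(axis=0) for m in mention_vectors]
    else:  # "last": keep the final subword of the mention
        mentions = [m[-1] for m in mention_vectors]
    return np.mean(mentions, axis=0)   # average over contexts
```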
Linear projection. Since we are interested in the extent to which vision and language representations are isomorphic, we focus on linear projections. 6 Following Conneau et al. (2018), we use Procrustes analysis (Schönemann, 1966) to align the representations of VMs to those of LMs, given a bimodal dictionary (§ 4.1). ...
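The Procrustes step admits a closed-form solution via SVD (Schönemann, 1966). A minimal sketch, assuming the source and target embeddings share a dimensionality and are row-aligned by the bimodal dictionary:

```python
import numpy as np

def procrustes(X, Y):
    """Return the orthogonal W minimizing ||XW - Y||_F for
    row-aligned X (source, n x d) and Y (target, n x d)."""
    U, _, Vt = np.linalg.svd(X.T @ Y)   # SVD of the cross-covariance
    return U @ Vt                        # orthogonal map: X @ W approximates Y
```

In practice the two spaces may differ in dimensionality, in which case a dimensionality reduction or a non-square variant is needed; this sketch covers only the square case.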
null
442
618
72
image/png
5
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/41", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 4, "bbox": {"l": 70.89579772949219, "t": 726.5772705078125, "r": 292.08294677734375, "b": 417.6516418457031, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 1121]}], "orig": "Linear projection. Since we are int...
null
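The row above describes the Procrustes alignment step. A minimal sketch (toy data and names are illustrative, not the paper's code): given paired matrices A (vision) and B (language), the orthogonal map Ω minimizing ||AΩ − B||_F is U·Vᵀ from the SVD of AᵀB.

```python
import numpy as np

def procrustes(A, B):
    """Orthogonal Procrustes: the Omega minimizing ||A @ Omega - B||_F
    over orthogonal matrices is U @ Vt, where A.T @ B = U @ diag(S) @ Vt."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 8))                   # toy "vision" vectors
Q = np.linalg.qr(rng.normal(size=(8, 8)))[0]   # a hidden orthogonal map
B = A @ Q                                      # "language" vectors, isomorphic by construction
Omega = procrustes(A, B)
print(np.allclose(A @ Omega, B))               # True: the hidden map is recovered
```

Because Ω is constrained to be orthogonal, it preserves distances and angles, which is why a successful mapping is evidence of (near-)isomorphic geometries rather than an artifact of a flexible projector.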
db03dd4e-1bdf-433d-b4bd-05021749e207
2302.06555v2.pdf
section_header
4 Experimental Setup
null
242
24
72
image/png
5
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/42", "parent": {"cref": "#/body"}, "children": [], "label": "section_header", "prov": [{"page_no": 4, "bbox": {"l": 71.08319091796875, "t": 403.1234130859375, "r": 191.88674926757812, "b": 390.90045166015625, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 20]}], "orig": "4 Experimental Setup", "te...
null
33c633e8-b91c-4fde-90cf-a16b59f21946
2302.06555v2.pdf
text
In this section, we discuss the details of bimodal dictionary compilation (§ 4.1), our evaluation metrics, and our baselines (§ 4.2).
null
441
76
72
image/png
5
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/43", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 4, "bbox": {"l": 71.15853881835938, "t": 380.3851623535156, "r": 291.63519287109375, "b": 342.40765380859375, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 136]}], "orig": "In this section, we discuss details...
null
288e8dfc-c707-4f66-b71e-f47ce763588a
2302.06555v2.pdf
section_header
4.1 Bimodal Dictionary Compilation
null
357
22
72
image/png
5
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/44", "parent": {"cref": "#/body"}, "children": [], "label": "section_header", "prov": [{"page_no": 4, "bbox": {"l": 71.00336456298828, "t": 329.13519287109375, "r": 249.28378295898438, "b": 317.93157958984375, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 34]}], "orig": "4.1 Bimodal Dictionary Co...
null
1afe2f60-0a37-4fee-b2e2-ebdd5f6eb3c1
2302.06555v2.pdf
text
We build bimodal dictionaries of image-text pairs based on the ImageNet21K dataset (Russakovsky et al., 2015) and the CLDI (cross-lingual dictionary induction) dataset (Hartmann and Søgaard, 2018). In ImageNet, a concept class has a unique ID and is represented by multiple images and one or more names (which we refer t...
null
442
295
72
image/png
5
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/45", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 4, "bbox": {"l": 70.98979187011719, "t": 310.3228759765625, "r": 292.07537841796875, "b": 163.00567626953125, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 537]}], "orig": "We build bimodal dictionaries of im...
null
c20d0945-d3c3-440c-9645-e23e6f264b44
2302.06555v2.pdf
footnote
$^{6}$For work on non-linear projection between representation spaces, see Nakashole (2018); Zhao and Gilman (2020); Glavaš and Vulić (2020).
null
441
63
72
image/png
5
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/46", "parent": {"cref": "#/body"}, "children": [], "label": "footnote", "prov": [{"page_no": 4, "bbox": {"l": 71.50354766845703, "t": 152.3046875, "r": 291.7594909667969, "b": 120.85226440429688, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 142]}], "orig": "$^{6}$For work on non-linear projectio...
null
8242f6ac-1f27-4b3f-9a93-f2fdc5fe0d78
2302.06555v2.pdf
footnote
$^{7}$The variance is retained for most models after dimensionality reduction, except for a few cases where there is some loss of information. The cumulative explained variance ratios for the different models are presented in Table 8.
null
441
85
72
image/png
5
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/47", "parent": {"cref": "#/body"}, "children": [], "label": "footnote", "prov": [{"page_no": 4, "bbox": {"l": 71.3525619506836, "t": 118.89080810546875, "r": 291.7541809082031, "b": 76.62451171875, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 233]}], "orig": "$^{7}$The variance is retained for m...
null
65f280ac-2ac5-43b8-903c-d0c2b8f47d32
2302.06555v2.pdf
caption
Table 1: Statistics of the bimodal dictionaries.
null
408
21
72
image/png
5
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/48", "parent": {"cref": "#/body"}, "children": [], "label": "caption", "prov": [{"page_no": 4, "bbox": {"l": 313.8568420410156, "t": 716.146240234375, "r": 517.947021484375, "b": 705.5986328125, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 48]}], "orig": "Table 1: Statistics of the bimodal dicti...
null
958fcef6-7205-483c-a140-0bc9acb055f1
2302.06555v2.pdf
table
<table><tbody><tr><th>Set</th><th>Num. of classes</th><th>Num. of aliases</th><th>Num. of pairs</th></tr><tr><td>Only-1K</td><td>491</td><td>655</td><td>655</td></tr><tr><td>Exclude-1K</td><td>5,942</td><td>7,194</td><td>7,194</td></tr><tr><td>EN-CLDI</td><td>1,690</td><td>1,690</td><td>1,690</td></tr></tbody></table>
Table 1: Statistics of the bimodal dictionaries.
435
103
72
image/png
5
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/tables/0", "parent": {"cref": "#/body"}, "children": [], "label": "table", "prov": [{"page_no": 4, "bbox": {"l": 308.0361328125, "t": 778.875, "r": 525.6458740234375, "b": 727.481201171875, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 0]}], "captions": [{"cref": "#/texts/48"}], "references": [], "foot...
null
27d964d3-7aea-42d2-aa69-cddcfe5b13ef
2302.06555v2.pdf
text
least one alias. As a result, 11,338 classes and 13,460 aliases meet the criteria. We further filter out aliases shared by two different class IDs, as well as aliases whose hyponyms already appear in the alias set.$^{8}$ To avoid any form of bias, given that the VMs we experiment with have been pretrained on ImageNe...
null
442
237
72
image/png
5
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/49", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 4, "bbox": {"l": 306.3482360839844, "t": 682.125, "r": 527.3558349609375, "b": 563.3076171875, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 411]}], "orig": "least one alias. As a result, 11,338 classes and 1...
null
6a58186a-e7f7-4af5-bed5-9cd344e844e7
2302.06555v2.pdf
text
One important limitation of the Exclude-1K bimodal dictionary is that all concepts are nouns. Therefore, to investigate how our results generalize to other parts of speech (POS), we also use the English subset of the CLDI dataset (EN-CLDI), which contains images paired with verbs and adjectives. Each word within this set i...
null
443
238
72
image/png
5
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/50", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 4, "bbox": {"l": 306.14984130859375, "t": 560.3670654296875, "r": 527.4514770507812, "b": 441.3656311035156, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 432]}], "orig": "One important limitation of the Excl...
null
4486b15d-794a-4f20-87c0-89454067deb3
2302.06555v2.pdf
text
The pairs in these bimodal dictionaries are split 70-30 for training and testing based on the class IDs to avoid train-test leakage.$^{9}$ We compute five such splits at random and report averaged results. See § 6 for the impact of training set size variations.
null
442
131
72
image/png
5
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/51", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 4, "bbox": {"l": 306.2119140625, "t": 438.3727722167969, "r": 527.4554443359375, "b": 373.1012878417969, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 257]}], "orig": "The pairs in these bimodal dictionaries ...
null
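The class-ID-based 70-30 split described above can be sketched as follows (function and variable names are mine, not from the paper):

```python
import random

def split_by_class(pairs, train_frac=0.7, seed=0):
    """pairs: (class_id, image, alias) tuples. Split on class IDs so that
    no class contributes pairs to both train and test (avoids leakage)."""
    ids = sorted({cid for cid, _, _ in pairs})
    random.Random(seed).shuffle(ids)
    cut = int(train_frac * len(ids))
    train_ids = set(ids[:cut])
    train = [p for p in pairs if p[0] in train_ids]
    test = [p for p in pairs if p[0] not in train_ids]
    return train, test

# 10 toy classes, two aliases each
pairs = [(cid, f"img{cid}", f"alias{cid}_{j}") for cid in range(10) for j in range(2)]
train, test = split_by_class(pairs)
print(len({c for c, _, _ in train} & {c for c, _, _ in test}))  # 0: no class leaks
```

Splitting on class IDs rather than on individual pairs is what prevents two aliases of the same class from straddling the train/test boundary.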
57508efd-4361-4518-8d6f-0a141a1b30a1
2302.06555v2.pdf
section_header
4.2 Evaluation
null
152
22
72
image/png
5
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/52", "parent": {"cref": "#/body"}, "children": [], "label": "section_header", "prov": [{"page_no": 4, "bbox": {"l": 306.46258544921875, "t": 362.63116455078125, "r": 382.63604736328125, "b": 351.5690002441406, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 14]}], "orig": "4.2 Evaluation", "text": ...
null
fb9d2a8c-5170-4f2d-99f2-e9ede95d7298
2302.06555v2.pdf
text
We induce a linear mapping Ω based on training image-text pairs sampled from A and B, respectively. We then evaluate how close AΩ is to B by computing retrieval precision on held-out image-text pairs. To make the retrieval task as challenging as possible, the target space B is expanded with 65,599 words from an Englis...
null
442
240
72
image/png
5
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/53", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 4, "bbox": {"l": 306.2489013671875, "t": 345.0776672363281, "r": 527.352783203125, "b": 225.149169921875, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 427]}], "orig": "We induce a linear mapping \u2126 based...
null
e4e28a87-f60f-436a-b661-d73190c39776
2302.06555v2.pdf
text
Metrics. We evaluate alignment in terms of precision-at-k (P@k), a well-established metric employed in the evaluation of multilingual word embeddings (Conneau et al., 2018), with k ∈ {1, 10, 100}.$^{10}$ Note that this performance metric
null
438
134
72
image/png
5
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/54", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 4, "bbox": {"l": 306.3006286621094, "t": 215.8372802734375, "r": 525.5789184570312, "b": 148.8270263671875, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 239]}], "orig": "Metrics. We evaluate alignment in ter...
null
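A hedged sketch of the P@k evaluation described above, with a gold set of alias indices per image class (cosine-similarity retrieval and all names here are my assumptions; the metric definition follows the text):

```python
import numpy as np

def precision_at_k(mapped, targets, gold, k):
    """P@k: fraction of mapped source vectors whose k nearest targets
    (by cosine similarity) contain at least one gold target index."""
    m = mapped / np.linalg.norm(mapped, axis=1, keepdims=True)
    t = targets / np.linalg.norm(targets, axis=1, keepdims=True)
    hits = 0
    for i, sims in enumerate(m @ t.T):
        topk = set(np.argsort(-sims)[:k].tolist())
        hits += bool(gold[i] & topk)
    return hits / len(mapped)

targets = np.eye(3)                       # toy "word" space: 3 target vectors
mapped = np.array([[1.0, 0.1, 0.0],       # projected image vec, gold word 0
                   [0.0, 1.0, 0.2]])      # projected image vec, gold word 1
print(precision_at_k(mapped, targets, [{0}, {1}], k=1))  # 1.0
```

Using a set of gold indices per source item matches the footnote's point that a hit on any listed alias counts as a correct retrieval.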
510f1161-7607-48cd-8001-42c1c446291e
2302.06555v2.pdf
footnote
$^{8}$We obtain the aliases' hypernyms and hyponyms from the Princeton WordNet (Fellbaum, 2010).
null
438
41
72
image/png
5
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/55", "parent": {"cref": "#/body"}, "children": [], "label": "footnote", "prov": [{"page_no": 4, "bbox": {"l": 306.5758361816406, "t": 141.65576171875, "r": 525.547119140625, "b": 120.88531494140625, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 95]}], "orig": "$^{8}$We obtain the aliases hypernym...
null
62693f5e-2a68-4dc6-9215-cfcc0d77a24e
2302.06555v2.pdf
footnote
$^{9}$In the EN-CLDI set, we simply use words to mitigate the risk of train-test leakage.
null
437
40
72
image/png
5
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/56", "parent": {"cref": "#/body"}, "children": [], "label": "footnote", "prov": [{"page_no": 4, "bbox": {"l": 306.7635803222656, "t": 119.1673583984375, "r": 525.5420532226562, "b": 98.96624755859375, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 89]}], "orig": "$^{9}$In the EN-CLDI set, we simpl...
null
817261b3-6267-4805-89fa-7af4c9287a4f
2302.06555v2.pdf
footnote
$^{10}$For example, we could use the mapping of the image of an apple into the word ‘apple’, and the mapping of the image
null
439
42
72
image/png
5
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/57", "parent": {"cref": "#/body"}, "children": [], "label": "footnote", "prov": [{"page_no": 4, "bbox": {"l": 306.67706298828125, "t": 96.8956298828125, "r": 525.7845458984375, "b": 75.97491455078125, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 121]}], "orig": "$^{10}$For example, we could use ...
null
e038a730-a293-4950-a8fd-df077cd03f23
2302.06555v2.pdf
caption
Table 2: Alignment results for our baselines. All Precision@k scores are reported as percentages.
null
439
49
72
image/png
6
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/58", "parent": {"cref": "#/body"}, "children": [], "label": "caption", "prov": [{"page_no": 5, "bbox": {"l": 71.00687408447266, "t": 713.8568115234375, "r": 290.2685546875, "b": 689.1964721679688, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 101]}], "orig": "Table 2: Alignment results for our ba...
null
908f9f51-c9ec-454a-8c0e-c609f8146b96
2302.06555v2.pdf
table
<table><tbody><tr><th>Baseline</th><th>P@1</th><th>P@10</th><th>P@100</th></tr><tr><td>Random retrieval</td><td>0.0015</td><td>0.0153</td><td>0.1531</td></tr><tr><td>Length-frequency alignment</td><td>0.0032</td><td>0.0127</td><td>0.6053</td></tr><tr><td>Non-isomorphic alignment</td><td>0.0000</td><td>0.0121</td><td>0....
Table 2: Alignment results for our baselines. All Precision@k scores are reported as percentages.
437
111
72
image/png
6
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/tables/1", "parent": {"cref": "#/body"}, "children": [], "label": "table", "prov": [{"page_no": 5, "bbox": {"l": 72.87548065185547, "t": 779.512451171875, "r": 291.2668762207031, "b": 723.8392333984375, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 0]}], "captions": [{"cref": "#/texts/58"}], "reference...
null
9cffc2a5-950c-425e-848f-99d35b459371
2302.06555v2.pdf
text
is much more conservative than other metrics used for similar problems, including pairwise matching accuracy, percentile rank, and Pearson correlation (Minnema and Herbelot, 2019). Pairwise matching accuracy and percentile rank have random baseline scores of 0.5, and they converge in the limit. If a has a percentile ra...
null
441
485
72
image/png
6
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/59", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 5, "bbox": {"l": 71.28717041015625, "t": 664.7708740234375, "r": 292.0834045410156, "b": 422.3381652832031, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 887]}], "orig": "is much more conservative than other ...
null
5c634071-8d05-4683-9f05-c402eec59de6
2302.06555v2.pdf
text
Random retrieval baseline. Our target space of 79,059 words makes the random retrieval baseline:
null
442
49
72
image/png
6
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/60", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 5, "bbox": {"l": 71.1332015991211, "t": 412.76654052734375, "r": 291.7831726074219, "b": 388.6746520996094, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 96]}], "orig": "Random retrieval baseline. Our target ...
null
1278ce02-1320-42a5-9cfb-d30565291a10
2302.06555v2.pdf
formula
P@1 = \frac{1}{N} \sum_{i=1}^{N} \frac{n_i}{U} \quad (1)
null
301
70
72
image/png
6
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/61", "parent": {"cref": "#/body"}, "children": [], "label": "formula", "prov": [{"page_no": 5, "bbox": {"l": 140.55914306640625, "t": 377.66351318359375, "r": 290.9989929199219, "b": 342.2218017578125, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 33]}], "orig": "P@ 1 = 1 N N \u2211 i =1 n$_{i}$ ...
null
89af5241-b077-4675-971d-19f36ea358fa
2302.06555v2.pdf
text
where N represents the total number of image classes; i iterates over each image class; n$_{i}$ denotes the number of labels for image class i; and U refers to the total number of unique aliases. From Equation 1, we get P@1 ≈ 0.0015%.
null
441
130
72
image/png
6
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/62", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 5, "bbox": {"l": 71.25822448730469, "t": 331.0116271972656, "r": 292.0785217285156, "b": 265.8031005859375, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 234]}], "orig": "where N represents the total number o...
null
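Plugging the Exclude-1K counts from Table 1 (5,942 classes, 7,194 aliases) and the 79,059-word target space into Equation 1 reproduces the random-retrieval row of Table 2; a quick check (reading U as the expanded target vocabulary is my assumption):

```python
# Table 1 (Exclude-1K): 5,942 classes, 7,194 aliases; target space: 79,059 words.
N = 5942
total_aliases = 7194
U = 79059                     # assumption: U is the expanded target vocabulary
n_bar = total_aliases / N     # (1/N) * sum(n_i): average labels per class
p_at_1 = 100 * n_bar / U      # Equation 1, as a percentage
print(round(p_at_1, 4))       # 0.0015, matching Table 2
```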
55e1d195-297b-4a19-9e09-b0abb40ec9f9
2302.06555v2.pdf
text
Length-frequency alignment baseline. The random retrieval baseline tells us how well we can align representations across the two modalities in the absence of any signal (by chance). However, the fact that we can do better than a random baseline does not, strictly speaking, prove that our models partially converge towa...
null
442
213
72
image/png
6
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/63", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 5, "bbox": {"l": 71.22360229492188, "t": 255.29852294921875, "r": 292.08270263671875, "b": 149.0504150390625, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 386]}], "orig": "Length-frequency alignment baseline...
null
315395b8-2614-46b1-9ef5-e562835a112b
2302.06555v2.pdf
footnote
of a banana into the word ‘banana’, as training pairs to induce a mapping Ω. If Ω then maps the image of a lemon onto the word ‘lemon’ as its nearest neighbor, we say that the precision-at-one for this mapping is 100%. If two target aliases were listed in the bimodal dictionary for the source image, mapping the image o...
null
442
130
72
image/png
6
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/64", "parent": {"cref": "#/body"}, "children": [], "label": "footnote", "prov": [{"page_no": 5, "bbox": {"l": 70.99034881591797, "t": 141.355224609375, "r": 291.7580261230469, "b": 76.57489013671875, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 368]}], "orig": "of a banana into the word \u2018ba...
null
801cd3c0-c84e-4cd1-9371-710f36591cc2
2302.06555v2.pdf
caption
Figure 3: t-SNE plot of 5 words mapped from MAE$_{Huge}$ (blue) to OPT$_{30B}$ (orange) using Procrustes analysis. The green points represent the mapped MAE$_{Huge}$ embeddings.
null
442
105
72
image/png
6
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/65", "parent": {"cref": "#/body"}, "children": [], "label": "caption", "prov": [{"page_no": 5, "bbox": {"l": 306.365478515625, "t": 638.8778076171875, "r": 527.3575439453125, "b": 586.4058837890625, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 170]}], "orig": "Figure 3: t-SNE plot of 5 words map...
null
00842dd0-5503-45db-93eb-e97de5e70515
2302.06555v2.pdf
picture
null
Figure 3: t-SNE plot of 5 words mapped from MAE$_{Huge}$ (blue) to OPT$_{30B}$ (orange) using Procrustes analysis. The green points represent the mapped MAE$_{Huge}$ embeddings.
432
255
72
image/png
6
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/pictures/9", "parent": {"cref": "#/body"}, "children": [], "label": "picture", "prov": [{"page_no": 5, "bbox": {"l": 308.20428466796875, "t": 778.9642944335938, "r": 524.2200927734375, "b": 651.4071655273438, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 170]}], "captions": [{"cref": "#/texts/65"}], "r...
null
a114e3d5-0d2c-4bc9-96a2-f3c4388fe4b6
2302.06555v2.pdf
text
pick up on shallow characteristics shared across the two spaces. One example is frequency: frequent words may refer to frequently depicted objects. Learning what is rare is learning about the world, but more is at stake in the debate around whether LMs understand. Or consider length: word length may correlate with the ...
null
443
509
72
image/png
6
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/66", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 5, "bbox": {"l": 306.16961669921875, "t": 560.0623779296875, "r": 527.3591918945312, "b": 305.34613037109375, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 927]}], "orig": "pick up on shallow characteristics ...
null
e58f7f49-4941-4c85-a409-6e922bdc74fb
2302.06555v2.pdf
text
Non-isomorphic alignment baseline. The former two baselines examine the possibility of aligning representations across two modalities based on chance or shallow signals. While informative, neither strictly demonstrates that a linear projection cannot effectively establish a connection between two non-isomorphic represe...
null
443
428
72
image/png
6
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/67", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 5, "bbox": {"l": 306.20733642578125, "t": 289.99688720703125, "r": 527.4515380859375, "b": 76.0753173828125, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 786]}], "orig": "Non-isomorphic alignment baseline. T...
null
46f07d41-0be5-4d86-9cf0-7991da5d9684
2302.06555v2.pdf
caption
Figure 4: LMs converge toward the geometry of visual models as they grow larger, on the Exclude-1K set.
null
893
21
72
image/png
7
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/68", "parent": {"cref": "#/body"}, "children": [], "label": "caption", "prov": [{"page_no": 6, "bbox": {"l": 73.5337905883789, "t": 389.9848327636719, "r": 519.9508056640625, "b": 379.31463623046875, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 98]}], "orig": "Figure 4: LMs converge toward the g...
null
0ab8b3df-d493-4755-90ce-a662ff499418
2302.06555v2.pdf
picture
null
Figure 4: LMs converge toward the geometry of visual models as they grow larger, on the Exclude-1K set.
853
689
72
image/png
7
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/pictures/10", "parent": {"cref": "#/body"}, "children": [], "label": "picture", "prov": [{"page_no": 6, "bbox": {"l": 77.1706314086914, "t": 752.3418579101562, "r": 503.3846435546875, "b": 408.00994873046875, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 98]}], "captions": [{"cref": "#/texts/68"}], "re...
null
1a77051c-a165-494b-ba02-f228783e0409
2302.06555v2.pdf
text
computing the alignment. Table 2 presents a comparison of the three different baselines. All baselines have P@100 well below 1%. Our mappings between VMs and LMs score much higher (up to 64%), showing the strength of the correlation between the geometries induced by these models with respect to a conservative performan...
null
441
185
72
image/png
7
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/69", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 6, "bbox": {"l": 71.34822082519531, "t": 354.8316955566406, "r": 292.0813903808594, "b": 262.27301025390625, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 330]}], "orig": "computing the alignment. Table 2 pre...
null
84a88245-492a-4513-8107-e1c0ba51c59d
2302.06555v2.pdf
section_header
5 Results
null
112
23
72
image/png
7
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/70", "parent": {"cref": "#/body"}, "children": [], "label": "section_header", "prov": [{"page_no": 6, "bbox": {"l": 71.40553283691406, "t": 246.7689208984375, "r": 127.26032257080078, "b": 235.4713592529297, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 9]}], "orig": "5 Results", "text": "5 Resul...
null
8442b477-b8e0-451c-83fe-54f5c4bac58c
2302.06555v2.pdf
text
Similarities between visual and textual representations and how they are recovered through Procrustes Analysis are visualized through t-SNE in Figure 3. Our main results for nine VMs and all LMs are presented in Figure 4. The best P@100 scores are around 64%, with baseline scores lower than 1% (Table 2). In general, ev...
null
442
295
72
image/png
7
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/71", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 6, "bbox": {"l": 71.2156753540039, "t": 222.7291259765625, "r": 292.0754089355469, "b": 75.686279296875, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 527]}], "orig": "Similarities between visual and textual ...
null
e4b5caf4-460f-4b2f-af09-49e178d47915
2302.06555v2.pdf
text
an artifact such as a vehicle may be denoted by many lexemes (car, automobile, SUV, etc.), each of which may have multiple inflections and derivations (car, cars, car's, etc.). Figure 5 shows examples where the top predictions seem 'as good' as the gold standard. We find that a region of 10 neighbours corresponds rough...
null
442
536
72
image/png
7
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/72", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 6, "bbox": {"l": 306.32073974609375, "t": 354.6774597167969, "r": 527.4515380859375, "b": 86.65802001953125, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 970]}], "orig": "an artifact such as a vehicle may be...
null
2d965b10-92ed-40d9-bbed-0865ffec6183
2302.06555v2.pdf
paragraph
Image Classes
null
118
20
72
image/png
8
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/73", "parent": {"cref": "#/body"}, "children": [], "label": "paragraph", "prov": [{"page_no": 7, "bbox": {"l": 71.53941345214844, "t": 774.43701171875, "r": 130.51449584960938, "b": 764.5503540039062, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 13]}], "orig": "Image Classes", "text": "Image Cla...
null
50c5d387-e5f5-4af4-b690-5fd28eb9a8ae
2302.06555v2.pdf
section_header
Nearest Neighbors (Top 100)
null
233
20
72
image/png
8
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/74", "parent": {"cref": "#/body"}, "children": [], "label": "section_header", "prov": [{"page_no": 7, "bbox": {"l": 195.791259765625, "t": 774.43701171875, "r": 312.44464111328125, "b": 764.481201171875, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 27]}], "orig": "Nearest Neighbors (Top 100)", "...
null
a44f83d5-3acc-4bee-a5c9-721fedfa3748
2302.06555v2.pdf
picture
null
null
119
101
72
image/png
8
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/pictures/11", "parent": {"cref": "#/body"}, "children": [], "label": "picture", "prov": [{"page_no": 7, "bbox": {"l": 71.45142364501953, "t": 762.3618774414062, "r": 131.0451202392578, "b": 712.0277099609375, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 0]}], "captions": [], "references": [], "footnot...
null
aa678786-5264-43ad-a1ef-2f3a7e9f2b7e
2302.06555v2.pdf
picture
null
null
119
109
72
image/png
8
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/pictures/12", "parent": {"cref": "#/body"}, "children": [], "label": "picture", "prov": [{"page_no": 7, "bbox": {"l": 71.35606384277344, "t": 704.283935546875, "r": 130.9281768798828, "b": 650.0250244140625, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 0]}], "captions": [], "references": [], "footnote...
null
f17e02d8-34f7-457b-96c8-91c8487bc4d5
2302.06555v2.pdf
picture
null
null
119
101
72
image/png
8
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/pictures/13", "parent": {"cref": "#/body"}, "children": [], "label": "picture", "prov": [{"page_no": 7, "bbox": {"l": 71.25289154052734, "t": 645.5927734375, "r": 131.10714721679688, "b": 594.8673095703125, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 0]}], "captions": [], "references": [], "footnotes...
null
2a4dbed3-c4be-46e1-918b-042f4b54da0d
2302.06555v2.pdf
picture
null
null
119
100
72
image/png
8
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/pictures/14", "parent": {"cref": "#/body"}, "children": [], "label": "picture", "prov": [{"page_no": 7, "bbox": {"l": 71.36942291259766, "t": 587.2476806640625, "r": 130.98825073242188, "b": 537.20166015625, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 0]}], "captions": [], "references": [], "footnote...
null
985b2396-eabc-4d66-9946-e641632253c2
2302.06555v2.pdf
text
palmyra, palmyra palm, palm, palais, palatines, royal palm , palazzi, palazzo, palisades, palatinate, regency, palatial, palas, palatinates, palms, palimony, caribe, palmier, paladins, banyan tree, bermudas, bruneian, palazzos, bahamian, palmers, malacca, madeira, ceiba tree, palmettos, palmtop, oil palm, pal, royal, r...
null
639
95
72
image/png
8
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/75", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 7, "bbox": {"l": 195.43907165527344, "t": 760.7074584960938, "r": 514.689697265625, "b": 713.5289916992188, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 647]}], "orig": "palmyra, palmyra palm, palm, palais, ...
null
402ad2ae-7330-49bf-b722-cf3f2833f75b
2302.06555v2.pdf
text
drinking fountain , water fountain , cesspools, water cooler, manhole cover, bird feeder, birdbath, water jug, drainage system, fountain, water tap, watering can, garbage disposal, cesspit, recycling bin, water tank, garbage can, water pipe, manhole, toilet bowl, water closet, cement mixer, trash bin, soda fountain, bu...
null
636
95
72
image/png
8
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/76", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 7, "bbox": {"l": 195.55520629882812, "t": 702.2905883789062, "r": 513.342041015625, "b": 654.7896728515625, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 717]}], "orig": "drinking fountain , water fountain , ...
null
5635b7b6-31c2-46b3-bd21-153192905bcc
2302.06555v2.pdf
text
clamp, wrench, screwdriver, socket wrench, carabiner , torque wrench, screwdrivers, fastener, elastic bandage, pliers, retractor, screw thread, carabiners, plunger, spanner, corer, screw, aspirator, clamps, adjustable spanner, applicator, center punch, latch, extractor, lever, adaptor, hose, gripper, compensator, pipe ...
null
634
95
72
image/png
8
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/77", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 7, "bbox": {"l": 195.4573974609375, "t": 643.7977905273438, "r": 512.2069091796875, "b": 596.356689453125, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 665]}], "orig": "clamp, wrench, screwdriver, socket wre...
null
02fdcde5-29ec-4075-beb5-e487b6f1d1af
2302.06555v2.pdf
text
community center, training school, school, youth hostel, service department, conference center, music school, day school, student union , academy, life office, hall, orphanage, school system, meeting, college, ministry, school principal, government building, house, council, clinic, business office, schoolmaster, worksh...
null
643
94
72
image/png
8
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/78", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 7, "bbox": {"l": 195.45948791503906, "t": 585.3809204101562, "r": 516.7018432617188, "b": 538.2024536132812, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 631]}], "orig": "community center, training school, s...
null
df877eec-5049-4528-b781-622bae4f52bb
2302.06555v2.pdf
picture
null
null
120
101
72
image/png
8
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/pictures/15", "parent": {"cref": "#/body"}, "children": [], "label": "picture", "prov": [{"page_no": 7, "bbox": {"l": 132.97372436523438, "t": 762.504150390625, "r": 192.79490661621094, "b": 712.1260375976562, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 0]}], "captions": [], "references": [], "footno...
null
c4b4432c-40b2-40db-8e20-c6d1c3c4c07e
2302.06555v2.pdf
picture
null
null
120
109
72
image/png
8
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/pictures/16", "parent": {"cref": "#/body"}, "children": [], "label": "picture", "prov": [{"page_no": 7, "bbox": {"l": 132.55625915527344, "t": 704.5750732421875, "r": 192.52426147460938, "b": 649.9854125976562, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 0]}], "captions": [], "references": [], "footn...
null
4feea742-44e4-42a3-9b88-999e8919e6ef
2302.06555v2.pdf
picture
null
null
119
101
72
image/png
8
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/pictures/17", "parent": {"cref": "#/body"}, "children": [], "label": "picture", "prov": [{"page_no": 7, "bbox": {"l": 132.8827362060547, "t": 645.2271118164062, "r": 192.2962646484375, "b": 594.8204956054688, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 0]}], "captions": [], "references": [], "footnot...
null
80a9e68e-cf48-473c-9b83-3729fc5b71ac
2302.06555v2.pdf
caption
Figure 5: Examples featuring the 100 nearest neighbors in the mapping of image classes into the language representation space (from MAE$_{Huge}$ to OPT$_{30B}$). The gold labels are highlighted in green.
null
908
49
72
image/png
8
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/79", "parent": {"cref": "#/body"}, "children": [], "label": "caption", "prov": [{"page_no": 7, "bbox": {"l": 71.26412200927734, "t": 521.2467041015625, "r": 525.5408325195312, "b": 497.1179504394531, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 205]}], "orig": "Figure 5: Examples featuring the 1...
null

Dataset Card for Dataset Name

Dataset Details

Dataset Description

  • Curated by: [More Information Needed]
  • Funded by [optional]: [More Information Needed]
  • Shared by [optional]: [More Information Needed]
  • Language(s) (NLP): [More Information Needed]
  • License: [More Information Needed]

Dataset Sources [optional]

  • Repository: [More Information Needed]
  • Paper [optional]: [More Information Needed]
  • Demo [optional]: [More Information Needed]

Uses

Direct Use

[More Information Needed]
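One direct use suggested by the preview rows above: each record carries DocLing-style layout JSON in its `raw_response` field, from which element provenance (page and bounding box) can be recovered. A minimal sketch, with a record abridged from the rows above (field names follow the preview, not any official schema):

```python
import json

# a record abridged from the preview rows above
record = {
    "id": "65f280ac-2ac5-43b8-903c-d0c2b8f47d32",
    "filename": "2302.06555v2.pdf",
    "label": "caption",
    "text": "Table 1: Statistics of the bimodal dictionaries.",
    "page_no": 5,
    "raw_response": json.dumps({
        "self_ref": "#/texts/48",
        "label": "caption",
        "prov": [{"page_no": 4,
                  "bbox": {"l": 313.86, "t": 716.15, "r": 517.95, "b": 705.60}}],
    }),
}

doc = json.loads(record["raw_response"])
prov = doc["prov"][0]
# note: the viewer's page_no column (5) is offset by one from prov's page_no (4)
print(doc["self_ref"], prov["page_no"])  # #/texts/48 4
```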

Out-of-Scope Use

[More Information Needed]

Dataset Structure

Each record describes one element extracted from a source PDF. Columns (as shown in the viewer): `id` (UUID string), `filename`, `label` (one of 13 element classes, e.g. `text`, `footnote`, `caption`, `table`, `formula`, `picture`, `section_header`, `paragraph`, `page_header`), `text`, `caption_text`, `image` with `width`/`height`/`dpi`/`mimetype`, `page_no`, `mime_type`, `version`, `tags`, `properties`, `error`, `raw_response` (layout JSON with bounding-box provenance), and `synced_at`.

Dataset Creation

Curation Rationale

[More Information Needed]

Source Data

Data Collection and Processing

[More Information Needed]

Who are the source data producers?

[More Information Needed]

Annotations [optional]

Annotation process

[More Information Needed]

Who are the annotators?

[More Information Needed]

Personal and Sensitive Information

[More Information Needed]

Bias, Risks, and Limitations

[More Information Needed]

Recommendations

Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

Citation [optional]

BibTeX:

[More Information Needed]

APA:

[More Information Needed]

Glossary [optional]

[More Information Needed]

More Information [optional]

[More Information Needed]

Dataset Card Authors [optional]

[More Information Needed]

Dataset Card Contact

[More Information Needed]
