jaagli committed on
Commit 6e1abd3 · verified · 1 Parent(s): 3c687bb

Update README.md

Files changed (1):
  1. README.md +15 -8

README.md CHANGED
@@ -1735,14 +1735,21 @@ common_words = load_dataset("jaagli/en-cldi", split="train")
1735
  # Citation
1736
 
1737
  ```
1738
- @misc{li2024visionlanguagemodelsshare,
1739
- title={Do Vision and Language Models Share Concepts? A Vector Space Alignment Study},
1740
- author={Jiaang Li and Yova Kementchedjhieva and Constanza Fierro and Anders Søgaard},
1741
- year={2024},
1742
- eprint={2302.06555},
1743
- archivePrefix={arXiv},
1744
- primaryClass={cs.CL},
1745
- url={https://arxiv.org/abs/2302.06555},
 
 
 
 
 
 
 
1746
  }
1747
  ```
1748
  ```
 
1735
  # Citation
1736
 
1737
  ```
1738
+ @article{li-etal-2024-vision-language,
1739
+ title = "Do Vision and Language Models Share Concepts? A Vector Space Alignment Study",
1740
+ author = "Li, Jiaang and
1741
+ Kementchedjhieva, Yova and
1742
+ Fierro, Constanza and
1743
+ S{\o}gaard, Anders",
1744
+ journal = "Transactions of the Association for Computational Linguistics",
1745
+ volume = "12",
1746
+ year = "2024",
1747
+ address = "Cambridge, MA",
1748
+ publisher = "MIT Press",
1749
+ url = "https://aclanthology.org/2024.tacl-1.68/",
1750
+ doi = "10.1162/tacl_a_00698",
1751
+ pages = "1232--1249",
1752
+ abstract = "Large-scale pretrained language models (LMs) are said to {\textquotedblleft}lack the ability to connect utterances to the world{\textquotedblright} (Bender and Koller, 2020), because they do not have {\textquotedblleft}mental models of the world{\textquotedblright} (Mitchell and Krakauer, 2023). If so, one would expect LM representations to be unrelated to representations induced by vision models. We present an empirical evaluation across four families of LMs (BERT, GPT-2, OPT, and LLaMA-2) and three vision model architectures (ResNet, SegFormer, and MAE). Our experiments show that LMs partially converge towards representations isomorphic to those of vision models, subject to dispersion, polysemy, and frequency. This has important implications for both multi-modal processing and the LM understanding debate (Mitchell and Krakauer, 2023).1"
1753
  }
1754
  ```
1755
  ```