Update README.md

README.md CHANGED
@@ -7,7 +7,7 @@ language_creators:
 - machine-generated
 multilinguality:
 - multilingual
-pretty_name:
+pretty_name: Polyglot or Not? Fact-Completion Benchmark
 size_categories:
 - 100K<n<1M
 task_categories:

@@ -128,11 +128,21 @@ language:
 
 ### Dataset Summary
 
-This
+This is the dataset for **Polyglot or Not?: Measuring Multilingual Encyclopedic Knowledge Retrieval from Foundation Language Models**.
 
-###
+### Test Description
 
-
+Given a factual association such as *The capital of France is **Paris***, we determine whether a model adequately "knows" this information with the following test:
+
+* Step **1**: prompt the model to predict the likelihood of the token **Paris** following *The capital of France is*
+
+* Step **2**: prompt the model to predict the average likelihood of a set of false, counterfactual tokens following the same stem.
+
+If the value from **1** is greater than the value from **2**, we conclude that the model adequately recalls that fact. Formally, this is an application of the Contrastive Knowledge Assessment proposed in [[1][bib]].
+
+For every foundation model of interest (like [LLaMA](https://arxiv.org/abs/2302.13971)), we perform this assessment on a set of facts translated into 20 languages. All told, we score foundation models on 303k fact-completions ([results](https://github.com/daniel-furman/capstone#multilingual-fact-completion-results)).
+
+We also score monolingual models (like [GPT-2](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)) on English-only fact-completion ([results](https://github.com/daniel-furman/capstone#english-fact-completion-results)).
 
 ### Languages
 
@@ -209,12 +219,11 @@ This dataset card aims to be a base template for new datasets. It has been gener
 ### Citation Information
 
 ```
-@misc{
-author = {
-title = {
-year = {2023}
+@misc{polyglot_or_not,
+author = {Daniel Furman and Tim Schott and Shreshta Bhat},
+title = {Polyglot or Not?: Measuring Multilingual Encyclopedic Knowledge Retrieval from Foundation Language Models},
+year = {2023},
 publisher = {GitHub},
-journal = {GitHub repository},
 howpublished = {\url{https://github.com/daniel-furman/Capstone}},
 }
 ```
@@ -243,26 +252,3 @@ This dataset card aims to be a base template for new datasets. It has been gener
 }
 ```
 
-```
-@inproceedings{elsahar-etal-2018-rex,
-title = "{T}-{RE}x: A Large Scale Alignment of Natural Language with Knowledge Base Triples",
-author = "Elsahar, Hady and
-Vougiouklis, Pavlos and
-Remaci, Arslen and
-Gravier, Christophe and
-Hare, Jonathon and
-Laforest, Frederique and
-Simperl, Elena",
-booktitle = "Proceedings of the Eleventh International Conference on Language Resources and Evaluation ({LREC} 2018)",
-month = may,
-year = "2018",
-address = "Miyazaki, Japan",
-publisher = "European Language Resources Association (ELRA)",
-url = "https://aclanthology.org/L18-1544",
-}
-
-```
-
-### Contributions
-
-[More Information Needed]
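The Contrastive Knowledge Assessment added in this revision reduces to a simple comparison of completion probabilities. The sketch below is illustrative only: `cka_passes` and the hard-coded log-probabilities are hypothetical, standing in for the likelihoods a real language model would assign in Steps 1 and 2.

```python
import math

def cka_passes(true_logprob: float, counterfactual_logprobs: list[float]) -> bool:
    """Contrastive Knowledge Assessment: a model 'knows' a fact when the
    probability of the true completion exceeds the mean probability of the
    false, counterfactual completions."""
    mean_false = sum(math.exp(lp) for lp in counterfactual_logprobs) / len(
        counterfactual_logprobs
    )
    return math.exp(true_logprob) > mean_false

# Hypothetical probabilities for completions of "The capital of France is":
# Paris gets 0.60; counterfactuals London, Rome, Berlin get 0.05, 0.02, 0.01.
paris = math.log(0.60)
counterfactuals = [math.log(0.05), math.log(0.02), math.log(0.01)]
print(cka_passes(paris, counterfactuals))  # True: the fact is adequately recalled
```

In practice the two quantities would come from a causal language model's token log-probabilities over the fact's stem; here they are fixed numbers so the decision rule itself is visible.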