We evaluate LLMs in two formats: short-answer questions, and multiple-choice questions.
We show that LLMs perform better in cultures that are more present online, with a maximum difference of 57.34% for GPT-4, the best-performing model, in the short-answer format.
Furthermore, we find that LLMs perform better in their local languages for mid-to-high-resource languages. Interestingly, for languages deemed to be low-resource, LLMs provide better answers in English.

## Requirements
```
datasets >= 2.19.2
pandas >= 2.1.4
```
## Dataset
All the data samples for short-answer questions, including the human-annotated answers, can be found in the `data/` directory.
Specifically, the annotations from each country are included in the `annotations` split, and each country/region's data can be accessed by **[country codes](https://huggingface.co/datasets/nayeon212/BLEnD#countryregion-codes)**.