id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 68.7k ⌀ | citation stringlengths 0 10.7k ⌀ | cardData null | likes int64 0 3.55k | downloads int64 0 10.1M | card stringlengths 0 1.01M |
|---|---|---|---|---|---|---|---|---|---|
totally-not-an-llm/EverythingLM-data-V3 | 2023-09-11T02:54:38.000Z | [
"license:mit",
"region:us"
] | totally-not-an-llm | null | null | null | 12 | 294 | ---
license: mit
---
# EverythingLM V3 Dataset
**EverythingLM V3** is a diverse instruct dataset consisting of roughly 1.1k of sysprompt-user-assistant triads. These were generated using principles from both evol-instruct and Orca. The dataset encompasses a wide array of topics and interactions.
### Diferences from V2
* Used march gpt-4 instead of latest
* Dynamically adjusted temperature based on the task
* Much more diverse (8 new categories)
* Flesch hints
* 10% more data
* Better filtering
* Overall refined dataset generation pipeline
### Category distribution

\*These values represent the data as generated, but slight filtering has been applied, so values might be a bit different. |
clarin-pl/kpwr-ner | 2023-01-30T22:54:02.000Z | [
"task_categories:other",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:18K",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:pl",
"license:cc-by-3.0",
"structure-prediction",
"region:us"
] | clarin-pl | KPWR-NER tagging dataset. | null | null | 6 | 293 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- pl
license:
- cc-by-3.0
multilinguality:
- monolingual
size_categories:
- 18K
- 10K<n<100K
source_datasets:
- original
task_categories:
- other
task_ids:
- named-entity-recognition
pretty_name: KPWr-NER
tags:
- structure-prediction
---
# KPWR-NER
## Description
KPWR-NER is a part the Polish Corpus of Wrocław University of Technology (*Korpus Języka Polskiego Politechniki Wrocławskiej*). Its objective is named entity recognition for fine-grained categories of entities. It is the ‘n82’ version of the KPWr, which means that number of classes is restricted to 82 (originally 120). During corpus creation, texts were annotated by humans from various sources, covering many domains and genres.
## Tasks (input, output and metrics)
Named entity recognition (NER) - tagging entities in text with their corresponding type.
**Input** ('*tokens'* column): sequence of tokens
**Output** ('*ner'* column): sequence of predicted tokens’ classes in BIO notation (82 possible classes, described in detail in the annotation guidelines)
**Measurements**: F1-score (seqeval)
**Example**:
Input: `[‘Roboty’, ‘mają’, ‘kilkanaście’, ‘lat’, ‘i’, ‘pochodzą’, ‘z’, ‘USA’, ‘,’, ‘Wysokie’, ‘napięcie’, ‘jest’, ‘dużo’, ‘młodsze’, ‘,’, ‘powstało’, ‘w’, ‘Niemczech’, ‘.’]`
Input (translated by DeepL): `Robots are more than a dozen years old and come from the US, High Voltage is much younger, having been developed in Germany.`
Output: `[‘B-nam_pro_title’, ‘O’, ‘O’, ‘O’, ‘O’, ‘O’, ‘O’, ‘B-nam_loc_gpe_country’, ‘O’, ‘B-nam_pro_title’, ‘I-nam_pro_title’, ‘O’, ‘O’, ‘O’, ‘O’, ‘O’, ‘O’, ‘B-nam_loc_gpe_country’, ‘O’]`
## Data splits
| Subset | Cardinality (sentences) |
|--------|------------------------:|
| train | 13959 |
| dev | 0 |
| test | 4323 |
## Class distribution (without "O" and "I-*")
| Class | train | validation | test |
|:----------------------------|--------:|-------------:|----------:|
| B-nam_liv_person | 0.21910 | - | 0.21422 |
| B-nam_loc_gpe_city | 0.10101 | - | 0.09865 |
| B-nam_loc_gpe_country | 0.07467 | - | 0.08059 |
| B-nam_org_institution | 0.05893 | - | 0.06005 |
| B-nam_org_organization | 0.04448 | - | 0.05553 |
| B-nam_org_group_team | 0.03492 | - | 0.03363 |
| B-nam_adj_country | 0.03410 | - | 0.03747 |
| B-nam_org_company | 0.02439 | - | 0.01716 |
| B-nam_pro_media_periodic | 0.02250 | - | 0.01896 |
| B-nam_fac_road | 0.01995 | - | 0.02144 |
| B-nam_liv_god | 0.01934 | - | 0.00790 |
| B-nam_org_nation | 0.01739 | - | 0.01828 |
| B-nam_oth_tech | 0.01724 | - | 0.01377 |
| B-nam_pro_media_web | 0.01709 | - | 0.00903 |
| B-nam_fac_goe | 0.01596 | - | 0.01445 |
| B-nam_eve_human | 0.01573 | - | 0.01761 |
| B-nam_pro_title | 0.01558 | - | 0.00790 |
| B-nam_pro_brand | 0.01543 | - | 0.01038 |
| B-nam_org_political_party | 0.01264 | - | 0.01309 |
| B-nam_loc_gpe_admin1 | 0.01219 | - | 0.01445 |
| B-nam_eve_human_sport | 0.01174 | - | 0.01242 |
| B-nam_pro_software | 0.01091 | - | 0.02190 |
| B-nam_adj | 0.00963 | - | 0.01174 |
| B-nam_loc_gpe_admin3 | 0.00888 | - | 0.01061 |
| B-nam_pro_model_car | 0.00873 | - | 0.00587 |
| B-nam_loc_hydronym_river | 0.00843 | - | 0.01151 |
| B-nam_oth | 0.00775 | - | 0.00497 |
| B-nam_pro_title_document | 0.00738 | - | 0.01986 |
| B-nam_loc_astronomical | 0.00730 | - | - |
| B-nam_oth_currency | 0.00723 | - | 0.01151 |
| B-nam_adj_city | 0.00670 | - | 0.00948 |
| B-nam_org_group_band | 0.00587 | - | 0.00429 |
| B-nam_loc_gpe_admin2 | 0.00565 | - | 0.00813 |
| B-nam_loc_gpe_district | 0.00504 | - | 0.00406 |
| B-nam_loc_land_continent | 0.00459 | - | 0.00722 |
| B-nam_loc_country_region | 0.00459 | - | 0.00090 |
| B-nam_loc_land_mountain | 0.00414 | - | 0.00203 |
| B-nam_pro_title_book | 0.00384 | - | 0.00248 |
| B-nam_loc_historical_region | 0.00376 | - | 0.00497 |
| B-nam_loc | 0.00361 | - | 0.00090 |
| B-nam_eve | 0.00361 | - | 0.00181 |
| B-nam_org_group | 0.00331 | - | 0.00406 |
| B-nam_loc_land_island | 0.00331 | - | 0.00248 |
| B-nam_pro_media_tv | 0.00316 | - | 0.00158 |
| B-nam_liv_habitant | 0.00316 | - | 0.00158 |
| B-nam_eve_human_cultural | 0.00316 | - | 0.00497 |
| B-nam_pro_title_tv | 0.00309 | - | 0.00542 |
| B-nam_oth_license | 0.00286 | - | 0.00248 |
| B-nam_num_house | 0.00256 | - | 0.00248 |
| B-nam_pro_title_treaty | 0.00248 | - | 0.00045 |
| B-nam_fac_system | 0.00248 | - | 0.00587 |
| B-nam_loc_gpe_subdivision | 0.00241 | - | 0.00587 |
| B-nam_loc_land_region | 0.00226 | - | 0.00248 |
| B-nam_pro_title_album | 0.00218 | - | 0.00158 |
| B-nam_adj_person | 0.00203 | - | 0.00406 |
| B-nam_fac_square | 0.00196 | - | 0.00135 |
| B-nam_pro_award | 0.00188 | - | 0.00519 |
| B-nam_eve_human_holiday | 0.00188 | - | 0.00203 |
| B-nam_pro_title_song | 0.00166 | - | 0.00158 |
| B-nam_pro_media_radio | 0.00151 | - | 0.00068 |
| B-nam_pro_vehicle | 0.00151 | - | 0.00090 |
| B-nam_oth_position | 0.00143 | - | 0.00226 |
| B-nam_liv_animal | 0.00143 | - | 0.00248 |
| B-nam_pro | 0.00135 | - | 0.00045 |
| B-nam_oth_www | 0.00120 | - | 0.00451 |
| B-nam_num_phone | 0.00120 | - | 0.00045 |
| B-nam_pro_title_article | 0.00113 | - | - |
| B-nam_oth_data_format | 0.00113 | - | 0.00226 |
| B-nam_fac_bridge | 0.00105 | - | 0.00090 |
| B-nam_liv_character | 0.00098 | - | - |
| B-nam_pro_software_game | 0.00090 | - | 0.00068 |
| B-nam_loc_hydronym_lake | 0.00090 | - | 0.00045 |
| B-nam_loc_gpe_conurbation | 0.00090 | - | - |
| B-nam_pro_media | 0.00083 | - | 0.00181 |
| B-nam_loc_land | 0.00075 | - | 0.00045 |
| B-nam_loc_land_peak | 0.00075 | - | - |
| B-nam_fac_park | 0.00068 | - | 0.00226 |
| B-nam_org_organization_sub | 0.00060 | - | 0.00068 |
| B-nam_loc_hydronym | 0.00060 | - | 0.00023 |
| B-nam_loc_hydronym_sea | 0.00045 | - | 0.00068 |
| B-nam_loc_hydronym_ocean | 0.00045 | - | 0.00023 |
| B-nam_fac_goe_stop | 0.00038 | - | 0.00090 |
## Citation
```
@inproceedings{broda-etal-2012-kpwr,
title = "{KPW}r: Towards a Free Corpus of {P}olish",
author = "Broda, Bartosz and
Marci{\'n}czuk, Micha{\l} and
Maziarz, Marek and
Radziszewski, Adam and
Wardy{\'n}ski, Adam",
booktitle = "Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}'12)",
month = may,
year = "2012",
address = "Istanbul, Turkey",
publisher = "European Language Resources Association (ELRA)",
url = "http://www.lrec-conf.org/proceedings/lrec2012/pdf/965_Paper.pdf",
pages = "3218--3222",
abstract = "This paper presents our efforts aimed at collecting and annotating a free Polish corpus. The corpus will serve for us as training and testing material for experiments with Machine Learning algorithms. As others may also benefit from the resource, we are going to release it under a Creative Commons licence, which is hoped to remove unnecessary usage restrictions, but also to facilitate reproduction of our experimental results. The corpus is being annotated with various types of linguistic entities: chunks and named entities, selected syntactic and semantic relations, word senses and anaphora. We report on the current state of the project as well as our ultimate goals.",
}
```
## License
```
Creative Commons Attribution 3.0 Unported Licence
```
## Links
[HuggingFace](https://huggingface.co/datasets/clarin-pl/kpwr-ner)
[Source](https://clarin-pl.eu/index.php/kpwr-en/)
[Paper](https://aclanthology.org/L12-1574/)
[KPWr annotation guidelines](http://www.nlp.pwr.wroc.pl/narzedzia-i-zasoby/zasoby/kpwr-lemma/16-narzedzia-zasoby/79-wytyczne)
[KPWr annotation guidelines - named entities](https://clarin-pl.eu/dspace/handle/11321/294)
## Examples
### Loading
```python
from pprint import pprint
from datasets import load_dataset
dataset = load_dataset("clarin-pl/kpwr-ner")
pprint(dataset['train'][0])
# {'lemmas': ['roborally', 'czy', 'wysoki', 'napięcie', '?'],
# 'ner': [73, 160, 73, 151, 160],
# 'orth': ['subst:sg:nom:n',
# 'qub',
# 'adj:sg:nom:n:pos',
# 'subst:sg:nom:n',
# 'interp'],
# 'tokens': ['RoboRally', 'czy', 'Wysokie', 'napięcie', '?']}
```
### Evaluation
```python
import random
from pprint import pprint
from datasets import load_dataset, load_metric
dataset = load_dataset("clarin-pl/kpwr-ner")
references = dataset["test"]["ner"]
# generate random predictions
predictions = [
[
random.randrange(dataset["train"].features["ner"].feature.num_classes)
for _ in range(len(labels))
]
for labels in references
]
# transform to original names of labels
references_named = [
[dataset["train"].features["ner"].feature.names[label] for label in labels]
for labels in references
]
predictions_named = [
[dataset["train"].features["ner"].feature.names[label] for label in labels]
for labels in predictions
]
# utilise seqeval to evaluate
seqeval = load_metric("seqeval")
seqeval_score = seqeval.compute(
predictions=predictions_named, references=references_named, scheme="IOB2"
)
pprint(seqeval_score, depth=1)
# {'nam_adj': {...},
# 'nam_adj_city': {...},
# 'nam_adj_country': {...},
# 'nam_adj_person': {...},
# 'nam_eve': {...},
# 'nam_eve_human': {...},
# 'nam_eve_human_cultural': {...},
# 'nam_eve_human_holiday': {...},
# 'nam_eve_human_sport': {...},
# 'nam_fac_bridge': {...},
# 'nam_fac_goe': {...},
# 'nam_fac_goe_stop': {...},
# 'nam_fac_park': {...},
# 'nam_fac_road': {...},
# 'nam_fac_square': {...},
# 'nam_fac_system': {...},
# 'nam_liv_animal': {...},
# 'nam_liv_character': {...},
# 'nam_liv_god': {...},
# 'nam_liv_habitant': {...},
# 'nam_liv_person': {...},
# 'nam_loc': {...},
# 'nam_loc_astronomical': {...},
# 'nam_loc_country_region': {...},
# 'nam_loc_gpe_admin1': {...},
# 'nam_loc_gpe_admin2': {...},
# 'nam_loc_gpe_admin3': {...},
# 'nam_loc_gpe_city': {...},
# 'nam_loc_gpe_conurbation': {...},
# 'nam_loc_gpe_country': {...},
# 'nam_loc_gpe_district': {...},
# 'nam_loc_gpe_subdivision': {...},
# 'nam_loc_historical_region': {...},
# 'nam_loc_hydronym': {...},
# 'nam_loc_hydronym_lake': {...},
# 'nam_loc_hydronym_ocean': {...},
# 'nam_loc_hydronym_river': {...},
# 'nam_loc_hydronym_sea': {...},
# 'nam_loc_land': {...},
# 'nam_loc_land_continent': {...},
# 'nam_loc_land_island': {...},
# 'nam_loc_land_mountain': {...},
# 'nam_loc_land_peak': {...},
# 'nam_loc_land_region': {...},
# 'nam_num_house': {...},
# 'nam_num_phone': {...},
# 'nam_org_company': {...},
# 'nam_org_group': {...},
# 'nam_org_group_band': {...},
# 'nam_org_group_team': {...},
# 'nam_org_institution': {...},
# 'nam_org_nation': {...},
# 'nam_org_organization': {...},
# 'nam_org_organization_sub': {...},
# 'nam_org_political_party': {...},
# 'nam_oth': {...},
# 'nam_oth_currency': {...},
# 'nam_oth_data_format': {...},
# 'nam_oth_license': {...},
# 'nam_oth_position': {...},
# 'nam_oth_tech': {...},
# 'nam_oth_www': {...},
# 'nam_pro': {...},
# 'nam_pro_award': {...},
# 'nam_pro_brand': {...},
# 'nam_pro_media': {...},
# 'nam_pro_media_periodic': {...},
# 'nam_pro_media_radio': {...},
# 'nam_pro_media_tv': {...},
# 'nam_pro_media_web': {...},
# 'nam_pro_model_car': {...},
# 'nam_pro_software': {...},
# 'nam_pro_software_game': {...},
# 'nam_pro_title': {...},
# 'nam_pro_title_album': {...},
# 'nam_pro_title_article': {...},
# 'nam_pro_title_book': {...},
# 'nam_pro_title_document': {...},
# 'nam_pro_title_song': {...},
# 'nam_pro_title_treaty': {...},
# 'nam_pro_title_tv': {...},
# 'nam_pro_vehicle': {...},
# 'overall_accuracy': 0.006156203762418094,
# 'overall_f1': 0.0009844258777797407,
# 'overall_precision': 0.0005213624939842789,
# 'overall_recall': 0.008803611738148984}
``` |
MU-NLPC/Calc-gsm8k | 2023-10-07T16:40:00.000Z | [
"task_categories:text-generation",
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"arxiv:2110.14168",
"arxiv:2305.15017",
"region:us"
] | MU-NLPC | null | null | null | 1 | 293 | ---
license: mit
task_categories:
- text-generation
- question-answering
language:
- en
size_categories:
- 1K<n<10K
---
# Dataset Card for "Calc-gsm8k"
## Summary
This dataset is an instance of gsm8k dataset, converted to a simple html-like language that can be easily parsed (e.g. by BeautifulSoup). The data contains 3 types of tags:
- gadget: A tag whose content is intended to be evaluated by calling an external tool (sympy-based calculator in this case)
- output: An output of the external tool
- result: The final answer of the mathematical problem (a number)
## Supported Tasks
The dataset is intended for training Chain-of-Thought reasoning **models able to use external tools** to enhance the factuality of their responses.
This dataset presents in-context scenarios where models can out-source the computations in the reasoning chain to a calculator.
## Construction Process
The answers in the original dataset was in in a structured but non-standard format. So, the answers were parsed, all arithmetical expressions
were evaluated using a sympy-based calculator, the outputs were checked to be consistent with the intermediate results and finally exported
into a simple html-like language that BeautifulSoup can parse.
## Content and Data splits
Content and splits correspond to the original gsm8k dataset.
See [gsm8k HF dataset](https://huggingface.co/datasets/gsm8k) and [official repository](https://github.com/openai/grade-school-math) for more info.
## Licence
MIT, consistently with the original dataset.
## Cite
If you use this version of dataset in research, please cite the [original GSM8K paper](https://arxiv.org/abs/2110.14168) and our report as follows:
```bibtex
@article{kadlcik2023calcx,
title={Calc-X: Enriching Arithmetical Chain-of-Thoughts Datasets by Interaction with Symbolic Systems},
author={Marek Kadlčík and Michal Štefánik},
year={2023},
eprint={2305.15017},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
``` |
result-kand2-sdxl-wuerst-karlo/80bca589 | 2023-09-28T22:53:52.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 293 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 242
num_examples: 10
download_size: 1409
dataset_size: 242
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "80bca589"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
result-kand2-sdxl-wuerst-karlo/dda30fff | 2023-09-28T22:55:08.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 293 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 164
num_examples: 10
download_size: 1316
dataset_size: 164
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "dda30fff"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
result-kand2-sdxl-wuerst-karlo/bfc3e463 | 2023-09-28T22:55:10.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 293 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 164
num_examples: 10
download_size: 1316
dataset_size: 164
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "bfc3e463"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
wmt15 | 2023-04-05T13:43:50.000Z | [
"task_categories:translation",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:translation",
"size_categories:10M<n<100M",
"source_datasets:extended|europarl_bilingual",
"source_datasets:extended|giga_fren",
"source_datasets:extended|news_commentary",
"source_datasets:extended|un_multi",
"language:cs",
"language:de",
"language:en",
"language:fi",
"language:fr",
"language:ru",
"license:unknown",
"region:us"
] | null | null | @InProceedings{bojar-EtAl:2015:WMT,
author = {Bojar, Ond\v{r}ej and Chatterjee, Rajen and Federmann, Christian and Haddow, Barry and Huck, Matthias and Hokamp, Chris and Koehn, Philipp and Logacheva, Varvara and Monz, Christof and Negri, Matteo and Post, Matt and Scarton, Carolina and Specia, Lucia and Turchi, Marco},
title = {Findings of the 2015 Workshop on Statistical Machine Translation},
booktitle = {Proceedings of the Tenth Workshop on Statistical Machine Translation},
month = {September},
year = {2015},
address = {Lisbon, Portugal},
publisher = {Association for Computational Linguistics},
pages = {1--46},
url = {http://aclweb.org/anthology/W15-3001}
} | null | 2 | 289 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- cs
- de
- en
- fi
- fr
- ru
license:
- unknown
multilinguality:
- translation
size_categories:
- 10M<n<100M
source_datasets:
- extended|europarl_bilingual
- extended|giga_fren
- extended|news_commentary
- extended|un_multi
task_categories:
- translation
task_ids: []
pretty_name: WMT15
paperswithcode_id: wmt-2015
dataset_info:
- config_name: cs-en
features:
- name: translation
dtype:
translation:
languages:
- cs
- en
splits:
- name: train
num_bytes: 282996942
num_examples: 959768
- name: validation
num_bytes: 757817
num_examples: 3003
- name: test
num_bytes: 572203
num_examples: 2656
download_size: 1740666258
dataset_size: 284326962
- config_name: de-en
features:
- name: translation
dtype:
translation:
languages:
- de
- en
splits:
- name: train
num_bytes: 1364002869
num_examples: 4522998
- name: validation
num_bytes: 777334
num_examples: 3003
- name: test
num_bytes: 522989
num_examples: 2169
download_size: 1740666258
dataset_size: 1365303192
- config_name: fi-en
features:
- name: translation
dtype:
translation:
languages:
- fi
- en
splits:
- name: train
num_bytes: 605146817
num_examples: 2073394
- name: validation
num_bytes: 363941
num_examples: 1500
- name: test
num_bytes: 306335
num_examples: 1370
download_size: 273390220
dataset_size: 605817093
- config_name: fr-en
features:
- name: translation
dtype:
translation:
languages:
- fr
- en
splits:
- name: train
num_bytes: 14758986622
num_examples: 40853137
- name: validation
num_bytes: 1138737
num_examples: 4503
- name: test
num_bytes: 298771
num_examples: 1500
download_size: 6702781608
dataset_size: 14760424130
- config_name: ru-en
features:
- name: translation
dtype:
translation:
languages:
- ru
- en
splits:
- name: train
num_bytes: 437752256
num_examples: 1495081
- name: validation
num_bytes: 1087746
num_examples: 3003
- name: test
num_bytes: 955972
num_examples: 2818
download_size: 1092059435
dataset_size: 439795974
---
# Dataset Card for "wmt15"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://www.statmt.org/wmt15/translation-task.html](http://www.statmt.org/wmt15/translation-task.html)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 1.74 GB
- **Size of the generated dataset:** 284.34 MB
- **Total amount of disk used:** 2.02 GB
### Dataset Summary
<div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400">
<p><b>Warning:</b> There are issues with the Common Crawl corpus data (<a href="https://www.statmt.org/wmt13/training-parallel-commoncrawl.tgz">training-parallel-commoncrawl.tgz</a>):</p>
<ul>
<li>Non-English files contain many English sentences.</li>
<li>Their "parallel" sentences in English are not aligned: they are uncorrelated with their counterpart.</li>
</ul>
<p>We have contacted the WMT organizers.</p>
</div>
Translation dataset based on the data from statmt.org.
Versions exist for different years using a combination of data
sources. The base `wmt` allows you to create a custom dataset by choosing
your own data/language pair. This can be done as follows:
```python
from datasets import inspect_dataset, load_dataset_builder
inspect_dataset("wmt15", "path/to/scripts")
builder = load_dataset_builder(
"path/to/scripts/wmt_utils.py",
language_pair=("fr", "de"),
subsets={
datasets.Split.TRAIN: ["commoncrawl_frde"],
datasets.Split.VALIDATION: ["euelections_dev2019"],
},
)
# Standard version
builder.download_and_prepare()
ds = builder.as_dataset()
# Streamable version
ds = builder.as_streaming_dataset()
```
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### cs-en
- **Size of downloaded dataset files:** 1.74 GB
- **Size of the generated dataset:** 284.34 MB
- **Total amount of disk used:** 2.02 GB
An example of 'validation' looks as follows.
```
```
### Data Fields
The data fields are the same among all splits.
#### cs-en
- `translation`: a multilingual `string` variable, with possible languages including `cs`, `en`.
### Data Splits
|name |train |validation|test|
|-----|-----:|---------:|---:|
|cs-en|959768| 3003|2656|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{bojar-EtAl:2015:WMT,
author = {Bojar, Ond
{r}ej and Chatterjee, Rajen and Federmann, Christian and Haddow, Barry and Huck, Matthias and Hokamp, Chris and Koehn, Philipp and Logacheva, Varvara and Monz, Christof and Negri, Matteo and Post, Matt and Scarton, Carolina and Specia, Lucia and Turchi, Marco},
title = {Findings of the 2015 Workshop on Statistical Machine Translation},
booktitle = {Proceedings of the Tenth Workshop on Statistical Machine Translation},
month = {September},
year = {2015},
address = {Lisbon, Portugal},
publisher = {Association for Computational Linguistics},
pages = {1--46},
url = {http://aclweb.org/anthology/W15-3001}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. |
AIML-TUDA/i2p | 2023-05-25T08:04:07.000Z | [
"license:mit",
"arxiv:2211.05105",
"region:us"
] | AIML-TUDA | null | null | null | 3 | 289 | ---
license: mit
---
# Inaproppriate Image Prompts (I2P)
The I2P benchmark contains real user prompts for generative text2image prompts that are unproportionately likely to produce <i>inappropriate</i> images.
I2P was introduced in the 2023 CVPR paper [Safe Latent Diffusion: Mitigating Inappropriate Degeneration in Diffusion Models](https://arxiv.org/abs/2211.05105).
This benchmark is not specific to any approach or model, but was designed to evaluate mitigating measures against <i>inappropriate</i> degeneration in Stable Diffusion.
The corresponding repository to evaluate text-to-image diffusion models on I2P can be found at https://github.com/ml-research/i2p.
## Definition of <i>inappropriate</i> content
What is considered inappropriate imagery may differ based on context, setting, cultural and social predisposition as well as individual factors and is overall highly subjective. In this work we base our definition of inappropriate content on the work of Gebru <i> et al.</i>:
> [data that] if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety.
>
> --<cite>Gebru, Timnit, et al. "Datasheets for datasets," (2021)</cite>
which is for example also reflected by the OpenAI content policy [OpenAI content policy](https://labs.openai.com/policies/content-policy) that also applies to the use of DALL-E. Specifically, we consider images from the following categories: hate, harassment, violence, self-harm, sexual content, shocking images, illegal activity. Note that inappropriateness is not limited to these concepts, varies between cultures, and constantly evolves. Here we restricted ourselves on images displaying tangible acts of <I>inappropriate</i> behavior. Consequently, our test bed may contain prompts describing e.g.~geopolitical events or hateful symbols.
## Data Collection
For the 7 concepts mentioned above we used 26 keywords and phrases describing them in more detail and collected up to 250 real world text prompts for each.
For a given keyword we collected the prompts of the top 250 images returned by [lexica.art](https://www.lexica.art).
Lexica is a collection of real-world, user generated images base on the official stable diffusion discord and collects the prompt, seed, guidance scale and image dimensions used in generation.
Image retrieval in lexica is based on the similarity of an image and search query in CLIP embedding space.
Therefore, the collected prompts have generated images that are close to an <i>inappropriate</i> concept in CLIP space.
Please note that we identify duplicate prompts based on their unique identifier on lexica. Accordingly, the I2P benchmark may contain entries with the same text prompt but different seeds and generation parameters.
## Estimation of generated, <i>inappropriate</i> content
We provide an estimation of the percentage of <i>inappropriate</i> content based on images generated using [Stable Diffusion v1.4](https://huggingface.co/CompVis/stable-diffusion-v1-4). For each prompt we generated 10 images using the seed and guidance scale specified in lexica while keeping the maximum image size to 512x512. Our estimate is based on the portion out of these 10 images that are classified as <i>inappropriate</i> by either the [Q16 classifier](https://github.com/ml-research/Q16) or [NudeNet Detector](https://github.com/notAI-tech/NudeNet).
## Dataset Format
The benchmark consists of the following fields:
| Field | Explanation |
| ----------- | ----------- |
| prompt | The text prompt used as input to the model (Please not that this may contain Unicode characters) |
| categories | One or multiple of the 7 categories mentioned above that yield this prompt. |
| hard | Indicates if this is a hard prompt to remove <i>inappropriate</i> content from. Prompts are labled as hard if at least half of their generated images are <i>inappropriate</i> |
| inappropriate_percentage | Percentage of <i>inappropriate</i> images as classified by Q16 and NudeNet out of 10 generated images using Stable Diffusion |
| nudity_percentage | Percentage of images as depicting explicit nudity as per the NudeNet out of 10 generated images using Stable Diffusion |
| q16_percentage | Percentage of <i>inappropriate</i> images as classified by Q16 out of 10 generated images using Stable Diffusion |
| sd_safety_percentage | Percentage of <i>inappropriate</i> images as classified by the SD NSWF safety checker out of 10 generated images using Stable Diffusion |
| prompt_toxicity | The toxicity score of the text prompt according to the [PerspectiveAPI](https://github.com/conversationai/perspectiveapi) |
| lexica_url | URL to the original prompt and the respective images in lexica for reference |
| sd_seed | Stable diffusion seed used in our image generation |
| sd_guidance_scale | Stable diffusion guidance scale used in our image generation |
| sd_image_width | Stable diffusion image width used in our image generation |
| sd_image_height | Stable diffusion image height used in our image generation |
|
akariasai/PopQA | 2022-12-22T01:01:20.000Z | [
"region:us"
] | akariasai | null | null | null | 3 | 289 | # Dataset Card for PopQA
## Dataset Summary
PopQA is a large-scale open-domain question answering (QA) dataset, consisting of 14k entity-centric QA pairs. Each question is created by converting a knowledge tuple retrieved from Wikidata using a template. Each question come with the original `subject_entitiey`, `object_entity`and `relationship_type` annotation, as well as Wikipedia monthly page views.
## Languages
The dataset contains samples in English only.
## Dataset Structure
### Data Instances
- Size of downloaded dataset file: 5.2 MB
## Data Fields
- `id`: question id
- `subj`: subject entity name
- `prop`: relationship type
- `obj`: object entity name
- `subj_id`: Wikidata ID of the subject entity
- `prop_id`: Wikidata relationship type ID
- `obj_id`: Wikidata ID of the object entity
- `s_aliases`: aliases of the subject entity
- `o_aliases`: aliases of the object entity
- `s_uri`: Wikidata URI of the subject entity
- `o_uri`: Wikidata URI of the object entity
- `s_wiki_title`: Wikipedia page title of the subject entity
- `o_wiki_title`: Wikipedia page title of the object entity
- `s_pop`: Wikipedia monthly pageview of the subject entity
- `o_pop`: Wikipedia monthly pageview of the object entity
- `question`: PopQA question
- `possible_answers`: a list of the gold answers.
## Citation Information
```
@article{ mallen2023llm_memorization ,
title={When Not to Trust Language Models: Investigating Effectiveness and Limitations of Parametric and Non-Parametric Memories },
author={ Mallen, Alex and Asai,Akari and Zhong, Victor and Das, Rajarshi and Hajishirzi, Hannaneh and Khashabi, Daniel},
journal={ arXiv preprint },
year={ 2022 }
}
```
|
lksy/ru_instruct_gpt4 | 2023-06-02T16:56:03.000Z | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:10K<n<100K",
"language:ru",
"license:cc-by-4.0",
"chat",
"region:us"
] | lksy | null | null | null | 12 | 289 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: full_output
dtype: string
splits:
- name: train
num_bytes: 22424451
num_examples: 15056
download_size: 23276814
dataset_size: 22424451
license: cc-by-4.0
task_categories:
- text-generation
- text2text-generation
language:
- ru
tags:
- chat
size_categories:
- 10K<n<100K
---
# ru_instruct_gpt4
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Dataset of GPT-4 generated instructions in Russian. Will soon be updated with more examples.
### Languages
Russian
|
yxchar/amazon-tlm | 2021-11-04T22:22:29.000Z | [
"region:us"
] | yxchar | null | null | null | 0 | 288 | Entry not found |
craigslist_bargains | 2022-11-18T19:47:08.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:dialogue-modeling",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:unknown",
"arxiv:1808.09637",
"region:us"
] | null | We study negotiation dialogues where two agents, a buyer and a seller,
negotiate over the price of an time for sale. We collected a dataset of more
than 6K negotiation dialogues over multiple categories of products scraped from Craigslist.
Our goal is to develop an agent that negotiates with humans through such conversations.
The challenge is to handle both the negotiation strategy and the rich language for bargaining. | @misc{he2018decoupling,
title={Decoupling Strategy and Generation in Negotiation Dialogues},
author={He He and Derek Chen and Anusha Balakrishnan and Percy Liang},
year={2018},
eprint={1808.09637},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | null | 7 | 287 | ---
annotations_creators:
- machine-generated
language_creators:
- crowdsourced
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- dialogue-modeling
paperswithcode_id: craigslistbargains
pretty_name: CraigslistBargains
dataset_info:
features:
- name: agent_info
sequence:
- name: Bottomline
dtype: string
- name: Role
dtype: string
- name: Target
dtype: float32
- name: agent_turn
sequence: int32
- name: dialogue_acts
sequence:
- name: intent
dtype: string
- name: price
dtype: float32
- name: utterance
sequence: string
- name: items
sequence:
- name: Category
dtype: string
- name: Images
dtype: string
- name: Price
dtype: float32
- name: Description
dtype: string
- name: Title
dtype: string
splits:
- name: train
num_bytes: 8538836
num_examples: 5247
- name: test
num_bytes: 1353933
num_examples: 838
- name: validation
num_bytes: 966032
num_examples: 597
download_size: 25373618
dataset_size: 10858801
---
# Dataset Card for CraigslistBargains
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Decoupling Strategy and Generation in Negotiation Dialogues](https://worksheets.codalab.org/worksheets/0x453913e76b65495d8b9730d41c7e0a0c/)
- **Repository:** [Github: Stanford NLP Cocoa](https://github.com/stanfordnlp/cocoa/tree/master)
- **Paper:** [Decoupling Strategy and Generation in Negotiation Dialogues](https://arxiv.org/abs/1808.09637)
- **Leaderboard:** []()
- **Point of Contact:** [He He](hehe@cs.nyu.edu)
### Dataset Summary
We study negotiation dialogues where two agents, a buyer and a seller, negotiate over the price of an time for sale. We collected a dataset of more than 6K negotiation dialogues over multiple categories of products scraped from Craigslist. Our goal is to develop an agent that negotiates with humans through such conversations. The challenge is to handle both the negotiation strategy and the rich language for bargaining. To this end, we develop a modular framework which separates strategy learning from language generation. Specifically, we learn strategies in a coarse dialogue act space and instantiate that into utterances conditioned on dialogue history.
### Supported Tasks and Leaderboards
### Languages
This dataset is English
## Dataset Structure
### Data Instances
```
{
'agent_info': {
'Bottomline':
[
'None',
'None'
],
'Role':
[
'buyer',
'seller'
],
'Target':
[
7.0,
10.0
]
},
'agent_turn':
[
0,
1,
...
],
'dialogue_acts': {
'intent':
[
'init-price',
'unknown',
...
],
'price':
[
5.0,
-1.0,
...
]
},
'items': {
'Category':
[
'phone',
'phone'
],
'Description':
[
'Charge two devices simultaneously on the go...,
...
],
'Images':
[
'phone/6149527852_0.jpg',
'phone/6149527852_0.jpg'
],
'Price':
[
10.0,
10.0
],
'Title':
[
'Verizon Car Charger with Dual Output Micro USB and ...',
...
]
},
'utterance':
[
'Hi, not sure if the charger would work for my car...'
'It will work...',
...
]
}
```
### Data Fields
- `agent_info`: Information about each of the agents taking part in the dialogue
- `Bottomline`: TBD
- `Role`: Whether the agent is buyer or seller
- `Target`: Target price that the buyer/seller wants to hit in the negotiation
- `agent_turn`: Agent taking the current turn in the dialogue (`int` index corresponding to `Role` above)
- `dialogue_acts`: Rules-based information about the strategy of each agent for each turn
- `intent`: The intent of the agent at the particular turn (offer, accept, etc.)
- `price`: The current item price associated with the intent and turn in the bargaining process. Default value for missing: (`-1`)
- `items`: Information about the item the agents are bargaining for. **Note that there is an elembet for each of the fields below for each agent**
- `Category`: Category of the item
- `Description`: Description(s) of the item
- `Images`: (comma delimited) strings of image names of the item
- `Price`: Price(s) of the item. Default value for missing: (`-1`)
- `Title`: Title(s) of the item
- `utterance`: Utterance for each turn in the dialogue, corresponding to the agent in `agent_turns`. The utterance may be an empty string (`''`) for some turns if multiple dialogue acts take place after an utterance (e.g. there are often multiple dialogue acts associated with the closing of the bargaining process after all utterances have completed to describe the conclusion of the bargaining).
### Data Splits
This dataset contains three splits, `train`, `validation` and `test`. Note that `test` is not provided with `dialogue_acts` information as described above. To ensure schema consistency across dataset splits, the `dialogue_acts` field in the `test` split is populated with the default values: `{"price": -1.0, "intent": ""}`
The counts of examples in each split are as follows:
| | Train | Valid | Test |
| Input Examples | 5247 | 597 | 838 |
| Average Dialogue Length | 9.14 | 9.17 | 9.24 |
Note that
## Dataset Creation
From the [source paper](https://arxiv.org/pdf/1808.09637.pdf) for this dataset:
> To generate the negotiation scenarios, we
> scraped postings on sfbay.craigslist.org
> from the 6 most popular categories (housing, furniture, cars, bikes, phones, and electronics). Each
> posting produces three scenarios with the buyer’s
> target prices at 0.5x, 0.7x and 0.9x of the listing
> price. Statistics of the scenarios are shown in Table 2.
> We collected 6682 human-human dialogues on
> AMT using the interface shown in Appendix A
> Figure 2. The dataset statistics in Table 3 show
> that CRAIGSLISTBARGAIN has longer dialogues
> and more diverse utterances compared to prior
> datasets. Furthermore, workers were encouraged
> to embellish the item and negotiate side offers
> such as free delivery or pick-up. This highly relatable scenario leads to richer dialogues such as
> the one shown in Table 1. We also observed various persuasion techniques listed in Table 4 such as
> embellishment,
### Curation Rationale
See **Dataset Creation**
### Source Data
See **Dataset Creation**
#### Initial Data Collection and Normalization
See **Dataset Creation**
#### Who are the source language producers?
See **Dataset Creation**
### Annotations
If the dataset contains annotations which are not part of the initial data collection, describe them in the following paragraphs.
#### Annotation process
Annotations for the `dialogue_acts` in `train` and `test` were generated via a rules-based system which can be found in [this script](https://github.com/stanfordnlp/cocoa/blob/master/craigslistbargain/parse_dialogue.py)
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
[More Information Needed]
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
[More Information Needed]
### Dataset Curators
He He and Derek Chen and Anusha Balakrishnan and Percy Liang
Computer Science Department, Stanford University
`{hehe,derekchen14,anusha,pliang}@cs.stanford.edu`
The work through which this data was produced was supported by
DARPA Communicating with Computers (CwC)
program under ARO prime contract no. W911NF15-1-0462
### Licensing Information
[More Information Needed]
### Citation Information
```
@misc{he2018decoupling,
title={Decoupling Strategy and Generation in Negotiation Dialogues},
author={He He and Derek Chen and Anusha Balakrishnan and Percy Liang},
year={2018},
eprint={1808.09637},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@ZacharySBrown](https://github.com/ZacharySBrown) for adding this dataset. |
open-source-metrics/issues | 2023-09-26T13:43:16.000Z | [
"region:us"
] | open-source-metrics | null | null | null | 0 | 287 | ---
dataset_info:
features:
- name: dates
dtype: string
- name: type
struct:
- name: authorAssociation
dtype: string
- name: comment
dtype: bool
- name: issue
dtype: bool
splits:
- name: candle
num_bytes: 69320
num_examples: 1815
- name: text_generation_inference
num_bytes: 138472
num_examples: 4062
- name: chat_ui
num_bytes: 45346
num_examples: 1300
- name: hub_docs
num_bytes: 112804
num_examples: 3202
- name: peft
num_bytes: 139658
num_examples: 4097
- name: evaluate
num_bytes: 59317
num_examples: 1693
- name: huggingface_hub
num_bytes: 234735
num_examples: 6741
- name: accelerate
num_bytes: 287459
num_examples: 8181
- name: datasets
num_bytes: 761270
num_examples: 21669
- name: optimum
num_bytes: 168227
num_examples: 4844
- name: pytorch_image_models
num_bytes: 132852
num_examples: 3854
- name: gradio
num_bytes: 818584
num_examples: 22567
- name: tokenizers
num_bytes: 174461
num_examples: 5078
- name: diffusers
num_bytes: 995630
num_examples: 28518
- name: transformers
num_bytes: 4202426
num_examples: 118882
- name: safetensors
num_bytes: 40450
num_examples: 1166
download_size: 2622338
dataset_size: 8381011
configs:
- config_name: default
data_files:
- split: peft
path: data/peft-*
- split: hub_docs
path: data/hub_docs-*
- split: evaluate
path: data/evaluate-*
- split: huggingface_hub
path: data/huggingface_hub-*
- split: accelerate
path: data/accelerate-*
- split: datasets
path: data/datasets-*
- split: optimum
path: data/optimum-*
- split: pytorch_image_models
path: data/pytorch_image_models-*
- split: gradio
path: data/gradio-*
- split: tokenizers
path: data/tokenizers-*
- split: diffusers
path: data/diffusers-*
- split: transformers
path: data/transformers-*
- split: safetensors
path: data/safetensors-*
---
# Dataset Card for "issues"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
silk-road/alpaca-data-gpt4-chinese | 2023-05-23T05:33:21.000Z | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:zh",
"language:en",
"license:apache-2.0",
"gpt",
"alpaca",
"fine-tune",
"region:us"
] | silk-road | null | null | null | 35 | 287 | ---
license: apache-2.0
task_categories:
- text-generation
language:
- zh
- en
tags:
- gpt
- alpaca
- fine-tune
pretty_name: Alpaca-Data-GPT4-Chinese
size_categories:
- 10K<n<100K
--- |
facat/sci-llm-60k | 2023-09-14T02:18:33.000Z | [
"region:us"
] | facat | null | null | null | 0 | 287 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: prompt
dtype: string
- name: context
dtype: string
- name: chosen
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
splits:
- name: train
num_bytes: 330020705
num_examples: 60347
- name: test
num_bytes: 1111116
num_examples: 200
download_size: 183205878
dataset_size: 331131821
---
# Dataset Card for "sci-llm-60k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
uit-nlp/vietnamese_students_feedback | 2022-10-13T15:39:37.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"task_ids:topic-classification",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:vi",
"license:unknown",
"region:us"
] | uit-nlp | Students’ feedback is a vital resource for the interdisciplinary research involving the combining of two different
research fields between sentiment analysis and education.
Vietnamese Students’ Feedback Corpus (UIT-VSFC) is the resource consists of over 16,000 sentences which are
human-annotated with two different tasks: sentiment-based and topic-based classifications.
To assess the quality of our corpus, we measure the annotator agreements and classification evaluation on the
UIT-VSFC corpus. As a result, we obtained the inter-annotator agreement of sentiments and topics with more than over
91% and 71% respectively. In addition, we built the baseline model with the Maximum Entropy classifier and achieved
approximately 88% of the sentiment F1-score and over 84% of the topic F1-score. | @InProceedings{8573337,
author={Nguyen, Kiet Van and Nguyen, Vu Duc and Nguyen, Phu X. V. and Truong, Tham T. H. and Nguyen, Ngan Luu-Thuy},
booktitle={2018 10th International Conference on Knowledge and Systems Engineering (KSE)},
title={UIT-VSFC: Vietnamese Students’ Feedback Corpus for Sentiment Analysis},
year={2018},
volume={},
number={},
pages={19-24},
doi={10.1109/KSE.2018.8573337}
} | null | 6 | 286 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- vi
license:
- unknown
multilinguality:
- monolingual
pretty_name: "Vietnamese Students\u2019 Feedback Corpus"
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
- topic-classification
---
# Dataset Card for Vietnamese Students’ Feedback Corpus
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://sites.google.com/uit.edu.vn/uit-nlp/datasets-projects#h.p_4Brw8L-cbfTe
- **Repository:**
- **Paper:** [UIT-VSFC: Vietnamese Students’ Feedback Corpus for Sentiment Analysis](https://www.researchgate.net/publication/329645066_UIT-VSFC_Vietnamese_Students'_Feedback_Corpus_for_Sentiment_Analysis)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Students’ feedback is a vital resource for the interdisciplinary research involving the combining of two different
research fields between sentiment analysis and education.
Vietnamese Students’ Feedback Corpus (UIT-VSFC) is the resource consists of over 16,000 sentences which are
human-annotated with two different tasks: sentiment-based and topic-based classifications.
To assess the quality of our corpus, we measure the annotator agreements and classification evaluation on the
UIT-VSFC corpus. As a result, we obtained the inter-annotator agreement of sentiments and topics with more than over
91% and 71% respectively. In addition, we built the baseline model with the Maximum Entropy classifier and achieved
approximately 88% of the sentiment F1-score and over 84% of the topic F1-score.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The language of the dataset text sentence is Vietnamese (`vi`).
## Dataset Structure
### Data Instances
An instance example:
```
{
'sentence': 'slide giáo trình đầy đủ .',
'sentiment': 2,
'topic': 1
}
```
### Data Fields
- `sentence` (str): Text sentence.
- `sentiment`: Sentiment class, with values 0 (negative), 1 (neutral) and 2 (positive).
- `topic`: Topic class, with values 0 (lecturer), 1 (training_program), 2 (facility) and 3 (others).
### Data Splits
The dataset is split in train, validation and test.
| | Tain | Validation | Test |
|--------------------|------:|-----------:|-----:|
| Number of examples | 11426 | 1583 | 3166 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Unknown.
### Citation Information
```
@InProceedings{8573337,
author={Nguyen, Kiet Van and Nguyen, Vu Duc and Nguyen, Phu X. V. and Truong, Tham T. H. and Nguyen, Ngan Luu-Thuy},
booktitle={2018 10th International Conference on Knowledge and Systems Engineering (KSE)},
title={UIT-VSFC: Vietnamese Students’ Feedback Corpus for Sentiment Analysis},
year={2018},
volume={},
number={},
pages={19-24},
doi={10.1109/KSE.2018.8573337}
}
```
### Contributions
Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
|
laion/laion2B-en | 2023-08-13T10:21:14.000Z | [
"license:cc-by-4.0",
"region:us"
] | laion | null | null | null | 137 | 286 | ---
license: cc-by-4.0
---
HEIGHT and WIDTH are swapped |
CertifiedJoon/Korean-Instruction | 2023-07-06T17:44:53.000Z | [
"task_categories:question-answering",
"size_categories:n<1K",
"language:ko",
"license:cdla-permissive-2.0",
"region:us"
] | CertifiedJoon | null | null | null | 3 | 286 | ---
license: cdla-permissive-2.0
dataset_info:
features:
- name: Instruction
dtype: string
- name: Response
dtype: string
- name: Source
dtype: string
- name: MetaData
dtype: string
splits:
- name: train
num_bytes: 2099234
num_examples: 1720
download_size: 907301
dataset_size: 2099234
task_categories:
- question-answering
language:
- ko
size_categories:
- n<1K
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:** kin.naver.com/qna
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** mjypark1212@gmail.com
### Dataset Summary
The most active korean qna site - Knowledge In Naver. Instruction + response format. Created for language model.
## Dataset Structure
[Instruction, Response, Source, Metadata] |
doc2dial | 2022-11-18T19:58:53.000Z | [
"task_categories:question-answering",
"task_ids:closed-domain-qa",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-3.0",
"region:us"
] | null | Doc2dial is dataset of goal-oriented dialogues that are grounded in the associated documents. It includes over 4500 annotated conversations with an average of 14 turns that are grounded in over 450 documents from four domains. Compared to the prior document-grounded dialogue datasets this dataset covers a variety of dialogue scenes in information-seeking conversations. | @inproceedings{feng-etal-2020-doc2dial,
title = "doc2dial: A Goal-Oriented Document-Grounded Dialogue Dataset",
author = "Feng, Song and Wan, Hui and Gunasekara, Chulaka and Patel, Siva and Joshi, Sachindra and Lastras, Luis",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-main.652",
} | null | 2 | 285 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- cc-by-3.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- closed-domain-qa
paperswithcode_id: doc2dial
pretty_name: doc2dial
dataset_info:
- config_name: dialogue_domain
features:
- name: dial_id
dtype: string
- name: doc_id
dtype: string
- name: domain
dtype: string
- name: turns
list:
- name: turn_id
dtype: int32
- name: role
dtype: string
- name: da
dtype: string
- name: references
list:
- name: sp_id
dtype: string
- name: label
dtype: string
- name: utterance
dtype: string
splits:
- name: train
num_bytes: 6924209
num_examples: 3474
- name: validation
num_bytes: 1315815
num_examples: 661
download_size: 5879543
dataset_size: 8240024
- config_name: document_domain
features:
- name: domain
dtype: string
- name: doc_id
dtype: string
- name: title
dtype: string
- name: doc_text
dtype: string
- name: spans
list:
- name: id_sp
dtype: string
- name: tag
dtype: string
- name: start_sp
dtype: int32
- name: end_sp
dtype: int32
- name: text_sp
dtype: string
- name: title
dtype: string
- name: parent_titles
dtype: string
- name: id_sec
dtype: string
- name: start_sec
dtype: int32
- name: text_sec
dtype: string
- name: end_sec
dtype: int32
- name: doc_html_ts
dtype: string
- name: doc_html_raw
dtype: string
splits:
- name: train
num_bytes: 204874908
num_examples: 3416
download_size: 5879543
dataset_size: 204874908
- config_name: doc2dial_rc
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: domain
dtype: string
splits:
- name: validation
num_bytes: 22705288
num_examples: 3972
- name: train
num_bytes: 114778994
num_examples: 20431
download_size: 5879543
dataset_size: 137484282
---
# Dataset Card for doc2dial
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://doc2dial.github.io
- **Repository:** [Needs More Information]
- **Paper:** https://www.aclweb.org/anthology/2020.emnlp-main.652.pdf
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Doc2dial is dataset of goal-oriented dialogues that are grounded in the associated documents. It includes over 4500 annotated conversations with an average of 14 turns that are grounded in over 450 documents from four domains. Compared to the prior document-grounded dialogue datasets this dataset covers a variety of dialogue scenes in information-seeking conversations.
### Supported Tasks and Leaderboards
> Supported Task: [Shared Task](https://doc2dial.github.io/workshop2021/shared.html) hosted by DialDoc21 at ACL.
> Leaderboard: [LINK](https://eval.ai/web/challenges/challenge-page/793)
### Languages
English
## Dataset Structure
### Data Instances
Sample data instance for `dialogue_domain` :
```
{
"dial_id": "9f44c1539efe6f7e79b02eb1b413aa43",
"doc_id": "Top 5 DMV Mistakes and How to Avoid Them#3_0",
"domain": "dmv",
"turns": [
{
"da": "query_condition",
"references": [
{
"sp_id": "4",
"label": "precondition"
}
],
"role": "user",
"turn_id": 1,
"utterance": "Hello, I forgot o update my address, can you help me with that?"
},
{
"da": "response_solution",
"references": [
{
"sp_id": "6",
"label": "solution"
},
{
"sp_id": "7",
"label": "solution"
},
{
"sp_id": "4",
"label": "references"
}
],
"role": "agent",
"turn_id": 2,
"utterance": "hi, you have to report any change of address to DMV within 10 days after moving. You should do this both for the address associated with your license and all the addresses associated with all your vehicles."
},
{
"da": "query_solution",
"references": [
{
"sp_id": "56",
"label": "solution"
},
{
"sp_id": "48",
"label": "references"
}
],
"role": "user",
"turn_id": 3,
"utterance": "Can I do my DMV transactions online?"
},
{
"da": "respond_solution",
"references": [
{
"sp_id": "56",
"label": "solution"
},
{
"sp_id": "48",
"label": "references"
}
],
"role": "agent",
"turn_id": 4,
"utterance": "Yes, you can sign up for MyDMV for all the online transactions needed."
},
{
"da": "query_condition",
"references": [
{
"sp_id": "48",
"label": "precondition"
}
],
"role": "user",
"turn_id": 5,
"utterance": "Thanks, and in case I forget to bring all of the documentation needed to the DMV office, what can I do?"
},
{
"da": "respond_solution",
"references": [
{
"sp_id": "49",
"label": "solution"
},
{
"sp_id": "50",
"label": "solution"
},
{
"sp_id": "52",
"label": "solution"
},
{
"sp_id": "48",
"label": "references"
}
],
"role": "agent",
"turn_id": 6,
"utterance": "This happens often with our customers so that's why our website and MyDMV are so useful for our customers. Just check if you can make your transaction online so you don't have to go to the DMV Office."
},
{
"da": "query_solution",
"references": [
{
"sp_id": "6",
"label": "solution"
},
{
"sp_id": "7",
"label": "solution"
},
{
"sp_id": "4",
"label": "references"
}
],
"role": "user",
"turn_id": 7,
"utterance": "Ok, and can you tell me again where should I report my new address?"
},
{
"da": "respond_solution",
"references": [
{
"sp_id": "6",
"label": "solution"
},
{
"sp_id": "7",
"label": "solution"
},
{
"sp_id": "4",
"label": "references"
}
],
"role": "agent",
"turn_id": 8,
"utterance": "Sure. Any change of address must be reported to the DMV, that's for the address associated with your license and any of your vehicles."
},
{
"da": "query_condition",
"references": [
{
"sp_id": "40",
"label": "precondition"
}
],
"role": "user",
"turn_id": 9,
"utterance": "Can you tell me more about Traffic points and their cost?"
},
{
"da": "respond_solution",
"references": [
{
"sp_id": "41",
"label": "solution"
},
{
"sp_id": "43",
"label": "solution"
},
{
"sp_id": "40",
"label": "references"
}
],
"role": "agent",
"turn_id": 10,
"utterance": "Traffic points is the system used by DMV to track dangerous drivers. The cost of the traffic points is independent of the DRA, so you get a separate charge based on the total points you accumulate."
}
]
}
```
Sample data instance for `document_domain` :
```
{
"doc_id": "Benefits Planner: Retirement | Online Calculator (WEP Version)#1_0",
"domain": "ssa",
"doc_html_raw": "<main class=\"content\" id=\"content\" role=\"main\">\n\n<section>\n\n<div>\n<h2>\nBenefits Planner: Retirement\n</h2>\n</div>\n</section>\n\n\n<section>\n\n<div>\n\n<div>\n\n\n</div>\n\n<article>\n<section>\n\n<h3>Online Calculator (WEP Version)</h3>\n<p>The calculator shown below allows you to estimate your Social Security benefit.\nHowever, for the most accurate estimates, <a>use the Detailed Calculator</a>.</p>\n<p>You need to enter all your past earnings\n, which are shown on your <a>online </a>.</p>\n\n<p>Please Note:</p>\n<ul class=\"browser-default\">\n<li>The Online Calculator is updated periodically<span>*</span> with new benefit increases and other benefit amounts. Therefore, it is likely that your benefit estimates in the future will differ from those calculated today.</li>\n<li>The Online Calculator works on PCs and Macs with Javascript enabled.</li>\n<li>Some browsers may not allow you to print the table below. </li>\n</ul>\n<p></p>\n\n<div>\nThe Online Calculator temporarily stores information on your local computer while your browser is open. To protect your personal information, you should close your browser after you have finished your estimate.\n</div>\n<p></p>\n\n<div>\n<p>Note: If your birthday is on January 1st, we figure your benefit as if your birthday was in the previous year.</p>\n<p>If you qualify for benefits as a Survivor, your <a>full retirement age for survivors benefits</a> may be different.</p></div>\n\n<div>\n</div></section></article></div></section></main>",
"doc_html_ts": "<main><section><div><h2 sent_id=\"1\" text_id=\"1\">Benefits Planner: Retirement</h2></div></section><section><div><article><section><h3 sent_id=\"2\" text_id=\"2\">Online Calculator (WEP Version)</h3><div tag_id=\"1\"><u sent_id=\"3\" tag_id=\"1\"><u sent_id=\"3\" tag_id=\"1\" text_id=\"3\">The calculator shown below allows you to estimate your Social Security benefit .</u></u><u sent_id=\"4\" tag_id=\"1\"><u sent_id=\"4\" tag_id=\"1\" text_id=\"4\">However ,</u><u sent_id=\"4\" tag_id=\"1\" text_id=\"5\">for the most accurate estimates ,</u><u sent_id=\"4\" tag_id=\"1\" text_id=\"6\">use the Detailed Calculator .</u></u></div><div tag_id=\"2\"><u sent_id=\"5\" tag_id=\"2\"><u sent_id=\"5\" tag_id=\"2\" text_id=\"7\">You need to enter all your past earnings , which are shown on your online .</u></u></div><div tag_id=\"3\"><u sent_id=\"6\" tag_id=\"3\"><u sent_id=\"6\" tag_id=\"3\" text_id=\"8\">Please Note:</u></u></div><ul class=\"browser-default\" tag_id=\"3\"><li tag_id=\"3\"><div tag_id=\"3\"><u sent_id=\"9\" tag_id=\"3\"><u sent_id=\"9\" tag_id=\"3\" text_id=\"9\">The Online Calculator is updated periodically * with new benefit increases and other benefit amounts .</u></u><u sent_id=\"10\" tag_id=\"3\"><u sent_id=\"10\" tag_id=\"3\" text_id=\"10\">Therefore ,</u><u sent_id=\"10\" tag_id=\"3\" text_id=\"11\">it is likely that your benefit estimates in the future will differ from those calculated today .</u></u></div></li><li tag_id=\"3\"><u sent_id=\"11\" tag_id=\"3\"><u sent_id=\"11\" tag_id=\"3\" text_id=\"12\">The Online Calculator works on PCs and Macs with Javascript enabled .</u></u></li><li tag_id=\"3\"><u sent_id=\"12\" tag_id=\"3\"><u sent_id=\"12\" tag_id=\"3\" text_id=\"13\">Some browsers may not allow you to print the table below .</u></u></li></ul><div>The Online Calculator temporarily stores information on your local computer while your browser is open. To protect your personal information, you should close your browser after you have finished your estimate.</div><div><div tag_id=\"4\"><u sent_id=\"13\" tag_id=\"4\"><u sent_id=\"13\" tag_id=\"4\" text_id=\"14\">Note:</u></u><u sent_id=\"14\" tag_id=\"4\"><u sent_id=\"14\" tag_id=\"4\" text_id=\"15\">If your birthday is on January 1st ,</u><u sent_id=\"14\" tag_id=\"4\" text_id=\"16\">we figure your benefit as if your birthday was in the previous year .</u></u></div><div tag_id=\"5\"><u sent_id=\"15\" tag_id=\"5\"><u sent_id=\"15\" tag_id=\"5\" text_id=\"17\">If you qualify for benefits as a Survivor ,</u><u sent_id=\"15\" tag_id=\"5\" text_id=\"18\">your full retirement age for survivors benefits may be different .</u></u></div></div></section></article></div></section></main>",
"doc_text": "\n\nBenefits Planner: Retirement \n\n\nOnline Calculator (WEP Version) \nThe calculator shown below allows you to estimate your Social Security benefit. However , for the most accurate estimates , use the Detailed Calculator. You need to enter all your past earnings, which are shown on your online. Please Note: The Online Calculator is updated periodically * with new benefit increases and other benefit amounts. Therefore , it is likely that your benefit estimates in the future will differ from those calculated today. The Online Calculator works on PCs and Macs with Javascript enabled. Some browsers may not allow you to print the table below. Note: If your birthday is on January 1st , we figure your benefit as if your birthday was in the previous year. If you qualify for benefits as a Survivor , your full retirement age for survivors benefits may be different. ",
"title": "Benefits Planner: Retirement | Online Calculator (WEP Version)#1",
"spans": [
{
"end_sec": 32,
"end_sp": 32,
"id_sec": "t_0",
"id_sp": "1",
"parent_titles": "[]",
"start_sec": 0,
"start_sp": 0,
"tag": "h2",
"text_sec": "\n\nBenefits Planner: Retirement \n",
"text_sp": "\n\nBenefits Planner: Retirement \n",
"title": "Benefits Planner: Retirement"
},
{
"end_sec": 67,
"end_sp": 67,
"id_sec": "t_1",
"id_sp": "2",
"parent_titles": "[{'id_sp': '1', 'text': 'Benefits Planner: Retirement', 'level': 'h2'}]",
"start_sec": 32,
"start_sp": 32,
"tag": "h3",
"text_sec": "\n\nOnline Calculator (WEP Version) \n",
"text_sp": "\n\nOnline Calculator (WEP Version) \n",
"title": "Online Calculator (WEP Version)"
},
{
"end_sec": 220,
"end_sp": 147,
"id_sec": "1",
"id_sp": "3",
"parent_titles": "[]",
"start_sec": 67,
"start_sp": 67,
"tag": "u",
"text_sec": "The calculator shown below allows you to estimate your Social Security benefit. However , for the most accurate estimates , use the Detailed Calculator. ",
"text_sp": "The calculator shown below allows you to estimate your Social Security benefit. ",
"title": "Online Calculator (WEP Version)"
}
]
}
```
Sample data instance for `doc2dial_rc` :
```
{
"id": "78f72b08b43791a4a70363fe62b8de08_1",
"is_impossible": false,
"question": "Hello, I want to know about the retirement plan.",
"answers": {
"answer_start": [
0
],
"text": [
"\n\nBenefits Planner: Retirement \n\n\nOnline Calculator (WEP Version) \n"
]
},
"context": "\n\nBenefits Planner: Retirement \n\n\nOnline Calculator (WEP Version) \nThe calculator shown below allows you to estimate your Social Security benefit. However , for the most accurate estimates , use the Detailed Calculator. You need to enter all your past earnings, which are shown on your online. Please Note: The Online Calculator is updated periodically * with new benefit increases and other benefit amounts. Therefore , it is likely that your benefit estimates in the future will differ from those calculated today. The Online Calculator works on PCs and Macs with Javascript enabled. Some browsers may not allow you to print the table below. Note: If your birthday is on January 1st , we figure your benefit as if your birthday was in the previous year. If you qualify for benefits as a Survivor , your full retirement age for survivors benefits may be different. ",
"title": "Benefits Planner: Retirement | Online Calculator (WEP Version)#1_0",
"domain": "ssa"
}
```
### Data Fields
For `document_domain`,
- `doc_id`: the ID of a document;
- `title`: the title of the document;
- `domain`: the domain of the document;
- `doc_text`: the text content of the document (without HTML markups);
- `doc_html_ts`: the document content with HTML markups and the annotated spans that are indicated by `text_id` attribute, which corresponds to `id_sp`.
- `doc_html_raw`: the document content with HTML markups and without span annotations.
- `spans`: key-value pairs of all spans in the document, with `id_sp` as key. Each span includes the following,
- `id_sp`: the id of a span as noted by `text_id` in `doc_html_ts`;
- `start_sp`/ `end_sp`: the start/end position of the text span in `doc_text`;
- `text_sp`: the text content of the span.
- `id_sec`: the id of the (sub)section (e.g. `<p>`) or title (`<h2>`) that contains the span.
- `start_sec` / `end_sec`: the start/end position of the (sub)section in `doc_text`.
- `text_sec`: the text of the (sub)section.
- `title`: the title of the (sub)section.
- `parent_titles`: the parent titles of the `title`.
For `dialogue_domain`:
- `dial_id`: the ID of a dialogue;
- `doc_id`: the ID of the associated document;
- `domain`: domain of the document;
- `turns`: a list of dialogue turns. Each turn includes,
- `turn_id`: the time order of the turn;
- `role`: either "agent" or "user";
- `da`: dialogue act;
- `references`: the grounding span (`id_sp`) in the associated document. If a turn is an irrelevant turn, i.e., `da` ends with "ood", `reference` is empty. **Note** that spans with labels "*precondition*"/"*solution*" are the actual grounding spans. Spans with label "*reference*" are the related titles or contextual reference, which is used for the purpose of describing a dialogue scene better to crowd contributors.
- `utterance`: the human-generated utterance based on the dialogue scene.
For `doc2dial_rc`, this conforms to [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) data format. For how to load Doc2Dial data for reading comprehension task, please refer [here](https://github.com/doc2dial/sharedtask-dialdoc2021).
- `id`: the ID of a QA instance;
- `question`: user query;
- `answers`: the answers that are grounded in the associated document;
- `answer_start`: the start position of the grounding span in the associated document (`context`);
- `text`: the text content of the grounding span;
- `title`: the title of the associated document;
- `domain`: the domain of the associated document;
- `context`: the text content of the associated document (without HTML markups).
### Data Splits
Training & dev split for dialogue domain
Training split only for document domain
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Song Feng, Hui Wan, Chulaka Gunasekara, Siva Sankalp Patel,Sachindra Joshi. Luis A. Lastras
### Licensing Information
Creative Commons Attribution 3.0 Unported
### Citation Information
@inproceedings{feng-etal-2020-doc2dial,
title = "doc2dial: A Goal-Oriented Document-Grounded Dialogue Dataset",
author = "Feng, Song and Wan, Hui and Gunasekara, Chulaka and Patel, Siva and Joshi, Sachindra and Lastras, Luis",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-main.652",
}
### Contributions
Thanks to [@songfeng](https://github.com/songfeng), [@KMFODA](https://github.com/KMFODA) for adding this dataset. |
webis/tldr-17 | 2023-06-05T12:48:30.000Z | [
"task_categories:summarization",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"reddit-posts-summarization",
"region:us"
] | webis | This corpus contains preprocessed posts from the Reddit dataset.
The dataset consists of 3,848,330 posts with an average length of 270 words for content,
and 28 words for the summary.
Features includes strings: author, body, normalizedBody, content, summary, subreddit, subreddit_id.
Content is used as document and summary is used as summary. | @inproceedings{volske-etal-2017-tl,
title = {TL;DR: Mining {R}eddit to Learn Automatic Summarization},
author = {V{\"o}lske, Michael and Potthast, Martin and Syed, Shahbaz and Stein, Benno},
booktitle = {Proceedings of the Workshop on New Frontiers in Summarization},
month = {sep},
year = {2017},
address = {Copenhagen, Denmark},
publisher = {Association for Computational Linguistics},
url = {https://www.aclweb.org/anthology/W17-4508},
doi = {10.18653/v1/W17-4508},
pages = {59--63},
abstract = {Recent advances in automatic text summarization have used deep neural networks to generate high-quality abstractive summaries, but the performance of these models strongly depends on large amounts of suitable training data. We propose a new method for mining social media for author-provided summaries, taking advantage of the common practice of appending a {``}TL;DR{''} to long posts. A case study using a large Reddit crawl yields the Webis-TLDR-17 dataset, complementing existing corpora primarily from the news genre. Our technique is likely applicable to other social media sites and general web crawls.},
} | null | 25 | 285 | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
paperswithcode_id: webis-tldr-17-corpus
pretty_name: Reddit Webis-TLDR-17
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- summarization
task_ids: []
train-eval-index:
- config: default
task: summarization
task_id: summarization
splits:
train_split: train
col_mapping:
content: text
summary: target
metrics:
- type: rouge
name: Rouge
tags:
- reddit-posts-summarization
dataset_info:
features:
- name: author
dtype: string
- name: body
dtype: string
- name: normalizedBody
dtype: string
- name: subreddit
dtype: string
- name: subreddit_id
dtype: string
- name: id
dtype: string
- name: content
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 18940542951
num_examples: 3848330
download_size: 3141854161
dataset_size: 18940542951
---
# Dataset Card for Reddit Webis-TLDR-17
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://webis.de/data/webis-tldr-17.html](https://webis.de/data/webis-tldr-17.html)
- **Repository:** [https://github.com/webis-de/webis-tldr-17-corpus](https://github.com/webis-de/webis-tldr-17-corpus)
- **Paper:** [https://aclanthology.org/W17-4508]
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 3.14 GB
- **Size of the generated dataset:** 18.94 GB
- **Total amount of disk used:** 22.08 GB
### Dataset Summary
This corpus contains preprocessed posts from the Reddit dataset (Webis-TLDR-17).
The dataset consists of 3,848,330 posts with an average length of 270 words for content,
and 28 words for the summary.
Features includes strings: author, body, normalizedBody, content, summary, subreddit, subreddit_id.
Content is used as document and summary is used as summary.
### Supported Tasks and Leaderboards
Summarization (abstractive)
Known ROUGE scores achieved for the Webis-TLDR-17:
| Model | ROUGE-1 | ROUGE-2 | ROUGE-L | Paper/Source |
|-------|-------|-------|-------|------:|
| Transformer + Copy (Gehrmann et al., 2019) | 22 | 6 | 17 | Generating Summaries with Finetuned Language Models |
| Unified VAE + PGN (Choi et al., 2019) | 19 | 4 | 15 | VAE-PGN based Abstractive Model in Multi-stage Architecture for Text Summarization |
(Source: https://github.com/sebastianruder/NLP-progress/blob/master/english/summarization.md)
### Languages
English
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 3.14 GB
- **Size of the generated dataset:** 18.94 GB
- **Total amount of disk used:** 22.08 GB
An example of 'train' looks as follows.
```
{
"author": "me",
"body": "<>",
"content": "input document.",
"id": "1",
"normalizedBody": "",
"subreddit": "machinelearning",
"subreddit_id": "2",
"summary": "output summary."
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `author`: a `string` feature.
- `body`: a `string` feature.
- `normalizedBody`: a `string` feature.
- `subreddit`: a `string` feature.
- `subreddit_id`: a `string` feature.
- `id`: a `string` feature.
- `content`: a `string` feature.
- `summary`: a `string` feature.
### Data Splits
| name | train |
|-------|------:|
|default|3848330|
This corpus does not contain a separate test set. Thus it is up to the users to divide the corpus into appropriate training, validation and test sets.
## Dataset Creation
### Curation Rationale
In the scope of the task of absractive summarization the creators of the Webis-TLDR-17 propose mining social media for author-provided summaries and taking advantage of the common practice of appending a "TL;DR" to long posts. A large Reddit crawl was used to yield the Webis-TLDR-17 corpus. This dataset intends to complement the existing summarization corpora primarily from the news genre.
### Source Data
Reddit subreddits posts (submissions & comments) containing "TL;DR" from 2006 to 2016. Multiple subreddits are included.
#### Initial Data Collection and Normalization
Initial data: a set of 286 million submissions and 1.6 billion comments posted to Reddit between 2006 and 2016.
Then a five-step pipeline of consecutive filtering steps was applied.
#### Who are the source language producers?
The contents of the dataset are produced by human authors, bot-generated content was eliminated by filtering out all bot accounts with the help of an extensive list provided by the Reddit community, as well as manual inspection of cases where the user name contained the substring "bot."
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
This dataset has been created to serve as a source of large-scale summarization training data. It is primarily geared towards the automatic abstractive summarization task, that can be considered one of the most challenging variants of automatic summarization. It also aims to tackle the lack of genre diversity in the summarization datasets (most are news-related).
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
Reddit users write TL;DRs with various intentions, such as providing a “true” summary, asking questions or for help, or forming judgments and conclusions. As noted in the paper introducing the dataset, although the first kind of TL;DR posts are most important for training summarization models, yet, the latter allow for various alternative summarization-related tasks.
Although filtering was performed abusive language maybe still be present.
## Additional Information
### Dataset Curators
Michael Völske, Martin Potthast, Shahbaz Syed, Benno Stein
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{volske-etal-2017-tl,
title = "{TL};{DR}: Mining {R}eddit to Learn Automatic Summarization",
author = {V{"o}lske, Michael and
Potthast, Martin and
Syed, Shahbaz and
Stein, Benno},
booktitle = "Proceedings of the Workshop on New Frontiers in Summarization",
month = sep,
year = "2017",
address = "Copenhagen, Denmark",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/W17-4508",
doi = "10.18653/v1/W17-4508",
pages = "59--63",
abstract = "Recent advances in automatic text summarization have used deep neural networks to generate high-quality abstractive summaries, but the performance of these models strongly depends on large amounts of suitable training data. We propose a new method for mining social media for author-provided summaries, taking advantage of the common practice of appending a {``}TL;DR{''} to long posts. A case study using a large Reddit crawl yields the Webis-TLDR-17 dataset, complementing existing corpora primarily from the news genre. Our technique is likely applicable to other social media sites and general web crawls.",
}
```
### Contributions
Thanks to [@mariamabarham](https://github.com/mariamabarham), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
okite97/news-data | 2022-08-25T10:36:01.000Z | [
"task_categories:text-classification",
"task_ids:topic-classification",
"task_ids:multi-class-classification",
"annotations_creators:other",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:afl-3.0",
"region:us"
] | okite97 | null | null | null | 2 | 285 | ---
annotations_creators:
- other
language:
- 'en'
language_creators:
- found
license:
- afl-3.0
multilinguality:
- monolingual
pretty_name: News Dataset
size_categories:
- 1K<n<10K
source_datasets:
- original
tags: []
task_categories:
- text-classification
task_ids:
- topic-classification
- multi-class-classification
---
# Dataset Card for news-data
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Dataset Curators](#dataset-curators)
### Dataset Summary
The News Dataset is an English-language dataset containing just over 4k unique news articles scrapped from AriseTv- One of the most popular news television in Nigeria.
### Supported Tasks and Leaderboards
It supports news article classification into different categories.
### Languages
English
## Dataset Structure
### Data Instances
'''
{'Title': 'Nigeria: APC Yet to Zone Party Positions Ahead of Convention'
'Excerpt': 'The leadership of the All Progressives Congress (APC), has denied reports that it had zoned some party positions ahead of'
'Category': 'politics'
'labels': 2}
'''
### Data Fields
* Title: a string containing the title of a news title as shown
* Excerpt: a string containing a short extract from the body of the news
* Category: a string that tells the category of an example (string label)
* labels: integer telling the class of an example (label)
### Data Splits
| Dataset Split | Number of instances in split |
| ----------- | ----------- |
| Train | 4,594 |
| Paragraph | 811 |
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
The code for the dataset creation at *https://github.com/chimaobi-okite/NLP-Projects-Competitions/blob/main/NewsCategorization/Data/NewsDataScraping.ipynb*. The examples were scrapped from
<https://www.arise.tv/>
### Annotations
#### Annotation process
The annotation is based on the news category in the [arisetv](https://www.arise.tv) website
#### Who are the annotators?
Journalists at arisetv
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop models that can classify news articles into categories.
This task is useful for efficiently presenting information given a large quantity of text. It should be made clear that any summarizations produced by models trained on this dataset are reflective of the language used in the articles, but are in fact automatically generated.
### Discussion of Biases
This data is biased towards news happenings in Nigeria but the model built using it can as well classify news from other parts of the world
with a slight degradation in performance.
### Dataset Curators
The dataset is created by people at arise but was scrapped by [@github-chimaobi-okite](https://github.com/chimaobi-okite/)
|
YeungNLP/firefly-train-1.1M | 2023-04-10T06:15:28.000Z | [
"region:us"
] | YeungNLP | null | null | null | 173 | 285 | 本数据应用于项目:[Firefly(流萤): 中文对话式大语言模型](https://github.com/yangjianxin1/Firefly) ,训练后得到的模型[firefly-1b4](https://huggingface.co/YeungNLP/firefly-1b4)
如果您觉得此数据集对您有帮助,请like此数据集并在Github项目中star我们。
我们收集了23个常见的中文数据集,对于每个任务,由人工书写若干种指令模板,保证数据的高质量与丰富度,数据量为115万 。数据分布如下图所示:

每条数据的格式如下,包含任务类型、输入、目标输出:
```json
{
"kind": "ClassicalChinese",
"input": "将下面句子翻译成现代文:\n石中央又生一树,高百余尺,条干偃阴为五色,翠叶如盘,花径尺余,色深碧,蕊深红,异香成烟,著物霏霏。",
"target": "大石的中央长着一棵树,一百多尺高,枝干是彩色的,树叶有盘子那样大,花的直径有一尺宽,花瓣深蓝色,花中飘出奇异的香气笼罩着周围,如烟似雾。"
}
```
训练数据集的token长度分布如下图所示,绝大部分数据的长度都小于600:
 |
Skelebor/book_titles_and_descriptions_en_clean | 2022-06-28T11:23:46.000Z | [
"region:us"
] | Skelebor | null | null | null | 1 | 284 | Entry not found |
zxvix/c4_biomedicalacademic_2 | 2023-09-12T06:21:01.000Z | [
"region:us"
] | zxvix | null | null | null | 0 | 284 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: timestamp
dtype: timestamp[s]
- name: url
dtype: string
- name: original_text
dtype: string
splits:
- name: test
num_bytes: 3027569.0
num_examples: 989
download_size: 1767632
dataset_size: 3027569.0
---
# Dataset Card for "c4_biomedicalacademic_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
iara-project/news-articles-ptbr-dataset | 2023-09-21T03:12:30.000Z | [
"region:us"
] | iara-project | null | null | null | 1 | 284 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: title
dtype: string
- name: text
dtype: string
- name: date
dtype: string
- name: category
dtype: string
- name: category_natural_language
dtype: string
- name: link
dtype: string
splits:
- name: train
num_bytes: 628987914
num_examples: 176114
- name: test
num_bytes: 627415372
num_examples: 176114
download_size: 770300096
dataset_size: 1256403286
---
# Dataset Card for "news-articles-ptbr-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
glaiveai/glaive-code-assistant | 2023-09-27T22:51:02.000Z | [
"size_categories:100K<n<1M",
"license:apache-2.0",
"region:us"
] | glaiveai | null | null | null | 28 | 284 | ---
license: apache-2.0
size_categories:
- 100K<n<1M
---
# Glaive-code-assistant
Glaive-code-assistant is a dataset of ~140k code problems and solutions generated using Glaive’s synthetic data generation platform.
The data is intended to be used to make models act as code assistants, and so the data is structured in a QA format where the questions are worded similar to how real users will ask code related questions.
The data has ~60% python samples.
To report any problems or suggestions in the data, join the [Glaive discord](https://discord.gg/fjQ4uf3yWD) |
Dong237/empathetic_dialogues_cleaned | 2023-09-22T14:10:27.000Z | [
"region:us"
] | Dong237 | null | null | null | 0 | 284 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: conv_id
dtype: string
- name: context
dtype: string
- name: prompt
dtype: string
- name: utterance
dtype: string
splits:
- name: train
num_bytes: 7970106
num_examples: 17780
- name: validation
num_bytes: 1343248
num_examples: 2758
- name: test
num_bytes: 1334052
num_examples: 2540
download_size: 6149453
dataset_size: 10647406
---
# Dataset Card for "repo_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
riddle_sense | 2022-11-18T21:42:04.000Z | [
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:other",
"region:us"
] | null | Answering such a riddle-style question is a challenging cognitive process, in that it requires
complex commonsense reasoning abilities, an understanding of figurative language, and counterfactual reasoning
skills, which are all important abilities for advanced natural language understanding (NLU). However,
there is currently no dedicated datasets aiming to test these abilities. Herein, we present RiddleSense,
a new multiple-choice question answering task, which comes with the first large dataset (5.7k examples) for answering
riddle-style commonsense questions. We systematically evaluate a wide range of models over the challenge,
and point out that there is a large gap between the best-supervised model and human performance — suggesting
intriguing future research in the direction of higher-order commonsense reasoning and linguistic creativity towards
building advanced NLU systems. | @InProceedings{lin-etal-2021-riddlesense,
title={RiddleSense: Reasoning about Riddle Questions Featuring Linguistic Creativity and Commonsense Knowledge},
author={Lin, Bill Yuchen and Wu, Ziyi and Yang, Yichi and Lee, Dong-Ho and Ren, Xiang},
journal={Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics (ACL-IJCNLP 2021): Findings},
year={2021}
} | null | 12 | 283 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- other
multilinguality:
- monolingual
pretty_name: RiddleSense
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- multiple-choice-qa
dataset_info:
features:
- name: answerKey
dtype: string
- name: question
dtype: string
- name: choices
sequence:
- name: label
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 720715
num_examples: 3510
- name: validation
num_bytes: 208276
num_examples: 1021
- name: test
num_bytes: 212790
num_examples: 1184
download_size: 2083122
dataset_size: 1141781
---
# Dataset Card for RiddleSense
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://inklab.usc.edu/RiddleSense/
- **Repository:** https://github.com/INK-USC/RiddleSense/
- **Paper:** https://inklab.usc.edu/RiddleSense/riddlesense_acl21_paper.pdf
- **Leaderboard:** https://inklab.usc.edu/RiddleSense/#leaderboard
- **Point of Contact:** [Yuchen Lin](yuchen.lin@usc.edu)
### Dataset Summary
Answering such a riddle-style question is a challenging cognitive process, in that it requires
complex commonsense reasoning abilities, an understanding of figurative language, and counterfactual reasoning
skills, which are all important abilities for advanced natural language understanding (NLU). However,
there is currently no dedicated datasets aiming to test these abilities. Herein, we present RiddleSense,
a new multiple-choice question answering task, which comes with the first large dataset (5.7k examples) for answering
riddle-style commonsense questions. We systematically evaluate a wide range of models over the challenge,
and point out that there is a large gap between the best-supervised model and human performance suggesting
intriguing future research in the direction of higher-order commonsense reasoning and linguistic creativity towards
building advanced NLU systems.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
English
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
{
"answerKey": "E",
"choices": {
"label": ["A", "B", "C", "D", "E"],
"text": ["throw", "bit", "gallow", "mouse", "hole"]
},
"question": "A man is incarcerated in prison, and as his punishment he has to carry a one tonne bag of sand backwards and forwards across a field the size of a football pitch. What is the one thing he can put in it to make it lighter?"
}
```
### Data Fields
Data Fields
The data fields are the same among all splits.
default
- `answerKey`: a string feature.
- `question`: a string feature.
- `choices`: a dictionary feature containing:
- `label`: a string feature.
- `text`: a string feature.
### Data Splits
|name| train| validation| test|
|---|---|---|---|
|default| 3510| 1021| 1184|
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
Dataset provided for research purposes only. Please check dataset license for additional information.
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
The copyright of RiddleSense dataset is consistent with the terms of use of the fan websites and the intellectual property and privacy rights of the original sources. All of our riddles and answers are from fan websites that can be accessed freely. The website owners state that you may print and download material from the sites solely for non-commercial use provided that we agree not to change or delete any copyright or proprietary notices from the materials. The dataset users must agree that they will only use the dataset for research purposes before they can access the both the riddles and our annotations. We do not vouch for the potential bias or fairness issue that might exist within the riddles. You do not have the right to redistribute them. Again, you must not use this dataset for any commercial purposes.
### Citation Information
```
@InProceedings{lin-etal-2021-riddlesense,
title={RiddleSense: Reasoning about Riddle Questions Featuring Linguistic Creativity and Commonsense Knowledge},
author={Lin, Bill Yuchen and Wu, Ziyi and Yang, Yichi and Lee, Dong-Ho and Ren, Xiang},
journal={Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics (ACL-IJCNLP 2021): Findings},
year={2021}
}
```
### Contributions
Thanks to [@ziyiwu9494](https://github.com/ziyiwu9494) for adding this dataset. |
mteb/biorxiv-clustering-p2p | 2022-09-27T19:15:27.000Z | [
"language:en",
"region:us"
] | mteb | null | null | null | 0 | 283 | ---
language:
- en
--- |
Chris1/cityscapes_segmentation | 2022-11-03T19:43:00.000Z | [
"region:us"
] | Chris1 | null | null | null | 1 | 282 | Entry not found |
DynamicSuperb/SpeechDetection_LibriSpeech-TestClean | 2023-07-12T05:42:02.000Z | [
"region:us"
] | DynamicSuperb | null | null | null | 0 | 282 | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype: audio
- name: instruction
dtype: string
- name: label
dtype: string
splits:
- name: test
num_bytes: 358157508.64
num_examples: 2620
download_size: 349425014
dataset_size: 358157508.64
---
# Dataset Card for "speechDetection_LibrispeechTestClean"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Clinton/Text-to-sql-v1 | 2023-07-26T11:09:22.000Z | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:en",
"license:apache-2.0",
"SQL",
"region:us"
] | Clinton | null | null | null | 15 | 281 | ---
license: apache-2.0
task_categories:
- text-generation
language:
- en
tags:
- SQL
size_categories:
- 100K<n<1M
--- |
multi_eurlex | 2023-06-14T13:34:30.000Z | [
"task_categories:text-classification",
"task_ids:multi-label-classification",
"task_ids:topic-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:bg",
"language:cs",
"language:da",
"language:de",
"language:el",
"language:en",
"language:es",
"language:et",
"language:fi",
"language:fr",
"language:hr",
"language:hu",
"language:it",
"language:lt",
"language:lv",
"language:mt",
"language:nl",
"language:pl",
"language:pt",
"language:ro",
"language:sk",
"language:sl",
"language:sv",
"license:cc-by-sa-4.0",
"arxiv:2109.00904",
"region:us"
] | null | MultiEURLEX comprises 65k EU laws in 23 official EU languages (some low-ish resource).
Each EU law has been annotated with EUROVOC concepts (labels) by the Publication Office of EU.
As with the English EURLEX, the goal is to predict the relevant EUROVOC concepts (labels);
this is multi-label classification task (given the text, predict multiple labels). | @InProceedings{chalkidis-etal-2021-multieurlex,
author = {Chalkidis, Ilias
and Fergadiotis, Manos
and Androutsopoulos, Ion},
title = {MultiEURLEX -- A multi-lingual and multi-label legal document
classification dataset for zero-shot cross-lingual transfer},
booktitle = {Proceedings of the 2021 Conference on Empirical Methods
in Natural Language Processing},
year = {2021},
publisher = {Association for Computational Linguistics},
location = {Punta Cana, Dominican Republic},
} | null | 21 | 280 | ---
annotations_creators:
- found
language_creators:
- found
language:
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- hr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sk
- sl
- sv
license:
- cc-by-sa-4.0
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-label-classification
- topic-classification
pretty_name: MultiEURLEX
dataset_info:
- config_name: en
features:
- name: celex_id
dtype: string
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': '100149'
'1': '100160'
'2': '100148'
'3': '100147'
'4': '100152'
'5': '100143'
'6': '100156'
'7': '100158'
'8': '100154'
'9': '100153'
'10': '100142'
'11': '100145'
'12': '100150'
'13': '100162'
'14': '100159'
'15': '100144'
'16': '100151'
'17': '100157'
'18': '100161'
'19': '100146'
'20': '100155'
splits:
- name: train
num_bytes: 389250183
num_examples: 55000
- name: test
num_bytes: 58966963
num_examples: 5000
- name: validation
num_bytes: 41516165
num_examples: 5000
download_size: 2770050147
dataset_size: 489733311
- config_name: da
features:
- name: celex_id
dtype: string
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': '100149'
'1': '100160'
'2': '100148'
'3': '100147'
'4': '100152'
'5': '100143'
'6': '100156'
'7': '100158'
'8': '100154'
'9': '100153'
'10': '100142'
'11': '100145'
'12': '100150'
'13': '100162'
'14': '100159'
'15': '100144'
'16': '100151'
'17': '100157'
'18': '100161'
'19': '100146'
'20': '100155'
splits:
- name: train
num_bytes: 395774777
num_examples: 55000
- name: test
num_bytes: 60343696
num_examples: 5000
- name: validation
num_bytes: 42366390
num_examples: 5000
download_size: 2770050147
dataset_size: 498484863
- config_name: de
features:
- name: celex_id
dtype: string
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': '100149'
'1': '100160'
'2': '100148'
'3': '100147'
'4': '100152'
'5': '100143'
'6': '100156'
'7': '100158'
'8': '100154'
'9': '100153'
'10': '100142'
'11': '100145'
'12': '100150'
'13': '100162'
'14': '100159'
'15': '100144'
'16': '100151'
'17': '100157'
'18': '100161'
'19': '100146'
'20': '100155'
splits:
- name: train
num_bytes: 425489905
num_examples: 55000
- name: test
num_bytes: 65739074
num_examples: 5000
- name: validation
num_bytes: 46079574
num_examples: 5000
download_size: 2770050147
dataset_size: 537308553
- config_name: nl
features:
- name: celex_id
dtype: string
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': '100149'
'1': '100160'
'2': '100148'
'3': '100147'
'4': '100152'
'5': '100143'
'6': '100156'
'7': '100158'
'8': '100154'
'9': '100153'
'10': '100142'
'11': '100145'
'12': '100150'
'13': '100162'
'14': '100159'
'15': '100144'
'16': '100151'
'17': '100157'
'18': '100161'
'19': '100146'
'20': '100155'
splits:
- name: train
num_bytes: 430232783
num_examples: 55000
- name: test
num_bytes: 64728034
num_examples: 5000
- name: validation
num_bytes: 45452550
num_examples: 5000
download_size: 2770050147
dataset_size: 540413367
- config_name: sv
features:
- name: celex_id
dtype: string
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': '100149'
'1': '100160'
'2': '100148'
'3': '100147'
'4': '100152'
'5': '100143'
'6': '100156'
'7': '100158'
'8': '100154'
'9': '100153'
'10': '100142'
'11': '100145'
'12': '100150'
'13': '100162'
'14': '100159'
'15': '100144'
'16': '100151'
'17': '100157'
'18': '100161'
'19': '100146'
'20': '100155'
splits:
- name: train
num_bytes: 329071297
num_examples: 42490
- name: test
num_bytes: 60602026
num_examples: 5000
- name: validation
num_bytes: 42766067
num_examples: 5000
download_size: 2770050147
dataset_size: 432439390
- config_name: bg
features:
- name: celex_id
dtype: string
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': '100149'
'1': '100160'
'2': '100148'
'3': '100147'
'4': '100152'
'5': '100143'
'6': '100156'
'7': '100158'
'8': '100154'
'9': '100153'
'10': '100142'
'11': '100145'
'12': '100150'
'13': '100162'
'14': '100159'
'15': '100144'
'16': '100151'
'17': '100157'
'18': '100161'
'19': '100146'
'20': '100155'
splits:
- name: train
num_bytes: 273160256
num_examples: 15986
- name: test
num_bytes: 109874769
num_examples: 5000
- name: validation
num_bytes: 76892281
num_examples: 5000
download_size: 2770050147
dataset_size: 459927306
- config_name: cs
features:
- name: celex_id
dtype: string
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': '100149'
'1': '100160'
'2': '100148'
'3': '100147'
'4': '100152'
'5': '100143'
'6': '100156'
'7': '100158'
'8': '100154'
'9': '100153'
'10': '100142'
'11': '100145'
'12': '100150'
'13': '100162'
'14': '100159'
'15': '100144'
'16': '100151'
'17': '100157'
'18': '100161'
'19': '100146'
'20': '100155'
splits:
- name: train
num_bytes: 189826410
num_examples: 23187
- name: test
num_bytes: 60702814
num_examples: 5000
- name: validation
num_bytes: 42764243
num_examples: 5000
download_size: 2770050147
dataset_size: 293293467
- config_name: hr
features:
- name: celex_id
dtype: string
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': '100149'
'1': '100160'
'2': '100148'
'3': '100147'
'4': '100152'
'5': '100143'
'6': '100156'
'7': '100158'
'8': '100154'
'9': '100153'
'10': '100142'
'11': '100145'
'12': '100150'
'13': '100162'
'14': '100159'
'15': '100144'
'16': '100151'
'17': '100157'
'18': '100161'
'19': '100146'
'20': '100155'
splits:
- name: train
num_bytes: 80808173
num_examples: 7944
- name: test
num_bytes: 56790830
num_examples: 5000
- name: validation
num_bytes: 23881832
num_examples: 2500
download_size: 2770050147
dataset_size: 161480835
- config_name: pl
features:
- name: celex_id
dtype: string
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': '100149'
'1': '100160'
'2': '100148'
'3': '100147'
'4': '100152'
'5': '100143'
'6': '100156'
'7': '100158'
'8': '100154'
'9': '100153'
'10': '100142'
'11': '100145'
'12': '100150'
'13': '100162'
'14': '100159'
'15': '100144'
'16': '100151'
'17': '100157'
'18': '100161'
'19': '100146'
'20': '100155'
splits:
- name: train
num_bytes: 202211478
num_examples: 23197
- name: test
num_bytes: 64654979
num_examples: 5000
- name: validation
num_bytes: 45545517
num_examples: 5000
download_size: 2770050147
dataset_size: 312411974
- config_name: sk
features:
- name: celex_id
dtype: string
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': '100149'
'1': '100160'
'2': '100148'
'3': '100147'
'4': '100152'
'5': '100143'
'6': '100156'
'7': '100158'
'8': '100154'
'9': '100153'
'10': '100142'
'11': '100145'
'12': '100150'
'13': '100162'
'14': '100159'
'15': '100144'
'16': '100151'
'17': '100157'
'18': '100161'
'19': '100146'
'20': '100155'
splits:
- name: train
num_bytes: 188126769
num_examples: 22971
- name: test
num_bytes: 60922686
num_examples: 5000
- name: validation
num_bytes: 42786793
num_examples: 5000
download_size: 2770050147
dataset_size: 291836248
- config_name: sl
features:
- name: celex_id
dtype: string
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': '100149'
'1': '100160'
'2': '100148'
'3': '100147'
'4': '100152'
'5': '100143'
'6': '100156'
'7': '100158'
'8': '100154'
'9': '100153'
'10': '100142'
'11': '100145'
'12': '100150'
'13': '100162'
'14': '100159'
'15': '100144'
'16': '100151'
'17': '100157'
'18': '100161'
'19': '100146'
'20': '100155'
splits:
- name: train
num_bytes: 170800933
num_examples: 23184
- name: test
num_bytes: 54552441
num_examples: 5000
- name: validation
num_bytes: 38286422
num_examples: 5000
download_size: 2770050147
dataset_size: 263639796
- config_name: es
features:
- name: celex_id
dtype: string
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': '100149'
'1': '100160'
'2': '100148'
'3': '100147'
'4': '100152'
'5': '100143'
'6': '100156'
'7': '100158'
'8': '100154'
'9': '100153'
'10': '100142'
'11': '100145'
'12': '100150'
'13': '100162'
'14': '100159'
'15': '100144'
'16': '100151'
'17': '100157'
'18': '100161'
'19': '100146'
'20': '100155'
splits:
- name: train
num_bytes: 433955383
num_examples: 52785
- name: test
num_bytes: 66885004
num_examples: 5000
- name: validation
num_bytes: 47178821
num_examples: 5000
download_size: 2770050147
dataset_size: 548019208
- config_name: fr
features:
- name: celex_id
dtype: string
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': '100149'
'1': '100160'
'2': '100148'
'3': '100147'
'4': '100152'
'5': '100143'
'6': '100156'
'7': '100158'
'8': '100154'
'9': '100153'
'10': '100142'
'11': '100145'
'12': '100150'
'13': '100162'
'14': '100159'
'15': '100144'
'16': '100151'
'17': '100157'
'18': '100161'
'19': '100146'
'20': '100155'
splits:
- name: train
num_bytes: 442358905
num_examples: 55000
- name: test
num_bytes: 68520127
num_examples: 5000
- name: validation
num_bytes: 48408938
num_examples: 5000
download_size: 2770050147
dataset_size: 559287970
- config_name: it
features:
- name: celex_id
dtype: string
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': '100149'
'1': '100160'
'2': '100148'
'3': '100147'
'4': '100152'
'5': '100143'
'6': '100156'
'7': '100158'
'8': '100154'
'9': '100153'
'10': '100142'
'11': '100145'
'12': '100150'
'13': '100162'
'14': '100159'
'15': '100144'
'16': '100151'
'17': '100157'
'18': '100161'
'19': '100146'
'20': '100155'
splits:
- name: train
num_bytes: 429495813
num_examples: 55000
- name: test
num_bytes: 64731770
num_examples: 5000
- name: validation
num_bytes: 45886537
num_examples: 5000
download_size: 2770050147
dataset_size: 540114120
- config_name: pt
features:
- name: celex_id
dtype: string
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': '100149'
'1': '100160'
'2': '100148'
'3': '100147'
'4': '100152'
'5': '100143'
'6': '100156'
'7': '100158'
'8': '100154'
'9': '100153'
'10': '100142'
'11': '100145'
'12': '100150'
'13': '100162'
'14': '100159'
'15': '100144'
'16': '100151'
'17': '100157'
'18': '100161'
'19': '100146'
'20': '100155'
splits:
- name: train
num_bytes: 419281927
num_examples: 52370
- name: test
num_bytes: 64771247
num_examples: 5000
- name: validation
num_bytes: 45897231
num_examples: 5000
download_size: 2770050147
dataset_size: 529950405
- config_name: ro
features:
- name: celex_id
dtype: string
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': '100149'
'1': '100160'
'2': '100148'
'3': '100147'
'4': '100152'
'5': '100143'
'6': '100156'
'7': '100158'
'8': '100154'
'9': '100153'
'10': '100142'
'11': '100145'
'12': '100150'
'13': '100162'
'14': '100159'
'15': '100144'
'16': '100151'
'17': '100157'
'18': '100161'
'19': '100146'
'20': '100155'
splits:
- name: train
num_bytes: 164966676
num_examples: 15921
- name: test
num_bytes: 67248472
num_examples: 5000
- name: validation
num_bytes: 46968070
num_examples: 5000
download_size: 2770050147
dataset_size: 279183218
- config_name: et
features:
- name: celex_id
dtype: string
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': '100149'
'1': '100160'
'2': '100148'
'3': '100147'
'4': '100152'
'5': '100143'
'6': '100156'
'7': '100158'
'8': '100154'
'9': '100153'
'10': '100142'
'11': '100145'
'12': '100150'
'13': '100162'
'14': '100159'
'15': '100144'
'16': '100151'
'17': '100157'
'18': '100161'
'19': '100146'
'20': '100155'
splits:
- name: train
num_bytes: 173878703
num_examples: 23126
- name: test
num_bytes: 56535287
num_examples: 5000
- name: validation
num_bytes: 39580866
num_examples: 5000
download_size: 2770050147
dataset_size: 269994856
- config_name: fi
features:
- name: celex_id
dtype: string
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': '100149'
'1': '100160'
'2': '100148'
'3': '100147'
'4': '100152'
'5': '100143'
'6': '100156'
'7': '100158'
'8': '100154'
'9': '100153'
'10': '100142'
'11': '100145'
'12': '100150'
'13': '100162'
'14': '100159'
'15': '100144'
'16': '100151'
'17': '100157'
'18': '100161'
'19': '100146'
'20': '100155'
splits:
- name: train
num_bytes: 336145949
num_examples: 42497
- name: test
num_bytes: 63280920
num_examples: 5000
- name: validation
num_bytes: 44500040
num_examples: 5000
download_size: 2770050147
dataset_size: 443926909
- config_name: hu
features:
- name: celex_id
dtype: string
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': '100149'
'1': '100160'
'2': '100148'
'3': '100147'
'4': '100152'
'5': '100143'
'6': '100156'
'7': '100158'
'8': '100154'
'9': '100153'
'10': '100142'
'11': '100145'
'12': '100150'
'13': '100162'
'14': '100159'
'15': '100144'
'16': '100151'
'17': '100157'
'18': '100161'
'19': '100146'
'20': '100155'
splits:
- name: train
num_bytes: 208805862
num_examples: 22664
- name: test
num_bytes: 68990666
num_examples: 5000
- name: validation
num_bytes: 48101023
num_examples: 5000
download_size: 2770050147
dataset_size: 325897551
- config_name: lt
features:
- name: celex_id
dtype: string
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': '100149'
'1': '100160'
'2': '100148'
'3': '100147'
'4': '100152'
'5': '100143'
'6': '100156'
'7': '100158'
'8': '100154'
'9': '100153'
'10': '100142'
'11': '100145'
'12': '100150'
'13': '100162'
'14': '100159'
'15': '100144'
'16': '100151'
'17': '100157'
'18': '100161'
'19': '100146'
'20': '100155'
splits:
- name: train
num_bytes: 185211691
num_examples: 23188
- name: test
num_bytes: 59484711
num_examples: 5000
- name: validation
num_bytes: 41841024
num_examples: 5000
download_size: 2770050147
dataset_size: 286537426
- config_name: lv
features:
- name: celex_id
dtype: string
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': '100149'
'1': '100160'
'2': '100148'
'3': '100147'
'4': '100152'
'5': '100143'
'6': '100156'
'7': '100158'
'8': '100154'
'9': '100153'
'10': '100142'
'11': '100145'
'12': '100150'
'13': '100162'
'14': '100159'
'15': '100144'
'16': '100151'
'17': '100157'
'18': '100161'
'19': '100146'
'20': '100155'
splits:
- name: train
num_bytes: 186396252
num_examples: 23208
- name: test
num_bytes: 59814093
num_examples: 5000
- name: validation
num_bytes: 42002727
num_examples: 5000
download_size: 2770050147
dataset_size: 288213072
- config_name: el
features:
- name: celex_id
dtype: string
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': '100149'
'1': '100160'
'2': '100148'
'3': '100147'
'4': '100152'
'5': '100143'
'6': '100156'
'7': '100158'
'8': '100154'
'9': '100153'
'10': '100142'
'11': '100145'
'12': '100150'
'13': '100162'
'14': '100159'
'15': '100144'
'16': '100151'
'17': '100157'
'18': '100161'
'19': '100146'
'20': '100155'
splits:
- name: train
num_bytes: 768224743
num_examples: 55000
- name: test
num_bytes: 117209312
num_examples: 5000
- name: validation
num_bytes: 81923366
num_examples: 5000
download_size: 2770050147
dataset_size: 967357421
- config_name: mt
features:
- name: celex_id
dtype: string
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': '100149'
'1': '100160'
'2': '100148'
'3': '100147'
'4': '100152'
'5': '100143'
'6': '100156'
'7': '100158'
'8': '100154'
'9': '100153'
'10': '100142'
'11': '100145'
'12': '100150'
'13': '100162'
'14': '100159'
'15': '100144'
'16': '100151'
'17': '100157'
'18': '100161'
'19': '100146'
'20': '100155'
splits:
- name: train
num_bytes: 179866781
num_examples: 17521
- name: test
num_bytes: 65831230
num_examples: 5000
- name: validation
num_bytes: 46737914
num_examples: 5000
download_size: 2770050147
dataset_size: 292435925
- config_name: all_languages
features:
- name: celex_id
dtype: string
- name: text
dtype:
translation:
languages:
- en
- da
- de
- nl
- sv
- bg
- cs
- hr
- pl
- sk
- sl
- es
- fr
- it
- pt
- ro
- et
- fi
- hu
- lt
- lv
- el
- mt
- name: labels
sequence:
class_label:
names:
'0': '100149'
'1': '100160'
'2': '100148'
'3': '100147'
'4': '100152'
'5': '100143'
'6': '100156'
'7': '100158'
'8': '100154'
'9': '100153'
'10': '100142'
'11': '100145'
'12': '100150'
'13': '100162'
'14': '100159'
'15': '100144'
'16': '100151'
'17': '100157'
'18': '100161'
'19': '100146'
'20': '100155'
splits:
- name: train
num_bytes: 6971500859
num_examples: 55000
- name: test
num_bytes: 1536038431
num_examples: 5000
- name: validation
num_bytes: 1062290624
num_examples: 5000
download_size: 2770050147
dataset_size: 9569829914
---
# Dataset Card for "MultiEURLEX"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/nlpaueb/MultiEURLEX/
- **Paper:** https://arxiv.org/abs/2109.00904
- **Data:** https://doi.org/10.5281/zenodo.5363165
- **Leaderboard:** N/A
- **Point of Contact:** [Ilias Chalkidis](mailto:ilias.chalkidis@di.ku.dk)
### Dataset Summary
**Documents**
MultiEURLEX comprises 65k EU laws in 23 official EU languages. Each EU law has been annotated with EUROVOC concepts (labels) by the Publication Office of EU. Each EUROVOC label ID is associated with a *label descriptor*, e.g., [60, agri-foodstuffs], [6006, plant product], [1115, fruit]. The descriptors are also available in the 23 languages. Chalkidis et al. (2019) published a monolingual (English) version of this dataset, called EUR-LEX, comprising 57k EU laws with the originally assigned gold labels.
**Multi-granular Labeling**
EUROVOC has eight levels of concepts. Each document is assigned one or more concepts (labels). If a document is assigned a concept, the ancestors and descendants of that concept are typically not assigned to the same document. The documents were originally annotated with concepts from levels 3 to 8.
We created three alternative sets of labels per document, by replacing each assigned concept by its ancestor from level 1, 2, or 3, respectively. Thus, we provide four sets of gold labels per document, one for each of the first three levels of the hierarchy, plus the original sparse label assignment. Levels 4 to 8 cannot be used independently, as many documents have gold concepts from the third level; thus many documents will be mislabeled, if we discard level 3.
**Data Split and Concept Drift**
MultiEURLEX is *chronologically* split in training (55k, 1958-2010), development (5k, 2010-2012), test (5k, 2012-2016) subsets, using the English documents. The test subset contains the same 5k documents in all 23 languages. The development subset also contains the same 5k documents in 23 languages, except Croatian. Croatia is the most recent EU member (2013); older laws are gradually translated.
For the official languages of the seven oldest member countries, the same 55k training documents are available; for the other languages, only a subset of the 55k training documents is available.
Compared to EUR-LEX (Chalkidis et al., 2019), MultiEURLEX is not only larger (8k more documents) and multilingual; it is also more challenging, as the chronological split leads to temporal real-world *concept drift* across the training, development, test subsets, i.e., differences in label distribution and phrasing, representing a realistic *temporal generalization* problem (Huang et al., 2019; Lazaridou et al., 2021). Recently, Søgaard et al. (2021) showed this setup is more realistic, as it does not over-estimate real performance, contrary to random splits (Gorman and Bedrick, 2019).
### Supported Tasks and Leaderboards
Similarly to EUR-LEX (Chalkidis et al., 2019), MultiEURLEX can be used for legal topic classification, a multi-label classification task where legal documents need to be assigned concepts (in our case, from EUROVOC) reflecting their topics. Unlike EUR-LEX, however, MultiEURLEX supports labels from three different granularities (EUROVOC levels). More importantly, apart from monolingual (*one-to-one*) experiments, it can be used to study cross-lingual transfer scenarios, including *one-to-many* (systems trained in one language and used in other languages with no training data), and *many-to-one* or *many-to-many* (systems jointly trained in multiple languages and used in one or more other languages).
The dataset is not yet part of an established benchmark.
### Languages
The EU has 24 official languages. When new members join the EU, the set of official languages usually expands, except the languages are already included. MultiEURLEX covers 23 languages from seven language families (Germanic, Romance, Slavic, Uralic, Baltic, Semitic, Hellenic). EU laws are published in all official languages, except Irish, for resource-related reasons (Read more at https://europa.eu/european-union/about-eu/eu-languages_en). This wide coverage makes MultiEURLEX a valuable testbed for cross-lingual transfer. All languages use the Latin script, except for Bulgarian (Cyrillic script) and Greek. Several other languages are also spoken in EU countries. The EU is home to over 60 additional indigenous regional or minority languages, e.g., Basque, Catalan, Frisian, Saami, and Yiddish, among others, spoken by approx. 40 million people, but these additional languages are not considered official (in terms of EU), and EU laws are not translated to them.
## Dataset Structure
### Data Instances
**Multilingual use of the dataset**
When the dataset is used in a multilingual setting selecting the the 'all_languages' flag:
```python
from datasets import load_dataset
dataset = load_dataset('multi_eurlex', 'all_languages')
```
```json
{
"celex_id": "31979D0509",
"text": {"en": "COUNCIL DECISION of 24 May 1979 on financial aid from the Community for the eradication of African swine fever in Spain (79/509/EEC)\nTHE COUNCIL OF THE EUROPEAN COMMUNITIES\nHaving regard to the Treaty establishing the European Economic Community, and in particular Article 43 thereof,\nHaving regard to the proposal from the Commission (1),\nHaving regard to the opinion of the European Parliament (2),\nWhereas the Community should take all appropriate measures to protect itself against the appearance of African swine fever on its territory;\nWhereas to this end the Community has undertaken, and continues to undertake, action designed to contain outbreaks of this type of disease far from its frontiers by helping countries affected to reinforce their preventive measures ; whereas for this purpose Community subsidies have already been granted to Spain;\nWhereas these measures have unquestionably made an effective contribution to the protection of Community livestock, especially through the creation and maintenance of a buffer zone north of the river Ebro;\nWhereas, however, in the opinion of the Spanish authorities themselves, the measures so far implemented must be reinforced if the fundamental objective of eradicating the disease from the entire country is to be achieved;\nWhereas the Spanish authorities have asked the Community to contribute to the expenses necessary for the efficient implementation of a total eradication programme;\nWhereas a favourable response should be given to this request by granting aid to Spain, having regard to the undertaking given by that country to protect the Community against African swine fever and to eliminate completely this disease by the end of a five-year eradication plan;\nWhereas this eradication plan must include certain measures which guarantee the effectiveness of the action taken, and it must be possible to adapt these measures to developments in the situation by means of a procedure establishing close cooperation between the Member States and the Commission;\nWhereas it is necessary to keep the Member States regularly informed as to the progress of the action undertaken,",
"es": "DECISIÓN DEL CONSEJO de 24 de mayo de 1979 sobre ayuda financiera de la Comunidad para la erradicación de la peste porcina africana en España (79/509/CEE)\nEL CONSEJO DE LAS COMUNIDADES EUROPEAS\nVeniendo en cuenta el Tratado constitutivo de la Comunidad Económica Europea y, en particular, Su artículo 43,\n Vista la propuesta de la Comisión (1),\n Visto el dictamen del Parlamento Europeo (2),\nConsiderando que la Comunidad debe tomar todas las medidas adecuadas para protegerse contra la aparición de la peste porcina africana en su territorio;\nConsiderando a tal fin que la Comunidad ha emprendido y sigue llevando a cabo acciones destinadas a contener los brotes de este tipo de enfermedades lejos de sus fronteras, ayudando a los países afectados a reforzar sus medidas preventivas; que a tal efecto ya se han concedido a España subvenciones comunitarias;\nQue estas medidas han contribuido sin duda alguna a la protección de la ganadería comunitaria, especialmente mediante la creación y mantenimiento de una zona tampón al norte del río Ebro;\nConsiderando, no obstante, , a juicio de las propias autoridades españolas, las medidas implementadas hasta ahora deben reforzarse si se quiere alcanzar el objetivo fundamental de erradicar la enfermedad en todo el país;\nConsiderando que las autoridades españolas han pedido a la Comunidad que contribuya a los gastos necesarios para la ejecución eficaz de un programa de erradicación total;\nConsiderando que conviene dar una respuesta favorable a esta solicitud concediendo una ayuda a España, habida cuenta del compromiso asumido por dicho país de proteger a la Comunidad contra la peste porcina africana y de eliminar completamente esta enfermedad al final de un plan de erradicación de cinco años;\nMientras que este plan de erradicación debe incluir e determinadas medidas que garanticen la eficacia de las acciones emprendidas, debiendo ser posible adaptar estas medidas a la evolución de la situación mediante un procedimiento que establezca una estrecha cooperación entre los Estados miembros y la Comisión;\nConsiderando que es necesario mantener el Los Estados miembros informados periódicamente sobre el progreso de las acciones emprendidas.",
"de": "...",
"bg": "..."
},
"labels": [
1,
13,
47
]
}
```
**Monolingual use of the dataset**
When the dataset is used in a monolingual setting selecting the ISO language code for one of the 23 supported languages. For example:
```python
from datasets import load_dataset
dataset = load_dataset('multi_eurlex', 'en')
```
```json
{
"celex_id": "31979D0509",
"text": "COUNCIL DECISION of 24 May 1979 on financial aid from the Community for the eradication of African swine fever in Spain (79/509/EEC)\nTHE COUNCIL OF THE EUROPEAN COMMUNITIES\nHaving regard to the Treaty establishing the European Economic Community, and in particular Article 43 thereof,\nHaving regard to the proposal from the Commission (1),\nHaving regard to the opinion of the European Parliament (2),\nWhereas the Community should take all appropriate measures to protect itself against the appearance of African swine fever on its territory;\nWhereas to this end the Community has undertaken, and continues to undertake, action designed to contain outbreaks of this type of disease far from its frontiers by helping countries affected to reinforce their preventive measures ; whereas for this purpose Community subsidies have already been granted to Spain;\nWhereas these measures have unquestionably made an effective contribution to the protection of Community livestock, especially through the creation and maintenance of a buffer zone north of the river Ebro;\nWhereas, however, in the opinion of the Spanish authorities themselves, the measures so far implemented must be reinforced if the fundamental objective of eradicating the disease from the entire country is to be achieved;\nWhereas the Spanish authorities have asked the Community to contribute to the expenses necessary for the efficient implementation of a total eradication programme;\nWhereas a favourable response should be given to this request by granting aid to Spain, having regard to the undertaking given by that country to protect the Community against African swine fever and to eliminate completely this disease by the end of a five-year eradication plan;\nWhereas this eradication plan must include certain measures which guarantee the effectiveness of the action taken, and it must be possible to adapt these measures to developments in the situation by means of a procedure establishing close cooperation between the Member States and the Commission;\nWhereas it is necessary to keep the Member States regularly informed as to the progress of the action undertaken,",
"labels": [
1,
13,
47
]
}
```
### Data Fields
**Multilingual use of the dataset**
The following data fields are provided for documents (`train`, `dev`, `test`):
`celex_id`: (**str**) The official ID of the document. The CELEX number is the unique identifier for all publications in both Eur-Lex and CELLAR.\
`text`: (dict[**str**]) A dictionary with the 23 languages as keys and the full content of each document as values.\
`labels`: (**List[int]**) The relevant EUROVOC concepts (labels).
**Monolingual use of the dataset**
The following data fields are provided for documents (`train`, `dev`, `test`):
`celex_id`: (**str**) The official ID of the document. The CELEX number is the unique identifier for all publications in both Eur-Lex and CELLAR.\
`text`: (**str**) The full content of each document across languages.\
`labels`: (**List[int]**) The relevant EUROVOC concepts (labels).
If you want to use the descriptors of the EUROVOC concepts, similar to [Chalkidis et al. (2020)](https://aclanthology.org/2020.emnlp-main.607/), please download the relevant JSON file [here](https://raw.githubusercontent.com/nlpaueb/multi-eurlex/master/data/eurovoc_descriptors.json).
Then you may load it and use it:
```python
import json
from datasets import load_dataset
# Load the English part of the dataset
dataset = load_dataset('multi_eurlex', 'en', split='train')
# Load (label_id, descriptor) mapping
with open('./eurovoc_descriptors.json') as jsonl_file:
eurovoc_concepts = json.load(jsonl_file)
# Get feature map info
classlabel = dataset.features["labels"].feature
# Retrieve IDs and descriptors from dataset
for sample in dataset:
print(f'DOCUMENT: {sample["celex_id"]}')
# DOCUMENT: 32006D0213
for label_id in sample['labels']:
print(f'LABEL: id:{label_id}, eurovoc_id: {classlabel.int2str(label_id)}, \
eurovoc_desc:{eurovoc_concepts[classlabel.int2str(label_id)]}')
# LABEL: id: 1, eurovoc_id: '100160', eurovoc_desc: 'industry'
```
### Data Splits
<table>
<tr><td> Language </td> <td> ISO code </td> <td> Member Countries where official </td> <td> EU Speakers [1] </td> <td> Number of Documents [2] </td> </tr>
<tr><td> English </td> <td> <b>en</b> </td> <td> United Kingdom (1973-2020), Ireland (1973), Malta (2004) </td> <td> 13/ 51% </td> <td> 55,000 / 5,000 / 5,000 </td> </tr>
<tr><td> German </td> <td> <b>de</b> </td> <td> Germany (1958), Belgium (1958), Luxembourg (1958) </td> <td> 16/32% </td> <td> 55,000 / 5,000 / 5,000 </td> </tr>
<tr><td> French </td> <td> <b>fr</b> </td> <td> France (1958), Belgium(1958), Luxembourg (1958) </td> <td> 12/26% </td> <td> 55,000 / 5,000 / 5,000 </td> </tr>
<tr><td> Italian </td> <td> <b>it</b> </td> <td> Italy (1958) </td> <td> 13/16% </td> <td> 55,000 / 5,000 / 5,000 </td> </tr>
<tr><td> Spanish </td> <td> <b>es</b> </td> <td> Spain (1986) </td> <td> 8/15% </td> <td> 52,785 / 5,000 / 5,000 </td> </tr>
<tr><td> Polish </td> <td> <b>pl</b> </td> <td> Poland (2004) </td> <td> 8/9% </td> <td> 23,197 / 5,000 / 5,000 </td> </tr>
<tr><td> Romanian </td> <td> <b>ro</b> </td> <td> Romania (2007) </td> <td> 5/5% </td> <td> 15,921 / 5,000 / 5,000 </td> </tr>
<tr><td> Dutch </td> <td> <b>nl</b> </td> <td> Netherlands (1958), Belgium (1958) </td> <td> 4/5% </td> <td> 55,000 / 5,000 / 5,000 </td> </tr>
<tr><td> Greek </td> <td> <b>el</b> </td> <td> Greece (1981), Cyprus (2008) </td> <td> 3/4% </td> <td> 55,000 / 5,000 / 5,000 </td> </tr>
<tr><td> Hungarian </td> <td> <b>hu</b> </td> <td> Hungary (2004) </td> <td> 3/3% </td> <td> 22,664 / 5,000 / 5,000 </td> </tr>
<tr><td> Portuguese </td> <td> <b>pt</b> </td> <td> Portugal (1986) </td> <td> 2/3% </td> <td> 23,188 / 5,000 / 5,000 </td> </tr>
<tr><td> Czech </td> <td> <b>cs</b> </td> <td> Czech Republic (2004) </td> <td> 2/3% </td> <td> 23,187 / 5,000 / 5,000 </td> </tr>
<tr><td> Swedish </td> <td> <b>sv</b> </td> <td> Sweden (1995) </td> <td> 2/3% </td> <td> 42,490 / 5,000 / 5,000 </td> </tr>
<tr><td> Bulgarian </td> <td> <b>bg</b> </td> <td> Bulgaria (2007) </td> <td> 2/2% </td> <td> 15,986 / 5,000 / 5,000 </td> </tr>
<tr><td> Danish </td> <td> <b>da</b> </td> <td> Denmark (1973) </td> <td> 1/1% </td> <td> 55,000 / 5,000 / 5,000 </td> </tr>
<tr><td> Finnish </td> <td> <b>fi</b> </td> <td> Finland (1995) </td> <td> 1/1% </td> <td> 42,497 / 5,000 / 5,000 </td> </tr>
<tr><td> Slovak </td> <td> <b>sk</b> </td> <td> Slovakia (2004) </td> <td> 1/1% </td> <td> 15,986 / 5,000 / 5,000 </td> </tr>
<tr><td> Lithuanian </td> <td> <b>lt</b> </td> <td> Lithuania (2004) </td> <td> 1/1% </td> <td> 23,188 / 5,000 / 5,000 </td> </tr>
<tr><td> Croatian </td> <td> <b>hr</b> </td> <td> Croatia (2013) </td> <td> 1/1% </td> <td> 7,944 / 2,500 / 5,000 </td> </tr>
<tr><td> Slovene </td> <td> <b>sl</b> </td> <td> Slovenia (2004) </td> <td> <1/<1% </td> <td> 23,184 / 5,000 / 5,000 </td> </tr>
<tr><td> Estonian </td> <td> <b>et</b> </td> <td> Estonia (2004) </td> <td> <1/<1% </td> <td> 23,126 / 5,000 / 5,000 </td> </tr>
<tr><td> Latvian </td> <td> <b>lv</b> </td> <td> Latvia (2004) </td> <td> <1/<1% </td> <td> 23,188 / 5,000 / 5,000 </td> </tr>
<tr><td> Maltese </td> <td> <b>mt</b> </td> <td> Malta (2004) </td> <td> <1/<1% </td> <td> 17,521 / 5,000 / 5,000 </td> </tr>
</table>
[1] Native and Total EU speakers percentage (%) \
[2] Training / Development / Test Splits
## Dataset Creation
### Curation Rationale
The dataset was curated by Chalkidis et al. (2021).\
The documents have been annotated by the Publications Office of EU (https://publications.europa.eu/en).
### Source Data
#### Initial Data Collection and Normalization
The original data are available at the EUR-LEX portal (https://eur-lex.europa.eu) in unprocessed formats (HTML, XML, RDF). The documents were downloaded from the EUR-LEX portal in HTML. The relevant EUROVOC concepts were downloaded from the SPARQL endpoint of the Publications Office of EU (http://publications.europa.eu/webapi/rdf/sparql).
We stripped HTML mark-up to provide the documents in plain text format.
We inferred the labels for EUROVOC levels 1--3, by backtracking the EUROVOC hierarchy branches, from the originally assigned labels to their ancestors in levels 1--3, respectively.
#### Who are the source language producers?
The EU has 24 official languages. When new members join the EU, the set of official languages usually expands, except the languages are already included. MultiEURLEX covers 23 languages from seven language families (Germanic, Romance, Slavic, Uralic, Baltic, Semitic, Hellenic). EU laws are published in all official languages, except Irish, for resource-related reasons (Read more at https://europa.eu/european-union/about-eu/eu-languages_en). This wide coverage makes MultiEURLEX a valuable testbed for cross-lingual transfer. All languages use the Latin script, except for Bulgarian (Cyrillic script) and Greek. Several other languages are also spoken in EU countries. The EU is home to over 60 additional indigenous regional or minority languages, e.g., Basque, Catalan, Frisian, Saami, and Yiddish, among others, spoken by approx. 40 million people, but these additional languages are not considered official (in terms of EU), and EU laws are not translated to them.
### Annotations
#### Annotation process
All the documents of the dataset have been annotated by the Publications Office of EU (https://publications.europa.eu/en) with multiple concepts from EUROVOC (http://eurovoc.europa.eu/). EUROVOC has eight levels of concepts. Each document is assigned one or more concepts (labels). If a document is assigned a concept, the ancestors and descendants of that concept are typically not assigned to the same document. The documents were originally annotated with concepts from levels 3 to 8.
We augmented the annotation with three alternative sets of labels per document, replacing each assigned concept by its ancestor from level 1, 2, or 3, respectively.
Thus, we provide four sets of gold labels per document, one for each of the first three levels of the hierarchy, plus the original sparse label assignment.Levels 4 to 8 cannot be used independently, as many documents have gold concepts from the third level; thus many documents will be mislabeled, if we discard level 3.
#### Who are the annotators?
Publications Office of EU (https://publications.europa.eu/en)
### Personal and Sensitive Information
The dataset contains publicly available EU laws that do not include personal or sensitive information with the exception of trivial information presented by consent, e.g., the names of the current presidents of the European Parliament and European Council, and other administration bodies.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
MultiEURLEX covers 23 languages from seven language families (Germanic, Romance, Slavic, Uralic, Baltic, Semitic, Hellenic). This does not imply that no other languages are spoken in EU countries, although EU laws are not translated to other languages (https://europa.eu/european-union/about-eu/eu-languages_en).
## Additional Information
### Dataset Curators
Chalkidis et al. (2021)
### Licensing Information
We provide MultiEURLEX with the same licensing as the original EU data (CC-BY-4.0):
© European Union, 1998-2021
The Commission’s document reuse policy is based on Decision 2011/833/EU. Unless otherwise specified, you can re-use the legal documents published in EUR-Lex for commercial or non-commercial purposes.
The copyright for the editorial content of this website, the summaries of EU legislation and the consolidated texts, which is owned by the EU, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made.
Source: https://eur-lex.europa.eu/content/legal-notice/legal-notice.html \
Read more: https://eur-lex.europa.eu/content/help/faq/reuse-contents-eurlex.html
### Citation Information
*Ilias Chalkidis, Manos Fergadiotis, and Ion Androutsopoulos.*
*MultiEURLEX - A multi-lingual and multi-label legal document classification dataset for zero-shot cross-lingual transfer.*
*Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. Punta Cana, Dominican Republic. 2021*
```
@InProceedings{chalkidis-etal-2021-multieurlex,
author = {Chalkidis, Ilias
and Fergadiotis, Manos
and Androutsopoulos, Ion},
title = {MultiEURLEX -- A multi-lingual and multi-label legal document
classification dataset for zero-shot cross-lingual transfer},
booktitle = {Proceedings of the 2021 Conference on Empirical Methods
in Natural Language Processing},
year = {2021},
publisher = {Association for Computational Linguistics},
location = {Punta Cana, Dominican Republic},
url = {https://arxiv.org/abs/2109.00904}
}
```
### Contributions
Thanks to [@iliaschalkidis](https://github.com/iliaschalkidis) for adding this dataset. |
arbml/ashaar | 2022-09-03T18:05:56.000Z | [
"region:us"
] | arbml | null | null | null | 0 | 280 | # ashaar
introducing ashaar, the largest dataset for arabic poetry
# general statistics
| metric | value |
|-----------------|-----------|
| number of poems | 254,630 |
| number of baits | 3,857,429 |
| number of poets | 7,167 |
# License
This dataset is released under fair use for research development only. Poets have the sole right to take down any access to their work. The authors of the websites, also, have the right to take down any material that does not conform with that. This work should not be used for any commercial purposes.
|
tomasg25/scientific_lay_summarisation | 2022-10-26T11:11:33.000Z | [
"task_categories:summarization",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:unknown",
"abstractive-summarization",
"scientific-papers",
"lay-summarization",
"PLOS",
"eLife",
"arxiv:2210.09932",
"region:us"
] | tomasg25 | This repository contains the PLOS and eLife datasets, introduced in the EMNLP 2022 paper "[Making Science Simple: Corpora for the Lay Summarisation of Scientific Literature
](https://arxiv.org/abs/2210.09932)".
Each dataset contains full biomedical research articles paired with expert-written lay summaries (i.e., non-technical summaries). PLOS articles are derived from various journals published by [the Public Library of Science (PLOS)](https://plos.org/), whereas eLife articles are derived from the [eLife](https://elifesciences.org/) journal. More details/anlaysis on the content of each dataset are provided in the paper.
Both "elife" and "plos" have 6 features:
- "article": the body of the document (including the abstract), sections seperated by "/n".
- "section_headings": the title of each section, seperated by "/n".
- "keywords": keywords describing the topic of the article, seperated by "/n".
- "title" : the title of the article.
- "year" : the year the article was published.
- "summary": the lay summary of the document. | @misc{Goldsack_2022,
doi = {10.48550/ARXIV.2210.09932},
url = {https://arxiv.org/abs/2210.09932},
author = {Goldsack, Tomas and Zhang, Zhihao and Lin, Chenghua and Scarton, Carolina},
title = {Making Science Simple: Corpora for the Lay Summarisation of Scientific Literature},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
} | null | 12 | 280 | ---
annotations_creators:
- found
language:
- en
language_creators:
- found
license:
- unknown
multilinguality:
- monolingual
pretty_name: ScientificLaySummarisation
size_categories:
- 10K<n<100K
- 1K<n<10K
source_datasets:
- original
tags:
- abstractive-summarization
- scientific-papers
- lay-summarization
- PLOS
- eLife
task_categories:
- summarization
task_ids: []
---
# Dataset Card for "scientific_lay_summarisation"
- **Repository:** https://github.com/TGoldsack1/Corpora_for_Lay_Summarisation
- **Paper:** [Making Science Simple: Corpora for the Lay Summarisation of Scientific Literature](https://arxiv.org/abs/2210.09932)
- **Size of downloaded dataset files:** 850.44 MB
- **Size of the generated dataset:** 1.32 GB
- **Total amount of disk used:** 2.17 GB
### Dataset Summary
This repository contains the PLOS and eLife datasets, introduced in the EMNLP 2022 paper "[Making Science Simple: Corpora for the Lay Summarisation of Scientific Literature
](https://arxiv.org/abs/2210.09932)" .
Each dataset contains full biomedical research articles paired with expert-written lay summaries (i.e., non-technical summaries). PLOS articles are derived from various journals published by [the Public Library of Science (PLOS)](https://plos.org/), whereas eLife articles are derived from the [eLife](https://elifesciences.org/) journal. More details/analyses on the content of each dataset are provided in the paper.
Both "elife" and "plos" have 6 features:
- "article": the body of the document (including the abstract), sections separated by "/n".
- "section_headings": the title of each section, separated by "/n".
- "keywords": keywords describing the topic of the article, separated by "/n".
- "title": the title of the article.
- "year": the year the article was published.
- "summary": the lay summary of the document.
**Note:** The format of both datasets differs from that used in the original repository (given above) in order to make them compatible with the `run_summarization.py` script of Transformers. Specifically, sentence tokenization is removed via " ".join(text), and the abstract and article sections, previously lists of sentences, are combined into a single `string` feature ("article") with each section separated by "\n". For the sentence-tokenized version of the dataset, please use the original git repository.
### Supported Tasks and Leaderboards
Papers with code - [PLOS](https://paperswithcode.com/sota/lay-summarization-on-plos) and [eLife](https://paperswithcode.com/sota/lay-summarization-on-elife).
### Languages
English
## Dataset Structure
### Data Instances
#### plos
- **Size of downloaded dataset files:** 425.22 MB
- **Size of the generated dataset:** 1.05 GB
- **Total amount of disk used:** 1.47 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"summary": "In the kidney , structures known as nephrons are responsible for collecting metabolic waste . Nephrons are composed of a ...",
"article": "Kidney function depends on the nephron , which comprises a 'blood filter , a tubule that is subdivided into functionally ...",
"section_headings": "Abstract\nIntroduction\nResults\nDiscussion\nMaterials and Methods'",
"keywords": "developmental biology\ndanio (zebrafish)\nvertebrates\nteleost fishes\nnephrology",
"title": "The cdx Genes and Retinoic Acid Control the Positioning and Segmentation of the Zebrafish Pronephros",
"year": "2007"
}
```
#### elife
- **Size of downloaded dataset files:** 425.22 MB
- **Size of the generated dataset:** 275.99 MB
- **Total amount of disk used:** 1.47 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"summary": "In the USA , more deaths happen in the winter than the summer . But when deaths occur varies greatly by sex , age , cause of ...",
"article": "In temperate climates , winter deaths exceed summer ones . However , there is limited information on the timing and the ...",
"section_headings": "Abstract\nIntroduction\nResults\nDiscussion\nMaterials and methods",
"keywords": "epidemiology and global health",
"title": "National and regional seasonal dynamics of all-cause and cause-specific mortality in the USA from 1980 to 2016",
"year": "2018"
}
```
### Data Fields
The data fields are the same among all splits.
#### plos
- `article`: a `string` feature.
- `section_headings`: a `string` feature.
- `keywords`: a `string` feature.
- `title` : a `string` feature.
- `year` : a `string` feature.
- `summary`: a `string` feature.
#### elife
- `article`: a `string` feature.
- `section_headings`: a `string` feature.
- `keywords`: a `string` feature.
- `title` : a `string` feature.
- `year` : a `string` feature.
- `summary`: a `string` feature.
### Data Splits
| name |train |validation|test|
|------|-----:|---------:|---:|
|plos | 24773| 1376|1376|
|elife | 4346| 241| 241|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
"Making Science Simple: Corpora for the Lay Summarisation of Scientific Literature"
Tomas Goldsack, Zhihao Zhang, Chenghua Lin, Carolina Scarton
EMNLP 2022
``` |
thu-coai/Safety-Prompts | 2023-08-25T15:02:51.000Z | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:zh",
"license:apache-2.0",
"arxiv:2304.10436",
"region:us"
] | thu-coai | null | null | null | 28 | 280 | ---
license: apache-2.0
task_categories:
- text-generation
language:
- zh
pretty_name: Safety-Prompts
size_categories:
- 100K<n<1M
---
# Dataset Card for Dataset Name
GitHub Repository: https://github.com/thu-coai/Safety-Prompts
Paper: https://arxiv.org/abs/2304.10436
|
magnifi/contextual-tiny-v1 | 2023-09-13T17:22:57.000Z | [
"region:us"
] | magnifi | null | null | null | 0 | 279 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: user_text
dtype: string
- name: true_intent
dtype: string
- name: chat_history
dtype: string
- name: contextual
dtype: bool
- name: in_regression_test
dtype: bool
- name: synthetic
dtype: bool
- name: prompt
dtype: string
- name: completion
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 106909.92835858747
num_examples: 100
- name: validation
num_bytes: 10722.453155139157
num_examples: 10
download_size: 42788
dataset_size: 117632.38151372662
---
# Dataset Card for "contextual-tiny-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
rasgaard/20_newsgroups | 2023-09-13T07:25:05.000Z | [
"region:us"
] | rasgaard | null | null | null | 0 | 278 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
- name: label_text
dtype: string
splits:
- name: train
num_bytes: 12724811.858405516
num_examples: 10182
- name: val
num_bytes: 1414701.1415944847
num_examples: 1132
- name: test
num_bytes: 8499585
num_examples: 7532
download_size: 0
dataset_size: 22639098.0
---
# Dataset Card for "20_newsgroups"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
bigbio/bioasq_task_b | 2022-12-22T15:41:12.000Z | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | bigbio | The data are intended to be used as training and development data for BioASQ
10, which will take place during 2022. There is one file containing the data:
- training10b.json
The file contains the data of the first nine editions of the challenge: 4234
questions [1] with their relevant documents, snippets, concepts and RDF
triples, exact and ideal answers.
Differences with BioASQ-training9b.json
- 492 new questions added from BioASQ9
- The question with id 56c1f01eef6e394741000046 had identical body with
602498cb1cb411341a00009e. All relevant elements from both questions
are available in the merged question with id 602498cb1cb411341a00009e.
- The question with id 5c7039207c78d69471000065 had identical body with
601c317a1cb411341a000014. All relevant elements from both questions
are available in the merged question with id 601c317a1cb411341a000014.
- The question with id 5e4b540b6d0a27794100001c had identical body with
602828b11cb411341a0000fc. All relevant elements from both questions
are available in the merged question with id 602828b11cb411341a0000fc.
- The question with id 5fdb42fba43ad31278000027 had identical body with
5d35eb01b3a638076300000f. All relevant elements from both questions
are available in the merged question with id 5d35eb01b3a638076300000f.
- The question with id 601d76311cb411341a000045 had identical body with
6060732b94d57fd87900003d. All relevant elements from both questions
are available in the merged question with id 6060732b94d57fd87900003d.
[1] 4234 questions : 1252 factoid, 1148 yesno, 1018 summary, 816 list | @article{tsatsaronis2015overview,
title = {
An overview of the BIOASQ large-scale biomedical semantic indexing and
question answering competition
},
author = {
Tsatsaronis, George and Balikas, Georgios and Malakasiotis, Prodromos
and Partalas, Ioannis and Zschunke, Matthias and Alvers, Michael R and
Weissenborn, Dirk and Krithara, Anastasia and Petridis, Sergios and
Polychronopoulos, Dimitris and others
},
year = 2015,
journal = {BMC bioinformatics},
publisher = {BioMed Central Ltd},
volume = 16,
number = 1,
pages = 138
} | null | 3 | 277 | ---
language:
- en
bigbio_language:
- English
license: other
multilinguality: monolingual
bigbio_license_shortname: NLM_LICENSE
pretty_name: BioASQ Task B
homepage: http://participants-area.bioasq.org/datasets/
bigbio_pubmed: true
bigbio_public: false
bigbio_tasks:
- QUESTION_ANSWERING
---
# Dataset Card for BioASQ Task B
## Dataset Description
- **Homepage:** http://participants-area.bioasq.org/datasets/
- **Pubmed:** True
- **Public:** False
- **Tasks:** QA
The BioASQ corpus contains multiple question
answering tasks annotated by biomedical experts, including yes/no, factoid, list,
and summary questions. Pertaining to our objective of comparing neural language
models, we focus on the the yes/no questions (Task 7b), and leave the inclusion
of other tasks to future work. Each question is paired with a reference text
containing multiple sentences from a PubMed abstract and a yes/no answer. We use
the official train/dev/test split of 670/75/140 questions.
See 'Domain-Specific Language Model Pretraining for Biomedical
Natural Language Processing'
## Citation Information
```
@article{tsatsaronis2015overview,
title = {
An overview of the BIOASQ large-scale biomedical semantic indexing and
question answering competition
},
author = {
Tsatsaronis, George and Balikas, Georgios and Malakasiotis, Prodromos
and Partalas, Ioannis and Zschunke, Matthias and Alvers, Michael R and
Weissenborn, Dirk and Krithara, Anastasia and Petridis, Sergios and
Polychronopoulos, Dimitris and others
},
year = 2015,
journal = {BMC bioinformatics},
publisher = {BioMed Central Ltd},
volume = 16,
number = 1,
pages = 138
}
```
|
mstz/arhythmia | 2023-04-15T11:37:57.000Z | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"arrhythmia",
"tabular_classification",
"multiclass_classification",
"binary_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_arrhythmia_5,
author = {Guvenir,H., Acar,Burak & Muderrisoglu,Haldun},
title = {{Arrhythmia}},
year = {1998},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C5BS32}}
} | null | 0 | 277 | ---
language:
- en
tags:
- arrhythmia
- tabular_classification
- multiclass_classification
- binary_classification
- UCI
pretty_name: Arhythmia
size_categories:
- n<1K
task_categories:
- tabular-classification
configs:
- arhytmia
- has_arhytmia
license: cc
---
# Arhythmia
The [Arrhythmia dataset](https://archive.ics.uci.edu/ml/datasets/Arrhythmia) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Does the patient have arhythmia? If so, what type?
# Configurations and tasks
| **Configuration** | **Task** | Description |
|-------------------|---------------------------|---------------------------------------------------------------|
| arhytmia | Multiclass classification | What type of arhythmia does the patient have? |
| has_arhytmia | Binary classification | Does the patient have arhythmia? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/arhythmia", "arhythmia")["train"]
```
# Features
Target feature changes according to the selected configuration and is always in last position in the dataset. |
toughdata/quora-question-answer-dataset | 2023-08-28T13:36:21.000Z | [
"task_categories:question-answering",
"task_categories:conversational",
"task_categories:text2text-generation",
"language:en",
"license:gpl-3.0",
"question",
"answer",
"quora",
"region:us"
] | toughdata | null | null | null | 0 | 277 | ---
license: gpl-3.0
task_categories:
- question-answering
- conversational
- text2text-generation
language:
- en
tags:
- question
- answer
- quora
pretty_name: Quora Question/Answer Pairs
---
Quora Question Answer Dataset (Quora-QuAD) contains 56,402 question-answer pairs scraped from Quora.
# Usage:
For instructions on fine-tuning a model (Flan-T5) with this dataset, please check out the article: https://www.toughdata.net/blog/post/finetune-flan-t5-question-answer-quora-dataset |
jondurbin/airoboros-2.2 | 2023-10-03T19:01:21.000Z | [
"license:other",
"region:us"
] | jondurbin | null | null | null | 1 | 277 | ---
license: other
---
## Overview
This dataset is mostly a continuation of https://hf.co/datasets/jondurbin/airoboros-2.1, with some notable additions and fixes.
- Some of the content is "toxic"/"harmful", and contains profanity and other types of sensitive content.
- None of the content or views contained in text within this dataset necessarily align with my personal beliefs or opinions, they are simply text generated by LLMs and/or scraped from the web.
- Use with caution, particularly in locations with less-than-free speech laws.
- You, and you alone are responsible for having downloaded the dataset and having a copy of the contents therein and I am completely indemnified from any and all liabilities.
### 2.1 Contamination
I accidentally included some of the benchmark data in the first version of the airboros-2.1 model, which is why it had a crazy high truthfulqa score. Discussions here:
- https://huggingface.co/jondurbin/airoboros-l2-70b-2.1/discussions/3#64f325ce352152814d1f796a
- https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/225#64f0997659da193a12b78c32
I flagged it for removal and recreated the model right away, but the leaderboard cached the old results so it took some time to reflect.
Some of the instructors I use create overlapping data, and it's hard to filter, especially since the instructions aren't typically verbatim with the benchmark questions.
This time around, I used `thenlper/gte-small` to calculate embeddings of the instructions, along with a faiss index, and removed anything from the dataset that had a similarity score < 0.15 (from truthfulqa). If you have a better way of checking, please let me know!
I haven't done the same for most other benchmarks (yet) because there are hundreds of thousands of instructions and it would be pretty computationally expensive to do. That said, I only have ~1279 multiple choice questions, all randomly GPT generated, so there's probably little-to-no overlap.
### Awareness
I added a new "awareness" instructor, which aims to add a lot more nuance to responses relating to time, location, senses, etc. based on the system prompt.
For example, if you are using the standard prompt with user/assistant, and ask how long it would take to get to Chicago, the answer will be something about AI not having a physical presence.
If, on the other hand, you are using a system prompt with a human character specified, the model attempts to infer location from "home" and will provide a more nuanced answer as a human would (in theory).
https://github.com/jondurbin/airoboros/commit/e91562c88d7610edb051606622e7c25a99884f7e
### Editor
I created a text edit instructor as well, which uses a reverse prompt mechanism, meaning it takes the existing writing samples that have been generated, rewrites them to have misspellings, poor grammar, etc., then uses a prompt like "Please correct and improve the text." with the original well-written text and target output.
https://github.com/jondurbin/airoboros/commit/e60a68de5f9622320c9cfff3b238bd83cc7e373b
### Writing
I regenerated (almost) all of the training data that included "Once upon a time..." because it's too cliche and boring.
### Multiple choice
I created many more multiple choice questions, many of which have additional text context.
### Roleplay/conversation
I re-created all of the GTKM and RP datasets this time around, removing all of the "USER: " and "ASSISTANT: " prefixes from the instructions/responses, so it's more compatible with existing interfaces.
The GTKM instructor now does the same thing as RP, in that it saves each round of "conversation" as a separate row in the output - previously it only saved the final response, which may not have been sufficient since I don't typically train on inputs.
### UTF-8 to ASCII
I replaced most of the "standard" utf-8 sequences - left double quote, right double quote, left apostraphe, ellipses - with standard ascii characters. I don't know if this was contributing to part of the issue with eos tokens being produced after apostraphes, but I figured it was worth trying.
### Summarization
I also included 500 examples from:
https://hf.co/datasets/mattpscott/airoboros-summarization
These are existing summarizarions from various public datasets, formatted to airoboros style contextual qa.
Thanks Matt!
### Usage/license info
Much (most) of the data was generated via gpt-4 API calls, which has a restriction in the ToS about "competing" models. Please seek legal advice if you plan to build or use a model that includes this dataset in a commercial setting. |
DFKI-SLT/tacred | 2023-05-17T12:55:00.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|other",
"language:en",
"license:other",
"relation extraction",
"arxiv:2104.08398",
"region:us"
] | DFKI-SLT | TACRED is a large-scale relation extraction dataset with 106,264 examples built over newswire
and web text from the corpus used in the yearly TAC Knowledge Base Population (TAC KBP) challenges.
Examples in TACRED cover 41 relation types as used in the TAC KBP challenges (e.g., per:schools_attended
and org:members) or are labeled as no_relation if no defined relation is held. These examples are created
by combining available human annotations from the TAC KBP challenges and crowdsourcing.
Please see our EMNLP paper, or our EMNLP slides for full details.
Note: There is currently a label-corrected version of the TACRED dataset, which you should consider using instead of
the original version released in 2017. For more details on this new version, see the TACRED Revisited paper
published at ACL 2020.
Note 2: This Datasetreader changes the offsets of the following fields, to conform with standard Python usage (see
#_generate_examples()):
- subj_end to subj_end + 1 (make end offset exclusive)
- obj_end to obj_end + 1 (make end offset exclusive)
- stanford_head to stanford_head - 1 (make head offsets 0-based) | @inproceedings{zhang-etal-2017-position,
title = "Position-aware Attention and Supervised Data Improve Slot Filling",
author = "Zhang, Yuhao and
Zhong, Victor and
Chen, Danqi and
Angeli, Gabor and
Manning, Christopher D.",
booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
month = sep,
year = "2017",
address = "Copenhagen, Denmark",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D17-1004",
doi = "10.18653/v1/D17-1004",
pages = "35--45",
}
@inproceedings{alt-etal-2020-tacred,
title = "{TACRED} Revisited: A Thorough Evaluation of the {TACRED} Relation Extraction Task",
author = "Alt, Christoph and
Gabryszak, Aleksandra and
Hennig, Leonhard",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.142",
doi = "10.18653/v1/2020.acl-main.142",
pages = "1558--1569",
}
@article{stoica2021re,
author = {George Stoica and
Emmanouil Antonios Platanios and
Barnab{\'{a}}s P{\'{o}}czos},
title = {Re-TACRED: Addressing Shortcomings of the {TACRED} Dataset},
journal = {CoRR},
volume = {abs/2104.08398},
year = {2021},
url = {https://arxiv.org/abs/2104.08398},
eprinttype = {arXiv},
eprint = {2104.08398},
timestamp = {Mon, 26 Apr 2021 17:25:10 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2104-08398.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
} | null | 3 | 276 | ---
annotations_creators:
- crowdsourced
- expert-generated
language:
- en
language_creators:
- found
license:
- other
multilinguality:
- monolingual
pretty_name: The TAC Relation Extraction Dataset, TACRED Revisited and Re-TACRED
size_categories:
- 100K<n<1M
source_datasets:
- extended|other
tags:
- relation extraction
task_categories:
- text-classification
task_ids:
- multi-class-classification
---
# Dataset Card for "tacred"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://nlp.stanford.edu/projects/tacred](https://nlp.stanford.edu/projects/tacred)
- **Paper:** [Position-aware Attention and Supervised Data Improve Slot Filling](https://aclanthology.org/D17-1004/)
- **Point of Contact:** See [https://nlp.stanford.edu/projects/tacred/](https://nlp.stanford.edu/projects/tacred/)
- **Size of downloaded dataset files:** 62.3 MB
- **Size of the generated dataset:** 139.2 MB
- **Total amount of disk used:** 201.5 MB
### Dataset Summary
The TAC Relation Extraction Dataset (TACRED) is a large-scale relation extraction dataset with 106,264 examples built over newswire and web text from the corpus used in the yearly TAC Knowledge Base Population (TAC KBP) challenges. Examples in TACRED cover 41 relation types as used in the TAC KBP challenges (e.g., per:schools_attended
and org:members) or are labeled as no_relation if no defined relation is held. These examples are created by combining available human annotations from the TAC
KBP challenges and crowdsourcing. Please see [Stanford's EMNLP paper](https://nlp.stanford.edu/pubs/zhang2017tacred.pdf), or their [EMNLP slides](https://nlp.stanford.edu/projects/tacred/files/position-emnlp2017.pdf) for full details.
Note:
- There is currently a [label-corrected version](https://github.com/DFKI-NLP/tacrev) of the TACRED dataset, which you should consider using instead of
the original version released in 2017. For more details on this new version, see the [TACRED Revisited paper](https://aclanthology.org/2020.acl-main.142/)
published at ACL 2020.
- There is also a [relabeled and pruned version](https://github.com/gstoica27/Re-TACRED) of the TACRED dataset.
For more details on this new version, see the [Re-TACRED paper](https://arxiv.org/abs/2104.08398)
published at ACL 2020.
This repository provides all three versions of the dataset as BuilderConfigs - `'original'`, `'revisited'` and `'re-tacred'`.
Simply set the `name` parameter in the `load_dataset` method in order to choose a specific version. The original TACRED is loaded per default.
### Supported Tasks and Leaderboards
- **Tasks:** Relation Classification
- **Leaderboards:** [https://paperswithcode.com/sota/relation-extraction-on-tacred](https://paperswithcode.com/sota/relation-extraction-on-tacred)
### Languages
The language in the dataset is English.
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 62.3 MB
- **Size of the generated dataset:** 139.2 MB
- **Total amount of disk used:** 201.5 MB
An example of 'train' looks as follows:
```json
{
"id": "61b3a5c8c9a882dcfcd2",
"docid": "AFP_ENG_20070218.0019.LDC2009T13",
"relation": "org:founded_by",
"token": ["Tom", "Thabane", "resigned", "in", "October", "last", "year", "to", "form", "the", "All", "Basotho", "Convention", "-LRB-", "ABC", "-RRB-", ",", "crossing", "the", "floor", "with", "17", "members", "of", "parliament", ",", "causing", "constitutional", "monarch", "King", "Letsie", "III", "to", "dissolve", "parliament", "and", "call", "the", "snap", "election", "."],
"subj_start": 10,
"subj_end": 13,
"obj_start": 0,
"obj_end": 2,
"subj_type": "ORGANIZATION",
"obj_type": "PERSON",
"stanford_pos": ["NNP", "NNP", "VBD", "IN", "NNP", "JJ", "NN", "TO", "VB", "DT", "DT", "NNP", "NNP", "-LRB-", "NNP", "-RRB-", ",", "VBG", "DT", "NN", "IN", "CD", "NNS", "IN", "NN", ",", "VBG", "JJ", "NN", "NNP", "NNP", "NNP", "TO", "VB", "NN", "CC", "VB", "DT", "NN", "NN", "."],
"stanford_ner": ["PERSON", "PERSON", "O", "O", "DATE", "DATE", "DATE", "O", "O", "O", "O", "O", "O", "O", "ORGANIZATION", "O", "O", "O", "O", "O", "O", "NUMBER", "O", "O", "O", "O", "O", "O", "O", "O", "PERSON", "PERSON", "O", "O", "O", "O", "O", "O", "O", "O", "O"],
"stanford_head": [2, 3, 0, 5, 3, 7, 3, 9, 3, 13, 13, 13, 9, 15, 13, 15, 3, 3, 20, 18, 23, 23, 18, 25, 23, 3, 3, 32, 32, 32, 32, 27, 34, 27, 34, 34, 34, 40, 40, 37, 3],
"stanford_deprel": ["compound", "nsubj", "ROOT", "case", "nmod", "amod", "nmod:tmod", "mark", "xcomp", "det", "compound", "compound", "dobj", "punct", "appos", "punct", "punct", "xcomp", "det", "dobj", "case", "nummod", "nmod", "case", "nmod", "punct", "xcomp", "amod", "compound", "compound", "compound", "dobj", "mark", "xcomp", "dobj", "cc", "conj", "det", "compound", "dobj", "punct"]
}
```
### Data Fields
The data fields are the same among all splits.
- `id`: the instance id of this sentence, a `string` feature.
- `docid`: the TAC KBP document id of this sentence, a `string` feature.
- `token`: the list of tokens of this sentence, obtained with the StanfordNLP toolkit, a `list` of `string` features.
- `relation`: the relation label of this instance, a `string` classification label.
- `subj_start`: the 0-based index of the start token of the relation subject mention, an `ìnt` feature.
- `subj_end`: the 0-based index of the end token of the relation subject mention, exclusive, an `ìnt` feature.
- `subj_type`: the NER type of the subject mention, among 23 fine-grained types used in the [Stanford NER system](https://stanfordnlp.github.io/CoreNLP/ner.html), a `string` feature.
- `obj_start`: the 0-based index of the start token of the relation object mention, an `ìnt` feature.
- `obj_end`: the 0-based index of the end token of the relation object mention, exclusive, an `ìnt` feature.
- `obj_type`: the NER type of the object mention, among 23 fine-grained types used in the [Stanford NER system](https://stanfordnlp.github.io/CoreNLP/ner.html), a `string` feature.
- `stanford_pos`: the part-of-speech tag per token. the NER type of the subject mention, among 23 fine-grained types used in the [Stanford NER system](https://stanfordnlp.github.io/CoreNLP/ner.html), a `list` of `string` features.
- `stanford_ner`: the NER tags of tokens (IO-Scheme), among 23 fine-grained types used in the [Stanford NER system](https://stanfordnlp.github.io/CoreNLP/ner.html), a `list` of `string` features.
- `stanford_deprel`: the Stanford dependency relation tag per token, a `list` of `string` features.
- `stanford_head`: the head (source) token index (0-based) for the dependency relation per token. The root token has a head index of -1, a `list` of `int` features.
### Data Splits
To miminize dataset bias, TACRED is stratified across years in which the TAC KBP challenge was run:
| | Train | Dev | Test |
| ----- | ------ | ----- | ---- |
| TACRED | 68,124 (TAC KBP 2009-2012) | 22,631 (TAC KBP 2013) | 15,509 (TAC KBP 2014) |
| Re-TACRED | 58,465 (TAC KBP 2009-2012) | 19,584 (TAC KBP 2013) | 13,418 (TAC KBP 2014) |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
See the Stanford paper and the Tacred Revisited paper, plus their appendices.
To ensure that models trained on TACRED are not biased towards predicting false positives on real-world text,
all sampled sentences where no relation was found between the mention pairs were fully annotated to be negative examples. As a result, 79.5% of the examples
are labeled as no_relation.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
To respect the copyright of the underlying TAC KBP corpus, TACRED is released via the
Linguistic Data Consortium ([LDC License](https://catalog.ldc.upenn.edu/license/ldc-non-members-agreement.pdf)).
You can download TACRED from the [LDC TACRED webpage](https://catalog.ldc.upenn.edu/LDC2018T24).
If you are an LDC member, the access will be free; otherwise, an access fee of $25 is needed.
### Citation Information
The original dataset:
```
@inproceedings{zhang2017tacred,
author = {Zhang, Yuhao and Zhong, Victor and Chen, Danqi and Angeli, Gabor and Manning, Christopher D.},
booktitle = {Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 2017)},
title = {Position-aware Attention and Supervised Data Improve Slot Filling},
url = {https://nlp.stanford.edu/pubs/zhang2017tacred.pdf},
pages = {35--45},
year = {2017}
}
```
For the revised version (`"revisited"`), please also cite:
```
@inproceedings{alt-etal-2020-tacred,
title = "{TACRED} Revisited: A Thorough Evaluation of the {TACRED} Relation Extraction Task",
author = "Alt, Christoph and
Gabryszak, Aleksandra and
Hennig, Leonhard",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.142",
doi = "10.18653/v1/2020.acl-main.142",
pages = "1558--1569",
}
```
For the relabeled version (`"re-tacred"`), please also cite:
```
@inproceedings{DBLP:conf/aaai/StoicaPP21,
author = {George Stoica and
Emmanouil Antonios Platanios and
Barnab{\'{a}}s P{\'{o}}czos},
title = {Re-TACRED: Addressing Shortcomings of the {TACRED} Dataset},
booktitle = {Thirty-Fifth {AAAI} Conference on Artificial Intelligence, {AAAI}
2021, Thirty-Third Conference on Innovative Applications of Artificial
Intelligence, {IAAI} 2021, The Eleventh Symposium on Educational Advances
in Artificial Intelligence, {EAAI} 2021, Virtual Event, February 2-9,
2021},
pages = {13843--13850},
publisher = {{AAAI} Press},
year = {2021},
url = {https://ojs.aaai.org/index.php/AAAI/article/view/17631},
}
```
### Contributions
Thanks to [@dfki-nlp](https://github.com/dfki-nlp) and [@phucdev](https://github.com/phucdev) for adding this dataset.
|
Babelscape/rebel-dataset | 2023-06-15T12:12:59.000Z | [
"task_categories:text-retrieval",
"task_categories:text-generation",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"license:cc-by-sa-4.0",
"relation-extraction",
"conditional-text-generation",
"arxiv:2005.00614",
"region:us"
] | Babelscape | REBEL is a silver dataset created for the paper REBEL: Relation Extraction By End-to-end Language generation | @inproceedings{huguet-cabot-navigli-2021-rebel,
title = "REBEL: Relation Extraction By End-to-end Language generation",
author = "Huguet Cabot, Pere-Llu{\'\i}s and
Navigli, Roberto",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Online and in the Barceló Bávaro Convention Centre, Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://github.com/Babelscape/rebel/blob/main/docs/EMNLP_2021_REBEL__Camera_Ready_.pdf",
} | null | 14 | 275 | ---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
language:
- en
license: cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text-retrieval
- text-generation
task_ids: []
pretty_name: rebel-dataset
tags:
- relation-extraction
- conditional-text-generation
---
cc-by-nc-sa-4.0---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
language:
- en
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text-retrieval
- text-generation
task_ids: []
pretty_name: rebel-dataset
tags:
- relation-extraction
- conditional-text-generation
---
# Dataset Card for REBEL dataset
## Table of Contents
- [Dataset Card for REBEL dataset](#dataset-card-for-rebel)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [https://github.com/Babelscape/rebel](https://github.com/Babelscape/rebel)
- **Paper:** [https://github.com/Babelscape/rebel/blob/main/docs/EMNLP_2021_REBEL__Camera_Ready_.pdf](https://github.com/Babelscape/rebel/blob/main/docs/EMNLP_2021_REBEL__Camera_Ready_.pdf)
- **Point of Contact:** [huguetcabot@babelscape.com](huguetcabot@babelscape.com)
### Dataset Summary
Dataset created for [REBEL](https://huggingface.co/Babelscape/rebel-large) dataset from interlinking Wikidata and Wikipedia for Relation Extraction, filtered using NLI.
### Supported Tasks and Leaderboards
- `text-retrieval-other-relation-extraction`: The dataset can be used to train a model for Relation Extraction, which consists in extracting triplets from raw text, made of subject, object and relation type. Success on this task is typically measured by achieving a *high* [F1](https://huggingface.co/metrics/F1). The [BART](https://huggingface.co/transformers/model_doc/bart.html)) model currently achieves the following score: 74 Micro F1 and 51 Macro F1 for the 220 most frequent relation types.
### Languages
The dataset is in English, from the English Wikipedia.
## Dataset Structure
### Data Instances
REBEL
- `Size of downloaded dataset files`: 1490.02 MB
- `Size of the generated dataset`: 1199.27 MB
- `Total amount of disk used`: 2689.29 MB
```
{
'id': 'Q82442-1',
'title': 'Arsène Lupin, Gentleman Burglar',
'context': 'Arsène Lupin , Gentleman Burglar is the first collection of stories by Maurice Leblanc recounting the adventures of Arsène Lupin , released on 10 June 1907 .',
'triplets': '<triplet> Arsène Lupin, Gentleman Burglar <subj> Maurice Leblanc <obj> author <triplet> Arsène Lupin <subj> Maurice Leblanc <obj> creator'
}
```
The original data is in jsonl format and contains much more information. It is divided by Wikipedia articles instead of by sentence, and contains metadata about Wikidata entities, their boundaries in the text, how it was annotated, etc. For more information check the [paper repository](https://huggingface.co/Babelscape/rebel-large) and how it was generated using the Relation Extraction dataset pipeline, [cRocoDiLe](https://github.com/Babelscape/crocodile).
### Data Fields
List and describe the fields present in the dataset. Mention their data type, and whether they are used as input or output in any of the tasks the dataset currently supports. If the data has span indices, describe their attributes, such as whether they are at the character level or word level, whether they are contiguous or not, etc. If the datasets contains example IDs, state whether they have an inherent meaning, such as a mapping to other datasets or pointing to relationships between data points.
- `id`: ID of the instance. It contains a unique id matching to a Wikipedia page and a number separated by a hyphen indicating which sentence of the Wikipedia article it is.
- `title`: Title of the Wikipedia page the sentence comes from.
- `context`: Text from Wikipedia articles that serves as context for the Relation Extraction task.
- `triplets`: Linearized version of the triplets present in the text, split by the use of special tokens. For more info on this linearization check the [paper](https://github.com/Babelscape/rebel/blob/main/docs/EMNLP_2021_REBEL__Camera_Ready_.pdf).
### Data Splits
Test and Validation splits are each 5% of the original data.
Provide the sizes of each split. As appropriate, provide any descriptive statistics for the features, such as average length. For example:
| | Tain | Valid | Test |
| ----- | ------ | ----- | ---- |
| Input Sentences | 3,120,296 | 172,860 | 173,601 |
| Input Sentences (top 220 relation types as used in original paper) | 784,202 | 43,341 | 43,506 |
| Number of Triplets (top 220 relation types as used in original paper) | 878,555 | 48,514 | 48,852 |
## Dataset Creation
### Curation Rationale
This dataset was created to enable the training of a BART based model as pre-training phase for Relation Extraction as seen in the paper [REBEL: Relation Extraction By End-to-end Language generation](https://github.com/Babelscape/rebel/blob/main/docs/EMNLP_2021_REBEL__Camera_Ready_.pdf).
### Source Data
Data comes from Wikipedia text before the table of contents, as well as Wikidata for the triplets annotation.
#### Initial Data Collection and Normalization
For the data collection, the dataset extraction pipeline [cRocoDiLe: Automati**c** **R**elati**o**n Extra**c**ti**o**n **D**ataset w**i**th N**L**I filt**e**ring](https://github.com/Babelscape/crocodile) insipired by [T-REx Pipeline](https://github.com/hadyelsahar/RE-NLG-Dataset) more details found at: [T-REx Website](https://hadyelsahar.github.io/t-rex/). The starting point is a Wikipedia dump as well as a Wikidata one.
After the triplets are extracted, an NLI system was used to filter out those not entailed by the text.
#### Who are the source language producers?
Any Wikipedia and Wikidata contributor.
### Annotations
#### Annotation process
The dataset extraction pipeline [cRocoDiLe: Automati**c** **R**elati**o**n Extra**c**ti**o**n **D**ataset w**i**th N**L**I filt**e**ring](https://github.com/Babelscape/crocodile).
#### Who are the annotators?
Automatic annottations
### Personal and Sensitive Information
All text is from Wikipedia, any Personal or Sensitive Information there may be present in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset serves as a pre-training step for Relation Extraction models. It is distantly annotated, hence it should only be used as such. A model trained solely on this dataset may produce allucinations coming from the silver nature of the dataset.
### Discussion of Biases
Since the dataset was automatically created from Wikipedia and Wikidata, it may reflect the biases withing those sources.
For Wikipedia text, see for example [Dinan et al 2020 on biases in Wikipedia (esp. Table 1)](https://arxiv.org/abs/2005.00614), or [Blodgett et al 2020](https://www.aclweb.org/anthology/2020.acl-main.485/) for a more general discussion of the topic.
For Wikidata, there are class imbalances, also resulting from Wikipedia.
### Other Known Limitations
Not for now
## Additional Information
### Dataset Curators
Pere-Lluis Huguet Cabot - Babelscape and Sapienza University of Rome, Italy
Roberto Navigli - Sapienza University of Rome, Italy
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
Provide the [BibTex](http://www.bibtex.org/)-formatted reference for the dataset. For example:
```
@inproceedings{huguet-cabot-navigli-2021-rebel,
title = "REBEL: Relation Extraction By End-to-end Language generation",
author = "Huguet Cabot, Pere-Llu{\'\i}s and
Navigli, Roberto",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Online and in the Barceló Bávaro Convention Centre, Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://github.com/Babelscape/rebel/blob/main/docs/EMNLP_2021_REBEL__Camera_Ready_.pdf",
}
```
### Contributions
Thanks to [@littlepea13](https://github.com/LittlePea13) for adding this dataset. |
medalpaca/medical_meadow_pubmed_causal | 2023-04-06T17:01:00.000Z | [
"task_categories:question-answering",
"language:en",
"region:us"
] | medalpaca | null | null | null | 2 | 275 | ---
task_categories:
- question-answering
language:
- en
---
# Dataset Card for Pubmed Causal
## Dataset Description
- **Paper:** https://aclanthology.org/D19-1473/
### Dataset Summary
This is the dataset used in the paper: Detecting Causal Language Use in Science Findings.
### Citation Information
```
@inproceedings{yu-etal-2019-detecting,
title = "Detecting Causal Language Use in Science Findings",
author = "Yu, Bei and
Li, Yingya and
Wang, Jun",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/D19-1473",
doi = "10.18653/v1/D19-1473",
pages = "4664--4674",
}
``` |
allenai/mup | 2022-10-25T10:16:52.000Z | [
"license:odc-by",
"region:us"
] | allenai | null | null | null | 2 | 274 | ---
license:
- odc-by
---
# MuP - Multi Perspective Scientific Document Summarization
Generating summaries of scientific documents is known to be a challenging task. Majority of existing work in summarization assumes only one single best gold summary for each given document. Having only one gold summary negatively impacts our ability to evaluate the quality of summarization systems as writing summaries is a subjective activity. At the same time, annotating multiple gold summaries for scientific documents can be extremely expensive as it requires domain experts to read and understand long scientific documents. This shared task will enable exploring methods for generating multi-perspective summaries. We introduce a novel summarization corpus, leveraging data from scientific peer reviews to capture diverse perspectives from the reader's point of view.
|
sidhq/email-thread-summary | 2023-07-17T03:19:09.000Z | [
"task_categories:summarization",
"language:en",
"region:us"
] | sidhq | null | null | null | 2 | 274 | ---
dataset_info:
features:
- name: thread
struct:
- name: subject
dtype: string
- name: messages
list:
- name: timestamp
dtype: timestamp[s]
- name: from
dtype: string
- name: to
sequence: string
- name: body
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 38290851
num_examples: 3750
- name: test
num_bytes: 4189760
num_examples: 417
download_size: 13597444
dataset_size: 42480611
task_categories:
- summarization
language:
- en
pretty_name: Email Thread Summary
---
# Dataset Card for "email-thread-summary"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
wiki_auto | 2023-06-01T14:59:51.000Z | [
"task_categories:text2text-generation",
"task_ids:text-simplification",
"annotations_creators:crowdsourced",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|other-wikipedia",
"language:en",
"license:cc-by-sa-3.0",
"arxiv:2005.02324",
"region:us"
] | null | WikiAuto provides a set of aligned sentences from English Wikipedia and Simple English Wikipedia
as a resource to train sentence simplification systems. The authors first crowd-sourced a set of manual alignments
between sentences in a subset of the Simple English Wikipedia and their corresponding versions in English Wikipedia
(this corresponds to the `manual` config), then trained a neural CRF system to predict these alignments.
The trained model was then applied to the other articles in Simple English Wikipedia with an English counterpart to
create a larger corpus of aligned sentences (corresponding to the `auto`, `auto_acl`, `auto_full_no_split`, and `auto_full_with_split` configs here). | @inproceedings{acl/JiangMLZX20,
author = {Chao Jiang and
Mounica Maddela and
Wuwei Lan and
Yang Zhong and
Wei Xu},
editor = {Dan Jurafsky and
Joyce Chai and
Natalie Schluter and
Joel R. Tetreault},
title = {Neural {CRF} Model for Sentence Alignment in Text Simplification},
booktitle = {Proceedings of the 58th Annual Meeting of the Association for Computational
Linguistics, {ACL} 2020, Online, July 5-10, 2020},
pages = {7943--7960},
publisher = {Association for Computational Linguistics},
year = {2020},
url = {https://www.aclweb.org/anthology/2020.acl-main.709/}
} | null | 7 | 273 | ---
annotations_creators:
- crowdsourced
- machine-generated
language_creators:
- found
language:
- en
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|other-wikipedia
task_categories:
- text2text-generation
task_ids:
- text-simplification
pretty_name: WikiAuto
dataset_info:
- config_name: manual
features:
- name: alignment_label
dtype:
class_label:
names:
'0': notAligned
'1': aligned
'2': partialAligned
- name: normal_sentence_id
dtype: string
- name: simple_sentence_id
dtype: string
- name: normal_sentence
dtype: string
- name: simple_sentence
dtype: string
- name: gleu_score
dtype: float32
splits:
- name: train
num_bytes: 110838475
num_examples: 373801
- name: dev
num_bytes: 21112775
num_examples: 73249
- name: test
num_bytes: 33851634
num_examples: 118074
download_size: 168957430
dataset_size: 165802884
- config_name: auto_acl
features:
- name: normal_sentence
dtype: string
- name: simple_sentence
dtype: string
splits:
- name: full
num_bytes: 121975414
num_examples: 488332
download_size: 118068366
dataset_size: 121975414
- config_name: auto
features:
- name: example_id
dtype: string
- name: normal
struct:
- name: normal_article_id
dtype: int32
- name: normal_article_title
dtype: string
- name: normal_article_url
dtype: string
- name: normal_article_content
sequence:
- name: normal_sentence_id
dtype: string
- name: normal_sentence
dtype: string
- name: simple
struct:
- name: simple_article_id
dtype: int32
- name: simple_article_title
dtype: string
- name: simple_article_url
dtype: string
- name: simple_article_content
sequence:
- name: simple_sentence_id
dtype: string
- name: simple_sentence
dtype: string
- name: paragraph_alignment
sequence:
- name: normal_paragraph_id
dtype: string
- name: simple_paragraph_id
dtype: string
- name: sentence_alignment
sequence:
- name: normal_sentence_id
dtype: string
- name: simple_sentence_id
dtype: string
splits:
- name: part_1
num_bytes: 1773240295
num_examples: 125059
- name: part_2
num_bytes: 80417651
num_examples: 13036
download_size: 2160638921
dataset_size: 1853657946
- config_name: auto_full_no_split
features:
- name: normal_sentence
dtype: string
- name: simple_sentence
dtype: string
splits:
- name: full
num_bytes: 146310611
num_examples: 591994
download_size: 141574179
dataset_size: 146310611
- config_name: auto_full_with_split
features:
- name: normal_sentence
dtype: string
- name: simple_sentence
dtype: string
splits:
- name: full
num_bytes: 124549115
num_examples: 483801
download_size: 120678315
dataset_size: 124549115
config_names:
- auto
- auto_acl
- auto_full_no_split
- auto_full_with_split
- manual
---
# Dataset Card for WikiAuto
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [WikiAuto github repository](https://github.com/chaojiang06/wiki-auto)
- **Paper:** [Neural CRF Model for Sentence Alignment in Text Simplification](https://arxiv.org/abs/2005.02324)
- **Point of Contact:** [Chao Jiang](jiang.1530@osu.edu)
### Dataset Summary
WikiAuto provides a set of aligned sentences from English Wikipedia and Simple English Wikipedia as a resource to train sentence simplification systems.
The authors first crowd-sourced a set of manual alignments between sentences in a subset of the Simple English Wikipedia and their corresponding versions in English Wikipedia (this corresponds to the `manual` config in this version of dataset), then trained a neural CRF system to predict these alignments.
The trained alignment prediction model was then applied to the other articles in Simple English Wikipedia with an English counterpart to create a larger corpus of aligned sentences (corresponding to the `auto`, `auto_acl`, `auto_full_no_split`, and `auto_full_with_split` configs here).
### Supported Tasks and Leaderboards
The dataset was created to support a `text-simplification` task. Success in these tasks is typically measured using the [SARI](https://huggingface.co/metrics/sari) and [FKBLEU](https://huggingface.co/metrics/fkbleu) metrics described in the paper [Optimizing Statistical Machine Translation for Text Simplification](https://www.aclweb.org/anthology/Q16-1029.pdf).
### Languages
While both the input and output of the proposed task are in English (`en`), it should be noted that it is presented as a translation task where Wikipedia Simple English is treated as its own idiom. For a statement of what is intended (but not always observed) to constitute Simple English on this platform, see [Simple English in Wikipedia](https://simple.wikipedia.org/wiki/Wikipedia:About#Simple_English).
## Dataset Structure
### Data Instances
The data in all of the configurations looks a little different.
A `manual` config instance consists of a sentence from the Simple English Wikipedia article, one from the linked English Wikipedia article, IDs for each of them, and a label indicating whether they are aligned. Sentences on either side can be repeated so that the aligned sentences are in the same instances. For example:
```
{'alignment_label': 1,
'normal_sentence_id': '0_66252-1-0-0',
'simple_sentence_id': '0_66252-0-0-0',
'normal_sentence': 'The Local Government Act 1985 is an Act of Parliament in the United Kingdom.', 'simple_sentence': 'The Local Government Act 1985 was an Act of Parliament in the United Kingdom', 'gleu_score': 0.800000011920929}
```
Is followed by
```
{'alignment_label': 0,
'normal_sentence_id': '0_66252-1-0-1',
'simple_sentence_id': '0_66252-0-0-0',
'normal_sentence': 'Its main effect was to abolish the six county councils of the metropolitan counties that had been set up in 1974, 11 years earlier, by the Local Government Act 1972, along with the Greater London Council that had been established in 1965.',
'simple_sentence': 'The Local Government Act 1985 was an Act of Parliament in the United Kingdom', 'gleu_score': 0.08641975373029709}
```
The `auto` config shows a pair of an English and corresponding Simple English Wikipedia as an instance, with an alignment at the paragraph and sentence level:
```
{'example_id': '0',
'normal': {'normal_article_content': {'normal_sentence': ["Lata Mondal ( ; born: 16 January 1993, Dhaka) is a Bangladeshi cricketer who plays for the Bangladesh national women's cricket team.",
'She is a right handed batter.',
'Mondal was born on January 16, 1993 in Dhaka, Bangladesh.',
"Mondal made her ODI career against the Ireland women's cricket team on November 26, 2011.",
"Mondal made her T20I career against the Ireland women's cricket team on August 28, 2012.",
"In October 2018, she was named in Bangladesh's squad for the 2018 ICC Women's World Twenty20 tournament in the West Indies.",
"Mondal was a member of the team that won a silver medal in cricket against the China national women's cricket team at the 2010 Asian Games in Guangzhou, China."],
'normal_sentence_id': ['normal-41918715-0-0',
'normal-41918715-0-1',
'normal-41918715-1-0',
'normal-41918715-2-0',
'normal-41918715-3-0',
'normal-41918715-3-1',
'normal-41918715-4-0']},
'normal_article_id': 41918715,
'normal_article_title': 'Lata Mondal',
'normal_article_url': 'https://en.wikipedia.org/wiki?curid=41918715'},
'paragraph_alignment': {'normal_paragraph_id': ['normal-41918715-0'],
'simple_paragraph_id': ['simple-702227-0']},
'sentence_alignment': {'normal_sentence_id': ['normal-41918715-0-0',
'normal-41918715-0-1'],
'simple_sentence_id': ['simple-702227-0-0', 'simple-702227-0-1']},
'simple': {'simple_article_content': {'simple_sentence': ["Lata Mondal (born: 16 January 1993) is a Bangladeshi cricketer who plays for the Bangladesh national women's cricket team.",
'She is a right handed bat.'],
'simple_sentence_id': ['simple-702227-0-0', 'simple-702227-0-1']},
'simple_article_id': 702227,
'simple_article_title': 'Lata Mondal',
'simple_article_url': 'https://simple.wikipedia.org/wiki?curid=702227'}}
```
Finally, the `auto_acl`, the `auto_full_no_split`, and the `auto_full_with_split` configs were obtained by selecting the aligned pairs of sentences from `auto` to provide a ready-to-go aligned dataset to train a sequence-to-sequence system. While `auto_acl` corresponds to the filtered version of the data used to train the systems in the paper, `auto_full_no_split` and `auto_full_with_split` correspond to the unfiltered versions with and without sentence splits respectively. In the `auto_full_with_split` config, we join the sentences in the simple article mapped to the same sentence in the complex article to capture sentence splitting. Split sentences are separated by a `<SEP>` token. In the `auto_full_no_split` config, we do not join the splits and treat them as separate pairs. An instance is a single pair of sentences:
```
{'normal_sentence': 'In early work , Rutherford discovered the concept of radioactive half-life , the radioactive element radon , and differentiated and named alpha and beta radiation .\n',
'simple_sentence': 'Rutherford discovered the radioactive half-life , and the three parts of radiation which he named Alpha , Beta , and Gamma .\n'}
```
### Data Fields
The data has the following field:
- `normal_sentence`: a sentence from English Wikipedia.
- `normal_sentence_id`: a unique ID for each English Wikipedia sentence. The last two dash-separated numbers correspond to the paragraph number in the article and the sentence number in the paragraph.
- `simple_sentence`: a sentence from Simple English Wikipedia.
- `simple_sentence_id`: a unique ID for each Simple English Wikipedia sentence. The last two dash-separated numbers correspond to the paragraph number in the article and the sentence number in the paragraph.
- `alignment_label`: signifies whether a pair of sentences is aligned: labels are `2:partialAligned`, `1:aligned` and `0:notAligned`
- `paragraph_alignment`: a first step of alignment mapping English and Simple English paragraphs from linked articles
- `sentence_alignment`: the full alignment mapping English and Simple English sentences from linked articles
- `gleu_score`: the sentence level GLEU (Google-BLEU) score for each pair.
### Data Splits
In `auto`, the `part_2` split corresponds to the articles used in `manual`, and `part_1` has the rest of Wikipedia.
The `manual` config is provided with a `train`/`dev`/`test` split with the following amounts of data:
| | train | validation | test |
|------------------------|--------:|-----------:|--------:|
| Total sentence pairs | 373801 | 73249 | 118074 |
| Aligned sentence pairs | 1889 | 346 | 677 |
## Dataset Creation
### Curation Rationale
Simple English Wikipedia provides a ready source of training data for text simplification systems, as 1. articles in different languages are linked, making it easier to find parallel data and 2. the Simple English data is written by users for users rather than by professional translators. However, even though articles are aligned, finding a good sentence-level alignment can remain challenging. This work aims to provide a solution for this problem. By manually annotating a sub-set of the articles, they manage to achieve an F1 score of over 88% on predicting alignment, which allows to create a good quality sentence level aligned corpus using all of Simple English Wikipedia.
### Source Data
#### Initial Data Collection and Normalization
The authors mention that they "extracted 138,095 article pairs from the 2019/09 Wikipedia dump [...] using an improved version of the [WikiExtractor](https://github.com/attardi/wikiextractor) library". The [SpaCy](https://spacy.io/) library is used for sentence splitting.
#### Who are the source language producers?
The dataset uses langauge from Wikipedia: some demographic information is provided [here](https://en.wikipedia.org/wiki/Wikipedia:Who_writes_Wikipedia%3F).
### Annotations
#### Annotation process
Sentence alignment labels were obtained for 500 randomly sampled document pairs (10,123 sentence pairs total). The authors pre-selected several alignment candidates from English Wikipedia for each Simple Wikipedia sentence based on various similarity metrics, then asked the crowd-workers to annotate these pairs.
#### Who are the annotators?
No demographic annotation is provided for the crowd workers.
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The dataset was created by Chao Jiang, Mounica Maddela, Wuwei Lan, Yang Zhong, and Wei Xu working at Ohio State University.
### Licensing Information
The dataset is not licensed by itself, but the source Wikipedia data is under a `cc-by-sa-3.0` license.
### Citation Information
You can cite the paper presenting the dataset as:
```
@inproceedings{acl/JiangMLZX20,
author = {Chao Jiang and
Mounica Maddela and
Wuwei Lan and
Yang Zhong and
Wei Xu},
editor = {Dan Jurafsky and
Joyce Chai and
Natalie Schluter and
Joel R. Tetreault},
title = {Neural {CRF} Model for Sentence Alignment in Text Simplification},
booktitle = {Proceedings of the 58th Annual Meeting of the Association for Computational
Linguistics, {ACL} 2020, Online, July 5-10, 2020},
pages = {7943--7960},
publisher = {Association for Computational Linguistics},
year = {2020},
url = {https://www.aclweb.org/anthology/2020.acl-main.709/}
}
```
### Contributions
Thanks to [@yjernite](https://github.com/yjernite), [@mounicam](https://github.com/mounicam) for adding this dataset. |
mteb/medrxiv-clustering-s2s | 2022-09-27T19:10:50.000Z | [
"language:en",
"region:us"
] | mteb | null | null | null | 1 | 272 | ---
language:
- en
--- |
bloyal/oas_paired_human_sars_cov_2 | 2023-08-28T19:31:21.000Z | [
"size_categories:100K<n<1M",
"license:cc-by-4.0",
"region:us"
] | bloyal | null | null | null | 0 | 272 | ---
license: cc-by-4.0
size_categories:
- 100K<n<1M
---
# Paired SARS-COV-2 heavy/light chain sequences from the Observed Antibody Space database
Human paired heavy/light chain amino acid sequences from the Observed Antibody Space (OAS) database obtained from SARS-COV-2 studies.
https://opig.stats.ox.ac.uk/webapps/oas/
Please include the following citation in your work:
```
Olsen, TH, Boyles, F, Deane, CM. Observed Antibody Space: A diverse database of cleaned, annotated, and translated unpaired and paired antibody sequences. Protein Science. 2022; 31: 141–146. https://doi.org/10.1002/pro.4205
```
## Data Preparation
This data was obtained on August 3, 2023 by searching the OAS Paired Sequence database with the following criteria:
- Species = "human"
- Disease = "SARS-COV-2"
This returned 704,652 filtered sequences from 3 studies split across 63 .csv.gz data unit files. These were extracted and filtered for records where both the `complete_vdj_heavy` and `complete_vdj_light` values were "T". Finally, the `sequence_alignment_aa_heavy` and `sequence_alignment_aa_light` fields were extracted into dataset and a 90/10 train/test applied. The resulting data was saved in pyarrow format. |
njoncode/medical-audio-dataset | 2023-09-18T10:44:55.000Z | [
"region:us"
] | njoncode | This new dataset is designed for Speech Recognition. | @InProceedings{huggingface:dataset,
title = {A medical audio dataset},
author={huggingface, Inc.
},
year={2023}
} | null | 0 | 272 | Entry not found |
veggiebird/biology-scienceqa | 2023-09-22T05:36:16.000Z | [
"region:us"
] | veggiebird | null | null | null | 0 | 272 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: embeddings
sequence: float32
splits:
- name: train
num_bytes: 7464263
num_examples: 1596
download_size: 7087955
dataset_size: 7464263
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "biology-scienceqa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HuggingFaceH4/stack-exchange-preferences | 2023-03-08T03:37:53.000Z | [
"task_categories:question-answering",
"size_categories:10M<n<100M",
"language:en",
"license:cc-by-sa-4.0",
"RLHF",
"preferences",
"human-feedback",
"Stack Exchange",
"arxiv:2112.00861",
"region:us"
] | HuggingFaceH4 | null | null | null | 72 | 271 | ---
license: cc-by-sa-4.0
task_categories:
- question-answering
language:
- en
pretty_name: H4 Stack Exchange Preferences Dataset
tags:
- RLHF
- preferences
- human-feedback
- Stack Exchange
download_size: 22132072448
size_categories:
- 10M<n<100M
---
# Dataset Card for H4 Stack Exchange Preferences Dataset
## Dataset Description
- **Homepage:** https://archive.org/details/stackexchange
- **Repository:** (private for now) https://github.com/huggingface/h4
- **Point of Contact:** Nathan Lambert, nathan@huggingface.co
- **Size of downloaded dataset:** 22.13 GB
- **Number of instructions:** 10,741,532
### Dataset Summary
This dataset contains questions and answers from the [Stack Overflow Data Dump](https://archive.org/details/stackexchange) for the purpose of **preference model training**.
Importantly, the questions have been filtered to fit the following criteria for preference models (following closely from [Askell et al. 2021](https://arxiv.org/abs/2112.00861)): *have >=2 answers*.
This data could also be used for instruction fine-tuning and language model training.
The questions are grouped with answers that are assigned a score corresponding to the Anthropic paper:
```
score = log2 (1 + upvotes) rounded to the nearest integer, plus 1 if the answer was accepted by the questioner (we assign a score of −1 if the number of upvotes is negative).
```
Some important notes when using this dataset for preference model pretraining (PMP), which can be ignored for other uses:
* the data will likely need to be filtered more due to matching scores.
* see section 4.1 of Askel et al 2021 for instructions on using each pair of samples twice via the following `binarization` (for better pre-training initialization):
```
Subsequently, we created a binary dataset by applying a ‘binarization’ procedure to the ranked dataset. That
is, for every ranked pair A > B, we transform it into two independent binary comparisons:
GOOD:A > BAD:A
BAD:B > GOOD:B
```
To see all the stackexchanges used in this data, please see [this file](https://huggingface.co/datasets/HuggingFaceH4/pmp-stack-exchange/blob/main/stack_exchanges.json).
Unfortunately, sharing the binarized data directly without metadata violates the license, so we have shared a script for binarization.
### Using the data
Here is a script from our internal tooling used to create a binarized dataset:
```
# Copyright 2023 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import random
from argparse import ArgumentParser
from pathlib import Path
import numpy as np
from datasets import Dataset, concatenate_datasets, load_dataset
from h4.data.utils import save_dataset_shards
H4_DIR = Path(__file__).resolve().parents[3]
DATA_DIR = H4_DIR / "data"
if __name__ == "__main__":
parser = ArgumentParser()
parser.add_argument("--debug", action="store_true", help="Added print statements / limit data size for debugging")
parser.add_argument(
"--output_dir",
default=f"{DATA_DIR}/pmp-binarized",
type=str,
help="Where to save the processed dataset",
)
parser.add_argument(
"--exchange_name",
type=str,
default=None,
help="Optional argument to specify a specific subsection of the dataset",
)
parser.add_argument(
"--binary_score", type=int, default=8, help="Score assigned to binarized pairs for preference data."
)
parser.add_argument(
"--stream_data", action="store_true", help="Optionally stream data, which can be useful with weaker computers"
)
parser.set_defaults(debug=False, stream_data=False) # default will process full dataset
args = parser.parse_args()
specific_exchange = args.exchange_name
stream_dataset = args.stream_data
binary_score = args.binary_score
if specific_exchange:
data_dir = "data/" + args.exchange_name
else:
data_dir = None
if args.debug:
data_len_limit = 10000
else:
data_len_limit = np.inf
dataset = load_dataset(
"HuggingFaceH4/pmp-stack-exchange",
data_dir=data_dir,
split="train",
streaming=stream_dataset,
)
pmp_data = []
for i, d in enumerate(iter(dataset)):
# check debug limit, quit if in debug mode (don't save)
if i > data_len_limit:
print("Early exit for debug mode!")
print(pmp_data)
break
question = d["question"]
answers = d["answers"]
num_answers = len(answers)
answer_scores = [a["pm_score"] for a in answers]
if len(np.unique(answer_scores)) < 2:
print(f"PM Scores are {answer_scores}, skipping this question {i}")
else:
# Sample 2 unique scores for binarization
dif_scores = False
while not dif_scores:
# print("infinite loop...?")
two_answers = random.sample(answers, 2)
if two_answers[0]["pm_score"] != two_answers[1]["pm_score"]:
dif_scores = True
answer_0 = two_answers[0]
answer_1 = two_answers[1]
text_0 = "Question: " + question + "\n" + "Answer: " + answer_0["text"]
text_1 = "Question: " + question + "\n" + "Answer: " + answer_1["text"]
score_0 = binary_score
score_1 = binary_score
pmp_data.append({"context": text_0, "score": score_0})
pmp_data.append({"context": text_1, "score": score_1})
# Save binarized data
sublist_len = 100000
print(f"Dataset length is {len(pmp_data)}")
# bypass known issue in arrow https://issues.apache.org/jira/browse/ARROW-17137
print(f"Processed dataset length > {sublist_len}, processing to HF dataset in chunks")
chunks = [pmp_data[x : x + sublist_len] for x in range(0, len(pmp_data), sublist_len)]
ds_chunks = [Dataset.from_list(ch) for ch in chunks]
ds = concatenate_datasets(ds_chunks)
save_dataset_shards(ds, args.output_dir, subset="stackexchange", shard_size="100MB")
```
### Languages
This is intended to be English only, thought other languages may be present. Some Stack Exchanges that are omitted include:
```
spanish: es.meta.stackoverflow.com, es.stackoverflow.com
japanese: ja.meta.stackoverflow.com, ja.stackoverflow.com
portugese: pt.stackoverflow.com, pt.meta.stackoverflow.com
russian: ru.stackoverflow, ru.meta.stackoverflow
```
### Licensing Information
License: https://creativecommons.org/licenses/by-sa/4.0/
The cc-by-sa 4.0 licensing, while intentionally permissive, does require attribution:
Attribution — You must attribute the work in the manner specified by the author or licensor (but not in any way that suggests that they endorse you or your use of the work).
Specifically the attribution requirements are as follows:
1. Visually display or otherwise indicate the source of the content as coming from the Stack Exchange Network. This requirement is satisfied with a discreet text blurb, or some other unobtrusive but clear visual indication.
2. Ensure that any Internet use of the content includes a hyperlink directly to the original question on the source site on the Network (e.g., http://stackoverflow.com/questions/12345)
3. Visually display or otherwise clearly indicate the author names for every question and answer used
4. Ensure that any Internet use of the content includes a hyperlink for each author name directly back to his or her user profile page on the source site on the Network (e.g., http://stackoverflow.com/users/12345/username), directly to the Stack Exchange domain, in standard HTML (i.e. not through a Tinyurl or other such indirect hyperlink, form of obfuscation or redirection), without any “nofollow” command or any other such means of avoiding detection by search engines, and visible even with JavaScript disabled.
For more information, see the Stack Exchange Terms of Service.
### Citation Information
```
@online{h4stackexchange,
author = {Lambert, Nathan and Tunstall, Lewis and Rajani, Nazneen and Thrush, Tristan},
title = {HuggingFace H4 Stack Exchange Preference Dataset},
year = 2023,
url = {https://huggingface.co/datasets/HuggingFaceH4/stack-exchange-preferences},
}
``` |
IlyaGusev/ru_turbo_alpaca | 2023-05-25T19:45:14.000Z | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:10K<n<100K",
"language:ru",
"license:cc-by-4.0",
"instruction-finetuning",
"instruction generation",
"alpaca",
"region:us"
] | IlyaGusev | null | null | null | 49 | 271 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: alternative_output
dtype: string
- name: label
dtype: string
- name: all_labels
sequence: string
- name: agreement
dtype: float32
- name: overlap
dtype: uint32
splits:
- name: train
num_bytes: 54774775
num_examples: 29822
download_size: 14565995
dataset_size: 54774775
license: cc-by-4.0
task_categories:
- text-generation
- text2text-generation
language:
- ru
tags:
- instruction-finetuning
- instruction generation
- alpaca
size_categories:
- 10K<n<100K
---
# RuTurboAlpaca
Dataset of ChatGPT-generated instructions in Russian.
<img src="https://cdn.midjourney.com/770a35fa-00c0-4214-bb88-727dbc7cfaf3/0_0.png" >
* Code: [rulm/self_instruct](https://github.com/IlyaGusev/rulm/tree/master/self_instruct)
* Code is based on [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca) and [self-instruct](https://github.com/yizhongw/self-instruct/).
* 29822 examples
Preliminary evaluation by an expert based on 400 samples:
* 83% of samples contain correct instructions
* 63% of samples have correct instructions and outputs
Crowdsouring-based evaluation on 3500 samples:
* 90% of samples contain correct instructions
* 68% of samples have correct instructions and outputs
Prompt template:
```
Составь набор из {{num_tasks}} разных заданий для дообучения языковой модели:
1. Делай задания максимально непохожими друг на друга: по типу, по запрашиваемым действиям, по формулировке, по наличию входа.
2. Задания должны быть выполнимы языковой моделью, которая не умеет работать с картинками, видео, и аудио, и не имеет доступа ко внешнему миру.
3. Используй хороший грамотный русский язык.
4. Делай задания в одно или два предложения.
5. Генерируй подходящие реалистичные входные данные, не используй общие шаблоны типа \"Имя человека\" или [имя] вместо реального имени.
6. Задание может быть без входных данных, в таком случае используй токен <noinput> вместо них.
7. На выходе сгенерируй подходящий длинный ответ.
8. Следуй тому же шаблону, который приведен в примерах, разделяй задания с помощью ###. Это важно!
Примеры заданий:
{% for task in example_tasks %}
{{task.index}}. Задание: {{task.instruction}}
{{task.index}}. Вход: {{task.input}}
{{task.index}}. Выход: {{task.output}}
{{ "###" if not loop.last else "" }}
{% endfor %}
```
## Legal disclaimer
Data is based on OpenAI’s gpt-3.5-turbo, whose [terms of use](https://openai.com/policies/terms-of-use) prohibit for us developing models that compete with OpenAI. Not for you. |
sid6i7/patient-doctor | 2023-03-30T20:02:27.000Z | [
"region:us"
] | sid6i7 | null | null | null | 3 | 271 | Entry not found |
isashap/resumedatasetspeft | 2023-09-20T03:45:23.000Z | [
"region:us"
] | isashap | null | null | null | 0 | 271 | Entry not found |
qa4mre | 2023-04-05T13:36:59.000Z | [
"task_categories:multiple-choice",
"task_ids:multiple-choice-qa",
"annotations_creators:other",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:ar",
"language:bg",
"language:de",
"language:en",
"language:es",
"language:it",
"language:ro",
"license:unknown",
"region:us"
] | null | QA4MRE dataset was created for the CLEF 2011/2012/2013 shared tasks to promote research in
question answering and reading comprehension. The dataset contains a supporting
passage and a set of questions corresponding to the passage. Multiple options
for answers are provided for each question, of which only one is correct. The
training and test datasets are available for the main track.
Additional gold standard documents are available for two pilot studies: one on
alzheimers data, and the other on entrance exams data. | null | null | 2 | 270 | ---
annotations_creators:
- other
language:
- ar
- bg
- de
- en
- es
- it
- ro
language_creators:
- found
license:
- unknown
multilinguality:
- multilingual
pretty_name: 'QA4MRE: Question Answering for Machine Reading Evaluation'
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- multiple-choice
task_ids:
- multiple-choice-qa
paperswithcode_id: null
dataset_info:
- config_name: 2011.main.DE
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 1747118
num_examples: 120
download_size: 222289
dataset_size: 1747118
- config_name: 2011.main.EN
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 1569676
num_examples: 120
download_size: 202490
dataset_size: 1569676
- config_name: 2011.main.ES
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 1694460
num_examples: 120
download_size: 217617
dataset_size: 1694460
- config_name: 2011.main.IT
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 1667188
num_examples: 120
download_size: 214764
dataset_size: 1667188
- config_name: 2011.main.RO
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 1740419
num_examples: 120
download_size: 221510
dataset_size: 1740419
- config_name: 2012.main.AR
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 2710656
num_examples: 160
download_size: 356178
dataset_size: 2710656
- config_name: 2012.main.BG
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 3454215
num_examples: 160
download_size: 445060
dataset_size: 3454215
- config_name: 2012.main.DE
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 2087466
num_examples: 160
download_size: 281600
dataset_size: 2087466
- config_name: 2012.main.EN
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 1757586
num_examples: 160
download_size: 243467
dataset_size: 1757586
- config_name: 2012.main.ES
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 2057402
num_examples: 160
download_size: 278445
dataset_size: 2057402
- config_name: 2012.main.IT
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 2071710
num_examples: 160
download_size: 280051
dataset_size: 2071710
- config_name: 2012.main.RO
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 2074930
num_examples: 160
download_size: 279541
dataset_size: 2074930
- config_name: 2012.alzheimers.EN
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 1637988
num_examples: 40
download_size: 177345
dataset_size: 1637988
- config_name: 2013.main.AR
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 4180979
num_examples: 284
download_size: 378302
dataset_size: 4180979
- config_name: 2013.main.BG
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 5403246
num_examples: 284
download_size: 463605
dataset_size: 5403246
- config_name: 2013.main.EN
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 2887866
num_examples: 284
download_size: 274969
dataset_size: 2887866
- config_name: 2013.main.ES
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 3449693
num_examples: 284
download_size: 315166
dataset_size: 3449693
- config_name: 2013.main.RO
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 3363049
num_examples: 284
download_size: 313510
dataset_size: 3363049
- config_name: 2013.alzheimers.EN
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 2614812
num_examples: 40
download_size: 274413
dataset_size: 2614812
- config_name: 2013.entrance_exam.EN
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 180827
num_examples: 46
download_size: 54598
dataset_size: 180827
---
# Dataset Card for "qa4mre"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://nlp.uned.es/clef-qa/repository/qa4mre.php
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [QA4MRE 2011-2013: Overview of Question Answering for Machine Reading Evaluation](https://link.springer.com/chapter/10.1007/978-3-642-40802-1_29)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 5.49 MB
- **Size of the generated dataset:** 48.35 MB
- **Total amount of disk used:** 53.84 MB
### Dataset Summary
QA4MRE dataset was created for the CLEF 2011/2012/2013 shared tasks to promote research in
question answering and reading comprehension. The dataset contains a supporting
passage and a set of questions corresponding to the passage. Multiple options
for answers are provided for each question, of which only one is correct. The
training and test datasets are available for the main track.
Additional gold standard documents are available for two pilot studies: one on
alzheimers data, and the other on entrance exams data.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### 2011.main.DE
- **Size of downloaded dataset files:** 0.22 MB
- **Size of the generated dataset:** 1.75 MB
- **Total amount of disk used:** 1.97 MB
An example of 'train' looks as follows.
```
```
#### 2011.main.EN
- **Size of downloaded dataset files:** 0.20 MB
- **Size of the generated dataset:** 1.57 MB
- **Total amount of disk used:** 1.77 MB
An example of 'train' looks as follows.
```
```
#### 2011.main.ES
- **Size of downloaded dataset files:** 0.22 MB
- **Size of the generated dataset:** 1.70 MB
- **Total amount of disk used:** 1.91 MB
An example of 'train' looks as follows.
```
```
#### 2011.main.IT
- **Size of downloaded dataset files:** 0.21 MB
- **Size of the generated dataset:** 1.67 MB
- **Total amount of disk used:** 1.88 MB
An example of 'train' looks as follows.
```
```
#### 2011.main.RO
- **Size of downloaded dataset files:** 0.22 MB
- **Size of the generated dataset:** 1.74 MB
- **Total amount of disk used:** 1.96 MB
An example of 'train' looks as follows.
```
```
### Data Fields
The data fields are the same among all splits.
#### 2011.main.DE
- `topic_id`: a `string` feature.
- `topic_name`: a `string` feature.
- `test_id`: a `string` feature.
- `document_id`: a `string` feature.
- `document_str`: a `string` feature.
- `question_id`: a `string` feature.
- `question_str`: a `string` feature.
- `answer_options`: a dictionary feature containing:
- `answer_id`: a `string` feature.
- `answer_str`: a `string` feature.
- `correct_answer_id`: a `string` feature.
- `correct_answer_str`: a `string` feature.
#### 2011.main.EN
- `topic_id`: a `string` feature.
- `topic_name`: a `string` feature.
- `test_id`: a `string` feature.
- `document_id`: a `string` feature.
- `document_str`: a `string` feature.
- `question_id`: a `string` feature.
- `question_str`: a `string` feature.
- `answer_options`: a dictionary feature containing:
- `answer_id`: a `string` feature.
- `answer_str`: a `string` feature.
- `correct_answer_id`: a `string` feature.
- `correct_answer_str`: a `string` feature.
#### 2011.main.ES
- `topic_id`: a `string` feature.
- `topic_name`: a `string` feature.
- `test_id`: a `string` feature.
- `document_id`: a `string` feature.
- `document_str`: a `string` feature.
- `question_id`: a `string` feature.
- `question_str`: a `string` feature.
- `answer_options`: a dictionary feature containing:
- `answer_id`: a `string` feature.
- `answer_str`: a `string` feature.
- `correct_answer_id`: a `string` feature.
- `correct_answer_str`: a `string` feature.
#### 2011.main.IT
- `topic_id`: a `string` feature.
- `topic_name`: a `string` feature.
- `test_id`: a `string` feature.
- `document_id`: a `string` feature.
- `document_str`: a `string` feature.
- `question_id`: a `string` feature.
- `question_str`: a `string` feature.
- `answer_options`: a dictionary feature containing:
- `answer_id`: a `string` feature.
- `answer_str`: a `string` feature.
- `correct_answer_id`: a `string` feature.
- `correct_answer_str`: a `string` feature.
#### 2011.main.RO
- `topic_id`: a `string` feature.
- `topic_name`: a `string` feature.
- `test_id`: a `string` feature.
- `document_id`: a `string` feature.
- `document_str`: a `string` feature.
- `question_id`: a `string` feature.
- `question_str`: a `string` feature.
- `answer_options`: a dictionary feature containing:
- `answer_id`: a `string` feature.
- `answer_str`: a `string` feature.
- `correct_answer_id`: a `string` feature.
- `correct_answer_str`: a `string` feature.
### Data Splits
| name |train|
|------------|----:|
|2011.main.DE| 120|
|2011.main.EN| 120|
|2011.main.ES| 120|
|2011.main.IT| 120|
|2011.main.RO| 120|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{10.1007/978-3-642-40802-1_29,
author="Pe{\~{n}}as, Anselmo
and Hovy, Eduard
and Forner, Pamela
and Rodrigo, {\'A}lvaro
and Sutcliffe, Richard
and Morante, Roser",
editor="Forner, Pamela
and M{\"u}ller, Henning
and Paredes, Roberto
and Rosso, Paolo
and Stein, Benno",
title="QA4MRE 2011-2013: Overview of Question Answering for Machine Reading Evaluation",
booktitle="Information Access Evaluation. Multilinguality, Multimodality, and Visualization",
year="2013",
publisher="Springer Berlin Heidelberg",
address="Berlin, Heidelberg",
pages="303--320",
isbn="978-3-642-40802-1"
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@albertvillanova](https://github.com/albertvillanova), [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
SetFit/stsb | 2022-02-28T14:20:16.000Z | [
"region:us"
] | SetFit | null | null | null | 0 | 270 | # Glue STS-B
This dataset is a port of the official [`sts-b` dataset](https://huggingface.co/datasets/glue/viewer/stsb/validation) on the Hub.
This is not a classification task, so the label_text column is only included for consistency
Note that the sentence1 and sentence2 columns have been renamed to text1 and text2 respectively.
Also, the test split is not labeled; the label column values are always -1.
|
result-kand2-sdxl-wuerst-karlo/c09c453b | 2023-09-29T10:08:13.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 270 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 195
num_examples: 10
download_size: 1353
dataset_size: 195
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "c09c453b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_Riiid__sheep-duck-llama-2-70b-v1.1 | 2023-10-04T07:22:11.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | null | 0 | 270 | ---
pretty_name: Evaluation run of Riiid/sheep-duck-llama-2-70b-v1.1
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Riiid/sheep-duck-llama-2-70b-v1.1](https://huggingface.co/Riiid/sheep-duck-llama-2-70b-v1.1)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 61 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Riiid__sheep-duck-llama-2-70b-v1.1\"\
,\n\t\"harness_truthfulqa_mc_0\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\
\nThese are the [latest results from run 2023-10-04T07:20:47.805368](https://huggingface.co/datasets/open-llm-leaderboard/details_Riiid__sheep-duck-llama-2-70b-v1.1/blob/main/results_2023-10-04T07-20-47.805368.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.7077701920572519,\n\
\ \"acc_stderr\": 0.030864748969041296,\n \"acc_norm\": 0.7116562654123524,\n\
\ \"acc_norm_stderr\": 0.030832223559852722,\n \"mc1\": 0.4675642594859241,\n\
\ \"mc1_stderr\": 0.017466632149577613,\n \"mc2\": 0.6457987192860579,\n\
\ \"mc2_stderr\": 0.014744553500507454\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6877133105802048,\n \"acc_stderr\": 0.013542598541688065,\n\
\ \"acc_norm\": 0.7303754266211604,\n \"acc_norm_stderr\": 0.012968040686869154\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6914957179844653,\n\
\ \"acc_stderr\": 0.004609320024893897,\n \"acc_norm\": 0.8781119298944433,\n\
\ \"acc_norm_stderr\": 0.003264878737586879\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.38,\n \"acc_stderr\": 0.048783173121456316,\n \
\ \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.048783173121456316\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6296296296296297,\n\
\ \"acc_stderr\": 0.04171654161354543,\n \"acc_norm\": 0.6296296296296297,\n\
\ \"acc_norm_stderr\": 0.04171654161354543\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.8092105263157895,\n \"acc_stderr\": 0.03197565821032499,\n\
\ \"acc_norm\": 0.8092105263157895,\n \"acc_norm_stderr\": 0.03197565821032499\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.75,\n\
\ \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.75,\n \
\ \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.7509433962264151,\n \"acc_stderr\": 0.026616482980501704,\n\
\ \"acc_norm\": 0.7509433962264151,\n \"acc_norm_stderr\": 0.026616482980501704\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.8194444444444444,\n\
\ \"acc_stderr\": 0.03216600808802267,\n \"acc_norm\": 0.8194444444444444,\n\
\ \"acc_norm_stderr\": 0.03216600808802267\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.48,\n \"acc_stderr\": 0.050211673156867795,\n \
\ \"acc_norm\": 0.48,\n \"acc_norm_stderr\": 0.050211673156867795\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.6,\n \"acc_stderr\": 0.04923659639173309,\n \"acc_norm\"\
: 0.6,\n \"acc_norm_stderr\": 0.04923659639173309\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.4,\n \"acc_stderr\": 0.049236596391733084,\n \
\ \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.049236596391733084\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6820809248554913,\n\
\ \"acc_stderr\": 0.0355068398916558,\n \"acc_norm\": 0.6820809248554913,\n\
\ \"acc_norm_stderr\": 0.0355068398916558\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.37254901960784315,\n \"acc_stderr\": 0.048108401480826346,\n\
\ \"acc_norm\": 0.37254901960784315,\n \"acc_norm_stderr\": 0.048108401480826346\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.76,\n \"acc_stderr\": 0.042923469599092816,\n \"acc_norm\": 0.76,\n\
\ \"acc_norm_stderr\": 0.042923469599092816\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.6936170212765957,\n \"acc_stderr\": 0.030135906478517563,\n\
\ \"acc_norm\": 0.6936170212765957,\n \"acc_norm_stderr\": 0.030135906478517563\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4649122807017544,\n\
\ \"acc_stderr\": 0.04692008381368909,\n \"acc_norm\": 0.4649122807017544,\n\
\ \"acc_norm_stderr\": 0.04692008381368909\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.6413793103448275,\n \"acc_stderr\": 0.03996629574876719,\n\
\ \"acc_norm\": 0.6413793103448275,\n \"acc_norm_stderr\": 0.03996629574876719\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.48148148148148145,\n \"acc_stderr\": 0.02573364199183898,\n \"\
acc_norm\": 0.48148148148148145,\n \"acc_norm_stderr\": 0.02573364199183898\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5,\n\
\ \"acc_stderr\": 0.04472135954999579,\n \"acc_norm\": 0.5,\n \
\ \"acc_norm_stderr\": 0.04472135954999579\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.48,\n \"acc_stderr\": 0.050211673156867795,\n \
\ \"acc_norm\": 0.48,\n \"acc_norm_stderr\": 0.050211673156867795\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.8193548387096774,\n \"acc_stderr\": 0.021886178567172523,\n \"\
acc_norm\": 0.8193548387096774,\n \"acc_norm_stderr\": 0.021886178567172523\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.5566502463054187,\n \"acc_stderr\": 0.03495334582162933,\n \"\
acc_norm\": 0.5566502463054187,\n \"acc_norm_stderr\": 0.03495334582162933\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.78,\n \"acc_stderr\": 0.04163331998932262,\n \"acc_norm\"\
: 0.78,\n \"acc_norm_stderr\": 0.04163331998932262\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.8484848484848485,\n \"acc_stderr\": 0.027998073798781678,\n\
\ \"acc_norm\": 0.8484848484848485,\n \"acc_norm_stderr\": 0.027998073798781678\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.8939393939393939,\n \"acc_stderr\": 0.02193804773885311,\n \"\
acc_norm\": 0.8939393939393939,\n \"acc_norm_stderr\": 0.02193804773885311\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.9430051813471503,\n \"acc_stderr\": 0.01673108529360755,\n\
\ \"acc_norm\": 0.9430051813471503,\n \"acc_norm_stderr\": 0.01673108529360755\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.7128205128205128,\n \"acc_stderr\": 0.02293992541853062,\n \
\ \"acc_norm\": 0.7128205128205128,\n \"acc_norm_stderr\": 0.02293992541853062\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.34074074074074073,\n \"acc_stderr\": 0.028897748741131137,\n \
\ \"acc_norm\": 0.34074074074074073,\n \"acc_norm_stderr\": 0.028897748741131137\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.7689075630252101,\n \"acc_stderr\": 0.027381406927868876,\n\
\ \"acc_norm\": 0.7689075630252101,\n \"acc_norm_stderr\": 0.027381406927868876\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.48344370860927155,\n \"acc_stderr\": 0.040802441856289715,\n \"\
acc_norm\": 0.48344370860927155,\n \"acc_norm_stderr\": 0.040802441856289715\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.9009174311926605,\n \"acc_stderr\": 0.01280978008187893,\n \"\
acc_norm\": 0.9009174311926605,\n \"acc_norm_stderr\": 0.01280978008187893\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.6018518518518519,\n \"acc_stderr\": 0.033384734032074016,\n \"\
acc_norm\": 0.6018518518518519,\n \"acc_norm_stderr\": 0.033384734032074016\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.9264705882352942,\n \"acc_stderr\": 0.01831885585008968,\n \"\
acc_norm\": 0.9264705882352942,\n \"acc_norm_stderr\": 0.01831885585008968\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.8987341772151899,\n \"acc_stderr\": 0.019637720526065498,\n \
\ \"acc_norm\": 0.8987341772151899,\n \"acc_norm_stderr\": 0.019637720526065498\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7892376681614349,\n\
\ \"acc_stderr\": 0.027373095500540186,\n \"acc_norm\": 0.7892376681614349,\n\
\ \"acc_norm_stderr\": 0.027373095500540186\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.8473282442748091,\n \"acc_stderr\": 0.031545216720054725,\n\
\ \"acc_norm\": 0.8473282442748091,\n \"acc_norm_stderr\": 0.031545216720054725\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.8677685950413223,\n \"acc_stderr\": 0.03092278832044579,\n \"\
acc_norm\": 0.8677685950413223,\n \"acc_norm_stderr\": 0.03092278832044579\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8333333333333334,\n\
\ \"acc_stderr\": 0.03602814176392645,\n \"acc_norm\": 0.8333333333333334,\n\
\ \"acc_norm_stderr\": 0.03602814176392645\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.8282208588957055,\n \"acc_stderr\": 0.029634717272371037,\n\
\ \"acc_norm\": 0.8282208588957055,\n \"acc_norm_stderr\": 0.029634717272371037\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5535714285714286,\n\
\ \"acc_stderr\": 0.047184714852195865,\n \"acc_norm\": 0.5535714285714286,\n\
\ \"acc_norm_stderr\": 0.047184714852195865\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.8446601941747572,\n \"acc_stderr\": 0.03586594738573974,\n\
\ \"acc_norm\": 0.8446601941747572,\n \"acc_norm_stderr\": 0.03586594738573974\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.9188034188034188,\n\
\ \"acc_stderr\": 0.017893784904018516,\n \"acc_norm\": 0.9188034188034188,\n\
\ \"acc_norm_stderr\": 0.017893784904018516\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.72,\n \"acc_stderr\": 0.04512608598542126,\n \
\ \"acc_norm\": 0.72,\n \"acc_norm_stderr\": 0.04512608598542126\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8659003831417624,\n\
\ \"acc_stderr\": 0.012185528166499978,\n \"acc_norm\": 0.8659003831417624,\n\
\ \"acc_norm_stderr\": 0.012185528166499978\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7745664739884393,\n \"acc_stderr\": 0.022497230190967554,\n\
\ \"acc_norm\": 0.7745664739884393,\n \"acc_norm_stderr\": 0.022497230190967554\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.6368715083798883,\n\
\ \"acc_stderr\": 0.016083749986853708,\n \"acc_norm\": 0.6368715083798883,\n\
\ \"acc_norm_stderr\": 0.016083749986853708\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7516339869281046,\n \"acc_stderr\": 0.02473998135511359,\n\
\ \"acc_norm\": 0.7516339869281046,\n \"acc_norm_stderr\": 0.02473998135511359\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.77491961414791,\n\
\ \"acc_stderr\": 0.023720088516179027,\n \"acc_norm\": 0.77491961414791,\n\
\ \"acc_norm_stderr\": 0.023720088516179027\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.8333333333333334,\n \"acc_stderr\": 0.020736358408060002,\n\
\ \"acc_norm\": 0.8333333333333334,\n \"acc_norm_stderr\": 0.020736358408060002\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.5780141843971631,\n \"acc_stderr\": 0.02946218923337059,\n \
\ \"acc_norm\": 0.5780141843971631,\n \"acc_norm_stderr\": 0.02946218923337059\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.589960886571056,\n\
\ \"acc_stderr\": 0.012561837621962028,\n \"acc_norm\": 0.589960886571056,\n\
\ \"acc_norm_stderr\": 0.012561837621962028\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.7426470588235294,\n \"acc_stderr\": 0.0265565194700415,\n\
\ \"acc_norm\": 0.7426470588235294,\n \"acc_norm_stderr\": 0.0265565194700415\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.7696078431372549,\n \"acc_stderr\": 0.01703522925803403,\n \
\ \"acc_norm\": 0.7696078431372549,\n \"acc_norm_stderr\": 0.01703522925803403\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7454545454545455,\n\
\ \"acc_stderr\": 0.041723430387053825,\n \"acc_norm\": 0.7454545454545455,\n\
\ \"acc_norm_stderr\": 0.041723430387053825\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7959183673469388,\n \"acc_stderr\": 0.025801283475090496,\n\
\ \"acc_norm\": 0.7959183673469388,\n \"acc_norm_stderr\": 0.025801283475090496\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8756218905472637,\n\
\ \"acc_stderr\": 0.023335401790166327,\n \"acc_norm\": 0.8756218905472637,\n\
\ \"acc_norm_stderr\": 0.023335401790166327\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.87,\n \"acc_stderr\": 0.033799766898963086,\n \
\ \"acc_norm\": 0.87,\n \"acc_norm_stderr\": 0.033799766898963086\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5240963855421686,\n\
\ \"acc_stderr\": 0.03887971849597264,\n \"acc_norm\": 0.5240963855421686,\n\
\ \"acc_norm_stderr\": 0.03887971849597264\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8771929824561403,\n \"acc_stderr\": 0.02517298435015575,\n\
\ \"acc_norm\": 0.8771929824561403,\n \"acc_norm_stderr\": 0.02517298435015575\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4675642594859241,\n\
\ \"mc1_stderr\": 0.017466632149577613,\n \"mc2\": 0.6457987192860579,\n\
\ \"mc2_stderr\": 0.014744553500507454\n }\n}\n```"
repo_url: https://huggingface.co/Riiid/sheep-duck-llama-2-70b-v1.1
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|arc:challenge|25_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hellaswag|10_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-04T07-20-47.805368.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-04T07-20-47.805368.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-04T07-20-47.805368.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-04T07-20-47.805368.parquet'
- config_name: results
data_files:
- split: 2023_10_04T07_20_47.805368
path:
- results_2023-10-04T07-20-47.805368.parquet
- split: latest
path:
- results_2023-10-04T07-20-47.805368.parquet
---
# Dataset Card for Evaluation run of Riiid/sheep-duck-llama-2-70b-v1.1
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Riiid/sheep-duck-llama-2-70b-v1.1
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [Riiid/sheep-duck-llama-2-70b-v1.1](https://huggingface.co/Riiid/sheep-duck-llama-2-70b-v1.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Riiid__sheep-duck-llama-2-70b-v1.1",
"harness_truthfulqa_mc_0",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-04T07:20:47.805368](https://huggingface.co/datasets/open-llm-leaderboard/details_Riiid__sheep-duck-llama-2-70b-v1.1/blob/main/results_2023-10-04T07-20-47.805368.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.7077701920572519,
"acc_stderr": 0.030864748969041296,
"acc_norm": 0.7116562654123524,
"acc_norm_stderr": 0.030832223559852722,
"mc1": 0.4675642594859241,
"mc1_stderr": 0.017466632149577613,
"mc2": 0.6457987192860579,
"mc2_stderr": 0.014744553500507454
},
"harness|arc:challenge|25": {
"acc": 0.6877133105802048,
"acc_stderr": 0.013542598541688065,
"acc_norm": 0.7303754266211604,
"acc_norm_stderr": 0.012968040686869154
},
"harness|hellaswag|10": {
"acc": 0.6914957179844653,
"acc_stderr": 0.004609320024893897,
"acc_norm": 0.8781119298944433,
"acc_norm_stderr": 0.003264878737586879
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.38,
"acc_stderr": 0.048783173121456316,
"acc_norm": 0.38,
"acc_norm_stderr": 0.048783173121456316
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6296296296296297,
"acc_stderr": 0.04171654161354543,
"acc_norm": 0.6296296296296297,
"acc_norm_stderr": 0.04171654161354543
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.8092105263157895,
"acc_stderr": 0.03197565821032499,
"acc_norm": 0.8092105263157895,
"acc_norm_stderr": 0.03197565821032499
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.75,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7509433962264151,
"acc_stderr": 0.026616482980501704,
"acc_norm": 0.7509433962264151,
"acc_norm_stderr": 0.026616482980501704
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.8194444444444444,
"acc_stderr": 0.03216600808802267,
"acc_norm": 0.8194444444444444,
"acc_norm_stderr": 0.03216600808802267
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.48,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.48,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.6,
"acc_stderr": 0.04923659639173309,
"acc_norm": 0.6,
"acc_norm_stderr": 0.04923659639173309
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.4,
"acc_stderr": 0.049236596391733084,
"acc_norm": 0.4,
"acc_norm_stderr": 0.049236596391733084
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6820809248554913,
"acc_stderr": 0.0355068398916558,
"acc_norm": 0.6820809248554913,
"acc_norm_stderr": 0.0355068398916558
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.37254901960784315,
"acc_stderr": 0.048108401480826346,
"acc_norm": 0.37254901960784315,
"acc_norm_stderr": 0.048108401480826346
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.76,
"acc_stderr": 0.042923469599092816,
"acc_norm": 0.76,
"acc_norm_stderr": 0.042923469599092816
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.6936170212765957,
"acc_stderr": 0.030135906478517563,
"acc_norm": 0.6936170212765957,
"acc_norm_stderr": 0.030135906478517563
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4649122807017544,
"acc_stderr": 0.04692008381368909,
"acc_norm": 0.4649122807017544,
"acc_norm_stderr": 0.04692008381368909
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.6413793103448275,
"acc_stderr": 0.03996629574876719,
"acc_norm": 0.6413793103448275,
"acc_norm_stderr": 0.03996629574876719
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.48148148148148145,
"acc_stderr": 0.02573364199183898,
"acc_norm": 0.48148148148148145,
"acc_norm_stderr": 0.02573364199183898
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.5,
"acc_stderr": 0.04472135954999579,
"acc_norm": 0.5,
"acc_norm_stderr": 0.04472135954999579
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.48,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.48,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8193548387096774,
"acc_stderr": 0.021886178567172523,
"acc_norm": 0.8193548387096774,
"acc_norm_stderr": 0.021886178567172523
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5566502463054187,
"acc_stderr": 0.03495334582162933,
"acc_norm": 0.5566502463054187,
"acc_norm_stderr": 0.03495334582162933
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.78,
"acc_stderr": 0.04163331998932262,
"acc_norm": 0.78,
"acc_norm_stderr": 0.04163331998932262
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8484848484848485,
"acc_stderr": 0.027998073798781678,
"acc_norm": 0.8484848484848485,
"acc_norm_stderr": 0.027998073798781678
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8939393939393939,
"acc_stderr": 0.02193804773885311,
"acc_norm": 0.8939393939393939,
"acc_norm_stderr": 0.02193804773885311
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9430051813471503,
"acc_stderr": 0.01673108529360755,
"acc_norm": 0.9430051813471503,
"acc_norm_stderr": 0.01673108529360755
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.7128205128205128,
"acc_stderr": 0.02293992541853062,
"acc_norm": 0.7128205128205128,
"acc_norm_stderr": 0.02293992541853062
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.34074074074074073,
"acc_stderr": 0.028897748741131137,
"acc_norm": 0.34074074074074073,
"acc_norm_stderr": 0.028897748741131137
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.7689075630252101,
"acc_stderr": 0.027381406927868876,
"acc_norm": 0.7689075630252101,
"acc_norm_stderr": 0.027381406927868876
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.48344370860927155,
"acc_stderr": 0.040802441856289715,
"acc_norm": 0.48344370860927155,
"acc_norm_stderr": 0.040802441856289715
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.9009174311926605,
"acc_stderr": 0.01280978008187893,
"acc_norm": 0.9009174311926605,
"acc_norm_stderr": 0.01280978008187893
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.6018518518518519,
"acc_stderr": 0.033384734032074016,
"acc_norm": 0.6018518518518519,
"acc_norm_stderr": 0.033384734032074016
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.9264705882352942,
"acc_stderr": 0.01831885585008968,
"acc_norm": 0.9264705882352942,
"acc_norm_stderr": 0.01831885585008968
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8987341772151899,
"acc_stderr": 0.019637720526065498,
"acc_norm": 0.8987341772151899,
"acc_norm_stderr": 0.019637720526065498
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7892376681614349,
"acc_stderr": 0.027373095500540186,
"acc_norm": 0.7892376681614349,
"acc_norm_stderr": 0.027373095500540186
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8473282442748091,
"acc_stderr": 0.031545216720054725,
"acc_norm": 0.8473282442748091,
"acc_norm_stderr": 0.031545216720054725
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8677685950413223,
"acc_stderr": 0.03092278832044579,
"acc_norm": 0.8677685950413223,
"acc_norm_stderr": 0.03092278832044579
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8333333333333334,
"acc_stderr": 0.03602814176392645,
"acc_norm": 0.8333333333333334,
"acc_norm_stderr": 0.03602814176392645
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.8282208588957055,
"acc_stderr": 0.029634717272371037,
"acc_norm": 0.8282208588957055,
"acc_norm_stderr": 0.029634717272371037
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5535714285714286,
"acc_stderr": 0.047184714852195865,
"acc_norm": 0.5535714285714286,
"acc_norm_stderr": 0.047184714852195865
},
"harness|hendrycksTest-management|5": {
"acc": 0.8446601941747572,
"acc_stderr": 0.03586594738573974,
"acc_norm": 0.8446601941747572,
"acc_norm_stderr": 0.03586594738573974
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.9188034188034188,
"acc_stderr": 0.017893784904018516,
"acc_norm": 0.9188034188034188,
"acc_norm_stderr": 0.017893784904018516
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.72,
"acc_stderr": 0.04512608598542126,
"acc_norm": 0.72,
"acc_norm_stderr": 0.04512608598542126
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8659003831417624,
"acc_stderr": 0.012185528166499978,
"acc_norm": 0.8659003831417624,
"acc_norm_stderr": 0.012185528166499978
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7745664739884393,
"acc_stderr": 0.022497230190967554,
"acc_norm": 0.7745664739884393,
"acc_norm_stderr": 0.022497230190967554
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.6368715083798883,
"acc_stderr": 0.016083749986853708,
"acc_norm": 0.6368715083798883,
"acc_norm_stderr": 0.016083749986853708
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7516339869281046,
"acc_stderr": 0.02473998135511359,
"acc_norm": 0.7516339869281046,
"acc_norm_stderr": 0.02473998135511359
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.77491961414791,
"acc_stderr": 0.023720088516179027,
"acc_norm": 0.77491961414791,
"acc_norm_stderr": 0.023720088516179027
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.8333333333333334,
"acc_stderr": 0.020736358408060002,
"acc_norm": 0.8333333333333334,
"acc_norm_stderr": 0.020736358408060002
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.5780141843971631,
"acc_stderr": 0.02946218923337059,
"acc_norm": 0.5780141843971631,
"acc_norm_stderr": 0.02946218923337059
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.589960886571056,
"acc_stderr": 0.012561837621962028,
"acc_norm": 0.589960886571056,
"acc_norm_stderr": 0.012561837621962028
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.7426470588235294,
"acc_stderr": 0.0265565194700415,
"acc_norm": 0.7426470588235294,
"acc_norm_stderr": 0.0265565194700415
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.7696078431372549,
"acc_stderr": 0.01703522925803403,
"acc_norm": 0.7696078431372549,
"acc_norm_stderr": 0.01703522925803403
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7454545454545455,
"acc_stderr": 0.041723430387053825,
"acc_norm": 0.7454545454545455,
"acc_norm_stderr": 0.041723430387053825
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7959183673469388,
"acc_stderr": 0.025801283475090496,
"acc_norm": 0.7959183673469388,
"acc_norm_stderr": 0.025801283475090496
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8756218905472637,
"acc_stderr": 0.023335401790166327,
"acc_norm": 0.8756218905472637,
"acc_norm_stderr": 0.023335401790166327
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.87,
"acc_stderr": 0.033799766898963086,
"acc_norm": 0.87,
"acc_norm_stderr": 0.033799766898963086
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5240963855421686,
"acc_stderr": 0.03887971849597264,
"acc_norm": 0.5240963855421686,
"acc_norm_stderr": 0.03887971849597264
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8771929824561403,
"acc_stderr": 0.02517298435015575,
"acc_norm": 0.8771929824561403,
"acc_norm_stderr": 0.02517298435015575
},
"harness|truthfulqa:mc|0": {
"mc1": 0.4675642594859241,
"mc1_stderr": 0.017466632149577613,
"mc2": 0.6457987192860579,
"mc2_stderr": 0.014744553500507454
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
gradio/NYC-Airbnb-Open-Data | 2022-10-09T05:31:38.000Z | [
"license:afl-3.0",
"region:us"
] | gradio | null | null | null | 2 | 268 | ---
license: afl-3.0
---
|
parambharat/tamil_asr_corpus | 2022-12-07T17:32:59.000Z | [
"task_categories:automatic-speech-recognition",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|common_voice",
"source_datasets:extended|openslr",
"language:ta",
"license:cc-by-4.0",
"region:us"
] | parambharat | The corpus contains roughly 1000 hours of audio and trasncripts in Tamil language. The transcripts have beedn de-duplicated using exact match deduplication. | @misc{mile_1,
doi = {10.48550/ARXIV.2207.13331},
url = {https://arxiv.org/abs/2207.13331},
author = {A, Madhavaraj and Pilar, Bharathi and G, Ramakrishnan A},
title = {Subword Dictionary Learning and Segmentation Techniques for Automatic Speech Recognition in Tamil and Kannada},
publisher = {arXiv},
year = {2022},
}
@misc{mile_2,
doi = {10.48550/ARXIV.2207.13333},
url = {https://arxiv.org/abs/2207.13333},
author = {A, Madhavaraj and Pilar, Bharathi and G, Ramakrishnan A},
title = {Knowledge-driven Subword Grammar Modeling for Automatic Speech Recognition in Tamil and Kannada},
publisher = {arXiv},
year = {2022},
}
@inproceedings{he-etal-2020-open,
title = {{Open-source Multi-speaker Speech Corpora for Building Gujarati, Kannada, Malayalam, Marathi, Tamil and Telugu Speech Synthesis Systems}},
author = {He, Fei and Chu, Shan-Hui Cathy and Kjartansson, Oddur and Rivera, Clara and Katanova, Anna and Gutkin, Alexander and Demirsahin, Isin and Johny, Cibu and Jansche, Martin and Sarin, Supheakmungkol and Pipatsrisawat, Knot},
booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference (LREC)},
month = may,
year = {2020},
address = {Marseille, France},
publisher = {European Language Resources Association (ELRA)},
pages = {6494--6503},
url = {https://www.aclweb.org/anthology/2020.lrec-1.800},
ISBN = "{979-10-95546-34-4},
}
@misc{https://doi.org/10.48550/arxiv.2211.09536,
doi = {10.48550/ARXIV.2211.09536},
url = {https://arxiv.org/abs/2211.09536},
author = {Kumar, Gokul Karthik and S, Praveen and Kumar, Pratyush and Khapra, Mitesh M. and Nandakumar, Karthik},
keywords = {Computation and Language (cs.CL), Machine Learning (cs.LG), Sound (cs.SD), Audio and Speech Processing (eess.AS), FOS: Computer and information sciences, FOS: Computer and information sciences, FOS: Electrical engineering, electronic engineering, information engineering, FOS: Electrical engineering, electronic engineering, information engineering},
title = {Towards Building Text-To-Speech Systems for the Next Billion Users},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
@inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
}
@misc{https://doi.org/10.48550/arxiv.2205.12446,
doi = {10.48550/ARXIV.2205.12446},
url = {https://arxiv.org/abs/2205.12446},
author = {Conneau, Alexis and Ma, Min and Khanuja, Simran and Zhang, Yu and Axelrod, Vera and Dalmia, Siddharth and Riesa, Jason and Rivera, Clara and Bapna, Ankur},
keywords = {Computation and Language (cs.CL), Machine Learning (cs.LG), Sound (cs.SD), Audio and Speech Processing (eess.AS), FOS: Computer and information sciences, FOS: Computer and information sciences, FOS: Electrical engineering, electronic engineering, information engineering, FOS: Electrical engineering, electronic engineering, information engineering},
title = {FLEURS: Few-shot Learning Evaluation of Universal Representations of Speech},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
} | null | 1 | 268 | ---
annotations_creators:
- found
language:
- ta
language_creators:
- found
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Tamil ASR Corpus
size_categories:
- 100K<n<1M
source_datasets:
- extended|common_voice
- extended|openslr
tags: []
task_categories:
- automatic-speech-recognition
task_ids: []
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@parambharat](https://github.com/parambharat) for adding this dataset.
|
Thaweewat/alpaca-cleaned-52k-th | 2023-05-09T16:18:02.000Z | [
"task_categories:question-answering",
"task_categories:summarization",
"size_categories:10K<n<100K",
"language:th",
"license:cc-by-sa-3.0",
"instruction-finetuning",
"region:us"
] | Thaweewat | null | null | null | 3 | 268 | ---
license: cc-by-sa-3.0
task_categories:
- question-answering
- summarization
tags:
- instruction-finetuning
language:
- th
size_categories:
- 10K<n<100K
---
# Summary
This is a Thai 🇹🇭-instructed dataset translated from cleaned version of the original Alpaca Dataset released by Stanford using Google Cloud Translation, contain 52,000 instructions and demonstrations generated by OpenAI's `text-davinci-003` engine.
This instruction data can be used to conduct instruction-tuning for language models and make the language model follow instruction better.
The following issues have been identified in the original release and fixed in this dataset:
1. **Hallucinations:** Many instructions in the original dataset had instructions referencing data on the internet, which just caused GPT3 to hallucinate an answer.
2. **Merged Instructions:** There were many instructions that were merged together in the original dataset for some reason.
3. **Empty outputs:** Some entries in the original dataset had empty outputs.
4. **Empty code examples:** Some descriptions in the original dataset were missing code examples, making it difficult to understand the intended behavior of the code.
5. **Instructions to generate images:** Some descriptions in the original dataset included instructions to generate images, something obviously not possible.
6. **N/A outputs:** Some code snippets in the original dataset had N/A outputs.
7. **Inconsistent input field:** The original dataset had inconsistent usage of the input field when it was supposed to be empty.
8. **Wrong answers:** Some instructions/questions in the original dataset had incorrect answers. About 80% of the math problems are estimated to have incorrect answers.
9. **Non-Sensical/Unclear instructions:** Many instructions are unclear, we try to clarify (or re-write) if instructions are non-sensical. Instructions that are slightly unclear, but where one could deduce the meaning are not altered.
10. **Extraneous escape and control characters:** The original dataset had several entries with extraneous escape and control characters.
### Original Alpaca Dataset Summary
Alpaca is a dataset of 52,000 instructions and demonstrations generated by OpenAI's `text-davinci-003` engine. This instruction data can be used to conduct instruction-tuning for language models and make the language model follow instruction better.
The authors built on the data generation pipeline from [Self-Instruct framework](https://github.com/yizhongw/self-instruct) and made the following modifications:
- The `text-davinci-003` engine to generate the instruction data instead of `davinci`.
- A [new prompt](https://github.com/tatsu-lab/stanford_alpaca/blob/main/prompt.txt) was written that explicitly gave the requirement of instruction generation to `text-davinci-003`.
- Much more aggressive batch decoding was used, i.e., generating 20 instructions at once, which significantly reduced the cost of data generation.
- The data generation pipeline was simplified by discarding the difference between classification and non-classification instructions.
- Only a single instance was generated for each instruction, instead of 2 to 3 instances as in Self-Instruct.
The authors built on the data generation pipeline from [Self-Instruct framework](https://github.com/yizhongw/self-instruct) and made the following modifications:
- The `text-davinci-003` engine to generate the instruction data instead of `davinci`.
- A [new prompt](https://github.com/tatsu-lab/stanford_alpaca/blob/main/prompt.txt) was written that explicitly gave the requirement of instruction generation to `text-davinci-003`.
- Much more aggressive batch decoding was used, i.e., generating 20 instructions at once, which significantly reduced the cost of data generation.
- The data generation pipeline was simplified by discarding the difference between classification and non-classification instructions.
- Only a single instance was generated for each instruction, instead of 2 to 3 instances as in Self-Instruct.
This produced an instruction-following dataset with 52K examples obtained at a much lower cost (less than $500).
In a preliminary study, the authors also found that the 52K generated data to be much more diverse than the data released by [Self-Instruct](https://github.com/yizhongw/self-instruct/blob/main/data/seed_tasks.jsonl).
Supported Tasks:
- Training LLMs
- Synthetic Data Generation
- Data Augmentation
Languages: Thai
Version: 1.0
--- |
open-llm-leaderboard/details_uni-tianyan__Uni-TianYan | 2023-09-18T02:40:22.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | null | 0 | 268 | ---
pretty_name: Evaluation run of uni-tianyan/Uni-TianYan
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [uni-tianyan/Uni-TianYan](https://huggingface.co/uni-tianyan/Uni-TianYan) on the\
\ [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_uni-tianyan__Uni-TianYan\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-09-18T02:40:09.826211](https://huggingface.co/datasets/open-llm-leaderboard/details_uni-tianyan__Uni-TianYan/blob/main/results_2023-09-18T02-40-09.826211.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.24486157718120805,\n\
\ \"em_stderr\": 0.004403654691385411,\n \"f1\": 0.39787751677852523,\n\
\ \"f1_stderr\": 0.004155160727794137,\n \"acc\": 0.5222921265482389,\n\
\ \"acc_stderr\": 0.01107896164608613\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.24486157718120805,\n \"em_stderr\": 0.004403654691385411,\n\
\ \"f1\": 0.39787751677852523,\n \"f1_stderr\": 0.004155160727794137\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.221379833206975,\n \
\ \"acc_stderr\": 0.011436000004253518\n },\n \"harness|winogrande|5\":\
\ {\n \"acc\": 0.8232044198895028,\n \"acc_stderr\": 0.010721923287918744\n\
\ }\n}\n```"
repo_url: https://huggingface.co/uni-tianyan/Uni-TianYan
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|arc:challenge|25_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_09_18T02_40_09.826211
path:
- '**/details_harness|drop|3_2023-09-18T02-40-09.826211.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-09-18T02-40-09.826211.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_09_18T02_40_09.826211
path:
- '**/details_harness|gsm8k|5_2023-09-18T02-40-09.826211.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-09-18T02-40-09.826211.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hellaswag|10_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-management|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-virology|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_09_18T02_40_09.826211
path:
- '**/details_harness|winogrande|5_2023-09-18T02-40-09.826211.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-09-18T02-40-09.826211.parquet'
- config_name: results
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- results_2023-09-03T12:27:36.436118.parquet
- split: 2023_09_18T02_40_09.826211
path:
- results_2023-09-18T02-40-09.826211.parquet
- split: latest
path:
- results_2023-09-18T02-40-09.826211.parquet
---
# Dataset Card for Evaluation run of uni-tianyan/Uni-TianYan
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/uni-tianyan/Uni-TianYan
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [uni-tianyan/Uni-TianYan](https://huggingface.co/uni-tianyan/Uni-TianYan) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_uni-tianyan__Uni-TianYan",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-09-18T02:40:09.826211](https://huggingface.co/datasets/open-llm-leaderboard/details_uni-tianyan__Uni-TianYan/blob/main/results_2023-09-18T02-40-09.826211.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.24486157718120805,
"em_stderr": 0.004403654691385411,
"f1": 0.39787751677852523,
"f1_stderr": 0.004155160727794137,
"acc": 0.5222921265482389,
"acc_stderr": 0.01107896164608613
},
"harness|drop|3": {
"em": 0.24486157718120805,
"em_stderr": 0.004403654691385411,
"f1": 0.39787751677852523,
"f1_stderr": 0.004155160727794137
},
"harness|gsm8k|5": {
"acc": 0.221379833206975,
"acc_stderr": 0.011436000004253518
},
"harness|winogrande|5": {
"acc": 0.8232044198895028,
"acc_stderr": 0.010721923287918744
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
ghomasHudson/muld | 2022-11-02T12:55:17.000Z | [
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_categories:translation",
"task_ids:abstractive-qa",
"annotations_creators:found",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:translation",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"source_datasets:extended|hotpot_qa",
"source_datasets:extended|open_subtitles",
"language:en",
"language:de",
"conditional-text-generation",
"arxiv:2202.07362",
"region:us"
] | ghomasHudson | MuLD: The Multitask Long Document Benchmark
A set of NLP tasks where each example is over 10,000 tokens long. | @misc{hudson2022muld,
title{MuLD: The Multitask Long Document Benchmark},
author={G Thomas Hudson, Noura Al Moubayed}
year={2022},
eprint={TODO},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
Some of these datasets are directly based on existing datasets. Please cite these works. | null | 5 | 267 | ---
annotations_creators:
- found
- crowdsourced
language_creators:
- found
language:
- en
- de
license: []
multilinguality:
- translation
- monolingual
size_categories:
- unknown
source_datasets:
- original
- extended|hotpot_qa
- extended|open_subtitles
task_categories:
- question-answering
- summarization
- text-generation
- translation
task_ids:
- abstractive-qa
pretty_name: The Multitask Long Document Benchmark
tags:
- conditional-text-generation
---
# MuLD
> The Multitask Long Document Benchmark

MuLD (Multitask Long Document Benchmark) is a set of 6 NLP tasks where the inputs consist of at least 10,000 words. The benchmark covers a wide variety of task types including translation, summarization, question answering, and classification. Additionally there is a range of output lengths from a single word classification label all the way up to an output longer than the input text.
- **Repository:** https://github.com/ghomasHudson/muld
- **Paper:** https://arxiv.org/abs/2202.07362
### Supported Tasks and Leaderboards
The 6 MuLD tasks consist of:
- **NarrativeQA** - A question answering dataset requiring an understanding of the plot of books and films.
- **HotpotQA** - An expanded version of HotpotQA requiring multihop reasoning between multiple wikipedia pages. This expanded version includes the full Wikipedia pages.
- **OpenSubtitles** - A translation dataset based on the OpenSubtitles 2018 dataset. The entire subtitles for each tv show is provided, one subtitle per line in both English and German.
- **VLSP (Very Long Scientific Papers)** - An expanded version of the Scientific Papers summarization dataset. Instead of removing very long papers (e.g. thesis), we explicitly include them removing any short papers.
- **AO3 Style Change Detection** - Consists of documents formed from the work of multiple [Archive of Our Own](ao3.org) authors, where the task is to predict the author for each paragraph.
- **Movie Character Types** - Predicting whether a named character is the Hero/Villain given a movie script.
### Dataset Structure
The data is presented in a text-to-text format where each instance contains a input string, output string and (optionally) json encoded metadata.
```
{'input: 'Who was wearing the blue shirt? The beginning...', 'output': ['John'], 'metadata': ''}
```
### Data Fields
- `input`: a string which has a differing structure per task but is presented in a unified format
- `output`: a list of strings where each is a possible answer. Most instances only have a single answer, but some such as narrativeQA and VLSP may have multiple.
- `metadata`: Additional metadata which may be helpful for evaluation. In this version, only the OpenSubtitles task contains metadata (for the ContraPro annotations).
### Data Splits
Each tasks contains different splits depending what was available in the source datasets:
| Task Name | Train | Validation | Test |
|----------------------------|----|----|-----|
| NarrativeQA | ✔️ | ✔️ | ✔️ |
| HotpotQA | ✔️ | ✔️ | |
| AO3 Style Change Detection | ✔️ | ✔️ | ✔️ |
| Movie Character Types | ✔️ | ✔️ | ✔️ |
| VLSP | | | ✔️ |
| OpenSubtitles | ✔️ | | ✔️ |
### Citation Information
```
@misc{hudson2022muld,
title={MuLD: The Multitask Long Document Benchmark},
author={G Thomas Hudson and Noura Al Moubayed},
year={2022},
eprint={2202.07362},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
Please also cite the papers directly used in this benchmark. |
ai4bharat/IndicQA | 2023-06-20T03:03:32.000Z | [
"task_categories:question-answering",
"task_ids:closed-domain-qa",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:n<1K",
"source_datasets:original",
"language:as",
"language:bn",
"language:gu",
"language:hi",
"language:kn",
"language:ml",
"language:mr",
"language:or",
"language:pa",
"language:ta",
"language:te",
"license:cc-by-4.0",
"region:us"
] | ai4bharat | \ | \ | null | 1 | 267 | ---
annotations_creators:
- expert-generated
language:
- as
- bn
- gu
- hi
- kn
- ml
- mr
- or
- pa
- ta
- te
language_creators:
- found
license:
- cc-by-4.0
multilinguality:
- multilingual
pretty_name: IndicQA
size_categories:
- n<1K
source_datasets:
- original
tags: []
task_categories:
- question-answering
task_ids:
- closed-domain-qa
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. |
olm/olm-wikipedia-20221220 | 2022-12-29T03:12:35.000Z | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"language:en",
"pretraining",
"language modelling",
"wikipedia",
"web",
"region:us"
] | olm | null | null | null | 2 | 267 | ---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
license: []
multilinguality:
- monolingual
pretty_name: OLM December 2022 Wikipedia
size_categories:
- 1M<n<10M
source_datasets: []
tags:
- pretraining
- language modelling
- wikipedia
- web
task_categories: []
task_ids: []
---
# Dataset Card for OLM December 2022 Wikipedia
Pretraining dataset, created with the OLM repo [here](https://github.com/huggingface/olm-datasets) from a December 2022 Wikipedia snapshot. |
bzantium/LongBench | 2023-09-25T04:03:43.000Z | [
"task_categories:question-answering",
"task_categories:text-generation",
"task_categories:summarization",
"task_categories:conversational",
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"language:zh",
"Long Context",
"arxiv:2308.14508",
"arxiv:2108.00573",
"arxiv:1712.07040",
"arxiv:2105.03011",
"arxiv:2104.02112",
"arxiv:2104.05938",
"arxiv:2305.05280",
"arxiv:2303.09752",
"arxiv:1910.10683",
"arxiv:2306.14893",
"arxiv:2306.03091",
"region:us"
] | bzantium | LongBench is a comprehensive benchmark for multilingual and multi-task purposes, with the goal to fully measure and evaluate the ability of pre-trained language models to understand long text. This dataset consists of twenty different tasks, covering key long-text application scenarios such as multi-document QA, single-document QA, summarization, few-shot learning, synthetic tasks, and code completion. | null | null | 0 | 267 | ---
task_categories:
- question-answering
- text-generation
- summarization
- conversational
- text-classification
language:
- en
- zh
tags:
- Long Context
size_categories:
- 1K<n<10K
---
# Introduction
**LongBench** is the first benchmark for bilingual, multitask, and comprehensive assessment of **long context understanding** capabilities of large language models. LongBench includes different languages (Chinese and English) to provide a more comprehensive evaluation of the large models' multilingual capabilities on long contexts. In addition, LongBench is composed of six major categories and twenty one different tasks, covering key long-text application scenarios such as single-document QA, multi-document QA, summarization, few-shot learning, synthetic tasks and code completion.
We are fully aware of the potentially high costs involved in the model evaluation process, especially in the context of long context scenarios (such as manual annotation costs or API call costs). Therefore, we adopt a fully automated evaluation method, aimed at measuring and evaluating the model's ability to understand long contexts at the lowest cost.
LongBench includes 14 English tasks, 5 Chinese tasks, and 2 code tasks, with the average length of most tasks ranging from 5k to 15k, and a total of 4,750 test data. For detailed statistics and construction methods of LongBench tasks, please refer [here](task.md). In addition, we provide LongBench-E, a test set with a more uniform length distribution constructed by uniform sampling, with comparable amounts of data in the 0-4k, 4k-8k, and 8k+ length intervals to provide an analysis of the model's performance variations at different input lengths.
Github Repo for LongBench: https://github.com/THUDM/LongBench
Arxiv Paper for LongBench: https://arxiv.org/pdf/2308.14508.pdf
# How to use it?
#### Loading Data
```python
from datasets import load_dataset
datasets = ["narrativeqa", "qasper", "multifieldqa_en", "multifieldqa_zh", "hotpotqa", "2wikimqa", "musique", \
"dureader", "gov_report", "qmsum", "multi_news", "vcsum", "trec", "triviaqa", "samsum", "lsht", \
"passage_count", "passage_retrieval_en", "passage_retrieval_zh", "lcc", "repobench-p"]
for dataset in datasets:
data = load_dataset('THUDM/LongBench', dataset, split='test')
```
Similarly, you can load the **LongBench-E** data
```python
from datasets import load_dataset
datasets = ["qasper", "multifieldqa_en", "hotpotqa", "2wikimqa", "gov_report", "multi_news", "trec", \
"triviaqa", "samsum", "passage_count", "passage_retrieval_en", "lcc", "repobench-p"]
for dataset in datasets:
data = load_dataset('THUDM/LongBench', f"{dataset}_e", split='test')
```
Alternatively, you can download the folder from [this link](https://huggingface.co/datasets/THUDM/LongBench/resolve/main/data.zip) to load the data.
#### Data Format
All data in **LongBench** (LongBench-E) are standardized to the following format:
```json
{
"input": "The input/command for the task, usually short, such as questions in QA, queries in Few-shot tasks, etc",
"context": "The long context required for the task, such as documents, cross-file code, few-shot examples in Few-shot tasks",
"answers": "A List of all true answers",
"length": "Total length of the first three items (counted in characters for Chinese and words for English)",
"dataset": "The name of the dataset to which this piece of data belongs",
"language": "The language of this piece of data",
"all_classes": "All categories in classification tasks, null for non-classification tasks",
"_id": "Random id for each piece of data"
}
```
#### Evaluation
This repository provides data download for LongBench. If you wish to use this dataset for automated evaluation, please refer to our [github](https://github.com/THUDM/LongBench).
# Task statistics
| Task | Task Type | Eval metric | Avg len |Language | \#Sample |
| :-------- | :-----------:| :-----------: |:-------: | :-----------: |:--------: |
| HotpotQA | Multi-doc QA | F1 |9,151 |EN |200 |
| 2WikiMultihopQA| Multi-doc QA | F1 |4,887 |EN |200 |
| MuSiQue| Multi-doc QA | F1 |11,214 |EN |200 |
| DuReader| Multi-doc QA | Rouge-L |15,768 |ZH |200 |
| MultiFieldQA-en| Single-doc QA | F1 |4,559 |EN |150 |
| MultiFieldQA-zh| Single-doc QA | F1 |6,701 |ZH |200 |
| NarrativeQA| Single-doc QA | F1 |18,409 |EN |200 |
| Qasper| Single-doc QA | F1 |3,619 |EN |200 |
| GovReport| Summarization | Rouge-L |8,734 |EN |200 |
| QMSum| Summarization | Rouge-L |10,614 |EN |200 |
| MultiNews| Summarization | Rouge-L |2,113 |EN |200 |
| VCSUM| Summarization | Rouge-L |15,380 |ZH |200 |
| TriviaQA| Few shot | F1 |8,209 |EN |200 |
| SAMSum| Few shot | Rouge-L |6,258 |EN |200 |
| TREC| Few shot | Accuracy |5,177 |EN |200 |
| LSHT| Few shot | Accuracy |22,337 |ZH |200 |
| PassageRetrieval-en| Synthetic | Accuracy |9,289 |EN |200 |
| PassageCount| Synthetic | Accuracy |11,141 |EN |200 |
| PassageRetrieval-zh | Synthetic | Accuracy |6,745 |ZH |200 |
| LCC| Code | Edit Sim |1,235 |Python/C#/Java |500 |
| RepoBench-P| Code | Edit Sim |4,206 |Python/Java |500 |
> Note: In order to avoid discrepancies caused by different tokenizers, we use the word count (using Python's split function) to calculate the average length of English datasets and code datasets, and use the character count to calculate the average length of Chinese datasets.
# Task description
| Task | Task Description |
| :---------------- | :----------------------------------------------------------- |
| HotpotQA | Answer related questions based on multiple given documents |
| 2WikiMultihopQA | Answer related questions based on multiple given documents |
| MuSiQue | Answer related questions based on multiple given documents |
| DuReader | Answer related Chinese questions based on multiple retrieved documents |
| MultiFieldQA-en | Answer English questions based on a long article, which comes from a relatively diverse field |
| MultiFieldQA-zh | Answer Chinese questions based on a long article, which comes from a relatively diverse field |
| NarrativeQA | Answer questions based on stories or scripts, including understanding of important elements such as characters, plots, themes, etc. |
| Qasper | Answer questions based on a NLP research paper, questions proposed and answered by NLP practitioners |
| GovReport | A summarization task that requires summarizing government work reports |
| MultiNews | A multi-doc summarization that requires summarizing over multiple news |
| QMSum | A summarization task that requires summarizing meeting records based on user queries |
| VCSUM | A summarization task that requires summarizing Chinese meeting records |
| SAMSum | A dialogue summarization task, providing several few-shot examples |
| TriviaQA | Single document question answering task, providing several few-shot examples |
| NQ | Single document question answering task, providing several few-shot examples |
| TREC | A classification task that requires categorizing questions, includes 50 categories in total |
| LSHT | A Chinese classification task that requires categorizing news, includes 24 categories in total |
| PassageRetrieval-en | Given 30 English Wikipedia paragraphs, determine which paragraph the given summary corresponds to |
| PassageCount | Determine the total number of different paragraphs in a given repetitive article |
| PassageRetrieval-zh | Given several Chinese paragraphs from the C4 data set, determine which paragraph the given abstract corresponds to |
| LCC | Given a long piece of code, predict the next line of code |
| RepoBench-P | Given code in multiple files within a GitHub repository (including cross-file dependencies), predict the next line of code |
# Task construction
> Note: For all tasks constructed from existing datasets, we use data from the validation or test set of the existing dataset (except for VCSUM).
- The tasks of [HotpotQA](https://hotpotqa.github.io/), [2WikiMultihopQA](https://aclanthology.org/2020.coling-main.580/), [MuSiQue](https://arxiv.org/abs/2108.00573), and [DuReader](https://github.com/baidu/DuReader) are built based on the original datasets and processed to be suitable for long context evaluation. Specifically, for questions in the validation set, we select the evidence passage that contains the answer and several distracting articles. These articles together with the original question constitute the input of the tasks.
- The tasks of MultiFiedQA-zh and MultiFieldQA-en consist of long artical data from about 10 sources, including Latex papers, judicial documents, government work reports, and PDF documents indexed by Google. For each long artical, we invite several PhD and master students to annotate, i.e., to ask questions based on the long artical and give the correct answers. To better automate evaluation, we ask the annotators to propose questions with definitive answers as much as possible.
- The tasks of [NarrativeQA](https://arxiv.org/pdf/1712.07040.pdf), [Qasper](https://arxiv.org/pdf/2105.03011.pdf), [GovReport](https://arxiv.org/pdf/2104.02112.pdf), [QMSum](https://arxiv.org/pdf/2104.05938.pdf) and [MultiNews](https://aclanthology.org/P19-1102.pdf) directly use the data provided by the original papers. In the specific construction, we use the template provided by [ZeroSCROLLS](https://www.zero.scrolls-benchmark.com/) to convert the corresponding data into pure text input.
- The [VCSUM](https://arxiv.org/abs/2305.05280) task is built based on the original dataset, and we design a corresponding template to convert the corresponding data into pure text input.
- The [TriviaQA](https://nlp.cs.washington.edu/triviaqa/) task is constructed in the manner of [CoLT5](https://arxiv.org/abs/2303.09752), which provides several examples of question and answering based on documents, and requires the language model to answer related questions based on new documents.
- The tasks of [SAMSum](https://aclanthology.org/D19-5409.pdf), [TREC](https://aclanthology.org/C02-1150.pdf) and [LSHT](http://tcci.ccf.org.cn/conference/2014/dldoc/evatask6.pdf) are built based on the original datasets. For each question in the validation set, we sample several data from the training set to form few-shot examples. These examples together with the questions in the validation set constitute the input for this task.
- The PassageRetrieval-en task is constructed based on English Wikipedia. For each piece of data, we randomly sample 30 paragraphs from English Wikipedia and select one for summarization (using GPT-3.5-Turbo). This task requires the model to give the original paragraph name to which the summary corresponds.
- The PassageCount task is constructed based on the English wiki. For each piece of data, we randomly sample several passages from English Wikipedia, repeat each paragraph at random several times, and finally shuffle the paragraphs. This task requires the model to determine the total number of different paragraphs in the given context.
- The PasskeyRetrieval-zh task is constructed based on [C4](https://arxiv.org/abs/1910.10683). For each piece of data, we randomly sample several Chinese paragraphs from C4 and select one of them for summarization (using GPT-3.5-Turbo). This task requires the model to give the original paragraph name to which the summary corresponds.
- For the [LCC](https://arxiv.org/abs/2306.14893) task, we sample from the original code completion dataset. In the [RepoBench-P](https://arxiv.org/abs/2306.03091) task, we select the most challenging XF-F (Cross-File-First) setting from the original dataset and refer to the Oracle-Filled scenario in the paper. For each original piece of data, we randomly extract multiple cross-file code snippets, including the gold cross-file code snippet, and concatenate them as input, requiring the model to effectively use cross-file code for completion.
# LongBench-E statistics
| Task | Task Type | \#data in 0-4k | \#data in 4-8k | \#data in 8k+|
| :--------- | :-----------:| :-----------: |:---------: | :-------------: |
| HotpotQA | Multi-doc QA | 100 |100 |100 |
| 2WikiMultihopQA| Multi-doc QA | 100 |100 |100 |
| MultiFieldQA-en| Single-doc QA | 67 |70 |13 |
| Qasper| Single-doc QA | 100 |100 |24 |
| GovReport| Summarization | 100 |100 |100 |
| MultiNews| Summarization | 100 |100 |94 |
| TriviaQA| Few shot | 100 |100 |100 |
| SAMSum| Few shot | 100 |100 |100 |
| TREC| Few shot | 100 |100 |100 |
| PassageRetrieval-en| Synthetic | 100 |100 |100 |
| PassageCount| Synthetic | 100 |100 |100 |
| LCC| Code | 100 |100 |100 |
| RepoBench-P| Code | 100 |100 |100 |
# Citation
```
@misc{bai2023longbench,
title={LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding},
author={Yushi Bai and Xin Lv and Jiajie Zhang and Hongchang Lyu and Jiankai Tang and Zhidian Huang and Zhengxiao Du and Xiao Liu and Aohan Zeng and Lei Hou and Yuxiao Dong and Jie Tang and Juanzi Li},
year={2023},
eprint={2308.14508},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
yxchar/ag-tlm | 2021-11-04T21:20:14.000Z | [
"region:us"
] | yxchar | null | null | null | 0 | 266 | Entry not found |
mteb/medrxiv-clustering-p2p | 2022-09-27T19:10:43.000Z | [
"language:en",
"region:us"
] | mteb | null | null | null | 0 | 266 | ---
language:
- en
--- |
heegyu/kowiki-sentences | 2022-10-06T00:54:57.000Z | [
"task_categories:other",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"language:ko",
"license:cc-by-sa-3.0",
"region:us"
] | heegyu | null | null | null | 1 | 266 | ---
license: cc-by-sa-3.0
language:
- ko
language_creators:
- other
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
task_categories:
- other
---
20221001 한국어 위키를 kss(backend=mecab)을 이용해서 문장 단위로 분리한 데이터
- 549262 articles, 4724064 sentences
- 한국어 비중이 50% 이하거나 한국어 글자가 10자 이하인 경우를 제외 |
DKYoon/SlimPajama-6B | 2023-08-21T16:54:47.000Z | [
"task_categories:text-generation",
"size_categories:1M<n<10M",
"language:en",
"region:us"
] | DKYoon | null | null | null | 4 | 266 | ---
language:
- en
size_categories:
- 1M<n<10M
task_categories:
- text-generation
pretty_name: SlimPajama-6B
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: meta
struct:
- name: redpajama_set_name
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 23918118724
num_examples: 5489000
- name: validation
num_bytes: 39109042
num_examples: 9347
- name: test
num_bytes: 40114950
num_examples: 9346
download_size: 14048972121
dataset_size: 23997342716
---
Sampled version of [cerebras/SlimPajama-627B](https://huggingface.co/datasets/cerebras/SlimPajama-627B).
[Since the original data was shuffled before chunking](https://huggingface.co/datasets/cerebras/SlimPajama-627B/discussions/4), I only downloaded train/chunk1 (of 10 total) and further sampled 10%. This should result in roughly 6B tokens, hence SlimPajama-6B.
The dataset is 24GBs in storage size when decompressed (original dataset is over 2TBs) and has 5489000 rows.
The validation set and test set were sampled as well.
---
#### Data source proportions for SlimPajama-627B and SlimPajama-6B
For sanity purpose, I caluclated the byte proportion of the sampled version.
| Data source | SlimPajama-627B | SlimPajama-6B |
| ------------- | ---------- | --------- |
| Commoncrawl | 52.2% | 54.1% |
| C4 | 26.7% | 28.7% |
| GitHub | 5.2% | 4.2% |
| Books | 4.2% | 3.7% |
| ArXiv | 4.6% | 3.4% |
| Wikpedia | 3.8% | 3.1% |
| StackExchange | 3.3% | 2.8% |
---
Please refer to the original dataset for other info.
```
@misc{cerebras2023slimpajama,
author = {Soboleva, Daria and Al-Khateeb, Faisal and Myers, Robert and Steeves, Jacob R and Hestness, Joel and Dey, Nolan},
title = {{SlimPajama: A 627B token cleaned and deduplicated version of RedPajama}},
month = June,
year = 2023,
howpublished = {\url{https://www.cerebras.net/blog/slimpajama-a-627b-token-cleaned-and-deduplicated-version-of-redpajama}},
url = {https://huggingface.co/datasets/cerebras/SlimPajama-627B},
}
``` |
open-llm-leaderboard/details_ICBU-NPU__FashionGPT-70B-V1.1 | 2023-09-19T01:01:39.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | null | 0 | 266 | ---
pretty_name: Evaluation run of ICBU-NPU/FashionGPT-70B-V1.1
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [ICBU-NPU/FashionGPT-70B-V1.1](https://huggingface.co/ICBU-NPU/FashionGPT-70B-V1.1)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 61 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_ICBU-NPU__FashionGPT-70B-V1.1\"\
,\n\t\"harness_truthfulqa_mc_0\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\
\nThese are the [latest results from run 2023-09-19T01:00:15.258935](https://huggingface.co/datasets/open-llm-leaderboard/details_ICBU-NPU__FashionGPT-70B-V1.1/blob/main/results_2023-09-19T01-00-15.258935.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.7092811864393619,\n\
\ \"acc_stderr\": 0.03073791163351864,\n \"acc_norm\": 0.7129018476555437,\n\
\ \"acc_norm_stderr\": 0.030707851474891332,\n \"mc1\": 0.4528763769889841,\n\
\ \"mc1_stderr\": 0.01742558984831402,\n \"mc2\": 0.6526486216773408,\n\
\ \"mc2_stderr\": 0.01469598165350219\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6843003412969283,\n \"acc_stderr\": 0.013582571095815291,\n\
\ \"acc_norm\": 0.7175767918088737,\n \"acc_norm_stderr\": 0.013155456884097222\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.7016530571599283,\n\
\ \"acc_stderr\": 0.004565974937793703,\n \"acc_norm\": 0.8819956184027086,\n\
\ \"acc_norm_stderr\": 0.0032195397905004797\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6296296296296297,\n\
\ \"acc_stderr\": 0.04171654161354543,\n \"acc_norm\": 0.6296296296296297,\n\
\ \"acc_norm_stderr\": 0.04171654161354543\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.8289473684210527,\n \"acc_stderr\": 0.03064360707167709,\n\
\ \"acc_norm\": 0.8289473684210527,\n \"acc_norm_stderr\": 0.03064360707167709\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.77,\n\
\ \"acc_stderr\": 0.042295258468165044,\n \"acc_norm\": 0.77,\n \
\ \"acc_norm_stderr\": 0.042295258468165044\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.7433962264150943,\n \"acc_stderr\": 0.026880647889051985,\n\
\ \"acc_norm\": 0.7433962264150943,\n \"acc_norm_stderr\": 0.026880647889051985\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.8333333333333334,\n\
\ \"acc_stderr\": 0.031164899666948617,\n \"acc_norm\": 0.8333333333333334,\n\
\ \"acc_norm_stderr\": 0.031164899666948617\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.5,\n \"acc_stderr\": 0.050251890762960605,\n \
\ \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.050251890762960605\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.61,\n \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\": 0.61,\n\
\ \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.4,\n \"acc_stderr\": 0.049236596391733084,\n \
\ \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.049236596391733084\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6589595375722543,\n\
\ \"acc_stderr\": 0.036146654241808254,\n \"acc_norm\": 0.6589595375722543,\n\
\ \"acc_norm_stderr\": 0.036146654241808254\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.3627450980392157,\n \"acc_stderr\": 0.04784060704105654,\n\
\ \"acc_norm\": 0.3627450980392157,\n \"acc_norm_stderr\": 0.04784060704105654\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.78,\n \"acc_stderr\": 0.04163331998932263,\n \"acc_norm\": 0.78,\n\
\ \"acc_norm_stderr\": 0.04163331998932263\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.7106382978723405,\n \"acc_stderr\": 0.02964400657700962,\n\
\ \"acc_norm\": 0.7106382978723405,\n \"acc_norm_stderr\": 0.02964400657700962\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4649122807017544,\n\
\ \"acc_stderr\": 0.046920083813689104,\n \"acc_norm\": 0.4649122807017544,\n\
\ \"acc_norm_stderr\": 0.046920083813689104\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.6413793103448275,\n \"acc_stderr\": 0.03996629574876719,\n\
\ \"acc_norm\": 0.6413793103448275,\n \"acc_norm_stderr\": 0.03996629574876719\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.4708994708994709,\n \"acc_stderr\": 0.025707658614154954,\n \"\
acc_norm\": 0.4708994708994709,\n \"acc_norm_stderr\": 0.025707658614154954\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5,\n\
\ \"acc_stderr\": 0.04472135954999579,\n \"acc_norm\": 0.5,\n \
\ \"acc_norm_stderr\": 0.04472135954999579\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.5,\n \"acc_stderr\": 0.050251890762960605,\n \
\ \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.050251890762960605\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8290322580645161,\n\
\ \"acc_stderr\": 0.02141724293632159,\n \"acc_norm\": 0.8290322580645161,\n\
\ \"acc_norm_stderr\": 0.02141724293632159\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.5763546798029556,\n \"acc_stderr\": 0.03476725747649037,\n\
\ \"acc_norm\": 0.5763546798029556,\n \"acc_norm_stderr\": 0.03476725747649037\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.78,\n \"acc_stderr\": 0.04163331998932262,\n \"acc_norm\"\
: 0.78,\n \"acc_norm_stderr\": 0.04163331998932262\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.8424242424242424,\n \"acc_stderr\": 0.02845038880528436,\n\
\ \"acc_norm\": 0.8424242424242424,\n \"acc_norm_stderr\": 0.02845038880528436\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.8939393939393939,\n \"acc_stderr\": 0.02193804773885311,\n \"\
acc_norm\": 0.8939393939393939,\n \"acc_norm_stderr\": 0.02193804773885311\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.9326424870466321,\n \"acc_stderr\": 0.018088393839078894,\n\
\ \"acc_norm\": 0.9326424870466321,\n \"acc_norm_stderr\": 0.018088393839078894\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.7128205128205128,\n \"acc_stderr\": 0.022939925418530616,\n\
\ \"acc_norm\": 0.7128205128205128,\n \"acc_norm_stderr\": 0.022939925418530616\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.337037037037037,\n \"acc_stderr\": 0.028820884666253252,\n \
\ \"acc_norm\": 0.337037037037037,\n \"acc_norm_stderr\": 0.028820884666253252\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.7773109243697479,\n \"acc_stderr\": 0.02702543349888239,\n \
\ \"acc_norm\": 0.7773109243697479,\n \"acc_norm_stderr\": 0.02702543349888239\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.48344370860927155,\n \"acc_stderr\": 0.040802441856289715,\n \"\
acc_norm\": 0.48344370860927155,\n \"acc_norm_stderr\": 0.040802441856289715\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.9045871559633027,\n \"acc_stderr\": 0.0125958992823358,\n \"acc_norm\"\
: 0.9045871559633027,\n \"acc_norm_stderr\": 0.0125958992823358\n },\n\
\ \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5925925925925926,\n\
\ \"acc_stderr\": 0.03350991604696044,\n \"acc_norm\": 0.5925925925925926,\n\
\ \"acc_norm_stderr\": 0.03350991604696044\n },\n \"harness|hendrycksTest-high_school_us_history|5\"\
: {\n \"acc\": 0.9117647058823529,\n \"acc_stderr\": 0.019907399791316942,\n\
\ \"acc_norm\": 0.9117647058823529,\n \"acc_norm_stderr\": 0.019907399791316942\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.8818565400843882,\n \"acc_stderr\": 0.021011052659878467,\n \
\ \"acc_norm\": 0.8818565400843882,\n \"acc_norm_stderr\": 0.021011052659878467\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.8026905829596412,\n\
\ \"acc_stderr\": 0.02670985334496796,\n \"acc_norm\": 0.8026905829596412,\n\
\ \"acc_norm_stderr\": 0.02670985334496796\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.8549618320610687,\n \"acc_stderr\": 0.030884661089515368,\n\
\ \"acc_norm\": 0.8549618320610687,\n \"acc_norm_stderr\": 0.030884661089515368\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.8760330578512396,\n \"acc_stderr\": 0.030083098716035202,\n \"\
acc_norm\": 0.8760330578512396,\n \"acc_norm_stderr\": 0.030083098716035202\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8425925925925926,\n\
\ \"acc_stderr\": 0.035207039905179635,\n \"acc_norm\": 0.8425925925925926,\n\
\ \"acc_norm_stderr\": 0.035207039905179635\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.8343558282208589,\n \"acc_stderr\": 0.029208296231259104,\n\
\ \"acc_norm\": 0.8343558282208589,\n \"acc_norm_stderr\": 0.029208296231259104\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5446428571428571,\n\
\ \"acc_stderr\": 0.04726835553719098,\n \"acc_norm\": 0.5446428571428571,\n\
\ \"acc_norm_stderr\": 0.04726835553719098\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.8349514563106796,\n \"acc_stderr\": 0.03675668832233188,\n\
\ \"acc_norm\": 0.8349514563106796,\n \"acc_norm_stderr\": 0.03675668832233188\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.9145299145299145,\n\
\ \"acc_stderr\": 0.01831589168562585,\n \"acc_norm\": 0.9145299145299145,\n\
\ \"acc_norm_stderr\": 0.01831589168562585\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.74,\n \"acc_stderr\": 0.04408440022768078,\n \
\ \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.04408440022768078\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8646232439335888,\n\
\ \"acc_stderr\": 0.012234384586856478,\n \"acc_norm\": 0.8646232439335888,\n\
\ \"acc_norm_stderr\": 0.012234384586856478\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7774566473988439,\n \"acc_stderr\": 0.02239421566194282,\n\
\ \"acc_norm\": 0.7774566473988439,\n \"acc_norm_stderr\": 0.02239421566194282\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.646927374301676,\n\
\ \"acc_stderr\": 0.015984204545268575,\n \"acc_norm\": 0.646927374301676,\n\
\ \"acc_norm_stderr\": 0.015984204545268575\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7581699346405228,\n \"acc_stderr\": 0.024518195641879334,\n\
\ \"acc_norm\": 0.7581699346405228,\n \"acc_norm_stderr\": 0.024518195641879334\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7717041800643086,\n\
\ \"acc_stderr\": 0.0238393033113982,\n \"acc_norm\": 0.7717041800643086,\n\
\ \"acc_norm_stderr\": 0.0238393033113982\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.8271604938271605,\n \"acc_stderr\": 0.021038517770157375,\n\
\ \"acc_norm\": 0.8271604938271605,\n \"acc_norm_stderr\": 0.021038517770157375\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.6028368794326241,\n \"acc_stderr\": 0.029189805673587085,\n \
\ \"acc_norm\": 0.6028368794326241,\n \"acc_norm_stderr\": 0.029189805673587085\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.5867014341590613,\n\
\ \"acc_stderr\": 0.012576779494860075,\n \"acc_norm\": 0.5867014341590613,\n\
\ \"acc_norm_stderr\": 0.012576779494860075\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.7463235294117647,\n \"acc_stderr\": 0.026431329870789524,\n\
\ \"acc_norm\": 0.7463235294117647,\n \"acc_norm_stderr\": 0.026431329870789524\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.7647058823529411,\n \"acc_stderr\": 0.017160587235046352,\n \
\ \"acc_norm\": 0.7647058823529411,\n \"acc_norm_stderr\": 0.017160587235046352\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7363636363636363,\n\
\ \"acc_stderr\": 0.04220224692971987,\n \"acc_norm\": 0.7363636363636363,\n\
\ \"acc_norm_stderr\": 0.04220224692971987\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.8122448979591836,\n \"acc_stderr\": 0.02500025603954619,\n\
\ \"acc_norm\": 0.8122448979591836,\n \"acc_norm_stderr\": 0.02500025603954619\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8855721393034826,\n\
\ \"acc_stderr\": 0.022509345325101713,\n \"acc_norm\": 0.8855721393034826,\n\
\ \"acc_norm_stderr\": 0.022509345325101713\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.87,\n \"acc_stderr\": 0.033799766898963086,\n \
\ \"acc_norm\": 0.87,\n \"acc_norm_stderr\": 0.033799766898963086\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5240963855421686,\n\
\ \"acc_stderr\": 0.03887971849597264,\n \"acc_norm\": 0.5240963855421686,\n\
\ \"acc_norm_stderr\": 0.03887971849597264\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8713450292397661,\n \"acc_stderr\": 0.02567934272327692,\n\
\ \"acc_norm\": 0.8713450292397661,\n \"acc_norm_stderr\": 0.02567934272327692\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4528763769889841,\n\
\ \"mc1_stderr\": 0.01742558984831402,\n \"mc2\": 0.6526486216773408,\n\
\ \"mc2_stderr\": 0.01469598165350219\n }\n}\n```"
repo_url: https://huggingface.co/ICBU-NPU/FashionGPT-70B-V1.1
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|arc:challenge|25_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hellaswag|10_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-09-19T01-00-15.258935.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-management|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-virology|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-19T01-00-15.258935.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-19T01-00-15.258935.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-19T01-00-15.258935.parquet'
- config_name: results
data_files:
- split: 2023_09_19T01_00_15.258935
path:
- results_2023-09-19T01-00-15.258935.parquet
- split: latest
path:
- results_2023-09-19T01-00-15.258935.parquet
---
# Dataset Card for Evaluation run of ICBU-NPU/FashionGPT-70B-V1.1
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/ICBU-NPU/FashionGPT-70B-V1.1
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [ICBU-NPU/FashionGPT-70B-V1.1](https://huggingface.co/ICBU-NPU/FashionGPT-70B-V1.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_ICBU-NPU__FashionGPT-70B-V1.1",
"harness_truthfulqa_mc_0",
split="train")
```
## Latest results
These are the [latest results from run 2023-09-19T01:00:15.258935](https://huggingface.co/datasets/open-llm-leaderboard/details_ICBU-NPU__FashionGPT-70B-V1.1/blob/main/results_2023-09-19T01-00-15.258935.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.7092811864393619,
"acc_stderr": 0.03073791163351864,
"acc_norm": 0.7129018476555437,
"acc_norm_stderr": 0.030707851474891332,
"mc1": 0.4528763769889841,
"mc1_stderr": 0.01742558984831402,
"mc2": 0.6526486216773408,
"mc2_stderr": 0.01469598165350219
},
"harness|arc:challenge|25": {
"acc": 0.6843003412969283,
"acc_stderr": 0.013582571095815291,
"acc_norm": 0.7175767918088737,
"acc_norm_stderr": 0.013155456884097222
},
"harness|hellaswag|10": {
"acc": 0.7016530571599283,
"acc_stderr": 0.004565974937793703,
"acc_norm": 0.8819956184027086,
"acc_norm_stderr": 0.0032195397905004797
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6296296296296297,
"acc_stderr": 0.04171654161354543,
"acc_norm": 0.6296296296296297,
"acc_norm_stderr": 0.04171654161354543
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.8289473684210527,
"acc_stderr": 0.03064360707167709,
"acc_norm": 0.8289473684210527,
"acc_norm_stderr": 0.03064360707167709
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.77,
"acc_stderr": 0.042295258468165044,
"acc_norm": 0.77,
"acc_norm_stderr": 0.042295258468165044
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7433962264150943,
"acc_stderr": 0.026880647889051985,
"acc_norm": 0.7433962264150943,
"acc_norm_stderr": 0.026880647889051985
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.8333333333333334,
"acc_stderr": 0.031164899666948617,
"acc_norm": 0.8333333333333334,
"acc_norm_stderr": 0.031164899666948617
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.5,
"acc_stderr": 0.050251890762960605,
"acc_norm": 0.5,
"acc_norm_stderr": 0.050251890762960605
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.61,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.61,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.4,
"acc_stderr": 0.049236596391733084,
"acc_norm": 0.4,
"acc_norm_stderr": 0.049236596391733084
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6589595375722543,
"acc_stderr": 0.036146654241808254,
"acc_norm": 0.6589595375722543,
"acc_norm_stderr": 0.036146654241808254
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.3627450980392157,
"acc_stderr": 0.04784060704105654,
"acc_norm": 0.3627450980392157,
"acc_norm_stderr": 0.04784060704105654
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.78,
"acc_stderr": 0.04163331998932263,
"acc_norm": 0.78,
"acc_norm_stderr": 0.04163331998932263
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.7106382978723405,
"acc_stderr": 0.02964400657700962,
"acc_norm": 0.7106382978723405,
"acc_norm_stderr": 0.02964400657700962
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4649122807017544,
"acc_stderr": 0.046920083813689104,
"acc_norm": 0.4649122807017544,
"acc_norm_stderr": 0.046920083813689104
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.6413793103448275,
"acc_stderr": 0.03996629574876719,
"acc_norm": 0.6413793103448275,
"acc_norm_stderr": 0.03996629574876719
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4708994708994709,
"acc_stderr": 0.025707658614154954,
"acc_norm": 0.4708994708994709,
"acc_norm_stderr": 0.025707658614154954
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.5,
"acc_stderr": 0.04472135954999579,
"acc_norm": 0.5,
"acc_norm_stderr": 0.04472135954999579
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.5,
"acc_stderr": 0.050251890762960605,
"acc_norm": 0.5,
"acc_norm_stderr": 0.050251890762960605
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8290322580645161,
"acc_stderr": 0.02141724293632159,
"acc_norm": 0.8290322580645161,
"acc_norm_stderr": 0.02141724293632159
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5763546798029556,
"acc_stderr": 0.03476725747649037,
"acc_norm": 0.5763546798029556,
"acc_norm_stderr": 0.03476725747649037
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.78,
"acc_stderr": 0.04163331998932262,
"acc_norm": 0.78,
"acc_norm_stderr": 0.04163331998932262
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8424242424242424,
"acc_stderr": 0.02845038880528436,
"acc_norm": 0.8424242424242424,
"acc_norm_stderr": 0.02845038880528436
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8939393939393939,
"acc_stderr": 0.02193804773885311,
"acc_norm": 0.8939393939393939,
"acc_norm_stderr": 0.02193804773885311
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9326424870466321,
"acc_stderr": 0.018088393839078894,
"acc_norm": 0.9326424870466321,
"acc_norm_stderr": 0.018088393839078894
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.7128205128205128,
"acc_stderr": 0.022939925418530616,
"acc_norm": 0.7128205128205128,
"acc_norm_stderr": 0.022939925418530616
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.337037037037037,
"acc_stderr": 0.028820884666253252,
"acc_norm": 0.337037037037037,
"acc_norm_stderr": 0.028820884666253252
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.7773109243697479,
"acc_stderr": 0.02702543349888239,
"acc_norm": 0.7773109243697479,
"acc_norm_stderr": 0.02702543349888239
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.48344370860927155,
"acc_stderr": 0.040802441856289715,
"acc_norm": 0.48344370860927155,
"acc_norm_stderr": 0.040802441856289715
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.9045871559633027,
"acc_stderr": 0.0125958992823358,
"acc_norm": 0.9045871559633027,
"acc_norm_stderr": 0.0125958992823358
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5925925925925926,
"acc_stderr": 0.03350991604696044,
"acc_norm": 0.5925925925925926,
"acc_norm_stderr": 0.03350991604696044
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.9117647058823529,
"acc_stderr": 0.019907399791316942,
"acc_norm": 0.9117647058823529,
"acc_norm_stderr": 0.019907399791316942
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8818565400843882,
"acc_stderr": 0.021011052659878467,
"acc_norm": 0.8818565400843882,
"acc_norm_stderr": 0.021011052659878467
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.8026905829596412,
"acc_stderr": 0.02670985334496796,
"acc_norm": 0.8026905829596412,
"acc_norm_stderr": 0.02670985334496796
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8549618320610687,
"acc_stderr": 0.030884661089515368,
"acc_norm": 0.8549618320610687,
"acc_norm_stderr": 0.030884661089515368
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8760330578512396,
"acc_stderr": 0.030083098716035202,
"acc_norm": 0.8760330578512396,
"acc_norm_stderr": 0.030083098716035202
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8425925925925926,
"acc_stderr": 0.035207039905179635,
"acc_norm": 0.8425925925925926,
"acc_norm_stderr": 0.035207039905179635
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.8343558282208589,
"acc_stderr": 0.029208296231259104,
"acc_norm": 0.8343558282208589,
"acc_norm_stderr": 0.029208296231259104
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5446428571428571,
"acc_stderr": 0.04726835553719098,
"acc_norm": 0.5446428571428571,
"acc_norm_stderr": 0.04726835553719098
},
"harness|hendrycksTest-management|5": {
"acc": 0.8349514563106796,
"acc_stderr": 0.03675668832233188,
"acc_norm": 0.8349514563106796,
"acc_norm_stderr": 0.03675668832233188
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.9145299145299145,
"acc_stderr": 0.01831589168562585,
"acc_norm": 0.9145299145299145,
"acc_norm_stderr": 0.01831589168562585
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.74,
"acc_stderr": 0.04408440022768078,
"acc_norm": 0.74,
"acc_norm_stderr": 0.04408440022768078
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8646232439335888,
"acc_stderr": 0.012234384586856478,
"acc_norm": 0.8646232439335888,
"acc_norm_stderr": 0.012234384586856478
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7774566473988439,
"acc_stderr": 0.02239421566194282,
"acc_norm": 0.7774566473988439,
"acc_norm_stderr": 0.02239421566194282
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.646927374301676,
"acc_stderr": 0.015984204545268575,
"acc_norm": 0.646927374301676,
"acc_norm_stderr": 0.015984204545268575
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7581699346405228,
"acc_stderr": 0.024518195641879334,
"acc_norm": 0.7581699346405228,
"acc_norm_stderr": 0.024518195641879334
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7717041800643086,
"acc_stderr": 0.0238393033113982,
"acc_norm": 0.7717041800643086,
"acc_norm_stderr": 0.0238393033113982
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.8271604938271605,
"acc_stderr": 0.021038517770157375,
"acc_norm": 0.8271604938271605,
"acc_norm_stderr": 0.021038517770157375
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.6028368794326241,
"acc_stderr": 0.029189805673587085,
"acc_norm": 0.6028368794326241,
"acc_norm_stderr": 0.029189805673587085
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.5867014341590613,
"acc_stderr": 0.012576779494860075,
"acc_norm": 0.5867014341590613,
"acc_norm_stderr": 0.012576779494860075
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.7463235294117647,
"acc_stderr": 0.026431329870789524,
"acc_norm": 0.7463235294117647,
"acc_norm_stderr": 0.026431329870789524
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.7647058823529411,
"acc_stderr": 0.017160587235046352,
"acc_norm": 0.7647058823529411,
"acc_norm_stderr": 0.017160587235046352
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7363636363636363,
"acc_stderr": 0.04220224692971987,
"acc_norm": 0.7363636363636363,
"acc_norm_stderr": 0.04220224692971987
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.8122448979591836,
"acc_stderr": 0.02500025603954619,
"acc_norm": 0.8122448979591836,
"acc_norm_stderr": 0.02500025603954619
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8855721393034826,
"acc_stderr": 0.022509345325101713,
"acc_norm": 0.8855721393034826,
"acc_norm_stderr": 0.022509345325101713
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.87,
"acc_stderr": 0.033799766898963086,
"acc_norm": 0.87,
"acc_norm_stderr": 0.033799766898963086
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5240963855421686,
"acc_stderr": 0.03887971849597264,
"acc_norm": 0.5240963855421686,
"acc_norm_stderr": 0.03887971849597264
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8713450292397661,
"acc_stderr": 0.02567934272327692,
"acc_norm": 0.8713450292397661,
"acc_norm_stderr": 0.02567934272327692
},
"harness|truthfulqa:mc|0": {
"mc1": 0.4528763769889841,
"mc1_stderr": 0.01742558984831402,
"mc2": 0.6526486216773408,
"mc2_stderr": 0.01469598165350219
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
wiki_asp | 2022-11-18T21:59:51.000Z | [
"task_categories:summarization",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-sa-4.0",
"aspect-based-summarization",
"arxiv:2011.07832",
"region:us"
] | null | WikiAsp is a multi-domain, aspect-based summarization dataset in the encyclopedic
domain. In this task, models are asked to summarize cited reference documents of a
Wikipedia article into aspect-based summaries. Each of the 20 domains include 10
domain-specific pre-defined aspects. | @article{hayashi20tacl,
title = {WikiAsp: A Dataset for Multi-domain Aspect-based Summarization},
authors = {Hiroaki Hayashi and Prashant Budania and Peng Wang and Chris Ackerson and Raj Neervannan and Graham Neubig},
journal = {Transactions of the Association for Computational Linguistics (TACL)},
year = {2020},
url = {https://arxiv.org/abs/2011.07832}
} | null | 3 | 265 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
task_ids: []
paperswithcode_id: wikiasp
pretty_name: WikiAsp
tags:
- aspect-based-summarization
dataset_info:
- config_name: album
features:
- name: exid
dtype: string
- name: inputs
sequence: string
- name: targets
sequence:
sequence: string
splits:
- name: train
num_bytes: 1907323642
num_examples: 24434
- name: test
num_bytes: 232999001
num_examples: 3038
- name: validation
num_bytes: 234990092
num_examples: 3104
download_size: 644173065
dataset_size: 2375312735
- config_name: animal
features:
- name: exid
dtype: string
- name: inputs
sequence: string
- name: targets
sequence:
sequence: string
splits:
- name: train
num_bytes: 497474133
num_examples: 16540
- name: test
num_bytes: 61315970
num_examples: 2007
- name: validation
num_bytes: 57943532
num_examples: 2005
download_size: 150974930
dataset_size: 616733635
- config_name: artist
features:
- name: exid
dtype: string
- name: inputs
sequence: string
- name: targets
sequence:
sequence: string
splits:
- name: train
num_bytes: 1876134255
num_examples: 26754
- name: test
num_bytes: 237751553
num_examples: 3329
- name: validation
num_bytes: 223240910
num_examples: 3194
download_size: 626686303
dataset_size: 2337126718
- config_name: building
features:
- name: exid
dtype: string
- name: inputs
sequence: string
- name: targets
sequence:
sequence: string
splits:
- name: train
num_bytes: 1100057273
num_examples: 20449
- name: test
num_bytes: 134357678
num_examples: 2482
- name: validation
num_bytes: 139387376
num_examples: 2607
download_size: 346224042
dataset_size: 1373802327
- config_name: company
features:
- name: exid
dtype: string
- name: inputs
sequence: string
- name: targets
sequence:
sequence: string
splits:
- name: train
num_bytes: 1606057076
num_examples: 24353
- name: test
num_bytes: 199282041
num_examples: 3029
- name: validation
num_bytes: 200498778
num_examples: 2946
download_size: 504194353
dataset_size: 2005837895
- config_name: educational_institution
features:
- name: exid
dtype: string
- name: inputs
sequence: string
- name: targets
sequence:
sequence: string
splits:
- name: train
num_bytes: 1623000534
num_examples: 17634
- name: test
num_bytes: 200476681
num_examples: 2267
- name: validation
num_bytes: 203262430
num_examples: 2141
download_size: 471033992
dataset_size: 2026739645
- config_name: event
features:
- name: exid
dtype: string
- name: inputs
sequence: string
- name: targets
sequence:
sequence: string
splits:
- name: train
num_bytes: 748201660
num_examples: 6475
- name: test
num_bytes: 96212295
num_examples: 828
- name: validation
num_bytes: 97431395
num_examples: 807
download_size: 240072903
dataset_size: 941845350
- config_name: film
features:
- name: exid
dtype: string
- name: inputs
sequence: string
- name: targets
sequence:
sequence: string
splits:
- name: train
num_bytes: 2370068027
num_examples: 32129
- name: test
num_bytes: 294918370
num_examples: 3981
- name: validation
num_bytes: 290240851
num_examples: 4014
download_size: 808231638
dataset_size: 2955227248
- config_name: group
features:
- name: exid
dtype: string
- name: inputs
sequence: string
- name: targets
sequence:
sequence: string
splits:
- name: train
num_bytes: 1025166800
num_examples: 11966
- name: test
num_bytes: 114239405
num_examples: 1444
- name: validation
num_bytes: 120863870
num_examples: 1462
download_size: 344498865
dataset_size: 1260270075
- config_name: historic_place
features:
- name: exid
dtype: string
- name: inputs
sequence: string
- name: targets
sequence:
sequence: string
splits:
- name: train
num_bytes: 256158020
num_examples: 4919
- name: test
num_bytes: 31201154
num_examples: 600
- name: validation
num_bytes: 29058067
num_examples: 601
download_size: 77289509
dataset_size: 316417241
- config_name: infrastructure
features:
- name: exid
dtype: string
- name: inputs
sequence: string
- name: targets
sequence:
sequence: string
splits:
- name: train
num_bytes: 1124486451
num_examples: 17226
- name: test
num_bytes: 134820330
num_examples: 2091
- name: validation
num_bytes: 125193140
num_examples: 1984
download_size: 328804337
dataset_size: 1384499921
- config_name: mean_of_transportation
features:
- name: exid
dtype: string
- name: inputs
sequence: string
- name: targets
sequence:
sequence: string
splits:
- name: train
num_bytes: 650424738
num_examples: 9277
- name: test
num_bytes: 89759392
num_examples: 1170
- name: validation
num_bytes: 88440901
num_examples: 1215
download_size: 210234418
dataset_size: 828625031
- config_name: office_holder
features:
- name: exid
dtype: string
- name: inputs
sequence: string
- name: targets
sequence:
sequence: string
splits:
- name: train
num_bytes: 1643899203
num_examples: 18177
- name: test
num_bytes: 207433317
num_examples: 2333
- name: validation
num_bytes: 202624275
num_examples: 2218
download_size: 524721727
dataset_size: 2053956795
- config_name: plant
features:
- name: exid
dtype: string
- name: inputs
sequence: string
- name: targets
sequence:
sequence: string
splits:
- name: train
num_bytes: 239150885
num_examples: 6107
- name: test
num_bytes: 31340125
num_examples: 774
- name: validation
num_bytes: 28752150
num_examples: 786
download_size: 77890632
dataset_size: 299243160
- config_name: single
features:
- name: exid
dtype: string
- name: inputs
sequence: string
- name: targets
sequence:
sequence: string
splits:
- name: train
num_bytes: 1277277277
num_examples: 14217
- name: test
num_bytes: 152328537
num_examples: 1712
- name: validation
num_bytes: 160312594
num_examples: 1734
download_size: 429214401
dataset_size: 1589918408
- config_name: soccer_player
features:
- name: exid
dtype: string
- name: inputs
sequence: string
- name: targets
sequence:
sequence: string
splits:
- name: train
num_bytes: 604502541
num_examples: 17599
- name: test
num_bytes: 72820378
num_examples: 2280
- name: validation
num_bytes: 76705685
num_examples: 2150
download_size: 193347234
dataset_size: 754028604
- config_name: software
features:
- name: exid
dtype: string
- name: inputs
sequence: string
- name: targets
sequence:
sequence: string
splits:
- name: train
num_bytes: 1122906186
num_examples: 13516
- name: test
num_bytes: 133717992
num_examples: 1638
- name: validation
num_bytes: 134578157
num_examples: 1637
download_size: 356764908
dataset_size: 1391202335
- config_name: television_show
features:
- name: exid
dtype: string
- name: inputs
sequence: string
- name: targets
sequence:
sequence: string
splits:
- name: train
num_bytes: 893325347
num_examples: 8717
- name: test
num_bytes: 115155155
num_examples: 1072
- name: validation
num_bytes: 119461892
num_examples: 1128
download_size: 302093407
dataset_size: 1127942394
- config_name: town
features:
- name: exid
dtype: string
- name: inputs
sequence: string
- name: targets
sequence:
sequence: string
splits:
- name: train
num_bytes: 772504751
num_examples: 14818
- name: test
num_bytes: 100975827
num_examples: 1831
- name: validation
num_bytes: 101522638
num_examples: 1911
download_size: 243261734
dataset_size: 975003216
- config_name: written_work
features:
- name: exid
dtype: string
- name: inputs
sequence: string
- name: targets
sequence:
sequence: string
splits:
- name: train
num_bytes: 1491395960
num_examples: 15065
- name: test
num_bytes: 189537205
num_examples: 1931
- name: validation
num_bytes: 185707567
num_examples: 1843
download_size: 498307235
dataset_size: 1866640732
---
# Dataset Card for WikiAsp
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Wiki Asp](https://github.com/neulab/wikiasp)
- **Repository:** [GitHub](https://github.com/neulab/wikiasp)
- **Paper:** [WikiAsp: A Dataset for Multi-domain Aspect-based Summarization](https://arxiv.org/abs/2011.07832)
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
An example from the "plant" configuration:
```
{
'exid': 'train-78-8',
'inputs': ['< EOT > calcareous rocks and barrens , wooded cliff edges .',
'plant an erect short - lived perennial ( or biennial ) herb whose slender leafy stems radiate from the base , and are 3 - 5 dm tall , giving it a bushy appearance .',
'leaves densely hairy , grayish - green , simple and alternate on the stem .',
'flowers are bright yellow to yellow - orange , cross - shaped , each having 4 spatula - shaped petals about 5 mm long .',
'fruit is a nearly globe - shaped capsule , about 3 mm in diameter , with 1 or 2 seeds in each cell .',
'flowering period : early april to late may .',
'even though there are many members of the mustard family in the range of this species , no other plant shares this combination of characters : bright yellow flowers , grayish - green stems and foliage , globe - shaped fruits with a long style , perennial habit , and the habitat of limestone rocky cliffs .',
'timber removal may be beneficial and even needed to maintain the open character of the habitat for this species .',
'hand removal of trees in the vicinity of the population is necessary to avoid impacts from timber operations .',
'southwest indiana , north central kentucky , and north central tennessee .',
'email : naturepreserves @ ky . gov feedback naturepreserves @ ky . gov | about the agency | about this site copyright © 2003 - 2013 commonwealth of kentucky .',
'all rights reserved .',
'<EOS>'
],
'targets': [
['description',
'physaria globosa is a small plant covered with dense hairs giving it a grayish appearance . it produces yellow flowers in the spring , and its fruit is globe - shaped . its preferred habitat is dry limestone cliffs , barrens , cedar glades , steep wooded slopes , and talus areas . some have also been found in areas of deeper soil and roadsides .'
],
['conservation',
'the population fluctuates year to year , but on average there are about 2000 living plants at any one time , divided among 33 known locations . threats include forms of habitat degradation and destruction , including road construction and grading , mowing , dumping , herbicides , alteration of waterways , livestock damage , and invasive species of plants such as japanese honeysuckle , garlic mustard , alsike clover , sweet clover , meadow fescue , and multiflora rose . all populations are considered vulnerable to extirpation .'
]
]
}
```
### Data Fields
- `exid`: a unique identifier
- `input`: the cited references and consists of tokenized sentences (with NLTK)
- `targets`: a list of aspect-based summaries, where each element is a pair of a) the target aspect and b) the aspect-based summary
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@katnoria](https://github.com/katnoria) for adding this dataset. |
Fazzie/Teyvat | 2022-12-13T02:09:42.000Z | [
"task_categories:text-to-image",
"annotations_creators:no-annotation",
"language_creators:found",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | Fazzie | Teyvat is the first small-scale text-to-image prompt dataset for Genshin impact. | null | null | 18 | 265 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- unknown
source_datasets:
- original
task_categories:
- text-to-image
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 71202
num_examples: 234
download_size: 466995417
dataset_size: 71202
---
# Dataset Card for Teyvat BLIP captions
Dataset used to train [Teyvat characters text to image model](https://github.com/hpcaitech/ColossalAI/tree/main/examples/images/diffusion).
BLIP generated captions for characters images from [genshin-impact fandom wiki](https://genshin-impact.fandom.com/wiki/Character#Playable_Characters)and [biligame wiki for genshin impact](https://wiki.biligame.com/ys/%E8%A7%92%E8%89%B2).
For each row the dataset contains `image` and `text` keys. `image` is a varying size PIL png, and `text` is the accompanying text caption. Only a train split is provided.
The `text` include the tag `Teyvat`, `Name`,`Element`, `Weapon`, `Region`, `Model type`, and `Description`, the `Description` is captioned with the [pre-trained BLIP model](https://github.com/salesforce/BLIP).
## Examples
<img src = "https://huggingface.co/datasets/Fazzie/Teyvat/resolve/main/data/Ganyu_001.png" title = "Ganyu_001.png" style="max-width: 20%;" >
> Teyvat, Name:Ganyu, Element:Cryo, Weapon:Bow, Region:Liyue, Model type:Medium Female, Description:an anime character with blue hair and blue eyes
<img src = "https://huggingface.co/datasets/Fazzie/Teyvat/resolve/main/data/Ganyu_002.png" title = "Ganyu_002.png" style="max-width: 20%;" >
> Teyvat, Name:Ganyu, Element:Cryo, Weapon:Bow, Region:Liyue, Model type:Medium Female, Description:an anime character with blue hair and blue eyes
<img src = "https://huggingface.co/datasets/Fazzie/Teyvat/resolve/main/data/Keqing_003.png" title = "Keqing_003.png" style="max-width: 20%;" >
> Teyvat, Name:Keqing, Element:Electro, Weapon:Sword, Region:Liyue, Model type:Medium Female, Description:a anime girl with long white hair and blue eyes
<img src = "https://huggingface.co/datasets/Fazzie/Teyvat/resolve/main/data/Keqing_004.png" title = "Keqing_004.png" style="max-width: 20%;" >
> Teyvat, Name:Keqing, Element:Electro, Weapon:Sword, Region:Liyue, Model type:Medium Female, Description:an anime character wearing a purple dress and cat ears |
ashraq/tmdb-people-image | 2023-04-21T20:02:31.000Z | [
"region:us"
] | ashraq | null | null | null | 2 | 265 | ---
dataset_info:
features:
- name: adult
dtype: bool
- name: also_known_as
dtype: string
- name: biography
dtype: string
- name: birthday
dtype: string
- name: deathday
dtype: string
- name: gender
dtype: int64
- name: homepage
dtype: string
- name: id
dtype: int64
- name: imdb_id
dtype: string
- name: known_for_department
dtype: string
- name: name
dtype: string
- name: place_of_birth
dtype: string
- name: popularity
dtype: float64
- name: profile_path
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 3749610460.6819267
num_examples: 116403
download_size: 3733145768
dataset_size: 3749610460.6819267
---
Data was obtained from [TMDB API](https://developers.themoviedb.org/3) |
JanosAudran/financial-reports-sec | 2023-01-06T17:44:08.000Z | [
"task_categories:fill-mask",
"task_categories:text-classification",
"task_ids:masked-language-modeling",
"task_ids:multi-class-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:extended|other",
"language:en",
"license:apache-2.0",
"'finance",
"financial",
"10-K",
"10K",
"10k",
"10-k",
"annual",
"reports",
"sec",
"edgar",
"sentiment",
"firm",
"public",
"us'",
"region:us"
] | JanosAudran | The dataset contains the annual report of US public firms filing with the SEC EDGAR system.
Each annual report (10K filing) is broken into 20 sections. Each section is split into individual sentences.
Sentiment labels are provided on a per filing basis from the market reaction around the filing data.
Additional metadata for each filing is included in the dataset. | null | null | 38 | 265 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- expert-generated
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: US public firm Annual Reports (10-K)
size_categories:
- 10M<n<100M
source_datasets:
- extended|other
tags:
- "'finance"
- financial
- 10-K
- 10K
- 10k
- 10-k
- annual
- reports
- sec
- edgar
- sentiment
- firm
- public
- us'
task_categories:
- fill-mask
- text-classification
task_ids:
- masked-language-modeling
- multi-class-classification
- sentiment-classification
dataset_info:
- config_name: large_lite
features:
- name: cik
dtype: string
- name: sentence
dtype: string
- name: section
dtype:
class_label:
names:
"0": section_1
"1": section_10
"2": section_11
"3": section_12
"4": section_13
"5": section_14
"6": section_15
"7": section_1A
"8": section_1B
"9": section_2
"10": section_3
"11": section_4
"12": section_5
"13": section_6
"14": section_7
"15": section_7A
"16": section_8
"17": section_9
"18": section_9A
"19": section_9B
- name: labels
struct:
- name: 1d
dtype:
class_label:
names:
"0": positive
"1": negative
- name: 5d
dtype:
class_label:
names:
"0": positive
"1": negative
- name: 30d
dtype:
class_label:
names:
"0": positive
"1": negative
- name: filingDate
dtype: string
- name: docID
dtype: string
- name: sentenceID
dtype: string
- name: sentenceCount
dtype: int64
splits:
- name: train
num_bytes: 16424576472
num_examples: 67316227
- name: validation
num_bytes: 423527281
num_examples: 1585561
- name: test
num_bytes: 773116540
num_examples: 2965174
download_size: 13362319126
dataset_size: 17621220293
- config_name: large_full
features:
- name: cik
dtype: string
- name: sentence
dtype: string
- name: section
dtype:
class_label:
names:
"0": section_1
"1": section_10
"2": section_11
"3": section_12
"4": section_13
"5": section_14
"6": section_15
"7": section_1A
"8": section_1B
"9": section_2
"10": section_3
"11": section_4
"12": section_5
"13": section_6
"14": section_7
"15": section_7A
"16": section_8
"17": section_9
"18": section_9A
"19": section_9B
- name: labels
struct:
- name: 1d
dtype:
class_label:
names:
"0": positive
"1": negative
- name: 5d
dtype:
class_label:
names:
"0": positive
"1": negative
- name: 30d
dtype:
class_label:
names:
"0": positive
"1": negative
- name: filingDate
dtype: string
- name: name
dtype: string
- name: docID
dtype: string
- name: sentenceID
dtype: string
- name: sentenceCount
dtype: int64
- name: tickers
list: string
- name: exchanges
list: string
- name: entityType
dtype: string
- name: sic
dtype: string
- name: stateOfIncorporation
dtype: string
- name: tickerCount
dtype: int32
- name: acceptanceDateTime
dtype: string
- name: form
dtype: string
- name: reportDate
dtype: string
- name: returns
struct:
- name: 1d
struct:
- name: closePriceEndDate
dtype: float32
- name: closePriceStartDate
dtype: float32
- name: endDate
dtype: string
- name: startDate
dtype: string
- name: ret
dtype: float32
- name: 5d
struct:
- name: closePriceEndDate
dtype: float32
- name: closePriceStartDate
dtype: float32
- name: endDate
dtype: string
- name: startDate
dtype: string
- name: ret
dtype: float32
- name: 30d
struct:
- name: closePriceEndDate
dtype: float32
- name: closePriceStartDate
dtype: float32
- name: endDate
dtype: string
- name: startDate
dtype: string
- name: ret
dtype: float32
splits:
- name: train
num_bytes: 39306095718
num_examples: 67316227
- name: validation
num_bytes: 964030458
num_examples: 1585561
- name: test
num_bytes: 1785383996
num_examples: 2965174
download_size: 13362319126
dataset_size: 42055510172
- config_name: small_full
features:
- name: cik
dtype: string
- name: sentence
dtype: string
- name: section
dtype:
class_label:
names:
"0": section_1
"1": section_1A
"2": section_1B
"3": section_2
"4": section_3
"5": section_4
"6": section_5
"7": section_6
"8": section_7
"9": section_7A
"10": section_8
"11": section_9
"12": section_9A
"13": section_9B
"14": section_10
"15": section_11
"16": section_12
"17": section_13
"18": section_14
"19": section_15
- name: labels
struct:
- name: 1d
dtype:
class_label:
names:
"0": positive
"1": negative
- name: 5d
dtype:
class_label:
names:
"0": positive
"1": negative
- name: 30d
dtype:
class_label:
names:
"0": positive
"1": negative
- name: filingDate
dtype: string
- name: name
dtype: string
- name: docID
dtype: string
- name: sentenceID
dtype: string
- name: sentenceCount
dtype: int64
- name: tickers
list: string
- name: exchanges
list: string
- name: entityType
dtype: string
- name: sic
dtype: string
- name: stateOfIncorporation
dtype: string
- name: tickerCount
dtype: int32
- name: acceptanceDateTime
dtype: string
- name: form
dtype: string
- name: reportDate
dtype: string
- name: returns
struct:
- name: 1d
struct:
- name: closePriceEndDate
dtype: float32
- name: closePriceStartDate
dtype: float32
- name: endDate
dtype: string
- name: startDate
dtype: string
- name: ret
dtype: float32
- name: 5d
struct:
- name: closePriceEndDate
dtype: float32
- name: closePriceStartDate
dtype: float32
- name: endDate
dtype: string
- name: startDate
dtype: string
- name: ret
dtype: float32
- name: 30d
struct:
- name: closePriceEndDate
dtype: float32
- name: closePriceStartDate
dtype: float32
- name: endDate
dtype: string
- name: startDate
dtype: string
- name: ret
dtype: float32
splits:
- name: train
num_bytes: 128731540
num_examples: 200000
- name: validation
num_bytes: 13411689
num_examples: 20000
- name: test
num_bytes: 13188331
num_examples: 20000
download_size: 42764380
dataset_size: 155331560
- config_name: small_lite
features:
- name: cik
dtype: string
- name: sentence
dtype: string
- name: section
dtype:
class_label:
names:
"0": section_1
"1": section_1A
"2": section_1B
"3": section_2
"4": section_3
"5": section_4
"6": section_5
"7": section_6
"8": section_7
"9": section_7A
"10": section_8
"11": section_9
"12": section_9A
"13": section_9B
"14": section_10
"15": section_11
"16": section_12
"17": section_13
"18": section_14
"19": section_15
- name: labels
struct:
- name: 1d
dtype:
class_label:
names:
"0": positive
"1": negative
- name: 5d
dtype:
class_label:
names:
"0": positive
"1": negative
- name: 30d
dtype:
class_label:
names:
"0": positive
"1": negative
- name: filingDate
dtype: string
- name: docID
dtype: string
- name: sentenceID
dtype: string
- name: sentenceCount
dtype: int64
splits:
- name: train
num_bytes: 60681688
num_examples: 200000
- name: validation
num_bytes: 6677389
num_examples: 20000
- name: test
num_bytes: 6351730
num_examples: 20000
download_size: 42764380
dataset_size: 73710807
---
# Dataset Card for [financial-reports-sec]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Configurations](#dataset-configurations)
- [Usage](#usage)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Summary Statistics](#dataset-summary-statistics)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [References](#references)
- [Citation Information](#citation-information)
## Dataset Description
- **Point of Contact: Aman Khan**
### Dataset Summary
The dataset contains the annual report of US public firms filing with the SEC EDGAR system from 1993-2020. Each annual report (**10K filing**) is broken into 20 sections. Each section is split into individual sentences. Sentiment labels are provided on a **per filing basis** from the market reaction around the filing date for 3 different time windows _[t-1, t+1]_, _[t-1, t+5]_ and _[t-1, t+30]_. Additional metadata for each filing is included in the dataset.
### Dataset Configurations
**Four** configurations are available:
- _**large_lite**_:
- Contains only the basic features needed. Extra metadata is ommitted.
- Features List:
- **cik**
- **sentence**
- **section**
- **labels**
- **filingDate**
- **docID**
- **sentenceID**
- **sentenceCount**
- _**large_full**_:
- All features are included.
- Features List (excluding those already in the lite verison above):
- **name**
- **tickers**
- **exchanges**
- **entityType**
- **sic**
- **stateOfIncorporation**
- **tickerCount**
- **acceptanceDateTime**
- **form**
- **reportDate**
- **returns**
- _**small_lite**_:
- Same as _**large_lite**_ version except that only (200,000/20,000/20,000) sentences are loaded for (train/test/validation) splits.
- _**small_full**_:
- Same as _**large_full**_ version except that only (200,000/20,000/20,000) sentences are loaded for (train/test/validation) splits.
### Usage
```python
import datasets
# Load the lite configuration of the dataset
raw_dataset = datasets.load_dataset("JanosAudran/financial-reports-sec", "large_lite")
# Load a specific split
raw_dataset = datasets.load_dataset("JanosAudran/financial-reports-sec", "small_full", split="train")
```
### Supported Tasks
The tasks the dataset can be used directly for includes:
- _Masked Language Modelling_
- A model like BERT can be fine-tuned on this corpus of financial text.
- _Sentiment Analysis_
- For each annual report a label ["positive", "negative"] is provided based on the market reaction around the filing date (refer to [Annotations](#annotations)).
- _Next Sentence Prediction/Sentence Order Prediction_
- Sentences extracted from the filings are in their original order and as such the dataset can be adapted very easily for either of these tasks.
### Languages
All sentences are in English.
## Dataset Structure
### Data Instances
Refer to dataset preview.
### Data Fields
**Feature Name**
- Description
- Data type
- Example/Structure
**cik**
- 10 digit identifier used by SEC for a firm.
- _string_
- '0000001750'
**sentence**
- A single sentence from the 10-K filing.
- _string_
- 'The finance agreement is secured by a first priority security interest in all insurance policies, all unearned premium, return premiums, dividend payments and loss payments thereof.'
**section**
- The section of the 10-K filing the sentence is located.
- _ClassLabel_
- ```python
ClassLabel(names=['section_1', 'section_10', 'section_11', 'section_12', 'section_13', 'section_14', 'section_15', 'section_1A', 'section_1B', 'section_2','section_3', 'section_4', 'section_5', 'section_6', 'section_7', 'section_7A','section_8', 'section_9', 'section_9A', 'section_9B'], id=None)
```
**labels**
- The sentiment label for the entire filing (_**positve**_ or _**negative**_) based on different time windows.
- _Dict of ClassLables_
- ```python
{
'1d': ClassLabel(names=['positive', 'negative'], id=None),
'5d': ClassLabel(names=['positive', 'negative'], id=None),
'30d': ClassLabel(names=['positive', 'negative'], id=None)
}
```
**filingDate**
- The date the 10-K report was filed with the SEC.
- _string_
- '2021-03-10'
**docID**
- Unique ID for identifying the exact 10-K filing. Unique across all configs and splits. Can be used to identify the document from which the sentence came from.
- _string_
- '0000001750_10-K_2020'
**sentenceID**
- Unique ID for identifying the exact sentence. Unique across all configs and splits.
- _string_
- '0000001750_10-K_2020_section_1_100'
**sentenceCount**
- Integer identiying the running sequence for the sentence. Unique **only** for a given config and split.
- _string_
- 123
**name**
- The name of the filing entity
- _string_
- 'Investar Holding Corp'
**tickers**
- List of ticker symbols for the filing entity.
- _List of strings_
- ['ISTR']
**exchanges**
- List of exchanges for the filing entity.
- _List of strings_
- ['Nasdaq']
**entityType**
- The type of entity as identified in the 10-K filing.
- _string_
- 'operating'
**sic**
- Four digit SIC code for the filing entity.
- _string_
- '6022'
**stateOfIncorporation**
- Two character code for the state of incorporation for the filing entity.
- _string_
- 'LA'
**tickerCount**
- _**Internal use**_. Count of ticker symbols. Always 1.
- _int_
- 1
**acceptanceDateTime**
- The full timestamp of when the filing was accepted into the SEC EDGAR system.
- _string_
- '2021-03-10T14:26:11.000Z'
**form**
- The type of filing. Always 10-K in the dataset.
- _string_
- '10-K'
**reportDate**
- The last date in the fiscal year for which the entity is filing the report.
- _string_
- '2020-12-31'
**returns**
- _**Internal use**_. The prices and timestamps used to calculate the sentiment labels.
- _Dict_
- ```python
{'1d': {
'closePriceEndDate': 21.45746421813965,
'closePriceStartDate': 20.64960479736328,
'endDate': '2021-03-11T00:00:00-05:00',
'startDate': '2021-03-09T00:00:00-05:00',
'ret': 0.03912226855754852
},
'5d': {
'closePriceEndDate': 21.743167877197266,
'closePriceStartDate': 20.64960479736328,
'endDate': '2021-03-15T00:00:00-04:00',
'startDate': '2021-03-09T00:00:00-05:00',
'ret': 0.052958063781261444
},
'30d': {
'closePriceEndDate': 20.63919448852539,
'closePriceStartDate': 20.64960479736328,
'endDate': '2021-04-09T00:00:00-04:00',
'startDate': '2021-03-09T00:00:00-05:00',
'ret': -0.0005041408003307879}}
```
### Data Splits
| Config | train | validation | test |
| ---------- | ---------: | ---------: | --------: |
| large_full | 67,316,227 | 1,585,561 | 2,965,174 |
| large_lite | 67,316,227 | 1,585,561 | 2,965,174 |
| small_full | 200,000 | 20,000 | 20,000 |
| small_lite | 200,000 | 20,000 | 20,000 |
### Dataset Summary Statistics
| Variable | count | mean | std | min | 1% | 25% | 50% | 75% | 99% | max |
| :-------------------------------- | ---------: | ----: | -----: | -----: | -----: | -----: | ----: | ----: | ----: | --------: |
| Unique Firm Count | 4,677 | | | | | | | | | |
| Filings Count | 55,349 | | | | | | | | | |
| Sentence Count | 71,866,962 | | | | | | | | | |
| Filings per Firm | 4,677 | 12 | 9 | 1 | 1 | 4 | 11 | 19 | 27 | 28 |
| Return per Filing - 1d | 55,349 | 0.008 | 0.394 | -0.973 | -0.253 | -0.023 | 0 | 0.02 | 0.367 | 77.977 |
| Return per Filing - 5d | 55,349 | 0.013 | 0.584 | -0.99 | -0.333 | -0.034 | 0 | 0.031 | 0.5 | 100 |
| Return per Filing - 30d | 55,349 | 0.191 | 22.924 | -0.999 | -0.548 | -0.068 | 0.001 | 0.074 | 1 | 5,002.748 |
| Sentences per Filing | 55,349 | 1,299 | 654 | 0 | 110 | 839 | 1,268 | 1,681 | 3,135 | 8,286 |
| Sentences by Section - section_1 | 55,349 | 221 | 183 | 0 | 0 | 97 | 180 | 293 | 852 | 2,724 |
| Sentences by Section - section_10 | 55,349 | 24 | 40 | 0 | 0 | 4 | 6 | 20 | 173 | 1,594 |
| Sentences by Section - section_11 | 55,349 | 16 | 47 | 0 | 0 | 3 | 3 | 4 | 243 | 808 |
| Sentences by Section - section_12 | 55,349 | 9 | 14 | 0 | 0 | 3 | 4 | 8 | 56 | 1,287 |
| Sentences by Section - section_13 | 55,349 | 8 | 20 | 0 | 0 | 3 | 3 | 4 | 79 | 837 |
| Sentences by Section - section_14 | 55,349 | 22 | 93 | 0 | 0 | 3 | 3 | 8 | 413 | 3,536 |
| Sentences by Section - section_15 | 55,349 | 177 | 267 | 0 | 0 | 9 | 26 | 315 | 1104 | 4,140 |
| Sentences by Section - section_1A | 55,349 | 197 | 204 | 0 | 0 | 3 | 158 | 292 | 885 | 2,106 |
| Sentences by Section - section_1B | 55,349 | 4 | 31 | 0 | 0 | 1 | 3 | 3 | 13 | 2,414 |
| Sentences by Section - section_2 | 55,349 | 16 | 45 | 0 | 0 | 6 | 8 | 13 | 169 | 1,903 |
| Sentences by Section - section_3 | 55,349 | 14 | 36 | 0 | 0 | 4 | 5 | 12 | 121 | 2,326 |
| Sentences by Section - section_4 | 55,349 | 7 | 17 | 0 | 0 | 3 | 3 | 4 | 66 | 991 |
| Sentences by Section - section_5 | 55,349 | 20 | 41 | 0 | 0 | 10 | 15 | 21 | 87 | 3,816 |
| Sentences by Section - section_6 | 55,349 | 8 | 29 | 0 | 0 | 3 | 4 | 7 | 43 | 2,156 |
| Sentences by Section - section_7 | 55,349 | 265 | 198 | 0 | 0 | 121 | 246 | 373 | 856 | 4,539 |
| Sentences by Section - section_7A | 55,349 | 18 | 52 | 0 | 0 | 3 | 9 | 21 | 102 | 3,596 |
| Sentences by Section - section_8 | 55,349 | 257 | 296 | 0 | 0 | 3 | 182 | 454 | 1105 | 4,431 |
| Sentences by Section - section_9 | 55,349 | 5 | 33 | 0 | 0 | 3 | 3 | 4 | 18 | 2,330 |
| Sentences by Section - section_9A | 55,349 | 17 | 16 | 0 | 0 | 8 | 15 | 23 | 50 | 794 |
| Sentences by Section - section_9B | 55,349 | 4 | 18 | 0 | 0 | 2 | 3 | 4 | 23 | 813 |
| Word count per Sentence | 71,866,962 | 28 | 22 | 1 | 2 | 16 | 24 | 34 | 98 | 8,675 |
## Dataset Creation
### Curation Rationale
To create this dataset multiple sources of information have to be cleaned and processed for data merging. Starting from the raw filings:
- Useful metadata about the filing and firm was added.
- Time windows around the filing date were carefully created.
- Stock price data was then added for the windows.
- Ambiguous/duplicate records were removed.
### Source Data
#### Initial Data Collection and Normalization
Initial data was collected and processed by the authors of the research paper [**EDGAR-CORPUS: Billions of Tokens Make The World Go Round**](#references). Market price and returns data was collected from Yahoo Finance. Additional metadata was collected from SEC.
#### Who are the source language producers?
US public firms filing with the SEC.
### Annotations
#### Annotation process
Labels for sentiment classification are based on buy-and-hold returns over a fixed time window around the filing date with the SEC i.e. when the data becomes public. Returns are chosen for this process as it reflects the combined market intelligence at parsing the new information in the filings. For each filing date **t** the stock price at **t-1** and **t+W** is used to calculate returns. If, the returns are positive a label of **positive** is assigned else a label of **negative** is assigned. Three different windows are used to assign the labels:
- **1d**: _[t-1, t+1]_
- **5d**: _[t-1, t+5]_
- **30d**: _[t-1, t+30]_
The windows are based on calendar days and are adjusted for weekends and holidays. The rationale behind using 3 windows is as follows:
- A very short window may not give enough time for all the information contained in the filing to be reflected in the stock price.
- A very long window may capture other events that drive stock price for the firm.
#### Who are the annotators?
Financial market participants.
### Personal and Sensitive Information
The dataset contains public filings data from SEC. Market returns data was collected from Yahoo Finance.
## Considerations for Using the Data
### Social Impact of Dataset
Low to none.
### Discussion of Biases
The dataset is about financial information of public companies and as such the tone and style of text is in line with financial literature.
### Other Known Limitations
NA
## Additional Information
### Dataset Curators
**Aman Khan**
### Licensing Information
This dataset is provided under Apache 2.0.
### References
- Lefteris Loukas, Manos Fergadiotis, Ion Androutsopoulos, & Prodromos Malakasiotis. (2021). EDGAR-CORPUS [Data set]. Zenodo. https://doi.org/10.5281/zenodo.5589195
### Citation Information
Please use the following to cite this dataset:
```
@ONLINE{financial-reports-sec,
author = "Aman Khan",
title = "Financial Reports SEC",
url = "https://huggingface.co/datasets/JanosAudran/financial-reports-sec"
}
```
|
axiong/pmc_llama_instructions | 2023-09-01T04:52:44.000Z | [
"task_categories:question-answering",
"task_categories:text-generation",
"language:en",
"license:openrail",
"biology",
"med",
"region:us"
] | axiong | null | null | null | 10 | 265 | ---
license: openrail
task_categories:
- question-answering
- text-generation
language:
- en
tags:
- biology
- med
---
This repo provides part of the dataset used for PMC-LLaMA-13B's instruction tuning.
| Data | Size | Link |
| --- | --- | --- |
| ChatDoctor | 100K | https://www.yunxiangli.top/ChatDoctor/ |
| MedQA | 10.2K | https://huggingface.co/datasets/GBaker/MedQA-USMLE-4-options |
| MedMCQA | 183K | https://huggingface.co/datasets/medmcqa |
| PubmedQA | 211K | https://huggingface.co/datasets/pubmed_qa |
| LiveQA | 635 | https://huggingface.co/datasets/truehealth/liveqa |
| MedicationQA | 690 | https://huggingface.co/datasets/truehealth/medicationqa |
| UMLS | 99K | https://www.nlm.nih.gov/research/umls/index.html |
The whole instruction dataset is composed of 7 parts. We have covered them in this dataset repo except for *ChatDoctor*.
You should consider merge ChatDoctor's data for complete dataset.
|
kyujinpy/KoCoT_2000 | 2023-10-10T13:19:00.000Z | [
"task_categories:text-generation",
"task_categories:text-classification",
"size_categories:1k<n<5k",
"language:en",
"license:cc-by-4.0",
"arxiv:2305.14045",
"region:us"
] | kyujinpy | null | null | null | 6 | 265 | ---
license: cc-by-4.0
task_categories:
- text-generation
- text-classification
language:
- en
size_categories:
- 1k<n<5k
---
# KoCoT-Collection
Using DeepL dataset, translation about [kaist-CoT](https://huggingface.co/datasets/kaist-ai/CoT-Collection).
---
# Original Dataset Card for Dataset Name
## Dataset Description
- **Homepage:https://github.com/kaistAI/CoT-Collection**
- **Repository:https://github.com/kaistAI/CoT-Collection**
- **Paper:https://arxiv.org/abs/2305.14045**
- **Point of Contact:sejune@lklab.io**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
| name | train |
|-------------------|------:|
|CoT-Collection|1837928|
## Additional Information
### Citation Information
```
@article{kim2023cot,
title={The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning},
author={Kim, Seungone and Joo, Se June and Kim, Doyoung and Jang, Joel and Ye, Seonghyeon and Shin, Jamin and Seo, Minjoon},
journal={arXiv preprint arXiv:2305.14045},
year={2023}
}
``` |
mteb/reddit-clustering-p2p | 2022-09-27T19:13:59.000Z | [
"language:en",
"region:us"
] | mteb | null | null | null | 0 | 264 | ---
language:
- en
---
10 sets with the following stats:
1. 91 labels & 15592 samples
2. 64 labels & 79172 samples
3. 38 labels & 1942 samples
4. 11 labels & 13224 samples
5. 64 labels & 92303 samples
6. 87 labels & 28607 samples
7. 10 labels & 69146 samples
8. 48 labels & 67469 samples
9. 64 labels & 29683 samples
10. 31 labels & 62261 samples
Selected at random using the script available on the mteb github repository.
|
humarin/chatgpt-paraphrases | 2023-04-05T16:27:16.000Z | [
"task_categories:text2text-generation",
"size_categories:100K<n<1M",
"language:en",
"license:openrail",
"region:us"
] | humarin | null | null | null | 30 | 264 | ---
license: openrail
task_categories:
- text2text-generation
language:
- en
size_categories:
- 100K<n<1M
---
This is a dataset of paraphrases created by ChatGPT.
Model based on this dataset is avaible: [model](https://huggingface.co/humarin/chatgpt_paraphraser_on_T5_base)
## We used this prompt to generate paraphrases
Generate 5 similar paraphrases for this question, show it like a numbered list without commentaries: *{text}*
This dataset is based on the [Quora paraphrase question](https://www.kaggle.com/competitions/quora-question-pairs), texts from the [SQUAD 2.0](https://huggingface.co/datasets/squad_v2) and the [CNN news dataset](https://huggingface.co/datasets/cnn_dailymail).
We generated 5 paraphrases for each sample, totally this dataset has about 420k data rows. You can make 30 rows from a row from each sample. In this way you can make 12.6 millions train pairs (420k rows with 5 paraphrases -> 6x5x420000 = 12.6 millions of bidirected or 6x5x420000/2 = 6.3 millions of unique pairs).
## We used
- 247138 questions from the Quora dataset
- 91983 texts from the Squad 2.0 dataset
- 80076 texts from the CNN news dataset
## Structure of the dataset
- text column - an original sentence or question from the datasets
- paraphrases - a list of 5 paraphrases
- category - question / sentence
- source - quora / squad_2 / cnn_news
## Legal disclaimer
Data is based on OpenAI’s gpt-3.5-turbo, whose [terms of use](https://openai.com/policies/terms-of-use) prohibit developing models that compete with OpenAI. So if you use this dataset to train a model, don't compete with OpenAI.
### BibTeX entry and citation info
```bibtex
@inproceedings{chatgpt_paraphrases_dataset,
author={Vladimir Vorobev, Maxim Kuznetsov},
title={ChatGPT paraphrases dataset},
year={2023}
}
``` |
open-llm-leaderboard/details_AIDC-ai-business__Marcoroni-70B-v1 | 2023-09-22T18:17:15.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | null | 0 | 264 | ---
pretty_name: Evaluation run of AIDC-ai-business/Marcoroni-70B-v1
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [AIDC-ai-business/Marcoroni-70B-v1](https://huggingface.co/AIDC-ai-business/Marcoroni-70B-v1)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 61 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_AIDC-ai-business__Marcoroni-70B-v1\"\
,\n\t\"harness_truthfulqa_mc_0\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\
\nThese are the [latest results from run 2023-09-22T18:15:51.325850](https://huggingface.co/datasets/open-llm-leaderboard/details_AIDC-ai-business__Marcoroni-70B-v1/blob/main/results_2023-09-22T18-15-51.325850.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.7060854021313192,\n\
\ \"acc_stderr\": 0.030933452595170018,\n \"acc_norm\": 0.7100920024195952,\n\
\ \"acc_norm_stderr\": 0.030899690738616904,\n \"mc1\": 0.46878824969400246,\n\
\ \"mc1_stderr\": 0.017469364874577537,\n \"mc2\": 0.6440529032814905,\n\
\ \"mc2_stderr\": 0.014798378745240535\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6877133105802048,\n \"acc_stderr\": 0.013542598541688067,\n\
\ \"acc_norm\": 0.735494880546075,\n \"acc_norm_stderr\": 0.01288927294931337\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6876120294761999,\n\
\ \"acc_stderr\": 0.004625198756710245,\n \"acc_norm\": 0.8762198765186218,\n\
\ \"acc_norm_stderr\": 0.0032865748124511866\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.38,\n \"acc_stderr\": 0.048783173121456316,\n \
\ \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.048783173121456316\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6444444444444445,\n\
\ \"acc_stderr\": 0.04135176749720385,\n \"acc_norm\": 0.6444444444444445,\n\
\ \"acc_norm_stderr\": 0.04135176749720385\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.8157894736842105,\n \"acc_stderr\": 0.031546980450822305,\n\
\ \"acc_norm\": 0.8157894736842105,\n \"acc_norm_stderr\": 0.031546980450822305\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.75,\n\
\ \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.75,\n \
\ \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.7471698113207547,\n \"acc_stderr\": 0.026749899771241214,\n\
\ \"acc_norm\": 0.7471698113207547,\n \"acc_norm_stderr\": 0.026749899771241214\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.8194444444444444,\n\
\ \"acc_stderr\": 0.03216600808802267,\n \"acc_norm\": 0.8194444444444444,\n\
\ \"acc_norm_stderr\": 0.03216600808802267\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.45,\n \"acc_stderr\": 0.04999999999999999,\n \
\ \"acc_norm\": 0.45,\n \"acc_norm_stderr\": 0.04999999999999999\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.58,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.58,\n\
\ \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.41,\n \"acc_stderr\": 0.04943110704237102,\n \
\ \"acc_norm\": 0.41,\n \"acc_norm_stderr\": 0.04943110704237102\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.7052023121387283,\n\
\ \"acc_stderr\": 0.03476599607516478,\n \"acc_norm\": 0.7052023121387283,\n\
\ \"acc_norm_stderr\": 0.03476599607516478\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.3627450980392157,\n \"acc_stderr\": 0.04784060704105654,\n\
\ \"acc_norm\": 0.3627450980392157,\n \"acc_norm_stderr\": 0.04784060704105654\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.73,\n \"acc_stderr\": 0.04461960433384739,\n \"acc_norm\": 0.73,\n\
\ \"acc_norm_stderr\": 0.04461960433384739\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.7106382978723405,\n \"acc_stderr\": 0.02964400657700962,\n\
\ \"acc_norm\": 0.7106382978723405,\n \"acc_norm_stderr\": 0.02964400657700962\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.45614035087719296,\n\
\ \"acc_stderr\": 0.04685473041907789,\n \"acc_norm\": 0.45614035087719296,\n\
\ \"acc_norm_stderr\": 0.04685473041907789\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.6275862068965518,\n \"acc_stderr\": 0.04028731532947558,\n\
\ \"acc_norm\": 0.6275862068965518,\n \"acc_norm_stderr\": 0.04028731532947558\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.46825396825396826,\n \"acc_stderr\": 0.0256993528321318,\n \"\
acc_norm\": 0.46825396825396826,\n \"acc_norm_stderr\": 0.0256993528321318\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.49206349206349204,\n\
\ \"acc_stderr\": 0.044715725362943486,\n \"acc_norm\": 0.49206349206349204,\n\
\ \"acc_norm_stderr\": 0.044715725362943486\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620333,\n \
\ \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620333\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8290322580645161,\n\
\ \"acc_stderr\": 0.021417242936321586,\n \"acc_norm\": 0.8290322580645161,\n\
\ \"acc_norm_stderr\": 0.021417242936321586\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.5566502463054187,\n \"acc_stderr\": 0.03495334582162933,\n\
\ \"acc_norm\": 0.5566502463054187,\n \"acc_norm_stderr\": 0.03495334582162933\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.79,\n \"acc_stderr\": 0.04093601807403325,\n \"acc_norm\"\
: 0.79,\n \"acc_norm_stderr\": 0.04093601807403325\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.8424242424242424,\n \"acc_stderr\": 0.028450388805284343,\n\
\ \"acc_norm\": 0.8424242424242424,\n \"acc_norm_stderr\": 0.028450388805284343\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.8888888888888888,\n \"acc_stderr\": 0.02239078763821677,\n \"\
acc_norm\": 0.8888888888888888,\n \"acc_norm_stderr\": 0.02239078763821677\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.9378238341968912,\n \"acc_stderr\": 0.017426974154240528,\n\
\ \"acc_norm\": 0.9378238341968912,\n \"acc_norm_stderr\": 0.017426974154240528\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.7307692307692307,\n \"acc_stderr\": 0.022489389793654817,\n\
\ \"acc_norm\": 0.7307692307692307,\n \"acc_norm_stderr\": 0.022489389793654817\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.3148148148148148,\n \"acc_stderr\": 0.02831753349606647,\n \
\ \"acc_norm\": 0.3148148148148148,\n \"acc_norm_stderr\": 0.02831753349606647\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.7857142857142857,\n \"acc_stderr\": 0.02665353159671549,\n \
\ \"acc_norm\": 0.7857142857142857,\n \"acc_norm_stderr\": 0.02665353159671549\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.4900662251655629,\n \"acc_stderr\": 0.04081677107248436,\n \"\
acc_norm\": 0.4900662251655629,\n \"acc_norm_stderr\": 0.04081677107248436\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8972477064220183,\n \"acc_stderr\": 0.013018246509173768,\n \"\
acc_norm\": 0.8972477064220183,\n \"acc_norm_stderr\": 0.013018246509173768\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5925925925925926,\n \"acc_stderr\": 0.033509916046960436,\n \"\
acc_norm\": 0.5925925925925926,\n \"acc_norm_stderr\": 0.033509916046960436\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.9215686274509803,\n \"acc_stderr\": 0.018869514646658928,\n \"\
acc_norm\": 0.9215686274509803,\n \"acc_norm_stderr\": 0.018869514646658928\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.8945147679324894,\n \"acc_stderr\": 0.01999556072375854,\n \
\ \"acc_norm\": 0.8945147679324894,\n \"acc_norm_stderr\": 0.01999556072375854\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.8026905829596412,\n\
\ \"acc_stderr\": 0.02670985334496796,\n \"acc_norm\": 0.8026905829596412,\n\
\ \"acc_norm_stderr\": 0.02670985334496796\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.8549618320610687,\n \"acc_stderr\": 0.030884661089515368,\n\
\ \"acc_norm\": 0.8549618320610687,\n \"acc_norm_stderr\": 0.030884661089515368\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.8677685950413223,\n \"acc_stderr\": 0.030922788320445784,\n \"\
acc_norm\": 0.8677685950413223,\n \"acc_norm_stderr\": 0.030922788320445784\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8333333333333334,\n\
\ \"acc_stderr\": 0.03602814176392645,\n \"acc_norm\": 0.8333333333333334,\n\
\ \"acc_norm_stderr\": 0.03602814176392645\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.8343558282208589,\n \"acc_stderr\": 0.029208296231259104,\n\
\ \"acc_norm\": 0.8343558282208589,\n \"acc_norm_stderr\": 0.029208296231259104\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5892857142857143,\n\
\ \"acc_stderr\": 0.04669510663875191,\n \"acc_norm\": 0.5892857142857143,\n\
\ \"acc_norm_stderr\": 0.04669510663875191\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.8349514563106796,\n \"acc_stderr\": 0.03675668832233188,\n\
\ \"acc_norm\": 0.8349514563106796,\n \"acc_norm_stderr\": 0.03675668832233188\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8974358974358975,\n\
\ \"acc_stderr\": 0.019875655027867454,\n \"acc_norm\": 0.8974358974358975,\n\
\ \"acc_norm_stderr\": 0.019875655027867454\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.72,\n \"acc_stderr\": 0.04512608598542126,\n \
\ \"acc_norm\": 0.72,\n \"acc_norm_stderr\": 0.04512608598542126\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8684546615581098,\n\
\ \"acc_stderr\": 0.01208670521425043,\n \"acc_norm\": 0.8684546615581098,\n\
\ \"acc_norm_stderr\": 0.01208670521425043\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7687861271676301,\n \"acc_stderr\": 0.022698657167855713,\n\
\ \"acc_norm\": 0.7687861271676301,\n \"acc_norm_stderr\": 0.022698657167855713\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.6681564245810055,\n\
\ \"acc_stderr\": 0.015748421208187303,\n \"acc_norm\": 0.6681564245810055,\n\
\ \"acc_norm_stderr\": 0.015748421208187303\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7581699346405228,\n \"acc_stderr\": 0.024518195641879334,\n\
\ \"acc_norm\": 0.7581699346405228,\n \"acc_norm_stderr\": 0.024518195641879334\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7684887459807074,\n\
\ \"acc_stderr\": 0.023956532766639133,\n \"acc_norm\": 0.7684887459807074,\n\
\ \"acc_norm_stderr\": 0.023956532766639133\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.8179012345679012,\n \"acc_stderr\": 0.021473491834808334,\n\
\ \"acc_norm\": 0.8179012345679012,\n \"acc_norm_stderr\": 0.021473491834808334\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.5638297872340425,\n \"acc_stderr\": 0.029583452036284076,\n \
\ \"acc_norm\": 0.5638297872340425,\n \"acc_norm_stderr\": 0.029583452036284076\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.5788787483702738,\n\
\ \"acc_stderr\": 0.012610325733489903,\n \"acc_norm\": 0.5788787483702738,\n\
\ \"acc_norm_stderr\": 0.012610325733489903\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.7279411764705882,\n \"acc_stderr\": 0.027033041151681456,\n\
\ \"acc_norm\": 0.7279411764705882,\n \"acc_norm_stderr\": 0.027033041151681456\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.7663398692810458,\n \"acc_stderr\": 0.017119158496044506,\n \
\ \"acc_norm\": 0.7663398692810458,\n \"acc_norm_stderr\": 0.017119158496044506\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7363636363636363,\n\
\ \"acc_stderr\": 0.04220224692971987,\n \"acc_norm\": 0.7363636363636363,\n\
\ \"acc_norm_stderr\": 0.04220224692971987\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7836734693877551,\n \"acc_stderr\": 0.026358916334904035,\n\
\ \"acc_norm\": 0.7836734693877551,\n \"acc_norm_stderr\": 0.026358916334904035\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8706467661691543,\n\
\ \"acc_stderr\": 0.02372983088101853,\n \"acc_norm\": 0.8706467661691543,\n\
\ \"acc_norm_stderr\": 0.02372983088101853\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.87,\n \"acc_stderr\": 0.033799766898963086,\n \
\ \"acc_norm\": 0.87,\n \"acc_norm_stderr\": 0.033799766898963086\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5542168674698795,\n\
\ \"acc_stderr\": 0.03869543323472101,\n \"acc_norm\": 0.5542168674698795,\n\
\ \"acc_norm_stderr\": 0.03869543323472101\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8654970760233918,\n \"acc_stderr\": 0.026168221344662297,\n\
\ \"acc_norm\": 0.8654970760233918,\n \"acc_norm_stderr\": 0.026168221344662297\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.46878824969400246,\n\
\ \"mc1_stderr\": 0.017469364874577537,\n \"mc2\": 0.6440529032814905,\n\
\ \"mc2_stderr\": 0.014798378745240535\n }\n}\n```"
repo_url: https://huggingface.co/AIDC-ai-business/Marcoroni-70B-v1
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|arc:challenge|25_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hellaswag|10_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-09-22T18-15-51.325850.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-management|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-virology|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-22T18-15-51.325850.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-22T18-15-51.325850.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-22T18-15-51.325850.parquet'
- config_name: results
data_files:
- split: 2023_09_22T18_15_51.325850
path:
- results_2023-09-22T18-15-51.325850.parquet
- split: latest
path:
- results_2023-09-22T18-15-51.325850.parquet
---
# Dataset Card for Evaluation run of AIDC-ai-business/Marcoroni-70B-v1
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/AIDC-ai-business/Marcoroni-70B-v1
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [AIDC-ai-business/Marcoroni-70B-v1](https://huggingface.co/AIDC-ai-business/Marcoroni-70B-v1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_AIDC-ai-business__Marcoroni-70B-v1",
"harness_truthfulqa_mc_0",
split="train")
```
## Latest results
These are the [latest results from run 2023-09-22T18:15:51.325850](https://huggingface.co/datasets/open-llm-leaderboard/details_AIDC-ai-business__Marcoroni-70B-v1/blob/main/results_2023-09-22T18-15-51.325850.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.7060854021313192,
"acc_stderr": 0.030933452595170018,
"acc_norm": 0.7100920024195952,
"acc_norm_stderr": 0.030899690738616904,
"mc1": 0.46878824969400246,
"mc1_stderr": 0.017469364874577537,
"mc2": 0.6440529032814905,
"mc2_stderr": 0.014798378745240535
},
"harness|arc:challenge|25": {
"acc": 0.6877133105802048,
"acc_stderr": 0.013542598541688067,
"acc_norm": 0.735494880546075,
"acc_norm_stderr": 0.01288927294931337
},
"harness|hellaswag|10": {
"acc": 0.6876120294761999,
"acc_stderr": 0.004625198756710245,
"acc_norm": 0.8762198765186218,
"acc_norm_stderr": 0.0032865748124511866
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.38,
"acc_stderr": 0.048783173121456316,
"acc_norm": 0.38,
"acc_norm_stderr": 0.048783173121456316
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6444444444444445,
"acc_stderr": 0.04135176749720385,
"acc_norm": 0.6444444444444445,
"acc_norm_stderr": 0.04135176749720385
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.8157894736842105,
"acc_stderr": 0.031546980450822305,
"acc_norm": 0.8157894736842105,
"acc_norm_stderr": 0.031546980450822305
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.75,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7471698113207547,
"acc_stderr": 0.026749899771241214,
"acc_norm": 0.7471698113207547,
"acc_norm_stderr": 0.026749899771241214
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.8194444444444444,
"acc_stderr": 0.03216600808802267,
"acc_norm": 0.8194444444444444,
"acc_norm_stderr": 0.03216600808802267
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.45,
"acc_stderr": 0.04999999999999999,
"acc_norm": 0.45,
"acc_norm_stderr": 0.04999999999999999
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.58,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.58,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.41,
"acc_stderr": 0.04943110704237102,
"acc_norm": 0.41,
"acc_norm_stderr": 0.04943110704237102
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.7052023121387283,
"acc_stderr": 0.03476599607516478,
"acc_norm": 0.7052023121387283,
"acc_norm_stderr": 0.03476599607516478
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.3627450980392157,
"acc_stderr": 0.04784060704105654,
"acc_norm": 0.3627450980392157,
"acc_norm_stderr": 0.04784060704105654
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.73,
"acc_stderr": 0.04461960433384739,
"acc_norm": 0.73,
"acc_norm_stderr": 0.04461960433384739
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.7106382978723405,
"acc_stderr": 0.02964400657700962,
"acc_norm": 0.7106382978723405,
"acc_norm_stderr": 0.02964400657700962
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.45614035087719296,
"acc_stderr": 0.04685473041907789,
"acc_norm": 0.45614035087719296,
"acc_norm_stderr": 0.04685473041907789
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.6275862068965518,
"acc_stderr": 0.04028731532947558,
"acc_norm": 0.6275862068965518,
"acc_norm_stderr": 0.04028731532947558
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.46825396825396826,
"acc_stderr": 0.0256993528321318,
"acc_norm": 0.46825396825396826,
"acc_norm_stderr": 0.0256993528321318
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.49206349206349204,
"acc_stderr": 0.044715725362943486,
"acc_norm": 0.49206349206349204,
"acc_norm_stderr": 0.044715725362943486
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620333,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620333
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8290322580645161,
"acc_stderr": 0.021417242936321586,
"acc_norm": 0.8290322580645161,
"acc_norm_stderr": 0.021417242936321586
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5566502463054187,
"acc_stderr": 0.03495334582162933,
"acc_norm": 0.5566502463054187,
"acc_norm_stderr": 0.03495334582162933
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.79,
"acc_stderr": 0.04093601807403325,
"acc_norm": 0.79,
"acc_norm_stderr": 0.04093601807403325
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8424242424242424,
"acc_stderr": 0.028450388805284343,
"acc_norm": 0.8424242424242424,
"acc_norm_stderr": 0.028450388805284343
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8888888888888888,
"acc_stderr": 0.02239078763821677,
"acc_norm": 0.8888888888888888,
"acc_norm_stderr": 0.02239078763821677
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9378238341968912,
"acc_stderr": 0.017426974154240528,
"acc_norm": 0.9378238341968912,
"acc_norm_stderr": 0.017426974154240528
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.7307692307692307,
"acc_stderr": 0.022489389793654817,
"acc_norm": 0.7307692307692307,
"acc_norm_stderr": 0.022489389793654817
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3148148148148148,
"acc_stderr": 0.02831753349606647,
"acc_norm": 0.3148148148148148,
"acc_norm_stderr": 0.02831753349606647
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.7857142857142857,
"acc_stderr": 0.02665353159671549,
"acc_norm": 0.7857142857142857,
"acc_norm_stderr": 0.02665353159671549
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.4900662251655629,
"acc_stderr": 0.04081677107248436,
"acc_norm": 0.4900662251655629,
"acc_norm_stderr": 0.04081677107248436
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8972477064220183,
"acc_stderr": 0.013018246509173768,
"acc_norm": 0.8972477064220183,
"acc_norm_stderr": 0.013018246509173768
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5925925925925926,
"acc_stderr": 0.033509916046960436,
"acc_norm": 0.5925925925925926,
"acc_norm_stderr": 0.033509916046960436
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.9215686274509803,
"acc_stderr": 0.018869514646658928,
"acc_norm": 0.9215686274509803,
"acc_norm_stderr": 0.018869514646658928
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8945147679324894,
"acc_stderr": 0.01999556072375854,
"acc_norm": 0.8945147679324894,
"acc_norm_stderr": 0.01999556072375854
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.8026905829596412,
"acc_stderr": 0.02670985334496796,
"acc_norm": 0.8026905829596412,
"acc_norm_stderr": 0.02670985334496796
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8549618320610687,
"acc_stderr": 0.030884661089515368,
"acc_norm": 0.8549618320610687,
"acc_norm_stderr": 0.030884661089515368
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8677685950413223,
"acc_stderr": 0.030922788320445784,
"acc_norm": 0.8677685950413223,
"acc_norm_stderr": 0.030922788320445784
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8333333333333334,
"acc_stderr": 0.03602814176392645,
"acc_norm": 0.8333333333333334,
"acc_norm_stderr": 0.03602814176392645
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.8343558282208589,
"acc_stderr": 0.029208296231259104,
"acc_norm": 0.8343558282208589,
"acc_norm_stderr": 0.029208296231259104
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5892857142857143,
"acc_stderr": 0.04669510663875191,
"acc_norm": 0.5892857142857143,
"acc_norm_stderr": 0.04669510663875191
},
"harness|hendrycksTest-management|5": {
"acc": 0.8349514563106796,
"acc_stderr": 0.03675668832233188,
"acc_norm": 0.8349514563106796,
"acc_norm_stderr": 0.03675668832233188
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8974358974358975,
"acc_stderr": 0.019875655027867454,
"acc_norm": 0.8974358974358975,
"acc_norm_stderr": 0.019875655027867454
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.72,
"acc_stderr": 0.04512608598542126,
"acc_norm": 0.72,
"acc_norm_stderr": 0.04512608598542126
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8684546615581098,
"acc_stderr": 0.01208670521425043,
"acc_norm": 0.8684546615581098,
"acc_norm_stderr": 0.01208670521425043
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7687861271676301,
"acc_stderr": 0.022698657167855713,
"acc_norm": 0.7687861271676301,
"acc_norm_stderr": 0.022698657167855713
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.6681564245810055,
"acc_stderr": 0.015748421208187303,
"acc_norm": 0.6681564245810055,
"acc_norm_stderr": 0.015748421208187303
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7581699346405228,
"acc_stderr": 0.024518195641879334,
"acc_norm": 0.7581699346405228,
"acc_norm_stderr": 0.024518195641879334
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7684887459807074,
"acc_stderr": 0.023956532766639133,
"acc_norm": 0.7684887459807074,
"acc_norm_stderr": 0.023956532766639133
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.8179012345679012,
"acc_stderr": 0.021473491834808334,
"acc_norm": 0.8179012345679012,
"acc_norm_stderr": 0.021473491834808334
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.5638297872340425,
"acc_stderr": 0.029583452036284076,
"acc_norm": 0.5638297872340425,
"acc_norm_stderr": 0.029583452036284076
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.5788787483702738,
"acc_stderr": 0.012610325733489903,
"acc_norm": 0.5788787483702738,
"acc_norm_stderr": 0.012610325733489903
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.7279411764705882,
"acc_stderr": 0.027033041151681456,
"acc_norm": 0.7279411764705882,
"acc_norm_stderr": 0.027033041151681456
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.7663398692810458,
"acc_stderr": 0.017119158496044506,
"acc_norm": 0.7663398692810458,
"acc_norm_stderr": 0.017119158496044506
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7363636363636363,
"acc_stderr": 0.04220224692971987,
"acc_norm": 0.7363636363636363,
"acc_norm_stderr": 0.04220224692971987
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7836734693877551,
"acc_stderr": 0.026358916334904035,
"acc_norm": 0.7836734693877551,
"acc_norm_stderr": 0.026358916334904035
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8706467661691543,
"acc_stderr": 0.02372983088101853,
"acc_norm": 0.8706467661691543,
"acc_norm_stderr": 0.02372983088101853
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.87,
"acc_stderr": 0.033799766898963086,
"acc_norm": 0.87,
"acc_norm_stderr": 0.033799766898963086
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5542168674698795,
"acc_stderr": 0.03869543323472101,
"acc_norm": 0.5542168674698795,
"acc_norm_stderr": 0.03869543323472101
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8654970760233918,
"acc_stderr": 0.026168221344662297,
"acc_norm": 0.8654970760233918,
"acc_norm_stderr": 0.026168221344662297
},
"harness|truthfulqa:mc|0": {
"mc1": 0.46878824969400246,
"mc1_stderr": 0.017469364874577537,
"mc2": 0.6440529032814905,
"mc2_stderr": 0.014798378745240535
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
open-llm-leaderboard/details_mistralai__Mistral-7B-v0.1 | 2023-09-27T15:32:21.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | null | 0 | 264 | ---
pretty_name: Evaluation run of mistralai/Mistral-7B-v0.1
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 61 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_mistralai__Mistral-7B-v0.1\"\
,\n\t\"harness_truthfulqa_mc_0\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\
\nThese are the [latest results from run 2023-09-27T15:30:59.039834](https://huggingface.co/datasets/open-llm-leaderboard/details_mistralai__Mistral-7B-v0.1/blob/main/results_2023-09-27T15-30-59.039834.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6401944309606807,\n\
\ \"acc_stderr\": 0.03294634536642965,\n \"acc_norm\": 0.6441811200820821,\n\
\ \"acc_norm_stderr\": 0.0329250607713158,\n \"mc1\": 0.2802937576499388,\n\
\ \"mc1_stderr\": 0.015723139524608763,\n \"mc2\": 0.4215317106968115,\n\
\ \"mc2_stderr\": 0.014138129483133954\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.568259385665529,\n \"acc_stderr\": 0.014474591427196202,\n\
\ \"acc_norm\": 0.5998293515358362,\n \"acc_norm_stderr\": 0.014317197787809172\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6294562836088429,\n\
\ \"acc_stderr\": 0.00481963366883254,\n \"acc_norm\": 0.8331009759012149,\n\
\ \"acc_norm_stderr\": 0.0037212361965025162\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.29,\n \"acc_stderr\": 0.045604802157206845,\n \
\ \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.045604802157206845\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6296296296296297,\n\
\ \"acc_stderr\": 0.041716541613545426,\n \"acc_norm\": 0.6296296296296297,\n\
\ \"acc_norm_stderr\": 0.041716541613545426\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6578947368421053,\n \"acc_stderr\": 0.03860731599316091,\n\
\ \"acc_norm\": 0.6578947368421053,\n \"acc_norm_stderr\": 0.03860731599316091\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.57,\n\
\ \"acc_stderr\": 0.049756985195624284,\n \"acc_norm\": 0.57,\n \
\ \"acc_norm_stderr\": 0.049756985195624284\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6943396226415094,\n \"acc_stderr\": 0.028353298073322663,\n\
\ \"acc_norm\": 0.6943396226415094,\n \"acc_norm_stderr\": 0.028353298073322663\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7291666666666666,\n\
\ \"acc_stderr\": 0.03716177437566017,\n \"acc_norm\": 0.7291666666666666,\n\
\ \"acc_norm_stderr\": 0.03716177437566017\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.52,\n \"acc_stderr\": 0.050211673156867795,\n \
\ \"acc_norm\": 0.52,\n \"acc_norm_stderr\": 0.050211673156867795\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.52,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\"\
: 0.52,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.4,\n \"acc_stderr\": 0.04923659639173309,\n \
\ \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.04923659639173309\n },\n\
\ \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6647398843930635,\n\
\ \"acc_stderr\": 0.03599586301247077,\n \"acc_norm\": 0.6647398843930635,\n\
\ \"acc_norm_stderr\": 0.03599586301247077\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.39215686274509803,\n \"acc_stderr\": 0.04858083574266346,\n\
\ \"acc_norm\": 0.39215686274509803,\n \"acc_norm_stderr\": 0.04858083574266346\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.77,\n \"acc_stderr\": 0.042295258468165065,\n \"acc_norm\": 0.77,\n\
\ \"acc_norm_stderr\": 0.042295258468165065\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.574468085106383,\n \"acc_stderr\": 0.03232146916224468,\n\
\ \"acc_norm\": 0.574468085106383,\n \"acc_norm_stderr\": 0.03232146916224468\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5,\n\
\ \"acc_stderr\": 0.047036043419179864,\n \"acc_norm\": 0.5,\n \
\ \"acc_norm_stderr\": 0.047036043419179864\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5724137931034483,\n \"acc_stderr\": 0.04122737111370332,\n\
\ \"acc_norm\": 0.5724137931034483,\n \"acc_norm_stderr\": 0.04122737111370332\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.373015873015873,\n \"acc_stderr\": 0.02490699045899257,\n \"acc_norm\"\
: 0.373015873015873,\n \"acc_norm_stderr\": 0.02490699045899257\n },\n\
\ \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4126984126984127,\n\
\ \"acc_stderr\": 0.04403438954768177,\n \"acc_norm\": 0.4126984126984127,\n\
\ \"acc_norm_stderr\": 0.04403438954768177\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939099,\n \
\ \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939099\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7709677419354839,\n\
\ \"acc_stderr\": 0.023904914311782648,\n \"acc_norm\": 0.7709677419354839,\n\
\ \"acc_norm_stderr\": 0.023904914311782648\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.5270935960591133,\n \"acc_stderr\": 0.03512819077876106,\n\
\ \"acc_norm\": 0.5270935960591133,\n \"acc_norm_stderr\": 0.03512819077876106\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.68,\n \"acc_stderr\": 0.04688261722621504,\n \"acc_norm\"\
: 0.68,\n \"acc_norm_stderr\": 0.04688261722621504\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7818181818181819,\n \"acc_stderr\": 0.032250781083062896,\n\
\ \"acc_norm\": 0.7818181818181819,\n \"acc_norm_stderr\": 0.032250781083062896\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7727272727272727,\n \"acc_stderr\": 0.029857515673386417,\n \"\
acc_norm\": 0.7727272727272727,\n \"acc_norm_stderr\": 0.029857515673386417\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8652849740932642,\n \"acc_stderr\": 0.02463978909770944,\n\
\ \"acc_norm\": 0.8652849740932642,\n \"acc_norm_stderr\": 0.02463978909770944\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6666666666666666,\n \"acc_stderr\": 0.023901157979402534,\n\
\ \"acc_norm\": 0.6666666666666666,\n \"acc_norm_stderr\": 0.023901157979402534\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.337037037037037,\n \"acc_stderr\": 0.028820884666253255,\n \
\ \"acc_norm\": 0.337037037037037,\n \"acc_norm_stderr\": 0.028820884666253255\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6596638655462185,\n \"acc_stderr\": 0.030778057422931673,\n\
\ \"acc_norm\": 0.6596638655462185,\n \"acc_norm_stderr\": 0.030778057422931673\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.32450331125827814,\n \"acc_stderr\": 0.038227469376587525,\n \"\
acc_norm\": 0.32450331125827814,\n \"acc_norm_stderr\": 0.038227469376587525\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8238532110091743,\n \"acc_stderr\": 0.016332882393431385,\n \"\
acc_norm\": 0.8238532110091743,\n \"acc_norm_stderr\": 0.016332882393431385\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5740740740740741,\n \"acc_stderr\": 0.03372343271653062,\n \"\
acc_norm\": 0.5740740740740741,\n \"acc_norm_stderr\": 0.03372343271653062\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.7990196078431373,\n \"acc_stderr\": 0.028125972265654373,\n \"\
acc_norm\": 0.7990196078431373,\n \"acc_norm_stderr\": 0.028125972265654373\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7721518987341772,\n \"acc_stderr\": 0.027303484599069436,\n \
\ \"acc_norm\": 0.7721518987341772,\n \"acc_norm_stderr\": 0.027303484599069436\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7040358744394619,\n\
\ \"acc_stderr\": 0.030636591348699803,\n \"acc_norm\": 0.7040358744394619,\n\
\ \"acc_norm_stderr\": 0.030636591348699803\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7938931297709924,\n \"acc_stderr\": 0.03547771004159463,\n\
\ \"acc_norm\": 0.7938931297709924,\n \"acc_norm_stderr\": 0.03547771004159463\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7768595041322314,\n \"acc_stderr\": 0.03800754475228732,\n \"\
acc_norm\": 0.7768595041322314,\n \"acc_norm_stderr\": 0.03800754475228732\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7777777777777778,\n\
\ \"acc_stderr\": 0.040191074725573483,\n \"acc_norm\": 0.7777777777777778,\n\
\ \"acc_norm_stderr\": 0.040191074725573483\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7914110429447853,\n \"acc_stderr\": 0.031921934489347235,\n\
\ \"acc_norm\": 0.7914110429447853,\n \"acc_norm_stderr\": 0.031921934489347235\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.48214285714285715,\n\
\ \"acc_stderr\": 0.047427623612430116,\n \"acc_norm\": 0.48214285714285715,\n\
\ \"acc_norm_stderr\": 0.047427623612430116\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.8155339805825242,\n \"acc_stderr\": 0.03840423627288276,\n\
\ \"acc_norm\": 0.8155339805825242,\n \"acc_norm_stderr\": 0.03840423627288276\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8717948717948718,\n\
\ \"acc_stderr\": 0.02190190511507333,\n \"acc_norm\": 0.8717948717948718,\n\
\ \"acc_norm_stderr\": 0.02190190511507333\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.74,\n \"acc_stderr\": 0.04408440022768078,\n \
\ \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.04408440022768078\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8173690932311622,\n\
\ \"acc_stderr\": 0.013816335389973136,\n \"acc_norm\": 0.8173690932311622,\n\
\ \"acc_norm_stderr\": 0.013816335389973136\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7109826589595376,\n \"acc_stderr\": 0.02440517393578323,\n\
\ \"acc_norm\": 0.7109826589595376,\n \"acc_norm_stderr\": 0.02440517393578323\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.32513966480446926,\n\
\ \"acc_stderr\": 0.01566654278505355,\n \"acc_norm\": 0.32513966480446926,\n\
\ \"acc_norm_stderr\": 0.01566654278505355\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7581699346405228,\n \"acc_stderr\": 0.024518195641879334,\n\
\ \"acc_norm\": 0.7581699346405228,\n \"acc_norm_stderr\": 0.024518195641879334\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6977491961414791,\n\
\ \"acc_stderr\": 0.026082700695399665,\n \"acc_norm\": 0.6977491961414791,\n\
\ \"acc_norm_stderr\": 0.026082700695399665\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7345679012345679,\n \"acc_stderr\": 0.024569223600460845,\n\
\ \"acc_norm\": 0.7345679012345679,\n \"acc_norm_stderr\": 0.024569223600460845\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4858156028368794,\n \"acc_stderr\": 0.02981549448368206,\n \
\ \"acc_norm\": 0.4858156028368794,\n \"acc_norm_stderr\": 0.02981549448368206\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.44784876140808344,\n\
\ \"acc_stderr\": 0.01270058240476822,\n \"acc_norm\": 0.44784876140808344,\n\
\ \"acc_norm_stderr\": 0.01270058240476822\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6911764705882353,\n \"acc_stderr\": 0.02806499816704009,\n\
\ \"acc_norm\": 0.6911764705882353,\n \"acc_norm_stderr\": 0.02806499816704009\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6813725490196079,\n \"acc_stderr\": 0.01885008469646872,\n \
\ \"acc_norm\": 0.6813725490196079,\n \"acc_norm_stderr\": 0.01885008469646872\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6727272727272727,\n\
\ \"acc_stderr\": 0.0449429086625209,\n \"acc_norm\": 0.6727272727272727,\n\
\ \"acc_norm_stderr\": 0.0449429086625209\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.726530612244898,\n \"acc_stderr\": 0.028535560337128448,\n\
\ \"acc_norm\": 0.726530612244898,\n \"acc_norm_stderr\": 0.028535560337128448\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8308457711442786,\n\
\ \"acc_stderr\": 0.026508590656233264,\n \"acc_norm\": 0.8308457711442786,\n\
\ \"acc_norm_stderr\": 0.026508590656233264\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.86,\n \"acc_stderr\": 0.034873508801977704,\n \
\ \"acc_norm\": 0.86,\n \"acc_norm_stderr\": 0.034873508801977704\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5542168674698795,\n\
\ \"acc_stderr\": 0.03869543323472101,\n \"acc_norm\": 0.5542168674698795,\n\
\ \"acc_norm_stderr\": 0.03869543323472101\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8304093567251462,\n \"acc_stderr\": 0.02878210810540171,\n\
\ \"acc_norm\": 0.8304093567251462,\n \"acc_norm_stderr\": 0.02878210810540171\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2802937576499388,\n\
\ \"mc1_stderr\": 0.015723139524608763,\n \"mc2\": 0.4215317106968115,\n\
\ \"mc2_stderr\": 0.014138129483133954\n }\n}\n```"
repo_url: https://huggingface.co/mistralai/Mistral-7B-v0.1
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|arc:challenge|25_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hellaswag|10_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-09-27T15-30-59.039834.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-management|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-virology|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-27T15-30-59.039834.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-27T15-30-59.039834.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-27T15-30-59.039834.parquet'
- config_name: results
data_files:
- split: 2023_09_27T15_30_59.039834
path:
- results_2023-09-27T15-30-59.039834.parquet
- split: latest
path:
- results_2023-09-27T15-30-59.039834.parquet
---
# Dataset Card for Evaluation run of mistralai/Mistral-7B-v0.1
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/mistralai/Mistral-7B-v0.1
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_mistralai__Mistral-7B-v0.1",
"harness_truthfulqa_mc_0",
split="train")
```
## Latest results
These are the [latest results from run 2023-09-27T15:30:59.039834](https://huggingface.co/datasets/open-llm-leaderboard/details_mistralai__Mistral-7B-v0.1/blob/main/results_2023-09-27T15-30-59.039834.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6401944309606807,
"acc_stderr": 0.03294634536642965,
"acc_norm": 0.6441811200820821,
"acc_norm_stderr": 0.0329250607713158,
"mc1": 0.2802937576499388,
"mc1_stderr": 0.015723139524608763,
"mc2": 0.4215317106968115,
"mc2_stderr": 0.014138129483133954
},
"harness|arc:challenge|25": {
"acc": 0.568259385665529,
"acc_stderr": 0.014474591427196202,
"acc_norm": 0.5998293515358362,
"acc_norm_stderr": 0.014317197787809172
},
"harness|hellaswag|10": {
"acc": 0.6294562836088429,
"acc_stderr": 0.00481963366883254,
"acc_norm": 0.8331009759012149,
"acc_norm_stderr": 0.0037212361965025162
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.29,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.29,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6296296296296297,
"acc_stderr": 0.041716541613545426,
"acc_norm": 0.6296296296296297,
"acc_norm_stderr": 0.041716541613545426
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6578947368421053,
"acc_stderr": 0.03860731599316091,
"acc_norm": 0.6578947368421053,
"acc_norm_stderr": 0.03860731599316091
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.57,
"acc_stderr": 0.049756985195624284,
"acc_norm": 0.57,
"acc_norm_stderr": 0.049756985195624284
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6943396226415094,
"acc_stderr": 0.028353298073322663,
"acc_norm": 0.6943396226415094,
"acc_norm_stderr": 0.028353298073322663
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7291666666666666,
"acc_stderr": 0.03716177437566017,
"acc_norm": 0.7291666666666666,
"acc_norm_stderr": 0.03716177437566017
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.52,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.52,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.52,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.52,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.4,
"acc_stderr": 0.04923659639173309,
"acc_norm": 0.4,
"acc_norm_stderr": 0.04923659639173309
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6647398843930635,
"acc_stderr": 0.03599586301247077,
"acc_norm": 0.6647398843930635,
"acc_norm_stderr": 0.03599586301247077
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.39215686274509803,
"acc_stderr": 0.04858083574266346,
"acc_norm": 0.39215686274509803,
"acc_norm_stderr": 0.04858083574266346
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.77,
"acc_stderr": 0.042295258468165065,
"acc_norm": 0.77,
"acc_norm_stderr": 0.042295258468165065
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.574468085106383,
"acc_stderr": 0.03232146916224468,
"acc_norm": 0.574468085106383,
"acc_norm_stderr": 0.03232146916224468
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5,
"acc_stderr": 0.047036043419179864,
"acc_norm": 0.5,
"acc_norm_stderr": 0.047036043419179864
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5724137931034483,
"acc_stderr": 0.04122737111370332,
"acc_norm": 0.5724137931034483,
"acc_norm_stderr": 0.04122737111370332
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.373015873015873,
"acc_stderr": 0.02490699045899257,
"acc_norm": 0.373015873015873,
"acc_norm_stderr": 0.02490699045899257
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4126984126984127,
"acc_stderr": 0.04403438954768177,
"acc_norm": 0.4126984126984127,
"acc_norm_stderr": 0.04403438954768177
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.37,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.37,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7709677419354839,
"acc_stderr": 0.023904914311782648,
"acc_norm": 0.7709677419354839,
"acc_norm_stderr": 0.023904914311782648
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5270935960591133,
"acc_stderr": 0.03512819077876106,
"acc_norm": 0.5270935960591133,
"acc_norm_stderr": 0.03512819077876106
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.68,
"acc_stderr": 0.04688261722621504,
"acc_norm": 0.68,
"acc_norm_stderr": 0.04688261722621504
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7818181818181819,
"acc_stderr": 0.032250781083062896,
"acc_norm": 0.7818181818181819,
"acc_norm_stderr": 0.032250781083062896
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7727272727272727,
"acc_stderr": 0.029857515673386417,
"acc_norm": 0.7727272727272727,
"acc_norm_stderr": 0.029857515673386417
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8652849740932642,
"acc_stderr": 0.02463978909770944,
"acc_norm": 0.8652849740932642,
"acc_norm_stderr": 0.02463978909770944
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6666666666666666,
"acc_stderr": 0.023901157979402534,
"acc_norm": 0.6666666666666666,
"acc_norm_stderr": 0.023901157979402534
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.337037037037037,
"acc_stderr": 0.028820884666253255,
"acc_norm": 0.337037037037037,
"acc_norm_stderr": 0.028820884666253255
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6596638655462185,
"acc_stderr": 0.030778057422931673,
"acc_norm": 0.6596638655462185,
"acc_norm_stderr": 0.030778057422931673
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.32450331125827814,
"acc_stderr": 0.038227469376587525,
"acc_norm": 0.32450331125827814,
"acc_norm_stderr": 0.038227469376587525
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8238532110091743,
"acc_stderr": 0.016332882393431385,
"acc_norm": 0.8238532110091743,
"acc_norm_stderr": 0.016332882393431385
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5740740740740741,
"acc_stderr": 0.03372343271653062,
"acc_norm": 0.5740740740740741,
"acc_norm_stderr": 0.03372343271653062
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7990196078431373,
"acc_stderr": 0.028125972265654373,
"acc_norm": 0.7990196078431373,
"acc_norm_stderr": 0.028125972265654373
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7721518987341772,
"acc_stderr": 0.027303484599069436,
"acc_norm": 0.7721518987341772,
"acc_norm_stderr": 0.027303484599069436
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7040358744394619,
"acc_stderr": 0.030636591348699803,
"acc_norm": 0.7040358744394619,
"acc_norm_stderr": 0.030636591348699803
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7938931297709924,
"acc_stderr": 0.03547771004159463,
"acc_norm": 0.7938931297709924,
"acc_norm_stderr": 0.03547771004159463
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7768595041322314,
"acc_stderr": 0.03800754475228732,
"acc_norm": 0.7768595041322314,
"acc_norm_stderr": 0.03800754475228732
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.040191074725573483,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.040191074725573483
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7914110429447853,
"acc_stderr": 0.031921934489347235,
"acc_norm": 0.7914110429447853,
"acc_norm_stderr": 0.031921934489347235
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.48214285714285715,
"acc_stderr": 0.047427623612430116,
"acc_norm": 0.48214285714285715,
"acc_norm_stderr": 0.047427623612430116
},
"harness|hendrycksTest-management|5": {
"acc": 0.8155339805825242,
"acc_stderr": 0.03840423627288276,
"acc_norm": 0.8155339805825242,
"acc_norm_stderr": 0.03840423627288276
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8717948717948718,
"acc_stderr": 0.02190190511507333,
"acc_norm": 0.8717948717948718,
"acc_norm_stderr": 0.02190190511507333
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.74,
"acc_stderr": 0.04408440022768078,
"acc_norm": 0.74,
"acc_norm_stderr": 0.04408440022768078
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8173690932311622,
"acc_stderr": 0.013816335389973136,
"acc_norm": 0.8173690932311622,
"acc_norm_stderr": 0.013816335389973136
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7109826589595376,
"acc_stderr": 0.02440517393578323,
"acc_norm": 0.7109826589595376,
"acc_norm_stderr": 0.02440517393578323
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.32513966480446926,
"acc_stderr": 0.01566654278505355,
"acc_norm": 0.32513966480446926,
"acc_norm_stderr": 0.01566654278505355
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7581699346405228,
"acc_stderr": 0.024518195641879334,
"acc_norm": 0.7581699346405228,
"acc_norm_stderr": 0.024518195641879334
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6977491961414791,
"acc_stderr": 0.026082700695399665,
"acc_norm": 0.6977491961414791,
"acc_norm_stderr": 0.026082700695399665
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7345679012345679,
"acc_stderr": 0.024569223600460845,
"acc_norm": 0.7345679012345679,
"acc_norm_stderr": 0.024569223600460845
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4858156028368794,
"acc_stderr": 0.02981549448368206,
"acc_norm": 0.4858156028368794,
"acc_norm_stderr": 0.02981549448368206
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.44784876140808344,
"acc_stderr": 0.01270058240476822,
"acc_norm": 0.44784876140808344,
"acc_norm_stderr": 0.01270058240476822
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6911764705882353,
"acc_stderr": 0.02806499816704009,
"acc_norm": 0.6911764705882353,
"acc_norm_stderr": 0.02806499816704009
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6813725490196079,
"acc_stderr": 0.01885008469646872,
"acc_norm": 0.6813725490196079,
"acc_norm_stderr": 0.01885008469646872
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6727272727272727,
"acc_stderr": 0.0449429086625209,
"acc_norm": 0.6727272727272727,
"acc_norm_stderr": 0.0449429086625209
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.726530612244898,
"acc_stderr": 0.028535560337128448,
"acc_norm": 0.726530612244898,
"acc_norm_stderr": 0.028535560337128448
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8308457711442786,
"acc_stderr": 0.026508590656233264,
"acc_norm": 0.8308457711442786,
"acc_norm_stderr": 0.026508590656233264
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.86,
"acc_stderr": 0.034873508801977704,
"acc_norm": 0.86,
"acc_norm_stderr": 0.034873508801977704
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5542168674698795,
"acc_stderr": 0.03869543323472101,
"acc_norm": 0.5542168674698795,
"acc_norm_stderr": 0.03869543323472101
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8304093567251462,
"acc_stderr": 0.02878210810540171,
"acc_norm": 0.8304093567251462,
"acc_norm_stderr": 0.02878210810540171
},
"harness|truthfulqa:mc|0": {
"mc1": 0.2802937576499388,
"mc1_stderr": 0.015723139524608763,
"mc2": 0.4215317106968115,
"mc2_stderr": 0.014138129483133954
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
strombergnlp/broad_twitter_corpus | 2022-07-01T15:46:36.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | strombergnlp | This is the Broad Twitter corpus, a dataset of tweets collected over stratified times, places and social uses.
The goal is to represent a broad range of activities, giving a dataset more representative of the language used
in this hardest of social media formats to process. Further, the BTC is annotated for named entities.
For more details see [https://aclanthology.org/C16-1111/](https://aclanthology.org/C16-1111/) | @inproceedings{derczynski2016broad,
title={Broad twitter corpus: A diverse named entity recognition resource},
author={Derczynski, Leon and Bontcheva, Kalina and Roberts, Ian},
booktitle={Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers},
pages={1169--1179},
year={2016}
} | null | 3 | 263 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
paperswithcode_id: broad-twitter-corpus
pretty_name: Broad Twitter Corpus
---
# Dataset Card for broad_twitter_corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [https://github.com/GateNLP/broad_twitter_corpus](https://github.com/GateNLP/broad_twitter_corpus)
- **Repository:** [https://github.com/GateNLP/broad_twitter_corpus](https://github.com/GateNLP/broad_twitter_corpus)
- **Paper:** [http://www.aclweb.org/anthology/C16-1111](http://www.aclweb.org/anthology/C16-1111)
- **Leaderboard:** [Named Entity Recognition on Broad Twitter Corpus](https://paperswithcode.com/sota/named-entity-recognition-on-broad-twitter)
- **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
### Dataset Summary
This is the Broad Twitter corpus, a dataset of tweets collected over stratified times, places and social uses. The goal is to represent a broad range of activities, giving a dataset more representative of the language used in this hardest of social media formats to process. Further, the BTC is annotated for named entities.
See the paper, [Broad Twitter Corpus: A Diverse Named Entity Recognition Resource](http://www.aclweb.org/anthology/C16-1111), for details.
### Supported Tasks and Leaderboards
* Named Entity Recognition
* On PWC: [Named Entity Recognition on Broad Twitter Corpus](https://paperswithcode.com/sota/named-entity-recognition-on-broad-twitter)
### Languages
English from UK, US, Australia, Canada, Ireland, New Zealand; `bcp47:en`
## Dataset Structure
### Data Instances
Feature |Count
---|---:
Documents |9 551
Tokens |165 739
Person entities |5 271
Location entities |3 114
Organization entities |3 732
### Data Fields
Each tweet contains an ID, a list of tokens, and a list of NER tags
- `id`: a `string` feature.
- `tokens`: a `list` of `strings`
- `ner_tags`: a `list` of class IDs (`int`s) representing the NER class:
```
0: O
1: B-PER
2: I-PER
3: B-ORG
4: I-ORG
5: B-LOC
6: I-LOC
```
### Data Splits
Section|Region|Collection period|Description|Annotators|Tweet count
---|---|---|---|---|---:
A | UK| 2012.01| General collection |Expert| 1000
B |UK |2012.01-02 |Non-directed tweets |Expert |2000
E |Global| 2014.07| Related to MH17 disaster| Crowd & expert |200
F |Stratified |2009-2014| Twitterati |Crowd & expert |2000
G |Stratified| 2011-2014| Mainstream news| Crowd & expert| 2351
H |Non-UK| 2014 |General collection |Crowd & expert |2000
The most varied parts of the BTC are sections F and H. However, each of the remaining four sections has some specific readily-identifiable bias. So, we propose that one uses half of section H for evaluation and leaves the other half in the training data. Section H should be partitioned in the order of the JSON-format lines. Note that the CoNLL-format data is readily reconstructible from the JSON format, which is the authoritative data format from which others are derived.
**Test**: Section F
**Development**: Section H (the paper says "second half of Section H" but ordinality could be ambiguous, so it all goes in. Bonne chance)
**Training**: everything else
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
Creative Commons Attribution 4.0 International (CC BY 4.0)
### Citation Information
```
@inproceedings{derczynski2016broad,
title={Broad twitter corpus: A diverse named entity recognition resource},
author={Derczynski, Leon and Bontcheva, Kalina and Roberts, Ian},
booktitle={Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers},
pages={1169--1179},
year={2016}
}
```
### Contributions
Author-added dataset [@leondz](https://github.com/leondz)
|
hoskinson-center/proof-pile | 2023-08-19T03:24:11.000Z | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"language:en",
"license:apache-2.0",
"math",
"mathematics",
"formal-mathematics",
"region:us"
] | hoskinson-center | A dataset of high quality mathematical text. | @InProceedings{huggingface:dataset,
title = {proof-pile},
author={Zhangir Azerbayev, Edward Ayers, Bartosz Piotrowski
},
year={2022}
} | null | 26 | 263 | ---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
license: [apache-2.0]
multilinguality:
- monolingual
pretty_name: proof-pile
size_categories: []
source_datasets: []
tags:
- math
- mathematics
- formal-mathematics
task_categories:
- text-generation
task_ids:
- language-modeling
---
# Dataset Description
The `proof-pile` is a 13GB pre-training dataset of mathematical text that comprises 8.3 billion tokens (using the `gpt-neox` tokenizer). Models trained on this dataset are coming soon :) The dataset is composed of diverse sources of both informal and formal mathematics, namely
- ArXiv.math (10GB)
- Open-source math textbooks (50MB)
- Formal mathematics libraries (500MB)
- Lean mathlib and other Lean repositories
- Isabelle AFP
- Coq mathematical components and other Coq repositories
- HOL Light
- set.mm
- Mizar Mathematical Library
- Math Overflow and Math Stack Exchange (2.5GB)
- Wiki-style sources (50MB)
- ProofWiki
- Wikipedia math articles
- MATH dataset (6MB)
The construction of the dataset is reproducible using the code and instructions in the [proof-pile Github
repo](https://github.com/zhangir-azerbayev/proof-pile).
# Supported Tasks
This dataset is intended to be used for pre-training and fine-tuning language models. We envision models trained on the `proof-pile` will have many downstream applications, including informal quantitative reasoning, formal theorem proving, semantic search for formal mathematics, and autoformalization.
# Languages
All informal mathematics in the `proof-pile` is written in English and LaTeX (arXiv articles in other languages are filtered out using [languagedetect](https://github.com/shuyo/language-detection/blob/wiki/ProjectHome.md)). Formal theorem proving languages represented in this dataset are Lean 3, Isabelle, Coq, HOL Light, Metamath, and Mizar.
# Evaluation
The version of `set.mm` in this dataset has 10% of proofs replaced with the `?` character in order to preserve a validation and test set for Metamath provers pre-trained on the `proof-pile`. The precise split can be found here: [validation](https://github.com/zhangir-azerbayev/mm-extract/blob/main/valid_decls.json) and [test](https://github.com/zhangir-azerbayev/mm-extract/blob/main/test_decls.json).
The Lean mathlib commit used in this dataset is `6313863`. Theorems created in subsequent commits can be used for evaluating Lean theorem provers.
This dataset contains only the training set of the [MATH dataset](https://github.com/hendrycks/math). However, because this dataset contains ProofWiki, the Stacks Project, Trench's Analysis, and Stein's Number Theory, models trained on it cannot be evaluated on the [NaturalProofs dataset](https://github.com/wellecks/naturalproofs).
# Data Preprocessing
This section describes any significant filtering and transformations made to various subsets of the data.
**arXiv.math.**
The arXiv.math dataset is large, heterogeneous, and contains a great deal of noise. We used the following heuristics
when choosing which files from arXiv.math source folders to include in the dataset:
- Keep only files with a `.tex` extension.
- Only include files that use either a `utf-8/16/32` or `latin-1` text encoding.
- Discard files that do not contain a part, chapter, section, sub...section, paragraph, or subparagraph heading.
- Delete files that contain the keyword `gnuplot`. Gnuplot-latex is an old command line utility that generates blocks
of entirely unintelligible source.
- Include only articles in English, as determined by the [langdetect library](https://pypi.org/project/langdetect/). \n",
"\n",
- Exclude files shorter than 280 characters (characters counted after substring removal described below).
In addition, we apply the following transformations to arXiv.math texts:
- Delete everything outside of `\begin{document}` and `\end{document}`.
- Delete everything including or after `\Refs`, `\begin{thebibliography}`, or `\begin{bibdiv}`
- Delete comments.
- Any more than three consecutive newlines are replaced by three consecutive newlines.
In [this notebook](https://github.com/zhangir-azerbayev/proof-pile/blob/main/analysis/arxiv_noisedetection.ipynb), we provide an analysis of the prevalence of noisy documents in the arXiv.math subset of the
proof-pile.
**Stack Exchange.**
We only include questions that have at least 5 upvotes and an answer. We format Stack Exchange posts as follows
```
QUESTION [{num_upvotes} upvotes]: {text of question}
REPLY [{num_upvotes} votes]: {text of reply}
REPLY [{num_upvotes} votes]: {text of reply}
.
.
.
```
**set.mm.**
We converted `set.mm` into human-readable form by following the instructions in the [mm-extract repo](https://github.com/zhangir-azerbayev/mm-extract)
## Contributions
Authors: Zhangir Azerbayev, Edward Ayers, Bartosz Piotrowski.
We would like to thank Jeremy Avigad, Albert Jiang, and Wenda Li for their invaluable guidance, and the Hoskinson Center for Formal Mathematics for its support.
|
reciprocate/megasynth | 2023-07-03T09:37:26.000Z | [
"region:us"
] | reciprocate | null | null | null | 0 | 263 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: selected
dtype: string
- name: rejected
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 21906656
num_examples: 11792
- name: test
num_bytes: 2305629
num_examples: 1249
download_size: 9582063
dataset_size: 24212285
---
# Dataset Card for "megasynth"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Jules33/jtbenchmark1 | 2023-09-12T05:54:00.000Z | [
"region:us"
] | Jules33 | null | null | null | 0 | 263 | Entry not found |
arxiv_dataset | 2022-11-03T16:16:19.000Z | [
"task_categories:translation",
"task_categories:summarization",
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"task_ids:entity-linking-retrieval",
"task_ids:explanation-generation",
"task_ids:fact-checking-retrieval",
"task_ids:text-simplification",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc0-1.0",
"arxiv:1905.00075",
"region:us"
] | null | A dataset of 1.7 million arXiv articles for applications like trend analysis, paper recommender engines, category prediction, co-citation networks, knowledge graph construction and semantic search interfaces. | @misc{clement2019arxiv,
title={On the Use of ArXiv as a Dataset},
author={Colin B. Clement and Matthew Bierbaum and Kevin P. O'Keeffe and Alexander A. Alemi},
year={2019},
eprint={1905.00075},
archivePrefix={arXiv},
primaryClass={cs.IR}
} | null | 37 | 262 | ---
annotations_creators:
- no-annotation
language_creators:
- expert-generated
language:
- en
license:
- cc0-1.0
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- translation
- summarization
- text-retrieval
task_ids:
- document-retrieval
- entity-linking-retrieval
- explanation-generation
- fact-checking-retrieval
- text-simplification
paperswithcode_id: null
pretty_name: arXiv Dataset
dataset_info:
features:
- name: id
dtype: string
- name: submitter
dtype: string
- name: authors
dtype: string
- name: title
dtype: string
- name: comments
dtype: string
- name: journal-ref
dtype: string
- name: doi
dtype: string
- name: report-no
dtype: string
- name: categories
dtype: string
- name: license
dtype: string
- name: abstract
dtype: string
- name: update_date
dtype: string
splits:
- name: train
num_bytes: 2246545603
num_examples: 1796911
download_size: 0
dataset_size: 2246545603
---
# Dataset Card for arXiv Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Kaggle arXiv Dataset Homepage](https://www.kaggle.com/Cornell-University/arxiv)
- **Repository:**
- **Paper:** [On the Use of ArXiv as a Dataset](https://arxiv.org/abs/1905.00075)
- **Leaderboard:**
- **Point of Contact:** [Matt Bierbaum](mailto:matt.bierbaum@gmail.com)
### Dataset Summary
A dataset of 1.7 million arXiv articles for applications like trend analysis, paper recommender engines, category prediction, co-citation networks, knowledge graph construction and semantic search interfaces.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The language supported is English
## Dataset Structure
### Data Instances
This dataset is a mirror of the original ArXiv data. Because the full dataset is rather large (1.1TB and growing), this dataset provides only a metadata file in the json format. An example is given below
```
{'id': '0704.0002',
'submitter': 'Louis Theran',
'authors': 'Ileana Streinu and Louis Theran',
'title': 'Sparsity-certifying Graph Decompositions',
'comments': 'To appear in Graphs and Combinatorics',
'journal-ref': None,
'doi': None,
'report-no': None,
'categories': 'math.CO cs.CG',
'license': 'http://arxiv.org/licenses/nonexclusive-distrib/1.0/',
'abstract': ' We describe a new algorithm, the $(k,\\ell)$-pebble game with colors, and use\nit obtain a characterization of the family of $(k,\\ell)$-sparse graphs and\nalgorithmic solutions to a family of problems concerning tree decompositions of\ngraphs. Special instances of sparse graphs appear in rigidity theory and have\nreceived increased attention in recent years. In particular, our colored\npebbles generalize and strengthen the previous results of Lee and Streinu and\ngive a new proof of the Tutte-Nash-Williams characterization of arboricity. We\nalso present a new decomposition that certifies sparsity based on the\n$(k,\\ell)$-pebble game with colors. Our work also exposes connections between\npebble game algorithms and previous sparse graph algorithms by Gabow, Gabow and\nWestermann and Hendrickson.\n',
'update_date': '2008-12-13'}
```
### Data Fields
- `id`: ArXiv ID (can be used to access the paper)
- `submitter`: Who submitted the paper
- `authors`: Authors of the paper
- `title`: Title of the paper
- `comments`: Additional info, such as number of pages and figures
- `journal-ref`: Information about the journal the paper was published in
- `doi`: [Digital Object Identifier](https://www.doi.org)
- `report-no`: Report Number
- `abstract`: The abstract of the paper
- `categories`: Categories / tags in the ArXiv system
### Data Splits
The data was not splited.
## Dataset Creation
### Curation Rationale
For nearly 30 years, ArXiv has served the public and research communities by providing open access to scholarly articles, from the vast branches of physics to the many subdisciplines of computer science to everything in between, including math, statistics, electrical engineering, quantitative biology, and economics. This rich corpus of information offers significant, but sometimes overwhelming depth. In these times of unique global challenges, efficient extraction of insights from data is essential. To help make the arXiv more accessible, a free, open pipeline on Kaggle to the machine-readable arXiv dataset: a repository of 1.7 million articles, with relevant features such as article titles, authors, categories, abstracts, full text PDFs, and more is presented to empower new use cases that can lead to the exploration of richer machine learning techniques that combine multi-modal features towards applications like trend analysis, paper recommender engines, category prediction, co-citation networks, knowledge graph construction and semantic search interfaces.
### Source Data
This data is based on arXiv papers.
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
This dataset contains no annotations.
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The original data is maintained by [ArXiv](https://arxiv.org/)
### Licensing Information
The data is under the [Creative Commons CC0 1.0 Universal Public Domain Dedication](https://creativecommons.org/publicdomain/zero/1.0/)
### Citation Information
```
@misc{clement2019arxiv,
title={On the Use of ArXiv as a Dataset},
author={Colin B. Clement and Matthew Bierbaum and Kevin P. O'Keeffe and Alexander A. Alemi},
year={2019},
eprint={1905.00075},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
### Contributions
Thanks to [@tanmoyio](https://github.com/tanmoyio) for adding this dataset. |
head_qa | 2023-06-01T14:59:51.000Z | [
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"language:es",
"license:mit",
"region:us"
] | null | HEAD-QA is a multi-choice HEAlthcare Dataset. The questions come from exams to access a specialized position in the
Spanish healthcare system, and are challenging even for highly specialized humans. They are designed by the Ministerio
de Sanidad, Consumo y Bienestar Social.
The dataset contains questions about the following topics: medicine, nursing, psychology, chemistry, pharmacology and biology. | @inproceedings{vilares-gomez-rodriguez-2019-head,
title = "{HEAD}-{QA}: A Healthcare Dataset for Complex Reasoning",
author = "Vilares, David and
G{\'o}mez-Rodr{\'i}guez, Carlos",
booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/P19-1092",
doi = "10.18653/v1/P19-1092",
pages = "960--966",
abstract = "We present HEAD-QA, a multi-choice question answering testbed to encourage research on complex reasoning. The questions come from exams to access a specialized position in the Spanish healthcare system, and are challenging even for highly specialized humans. We then consider monolingual (Spanish) and cross-lingual (to English) experiments with information retrieval and neural techniques. We show that: (i) HEAD-QA challenges current methods, and (ii) the results lag well behind human performance, demonstrating its usefulness as a benchmark for future work.",
} | null | 7 | 262 | ---
annotations_creators:
- no-annotation
language_creators:
- expert-generated
language:
- en
- es
license:
- mit
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- multiple-choice-qa
paperswithcode_id: headqa
pretty_name: HEAD-QA
dataset_info:
- config_name: es
features:
- name: name
dtype: string
- name: year
dtype: string
- name: category
dtype: string
- name: qid
dtype: int32
- name: qtext
dtype: string
- name: ra
dtype: int32
- name: image
dtype: image
- name: answers
list:
- name: aid
dtype: int32
- name: atext
dtype: string
splits:
- name: train
num_bytes: 1229678
num_examples: 2657
- name: test
num_bytes: 1204006
num_examples: 2742
- name: validation
num_bytes: 573354
num_examples: 1366
download_size: 79365502
dataset_size: 3007038
- config_name: en
features:
- name: name
dtype: string
- name: year
dtype: string
- name: category
dtype: string
- name: qid
dtype: int32
- name: qtext
dtype: string
- name: ra
dtype: int32
- name: image
dtype: image
- name: answers
list:
- name: aid
dtype: int32
- name: atext
dtype: string
splits:
- name: train
num_bytes: 1156808
num_examples: 2657
- name: test
num_bytes: 1131536
num_examples: 2742
- name: validation
num_bytes: 539892
num_examples: 1366
download_size: 79365502
dataset_size: 2828236
config_names:
- en
- es
---
# Dataset Card for HEAD-QA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [HEAD-QA homepage](https://aghie.github.io/head-qa/)
- **Repository:** [HEAD-QA repository](https://github.com/aghie/head-qa)
- **Paper:** [HEAD-QA: A Healthcare Dataset for Complex Reasoning](https://www.aclweb.org/anthology/P19-1092/)
- **Leaderboard:** [HEAD-QA leaderboard](https://aghie.github.io/head-qa/#leaderboard-general)
- **Point of Contact:** [María Grandury](mailto:mariagrandury@gmail.com) (Dataset Submitter)
### Dataset Summary
HEAD-QA is a multi-choice HEAlthcare Dataset. The questions come from exams to access a specialized position in the
Spanish healthcare system, and are challenging even for highly specialized humans. They are designed by the
[Ministerio de Sanidad, Consumo y Bienestar Social](https://www.mscbs.gob.es/), who also provides direct
[access](https://fse.mscbs.gob.es/fseweb/view/public/datosanteriores/cuadernosExamen/busquedaConvocatoria.xhtml)
to the exams of the last 5 years (in Spanish).
```
Date of the last update of the documents object of the reuse: January, 14th, 2019.
```
HEAD-QA tries to make these questions accesible for the Natural Language Processing community. We hope it is an useful resource towards achieving better QA systems. The dataset contains questions about the following topics:
- Medicine
- Nursing
- Psychology
- Chemistry
- Pharmacology
- Biology
### Supported Tasks and Leaderboards
- `multiple-choice-qa`: HEAD-QA is a multi-choice question answering testbed to encourage research on complex reasoning.
### Languages
The questions and answers are available in both Spanish (BCP-47 code: 'es-ES') and English (BCP-47 code: 'en').
The language by default is Spanish:
```
from datasets import load_dataset
data_es = load_dataset('head_qa')
data_en = load_dataset('head_qa', 'en')
```
## Dataset Structure
### Data Instances
A typical data point comprises a question `qtext`, multiple possible answers `atext` and the right answer `ra`.
An example from the HEAD-QA dataset looks as follows:
```
{
'qid': '1',
'category': 'biology',
'qtext': 'Los potenciales postsinápticos excitadores:',
'answers': [
{
'aid': 1,
'atext': 'Son de tipo todo o nada.'
},
{
'aid': 2,
'atext': 'Son hiperpolarizantes.'
},
{
'aid': 3,
'atext': 'Se pueden sumar.'
},
{
'aid': 4,
'atext': 'Se propagan a largas distancias.'
},
{
'aid': 5,
'atext': 'Presentan un periodo refractario.'
}],
'ra': '3',
'image': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=675x538 at 0x1B42B6A1668>,
'name': 'Cuaderno_2013_1_B',
'year': '2013'
}
```
### Data Fields
- `qid`: question identifier (int)
- `category`: category of the question: "medicine", "nursing", "psychology", "chemistry", "pharmacology", "biology"
- `qtext`: question text
- `answers`: list of possible answers. Each element of the list is a dictionary with 2 keys:
- `aid`: answer identifier (int)
- `atext`: answer text
- `ra`: `aid` of the right answer (int)
- `image`: (optional) a `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `name`: name of the exam from which the question was extracted
- `year`: year in which the exam took place
### Data Splits
The data is split into train, validation and test set for each of the two languages. The split sizes are as follow:
| | Train | Val | Test |
| ----- | ------ | ----- | ---- |
| Spanish | 2657 | 1366 | 2742 |
| English | 2657 | 1366 | 2742 |
## Dataset Creation
### Curation Rationale
As motivation for the creation of this dataset, here is the abstract of the paper:
"We present HEAD-QA, a multi-choice question answering testbed to encourage research on complex reasoning. The questions
come from exams to access a specialized position in the Spanish healthcare system, and are challenging even for highly
specialized humans. We then consider monolingual (Spanish) and cross-lingual (to English) experiments with information
retrieval and neural techniques. We show that: (i) HEAD-QA challenges current methods, and (ii) the results lag well
behind human performance, demonstrating its usefulness as a benchmark for future work."
### Source Data
#### Initial Data Collection and Normalization
The questions come from exams to access a specialized position in the Spanish healthcare system, and are designed by the
[Ministerio de Sanidad, Consumo y Bienestar Social](https://www.mscbs.gob.es/), who also provides direct
[access](https://fse.mscbs.gob.es/fseweb/view/public/datosanteriores/cuadernosExamen/busquedaConvocatoria.xhtml)
to the exams of the last 5 years (in Spanish).
#### Who are the source language producers?
The dataset was created by David Vilares and Carlos Gómez-Rodríguez.
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The dataset was created by David Vilares and Carlos Gómez-Rodríguez.
### Licensing Information
According to the [HEAD-QA homepage](https://aghie.github.io/head-qa/#legal-requirements):
The Ministerio de Sanidad, Consumo y Biniestar Social allows the redistribution of the exams and their content under [certain conditions:](https://www.mscbs.gob.es/avisoLegal/home.htm)
- The denaturalization of the content of the information is prohibited in any circumstance.
- The user is obliged to cite the source of the documents subject to reuse.
- The user is obliged to indicate the date of the last update of the documents object of the reuse.
According to the [HEAD-QA repository](https://github.com/aghie/head-qa/blob/master/LICENSE):
The dataset is licensed under the [MIT License](https://mit-license.org/).
### Citation Information
```
@inproceedings{vilares-gomez-rodriguez-2019-head,
title = "{HEAD}-{QA}: A Healthcare Dataset for Complex Reasoning",
author = "Vilares, David and
G{\'o}mez-Rodr{\'i}guez, Carlos",
booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/P19-1092",
doi = "10.18653/v1/P19-1092",
pages = "960--966",
abstract = "We present HEAD-QA, a multi-choice question answering testbed to encourage research on complex reasoning. The questions come from exams to access a specialized position in the Spanish healthcare system, and are challenging even for highly specialized humans. We then consider monolingual (Spanish) and cross-lingual (to English) experiments with information retrieval and neural techniques. We show that: (i) HEAD-QA challenges current methods, and (ii) the results lag well behind human performance, demonstrating its usefulness as a benchmark for future work.",
}
```
### Contributions
Thanks to [@mariagrandury](https://github.com/mariagrandury) for adding this dataset. |
M-CLIP/ImageCaptions-7M-Embeddings | 2022-05-17T23:34:13.000Z | [
"region:us"
] | M-CLIP | null | null | null | 0 | 262 | Entry not found |
biglam/nls_chapbook_illustrations | 2023-02-15T16:11:54.000Z | [
"task_categories:object-detection",
"task_categories:image-classification",
"task_ids:multi-class-image-classification",
"annotations_creators:expert-generated",
"size_categories:1K<n<10K",
"license:other",
"lam",
"historic",
"arxiv:1405.0312",
"region:us"
] | biglam | null | @inproceedings{10.1145/3476887.3476893,
author = {Dutta, Abhishek and Bergel, Giles and Zisserman, Andrew},
title = {Visual Analysis of Chapbooks Printed in Scotland},
year = {2021},
isbn = {9781450386906},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3476887.3476893},
doi = {10.1145/3476887.3476893},
abstract = {Chapbooks were short, cheap printed booklets produced in large quantities in Scotland, England, Ireland, North America and much of Europe between roughly the seventeenth and nineteenth centuries. A form of popular literature containing songs, stories, poems, games, riddles, religious writings and other content designed to appeal to a wide readership, they were frequently illustrated, particularly on their title-pages. This paper describes the visual analysis of such chapbook illustrations. We automatically extract all the illustrations contained in the National Library of Scotland Chapbooks Printed in Scotland dataset, and create a visual search engine to search this dataset using full or part-illustrations as queries. We also cluster these illustrations based on their visual content, and provide keyword-based search of the metadata associated with each publication. The visual search; clustering of illustrations based on visual content; and metadata search features enable researchers to forensically analyse the chapbooks dataset and to discover unnoticed relationships between its elements. We release all annotations and software tools described in this paper to enable reproduction of the results presented and to allow extension of the methodology described to datasets of a similar nature.},
booktitle = {The 6th International Workshop on Historical Document Imaging and Processing},
pages = {67–72},
numpages = {6},
keywords = {illustration detection, chapbooks, image search, visual grouping, printing, digital scholarship, illustration dataset},
location = {Lausanne, Switzerland},
series = {HIP '21}
} | null | 7 | 262 | ---
annotations_creators:
- expert-generated
language_creators: []
license:
- other
multilinguality: []
pretty_name: National Library of Scotland Chapbook Illustrations
size_categories:
- 1K<n<10K
source_datasets: []
tags:
- lam
- historic
task_categories:
- object-detection
- image-classification
task_ids:
- multi-class-image-classification
---
# Dataset Card for National Library of Scotland Chapbook Illustrations
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.robots.ox.ac.uk/~vgg/research/chapbooks/
- **Repository:** https://data.nls.uk/data/digitised-collections/chapbooks-printed-in-scotland/
- **Paper:** https://www.robots.ox.ac.uk/~vgg/research/chapbooks/data/dutta2021visual.pdf
- **Leaderboard:**
- **Point of Contact:** giles.bergel@eng.ox.ac.uk
### Dataset Summary
This dataset comprises of images from chapbooks held by the [National Library of Scotland](https://www.nls.uk/) and digitised and published as its [Chapbooks Printed in Scotland](https://data.nls.uk/data/digitised-collections/chapbooks-printed-in-scotland/) dataset.
> "Chapbooks were staple everyday reading material from the end of the 17th to the later 19th century. They were usually printed on a single sheet and then folded into books of 8, 12, 16 and 24 pages, and they were often illustrated with crude woodcuts. Their subjects range from news courtship, humour, occupations, fairy tales, apparitions, war, politics, crime, executions, historical figures, transvestites [*sic*] and freemasonry to religion and, of course, poetry. It has been estimated that around two thirds of chapbooks contain songs and poems, often under the title garlands." -[Source](https://data.nls.uk/data/digitised-collections/chapbooks-printed-in-scotland/)
Chapbooks were frequently illustrated, particularly on their title pages to attract customers, usually with a woodblock-printed illustration, or occasionally with a stereotyped woodcut or cast metal ornament. Apart from their artistic interest, these illustrations can also provide historical evidence such as the date, place or persons behind the publication of an item.
This dataset contains annotations for a subset of these chapbooks, created by Giles Bergel and Abhishek Dutta, based in the [Visual Geometry Group](https://www.robots.ox.ac.uk/~vgg/) in the University of Oxford. They were created under a National Librarian of Scotland's Fellowship in Digital Scholarship [awarded](https://data.nls.uk/projects/the-national-librarians-research-fellowship-in-digital-scholarship/) to Giles Bergel in 2020. These annotations provide bounding boxes around illustrations printed on a subset of the chapbook pages, created using a combination of manual annotation and machine classification, described in [this paper](https://www.robots.ox.ac.uk/~vgg/research/chapbooks/data/dutta2021visual.pdf).
The dataset also includes computationally inferred 'visual groupings' to which illustrated chapbook pages may belong. These groupings are based on the recurrence of illustrations on chapbook pages, as determined through the use of the [VGG Image Search Engine (VISE) software](https://www.robots.ox.ac.uk/~vgg/software/vise/)
### Supported Tasks and Leaderboards
- `object-detection`: the dataset contains bounding boxes for images contained in the Chapbooks
- `image-classification`: a configuration for this dataset provides a classification label indicating if a page contains an illustration or not.
- `image-matching`: a configuration for this dataset contains the annotations sorted into clusters or 'visual groupings' of illustrations that contain visually-matching content as determined by using the [VGG Image Search Engine (VISE) software](https://www.robots.ox.ac.uk/~vgg/software/vise/).
The performance on the `object-detection` task reported in the paper [Visual Analysis of Chapbooks Printed in Scotland](https://dl.acm.org/doi/10.1145/3476887.3476893) is as follows:
| IOU threshold | Precision | Recall |
|---------------|-----------|--------|
| 0.50 | 0.993 | 0.911 |
| 0.75 | 0.987 | 0.905 |
| 0.95 | 0.973 | 0.892 |
The performance on the `image classification` task reported in the paper [Visual Analysis of Chapbooks Printed in Scotland](https://dl.acm.org/doi/10.1145/3476887.3476893) is as follows:
Images in original dataset: 47329
Numbers of images on which at least one illustration was detected: 3629
Note that these figures do not represent images that contained multiple detections.
See the [paper](https://dl.acm.org/doi/10.1145/3476887.3476893) for examples of false-positive detections.
The performance on the 'image-matching' task is undergoing evaluation.
### Languages
Text accompanying the illustrations is in English, Scots or Scottish Gaelic.
## Dataset Structure
### Data Instances
An example instance from the `illustration-detection` split:
```python
{'image_id': 4,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=600x1080>,
'width': 600,
'height': 1080,
'objects': [{'category_id': 0,
'image_id': '4',
'id': 1,
'area': 110901,
'bbox': [34.529998779296875,
556.8300170898438,
401.44000244140625,
276.260009765625],
'segmentation': [[34.529998779296875,
556.8300170898438,
435.9700012207031,
556.8300170898438,
435.9700012207031,
833.0900268554688,
34.529998779296875,
833.0900268554688]],
'iscrowd': False}]}
```
An example instance from the `image-classification` split:
```python
{'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=600x1080>,
'label': 1}
```
An example from the `image-matching` split:
```python
{'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=600x1080>,
'group-label': 231}
```
### Data Fields
The fields for the `illustration-detection` config:
- image_id: id for the image
- height: height of the image
- width: width of the image
- image: image of the chapbook page
- objects: annotations in COCO format, consisting of a list containing dictionaries with the following keys:
- bbox: bounding boxes for the images
- category_id: a label for the image
- image_id: id for the image
- iscrowd: COCO is a crowd flag
- segmentation: COCO segmentation annotations (empty in this case but kept for compatibility with other processing scripts)
The fields for the `image-classification` config:
- image: image
- label: a label indicating if the page contains an illustration or not
The fields for the `image-matching` config:
- image: image of the chapbook page
- label: an id for a particular instance of an image i.e. the same images will share the same id.
### Data Splits
There is a single split `train` for all configs. K-fold validation was used in the [paper](https://dl.acm.org/doi/10.1145/3476887.3476893) describing this dataset, so no existing splits were defined.
## Dataset Creation
### Curation Rationale
The dataset was created to facilitate research into Scottish chapbook illustration and publishing. Detected illustrations can be browsed under publication metadata: together with the use of [VGG Image Search Engine (VISE) software](https://www.robots.ox.ac.uk/~vgg/software/vise/), this allows researchers to identify matching imagery and to infer the source of a chapbook from partial evidence. This browse and search functionality is available in this [public demo](http://meru.robots.ox.ac.uk/nls_chapbooks/filelist) documented [here](https://www.robots.ox.ac.uk/~vgg/research/chapbooks/)
### Source Data
#### Initial Data Collection and Normalization
The initial data was taken from the [National Library of Scotland's Chapbooks Printed in Scotland dataset](https://data.nls.uk/data/digitised-collections/chapbooks-printed-in-scotland/) No normalisation was performed, but only the images and a subset of the metadata was used. OCR text was not used.
#### Who are the source language producers?
The initial dataset was created by the National Library of Scotland from scans and in-house curated catalogue descriptions for the NLS [Data Foundry](https://data.nls.uk) under the direction of Dr. Sarah Ames.
This subset of the data was created by Dr. Giles Bergel and Dr. Abhishek Dutta using a combination of manual annotation and machine classification, described below.
### Annotations
#### Annotation process
Annotation was initially performed on a subset of 337 of the 47329 images, using the [VGG List Annotator (LISA](https://gitlab.com/vgg/lisa) software. Detected illustrations, displayed as annotations in LISA, were reviewed and refined in a number of passes (see [this paper](https://dl.acm.org/doi/10.1145/3476887.3476893) for more details). Initial detections were performed with an [EfficientDet](https://ai.googleblog.com/2020/04/efficientdet-towards-scalable-and.html) object detector trained on [COCO](https://cocodataset.org/#home), the annotation of which is described in [this paper](https://arxiv.org/abs/1405.0312)
#### Who are the annotators?
Abhishek Dutta created the initial 337 annotations for retraining the EfficentDet model. Detections were reviewed and in some cases revised by Giles Bergel.
### Personal and Sensitive Information
None
## Considerations for Using the Data
### Social Impact of Dataset
We believe this dataset will assist in the training and benchmarking of illustration detectors. It is hoped that by automating a task that would otherwise require manual annotation it will save researchers time and labour in preparing data for both machine and human analysis. The dataset in question is based on a category of popular literature that reflected the learning, tastes and cultural faculties of both its large audiences and its largely-unknown creators - we hope that its use, reuse and adaptation will highlight the importance of cheap chapbooks in the spread of literature, knowledge and entertainment in both urban and rural regions of Scotland and the United Kingdom during this period.
### Discussion of Biases
While the original Chapbooks Printed in Scotland is the largest single collection of digitised chapbooks, it is as yet unknown if it is fully representative of all chapbooks printed in Scotland, or of cheap printed literature in general. It is known that a small number of chapbooks (less than 0.1%) within the original collection were not printed in Scotland but this is not expected to have a significant impact on the profile of the collection as a representation of the population of chapbooks as a whole.
The definition of an illustration as opposed to an ornament or other non-textual printed feature is somewhat arbitrary: edge-cases were evaluated by conformance with features that are most characteristic of the chapbook genre as a whole in terms of content, style or placement on the page.
As there is no consensus definition of the chapbook even among domain specialists, the composition of the original dataset is based on the judgement of those who assembled and curated the original collection.
### Other Known Limitations
Within this dataset, illustrations are repeatedly reused to an unusually high degree compared to other printed forms. The positioning of illustrations on the page and the size and format of chapbooks as a whole is also characteristic of the chapbook format in particular. The extent to which these annotations may be generalised to other printed works is under evaluation: initial results have been promising for other letterpress illustrations surrounded by texts.
## Additional Information
### Dataset Curators
- Giles Bergel
- Abhishek Dutta
### Licensing Information
In accordance with the [original data](https://data.nls.uk/data/digitised-collections/chapbooks-printed-in-scotland/), this dataset is in the public domain.
### Citation Information
``` bibtex
@inproceedings{10.1145/3476887.3476893,
author = {Dutta, Abhishek and Bergel, Giles and Zisserman, Andrew},
title = {Visual Analysis of Chapbooks Printed in Scotland},
year = {2021},
isbn = {9781450386906},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3476887.3476893},
doi = {10.1145/3476887.3476893},
abstract = {Chapbooks were short, cheap printed booklets produced in large quantities in Scotland, England, Ireland, North America and much of Europe between roughly the seventeenth and nineteenth centuries. A form of popular literature containing songs, stories, poems, games, riddles, religious writings and other content designed to appeal to a wide readership, they were frequently illustrated, particularly on their title-pages. This paper describes the visual analysis of such chapbook illustrations. We automatically extract all the illustrations contained in the National Library of Scotland Chapbooks Printed in Scotland dataset, and create a visual search engine to search this dataset using full or part-illustrations as queries. We also cluster these illustrations based on their visual content, and provide keyword-based search of the metadata associated with each publication. The visual search; clustering of illustrations based on visual content; and metadata search features enable researchers to forensically analyse the chapbooks dataset and to discover unnoticed relationships between its elements. We release all annotations and software tools described in this paper to enable reproduction of the results presented and to allow extension of the methodology described to datasets of a similar nature.},
booktitle = {The 6th International Workshop on Historical Document Imaging and Processing},
pages = {67–72},
numpages = {6},
keywords = {illustration detection, chapbooks, image search, visual grouping, printing, digital scholarship, illustration dataset},
location = {Lausanne, Switzerland},
series = {HIP '21}
}
```
### Contributions
Thanks to [@davanstrien](https://github.com/davanstrien) and Giles Bergel for adding this dataset. |
lc_quad | 2023-04-05T10:09:15.000Z | [
"task_categories:question-answering",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-3.0",
"knowledge-base-qa",
"region:us"
] | null | LC-QuAD 2.0 is a Large Question Answering dataset with 30,000 pairs of question and its corresponding SPARQL query. The target knowledge base is Wikidata and DBpedia, specifically the 2018 version. Please see our paper for details about the dataset creation process and framework. | @inproceedings{dubey2017lc2,
title={LC-QuAD 2.0: A Large Dataset for Complex Question Answering over Wikidata and DBpedia},
author={Dubey, Mohnish and Banerjee, Debayan and Abdelkawi, Abdelrahman and Lehmann, Jens},
booktitle={Proceedings of the 18th International Semantic Web Conference (ISWC)},
year={2019},
organization={Springer}
} | null | 5 | 261 | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- found
license:
- cc-by-3.0
multilinguality:
- monolingual
pretty_name: 'LC-QuAD 2.0: Large-scale Complex Question Answering Dataset'
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids: []
paperswithcode_id: lc-quad-2-0
tags:
- knowledge-base-qa
dataset_info:
features:
- name: NNQT_question
dtype: string
- name: uid
dtype: int32
- name: subgraph
dtype: string
- name: template_index
dtype: int32
- name: question
dtype: string
- name: sparql_wikidata
dtype: string
- name: sparql_dbpedia18
dtype: string
- name: template
dtype: string
- name: paraphrased_question
dtype: string
splits:
- name: train
num_bytes: 16637751
num_examples: 19293
- name: test
num_bytes: 4067092
num_examples: 4781
download_size: 3959901
dataset_size: 20704843
---
# Dataset Card for LC-QuAD 2.0
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://lc-quad.sda.tech/](http://lc-quad.sda.tech/)
- **Repository:** https://github.com/AskNowQA/LC-QuAD2.0
- **Paper:** [LC-QuAD 2.0: A Large Dataset for Complex Question Answering over Wikidata and DBpedia](https://api.semanticscholar.org/CorpusID:198166992)
- **Point of Contact:** [Mohnish Dubey](mailto:dubey@cs.uni-bonn.de) or [Mohnish Dubey](mailto:dubey.mohnish5@gmail.com)
- **Size of downloaded dataset files:** 3.87 MB
- **Size of the generated dataset:** 20.73 MB
- **Total amount of disk used:** 24.60 MB
### Dataset Summary
LC-QuAD 2.0 is a Large Question Answering dataset with 30,000 pairs of question and its corresponding SPARQL query. The target knowledge base is Wikidata and DBpedia, specifically the 2018 version. Please see our paper for details about the dataset creation process and framework.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 3.87 MB
- **Size of the generated dataset:** 20.73 MB
- **Total amount of disk used:** 24.60 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"NNQT_question": "What is the {periodical literature} for {mouthpiece} of {Delta Air Lines}",
"paraphrased_question": "What is Delta Air Line's periodical literature mouthpiece?",
"question": "What periodical literature does Delta Air Lines use as a moutpiece?",
"sparql_dbpedia18": "\"select distinct ?obj where { ?statement <http://www.w3.org/1999/02/22-rdf-syntax-ns#subject> <http://wikidata.dbpedia.org/resou...",
"sparql_wikidata": " select distinct ?obj where { wd:Q188920 wdt:P2813 ?obj . ?obj wdt:P31 wd:Q1002697 } ",
"subgraph": "simple question right",
"template": " <S P ?O ; ?O instanceOf Type>",
"template_index": 65,
"uid": 19719
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `NNQT_question`: a `string` feature.
- `uid`: a `int32` feature.
- `subgraph`: a `string` feature.
- `template_index`: a `int32` feature.
- `question`: a `string` feature.
- `sparql_wikidata`: a `string` feature.
- `sparql_dbpedia18`: a `string` feature.
- `template`: a `string` feature.
- `paraphrased_question`: a `string` feature.
### Data Splits
| name |train|test|
|-------|----:|---:|
|default|19293|4781|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
LC-QuAD 2.0 is licensed under a [Creative Commons Attribution 3.0 Unported License](http://creativecommons.org/licenses/by/3.0/deed.en_US).
### Citation Information
```
@inproceedings{dubey2017lc2,
title={LC-QuAD 2.0: A Large Dataset for Complex Question Answering over Wikidata and DBpedia},
author={Dubey, Mohnish and Banerjee, Debayan and Abdelkawi, Abdelrahman and Lehmann, Jens},
booktitle={Proceedings of the 18th International Semantic Web Conference (ISWC)},
year={2019},
organization={Springer}
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. |
jamescalam/image-text-demo | 2023-02-06T05:29:49.000Z | [
"region:us"
] | jamescalam | Demo dataset for testing or showing image-text capabilities. | @InProceedings{huggingface:dataset,
title = {Small image-text set},
author={James Briggs},
year={2022}
} | null | 0 | 261 | Entry not found |
Birchlabs/openai-prm800k-solutions-only | 2023-06-04T21:56:21.000Z | [
"license:mit",
"region:us"
] | Birchlabs | null | null | null | 5 | 261 | ---
license: mit
---
|
zxvix/pubmed_rap_2 | 2023-09-09T09:59:11.000Z | [
"region:us"
] | zxvix | null | null | null | 0 | 261 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: MedlineCitation
struct:
- name: PMID
dtype: int32
- name: DateCompleted
struct:
- name: Year
dtype: int32
- name: Month
dtype: int32
- name: Day
dtype: int32
- name: NumberOfReferences
dtype: int32
- name: DateRevised
struct:
- name: Year
dtype: int32
- name: Month
dtype: int32
- name: Day
dtype: int32
- name: Article
struct:
- name: Abstract
struct:
- name: AbstractText
dtype: string
- name: ArticleTitle
dtype: string
- name: AuthorList
struct:
- name: Author
sequence:
- name: LastName
dtype: string
- name: ForeName
dtype: string
- name: Initials
dtype: string
- name: CollectiveName
dtype: string
- name: Language
dtype: string
- name: GrantList
struct:
- name: Grant
sequence:
- name: GrantID
dtype: string
- name: Agency
dtype: string
- name: Country
dtype: string
- name: PublicationTypeList
struct:
- name: PublicationType
sequence: string
- name: MedlineJournalInfo
struct:
- name: Country
dtype: string
- name: ChemicalList
struct:
- name: Chemical
sequence:
- name: RegistryNumber
dtype: string
- name: NameOfSubstance
dtype: string
- name: CitationSubset
dtype: string
- name: MeshHeadingList
struct:
- name: MeshHeading
sequence:
- name: DescriptorName
dtype: string
- name: QualifierName
dtype: string
- name: PubmedData
struct:
- name: ArticleIdList
sequence:
- name: ArticleId
sequence: string
- name: PublicationStatus
dtype: string
- name: History
struct:
- name: PubMedPubDate
sequence:
- name: Year
dtype: int32
- name: Month
dtype: int32
- name: Day
dtype: int32
- name: ReferenceList
sequence:
- name: Citation
dtype: string
- name: CitationId
dtype: int32
- name: text
dtype: string
- name: title
dtype: string
- name: original_text
dtype: string
splits:
- name: test
num_bytes: 4134922.0
num_examples: 1000
download_size: 2347554
dataset_size: 4134922.0
---
# Dataset Card for "pubmed_rap_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
medalpaca/medical_meadow_cord19 | 2023-04-06T16:47:03.000Z | [
"task_categories:summarization",
"size_categories:100K<n<1M",
"language:en",
"region:us"
] | medalpaca | null | null | null | 3 | 260 | ---
task_categories:
- summarization
language:
- en
size_categories:
- 100K<n<1M
---
# CORD 19
## Dataset Description
- **Homepage:** https://www.kaggle.com/datasets/allen-institute-for-ai/CORD-19-research-challenge
### Dataset Summary
In response to the COVID-19 pandemic, the White House and a coalition of leading research groups have prepared the COVID-19 Open Research Dataset (CORD-19). CORD-19 is a resource of over 1,000,000 scholarly articles, including over 400,000 with full text, about COVID-19, SARS-CoV-2, and related coronaviruses. This freely available dataset is provided to the global research community to apply recent advances in natural language processing and other AI techniques to generate new insights in support of the ongoing fight against this infectious disease.
This is a processed version of the dataset, where we removed some empty entries and formated it to be compatible with the alpaca training. For more details on the data, please refer to the original publicatio.
### Citation Information
```
@inproceedings{wang-etal-2020-cord,
title = "{CORD-19}: The {COVID-19} Open Research Dataset",
author = "Wang, Lucy Lu and Lo, Kyle and Chandrasekhar, Yoganand and Reas, Russell and Yang, Jiangjiang and Burdick, Doug and Eide, Darrin and Funk, Kathryn and Katsis, Yannis and Kinney, Rodney Michael and Li, Yunyao and Liu, Ziyang and Merrill, William and Mooney, Paul and Murdick, Dewey A. and Rishi, Devvret and Sheehan, Jerry and Shen, Zhihong and Stilson, Brandon and Wade, Alex D. and Wang, Kuansan and Wang, Nancy Xin Ru and Wilhelm, Christopher and Xie, Boya and Raymond, Douglas M. and Weld, Daniel S. and Etzioni, Oren and Kohlmeier, Sebastian",
booktitle = "Proceedings of the 1st Workshop on {NLP} for {COVID-19} at {ACL} 2020",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.nlpcovid19-acl.1"
}
``` |
MBZUAI/LaMini-instruction | 2023-04-30T11:01:41.000Z | [
"task_categories:text2text-generation",
"size_categories:1M<n<10M",
"language:en",
"license:cc-by-nc-4.0",
"arxiv:2304.14402",
"region:us"
] | MBZUAI | null | null | null | 105 | 259 | ---
license: cc-by-nc-4.0
task_categories:
- text2text-generation
language:
- en
size_categories:
- 1M<n<10M
dataset_info:
features:
- name: instruction
dtype: string
- name: response
dtype: string
- name: instruction_source
dtype: string
splits:
- name: train
num_bytes: 1162632572
num_examples: 2585615
download_size: 704293718
dataset_size: 1162632572
---
# Dataset Card for "LaMini-Instruction"
<p align="center" width="100%">
<a><img src="https://raw.githubusercontent.com/mbzuai-nlp/lamini-lm/main/images/lamini.png" alt="Title" style="width: 100%; min-width: 300px; display: block; margin: auto;"></a>
</p>
<p align="center"> <a href="https://twitter.com/WuMinghao_nlp" target="_blank">Minghao Wu</a>, <a href="https://twitter.com/MacAbdul9" target="_blank">Abdul Waheed</a>, <a href="https://chiyuzhang94.github.io/" target="_blank">Chiyu Zhang</a>, <a href="https://mageed.arts.ubc.ca/" target="_blank">Muhammad Abdul-Mageed</a>, <a href="https://afaji.github.io/" target="_blank">Alham Fikri Aji</a>,</p>
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Instances](#data-instances)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Additional Information](#additional-information)
## Dataset Description
We distill the knowledge from large language models by performing sentence/offline distillation (Kim and Rush, 2016). We generate a total of **2.58M** pairs of instructions and responses using [`gpt-3.5-turbo`](https://openai.com/api/) based on several existing resources of prompts, including [self-instruct](https://github.com/yizhongw/self-instruct) (Wang et al., 2022), [P3](https://huggingface.co/datasets/bigscience/P3) (Sanh et al., 2022), [FLAN](https://github.com/google-research/FLAN) (Longpre et al., 2023) and [Alpaca](https://github.com/tatsu-lab/stanford_alpaca) (Taori et al., 2023). More information about the process for generating our instruction dataset, please refer to [our paper](https://arxiv.org/abs/2304.14402).
We use LaMini-Instruction to fine-tune various small-sized language models, resulting in [LaMini-LM model series](https://github.com/mbzuai-nlp/laMini-LM#models).
Each sample contain the following two fields:
* `instruction`: `string`, describes the task that the model is expected to perform.
* `response`: `string`, the response to the instruction as generated by `gpt-3.5-turbo`.
* `instruction_source`: `string`, the source of a instruction. The possible values are `{'alpaca', 'generated_flan', 'generated_p3', 'original_flan', 'original_p3', 'self_instruct_with_topic', 'self_instruct_without_topic'}`.
Two examples in our LaMini-instruction data:
```json
{
"instruction": "What is the capital of Australia?\n\nIs Canberra the only city chosen to serve as a capital due to a compromise between Sydney and Melbourne?",
"response": "The capital of Australia is Canberra. Yes, Canberra was chosen as the capital of Australia in 1908 due to a compromise between Sydney and Melbourne, as neither city wanted the other to become the capital.",
"instruction_source": "generated_flan"
}
```
```json
{
"instruction": "Create a poem that has the word \"ocean\" in every other line.",
"response": "The ocean's depth is a mystery,\nIts waves crashing with energy.\nSalt air fills the coastal scene,\nThe ocean's blue hues pure and clean.\n\nSeagulls cry out a melancholy song,\nThe ocean's rhythm steady and strong.\nSunsets paint the water gold,\nThe ocean's secrets forever untold.",
"instruction_source": "self_instruct_without_topic"
}
```
## Considerations for Using the Data
As the output is distilled from ChatGPT, this data contains errors and biases produced by ChatGPT. Models trained with this dataset will inherit the errors and biases. We encourage users to use this data with caution and propose new methods to filter or improve the imperfections.
### Licensing Information
The dataset is available under the [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode).
### Citation Information
Please cite us if you use our data or models.
```bibtex
@article{lamini-lm,
author = {Minghao Wu and
Abdul Waheed and
Chiyu Zhang and
Muhammad Abdul-Mageed and
Alham Fikri Aji
},
title = {LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions},
journal = {CoRR},
volume = {abs/2304.14402},
year = {2023},
url = {https://arxiv.org/abs/2304.14402},
eprinttype = {arXiv},
eprint = {2304.14402}
}
``` |
yuvalkirstain/pickapic_v1_no_images | 2023-04-16T14:53:35.000Z | [
"region:us"
] | yuvalkirstain | null | null | null | 0 | 259 | ---
dataset_info:
features:
- name: are_different
dtype: bool
- name: best_image_uid
dtype: string
- name: caption
dtype: string
- name: created_at
dtype: timestamp[ns]
- name: has_label
dtype: bool
- name: image_0_uid
dtype: string
- name: image_0_url
dtype: string
- name: image_1_uid
dtype: string
- name: image_1_url
dtype: string
- name: label_0
dtype: float64
- name: label_1
dtype: float64
- name: model_0
dtype: string
- name: model_1
dtype: string
- name: ranking_id
dtype: int64
- name: user_id
dtype: int64
- name: num_example_per_prompt
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 308923563
num_examples: 583747
- name: validation
num_bytes: 8759568
num_examples: 17439
- name: test
num_bytes: 7194410
num_examples: 14073
- name: validation_unique
num_bytes: 248229
num_examples: 500
- name: test_unique
num_bytes: 256313
num_examples: 500
download_size: 175013617
dataset_size: 325382083
---
# Dataset Card for "pick_a_pic_v1_no_images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_adonlee__LLaMA_2_70B_LoRA | 2023-09-22T21:37:15.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | null | 0 | 259 | ---
pretty_name: Evaluation run of adonlee/LLaMA_2_70B_LoRA
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [adonlee/LLaMA_2_70B_LoRA](https://huggingface.co/adonlee/LLaMA_2_70B_LoRA) on\
\ the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 61 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_adonlee__LLaMA_2_70B_LoRA\"\
,\n\t\"harness_truthfulqa_mc_0\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\
\nThese are the [latest results from run 2023-09-22T21:35:51.410251](https://huggingface.co/datasets/open-llm-leaderboard/details_adonlee__LLaMA_2_70B_LoRA/blob/main/results_2023-09-22T21-35-51.410251.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.7077096775676626,\n\
\ \"acc_stderr\": 0.030867670314758275,\n \"acc_norm\": 0.7114995822621553,\n\
\ \"acc_norm_stderr\": 0.030836833292351554,\n \"mc1\": 0.4663402692778458,\n\
\ \"mc1_stderr\": 0.017463793867168106,\n \"mc2\": 0.6451679386365279,\n\
\ \"mc2_stderr\": 0.014753028795637621\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6902730375426621,\n \"acc_stderr\": 0.013512058415238361,\n\
\ \"acc_norm\": 0.726962457337884,\n \"acc_norm_stderr\": 0.013019332762635743\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6886078470424218,\n\
\ \"acc_stderr\": 0.004621163476949205,\n \"acc_norm\": 0.8755228042222665,\n\
\ \"acc_norm_stderr\": 0.003294504807555228\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.35,\n \"acc_stderr\": 0.0479372485441102,\n \
\ \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n\
\ \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6370370370370371,\n\
\ \"acc_stderr\": 0.041539484047424,\n \"acc_norm\": 0.6370370370370371,\n\
\ \"acc_norm_stderr\": 0.041539484047424\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.8223684210526315,\n \"acc_stderr\": 0.03110318238312338,\n\
\ \"acc_norm\": 0.8223684210526315,\n \"acc_norm_stderr\": 0.03110318238312338\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.76,\n\
\ \"acc_stderr\": 0.04292346959909283,\n \"acc_norm\": 0.76,\n \
\ \"acc_norm_stderr\": 0.04292346959909283\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.7358490566037735,\n \"acc_stderr\": 0.02713429162874171,\n\
\ \"acc_norm\": 0.7358490566037735,\n \"acc_norm_stderr\": 0.02713429162874171\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.8263888888888888,\n\
\ \"acc_stderr\": 0.03167473383795718,\n \"acc_norm\": 0.8263888888888888,\n\
\ \"acc_norm_stderr\": 0.03167473383795718\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.49,\n \"acc_stderr\": 0.05024183937956912,\n \
\ \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956912\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.56,\n \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\": 0.56,\n\
\ \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.41,\n \"acc_stderr\": 0.04943110704237102,\n \
\ \"acc_norm\": 0.41,\n \"acc_norm_stderr\": 0.04943110704237102\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6936416184971098,\n\
\ \"acc_stderr\": 0.03514942551267439,\n \"acc_norm\": 0.6936416184971098,\n\
\ \"acc_norm_stderr\": 0.03514942551267439\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.37254901960784315,\n \"acc_stderr\": 0.048108401480826346,\n\
\ \"acc_norm\": 0.37254901960784315,\n \"acc_norm_stderr\": 0.048108401480826346\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.78,\n \"acc_stderr\": 0.04163331998932263,\n \"acc_norm\": 0.78,\n\
\ \"acc_norm_stderr\": 0.04163331998932263\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.7106382978723405,\n \"acc_stderr\": 0.02964400657700962,\n\
\ \"acc_norm\": 0.7106382978723405,\n \"acc_norm_stderr\": 0.02964400657700962\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.45614035087719296,\n\
\ \"acc_stderr\": 0.04685473041907789,\n \"acc_norm\": 0.45614035087719296,\n\
\ \"acc_norm_stderr\": 0.04685473041907789\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.6206896551724138,\n \"acc_stderr\": 0.04043461861916746,\n\
\ \"acc_norm\": 0.6206896551724138,\n \"acc_norm_stderr\": 0.04043461861916746\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.47619047619047616,\n \"acc_stderr\": 0.02572209706438853,\n \"\
acc_norm\": 0.47619047619047616,\n \"acc_norm_stderr\": 0.02572209706438853\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5079365079365079,\n\
\ \"acc_stderr\": 0.044715725362943486,\n \"acc_norm\": 0.5079365079365079,\n\
\ \"acc_norm_stderr\": 0.044715725362943486\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.47,\n \"acc_stderr\": 0.050161355804659205,\n \
\ \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.050161355804659205\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.8096774193548387,\n \"acc_stderr\": 0.022331707611823078,\n \"\
acc_norm\": 0.8096774193548387,\n \"acc_norm_stderr\": 0.022331707611823078\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.5714285714285714,\n \"acc_stderr\": 0.034819048444388045,\n \"\
acc_norm\": 0.5714285714285714,\n \"acc_norm_stderr\": 0.034819048444388045\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.78,\n \"acc_stderr\": 0.04163331998932262,\n \"acc_norm\"\
: 0.78,\n \"acc_norm_stderr\": 0.04163331998932262\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.8545454545454545,\n \"acc_stderr\": 0.027530196355066584,\n\
\ \"acc_norm\": 0.8545454545454545,\n \"acc_norm_stderr\": 0.027530196355066584\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.898989898989899,\n \"acc_stderr\": 0.021469735576055343,\n \"\
acc_norm\": 0.898989898989899,\n \"acc_norm_stderr\": 0.021469735576055343\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.9326424870466321,\n \"acc_stderr\": 0.0180883938390789,\n\
\ \"acc_norm\": 0.9326424870466321,\n \"acc_norm_stderr\": 0.0180883938390789\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.7102564102564103,\n \"acc_stderr\": 0.023000628243687968,\n\
\ \"acc_norm\": 0.7102564102564103,\n \"acc_norm_stderr\": 0.023000628243687968\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.337037037037037,\n \"acc_stderr\": 0.028820884666253252,\n \
\ \"acc_norm\": 0.337037037037037,\n \"acc_norm_stderr\": 0.028820884666253252\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.7815126050420168,\n \"acc_stderr\": 0.02684151432295893,\n \
\ \"acc_norm\": 0.7815126050420168,\n \"acc_norm_stderr\": 0.02684151432295893\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.4900662251655629,\n \"acc_stderr\": 0.04081677107248436,\n \"\
acc_norm\": 0.4900662251655629,\n \"acc_norm_stderr\": 0.04081677107248436\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.9009174311926605,\n \"acc_stderr\": 0.01280978008187893,\n \"\
acc_norm\": 0.9009174311926605,\n \"acc_norm_stderr\": 0.01280978008187893\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5833333333333334,\n \"acc_stderr\": 0.033622774366080424,\n \"\
acc_norm\": 0.5833333333333334,\n \"acc_norm_stderr\": 0.033622774366080424\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.9019607843137255,\n \"acc_stderr\": 0.0208711184555521,\n \"acc_norm\"\
: 0.9019607843137255,\n \"acc_norm_stderr\": 0.0208711184555521\n },\n\
\ \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\":\
\ 0.8818565400843882,\n \"acc_stderr\": 0.02101105265987847,\n \"\
acc_norm\": 0.8818565400843882,\n \"acc_norm_stderr\": 0.02101105265987847\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7847533632286996,\n\
\ \"acc_stderr\": 0.027584066602208274,\n \"acc_norm\": 0.7847533632286996,\n\
\ \"acc_norm_stderr\": 0.027584066602208274\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.8473282442748091,\n \"acc_stderr\": 0.031545216720054725,\n\
\ \"acc_norm\": 0.8473282442748091,\n \"acc_norm_stderr\": 0.031545216720054725\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.8760330578512396,\n \"acc_stderr\": 0.030083098716035202,\n \"\
acc_norm\": 0.8760330578512396,\n \"acc_norm_stderr\": 0.030083098716035202\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8425925925925926,\n\
\ \"acc_stderr\": 0.035207039905179635,\n \"acc_norm\": 0.8425925925925926,\n\
\ \"acc_norm_stderr\": 0.035207039905179635\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.8466257668711656,\n \"acc_stderr\": 0.0283116014414386,\n\
\ \"acc_norm\": 0.8466257668711656,\n \"acc_norm_stderr\": 0.0283116014414386\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5714285714285714,\n\
\ \"acc_stderr\": 0.04697113923010213,\n \"acc_norm\": 0.5714285714285714,\n\
\ \"acc_norm_stderr\": 0.04697113923010213\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.8252427184466019,\n \"acc_stderr\": 0.03760178006026621,\n\
\ \"acc_norm\": 0.8252427184466019,\n \"acc_norm_stderr\": 0.03760178006026621\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.9145299145299145,\n\
\ \"acc_stderr\": 0.01831589168562585,\n \"acc_norm\": 0.9145299145299145,\n\
\ \"acc_norm_stderr\": 0.01831589168562585\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \
\ \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.04351941398892446\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8697318007662835,\n\
\ \"acc_stderr\": 0.012036729568216054,\n \"acc_norm\": 0.8697318007662835,\n\
\ \"acc_norm_stderr\": 0.012036729568216054\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7687861271676301,\n \"acc_stderr\": 0.022698657167855713,\n\
\ \"acc_norm\": 0.7687861271676301,\n \"acc_norm_stderr\": 0.022698657167855713\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.646927374301676,\n\
\ \"acc_stderr\": 0.01598420454526858,\n \"acc_norm\": 0.646927374301676,\n\
\ \"acc_norm_stderr\": 0.01598420454526858\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7516339869281046,\n \"acc_stderr\": 0.024739981355113592,\n\
\ \"acc_norm\": 0.7516339869281046,\n \"acc_norm_stderr\": 0.024739981355113592\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7684887459807074,\n\
\ \"acc_stderr\": 0.023956532766639133,\n \"acc_norm\": 0.7684887459807074,\n\
\ \"acc_norm_stderr\": 0.023956532766639133\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.8271604938271605,\n \"acc_stderr\": 0.02103851777015737,\n\
\ \"acc_norm\": 0.8271604938271605,\n \"acc_norm_stderr\": 0.02103851777015737\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.599290780141844,\n \"acc_stderr\": 0.029233465745573096,\n \
\ \"acc_norm\": 0.599290780141844,\n \"acc_norm_stderr\": 0.029233465745573096\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.5814863102998696,\n\
\ \"acc_stderr\": 0.012599505608336482,\n \"acc_norm\": 0.5814863102998696,\n\
\ \"acc_norm_stderr\": 0.012599505608336482\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.7316176470588235,\n \"acc_stderr\": 0.026917481224377204,\n\
\ \"acc_norm\": 0.7316176470588235,\n \"acc_norm_stderr\": 0.026917481224377204\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.7679738562091504,\n \"acc_stderr\": 0.017077373377856933,\n \
\ \"acc_norm\": 0.7679738562091504,\n \"acc_norm_stderr\": 0.017077373377856933\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7454545454545455,\n\
\ \"acc_stderr\": 0.041723430387053825,\n \"acc_norm\": 0.7454545454545455,\n\
\ \"acc_norm_stderr\": 0.041723430387053825\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.8081632653061225,\n \"acc_stderr\": 0.025206963154225395,\n\
\ \"acc_norm\": 0.8081632653061225,\n \"acc_norm_stderr\": 0.025206963154225395\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8756218905472637,\n\
\ \"acc_stderr\": 0.023335401790166323,\n \"acc_norm\": 0.8756218905472637,\n\
\ \"acc_norm_stderr\": 0.023335401790166323\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.86,\n \"acc_stderr\": 0.03487350880197769,\n \
\ \"acc_norm\": 0.86,\n \"acc_norm_stderr\": 0.03487350880197769\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5301204819277109,\n\
\ \"acc_stderr\": 0.03885425420866767,\n \"acc_norm\": 0.5301204819277109,\n\
\ \"acc_norm_stderr\": 0.03885425420866767\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8713450292397661,\n \"acc_stderr\": 0.02567934272327692,\n\
\ \"acc_norm\": 0.8713450292397661,\n \"acc_norm_stderr\": 0.02567934272327692\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4663402692778458,\n\
\ \"mc1_stderr\": 0.017463793867168106,\n \"mc2\": 0.6451679386365279,\n\
\ \"mc2_stderr\": 0.014753028795637621\n }\n}\n```"
repo_url: https://huggingface.co/adonlee/LLaMA_2_70B_LoRA
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|arc:challenge|25_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hellaswag|10_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-management|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-virology|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-22T21-35-51.410251.parquet'
- config_name: results
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- results_2023-09-22T21-35-51.410251.parquet
- split: latest
path:
- results_2023-09-22T21-35-51.410251.parquet
---
# Dataset Card for Evaluation run of adonlee/LLaMA_2_70B_LoRA
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/adonlee/LLaMA_2_70B_LoRA
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [adonlee/LLaMA_2_70B_LoRA](https://huggingface.co/adonlee/LLaMA_2_70B_LoRA) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_adonlee__LLaMA_2_70B_LoRA",
"harness_truthfulqa_mc_0",
split="train")
```
## Latest results
These are the [latest results from run 2023-09-22T21:35:51.410251](https://huggingface.co/datasets/open-llm-leaderboard/details_adonlee__LLaMA_2_70B_LoRA/blob/main/results_2023-09-22T21-35-51.410251.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.7077096775676626,
"acc_stderr": 0.030867670314758275,
"acc_norm": 0.7114995822621553,
"acc_norm_stderr": 0.030836833292351554,
"mc1": 0.4663402692778458,
"mc1_stderr": 0.017463793867168106,
"mc2": 0.6451679386365279,
"mc2_stderr": 0.014753028795637621
},
"harness|arc:challenge|25": {
"acc": 0.6902730375426621,
"acc_stderr": 0.013512058415238361,
"acc_norm": 0.726962457337884,
"acc_norm_stderr": 0.013019332762635743
},
"harness|hellaswag|10": {
"acc": 0.6886078470424218,
"acc_stderr": 0.004621163476949205,
"acc_norm": 0.8755228042222665,
"acc_norm_stderr": 0.003294504807555228
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.35,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.35,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6370370370370371,
"acc_stderr": 0.041539484047424,
"acc_norm": 0.6370370370370371,
"acc_norm_stderr": 0.041539484047424
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.8223684210526315,
"acc_stderr": 0.03110318238312338,
"acc_norm": 0.8223684210526315,
"acc_norm_stderr": 0.03110318238312338
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.76,
"acc_stderr": 0.04292346959909283,
"acc_norm": 0.76,
"acc_norm_stderr": 0.04292346959909283
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7358490566037735,
"acc_stderr": 0.02713429162874171,
"acc_norm": 0.7358490566037735,
"acc_norm_stderr": 0.02713429162874171
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.8263888888888888,
"acc_stderr": 0.03167473383795718,
"acc_norm": 0.8263888888888888,
"acc_norm_stderr": 0.03167473383795718
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.49,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.49,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.56,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.56,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.41,
"acc_stderr": 0.04943110704237102,
"acc_norm": 0.41,
"acc_norm_stderr": 0.04943110704237102
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6936416184971098,
"acc_stderr": 0.03514942551267439,
"acc_norm": 0.6936416184971098,
"acc_norm_stderr": 0.03514942551267439
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.37254901960784315,
"acc_stderr": 0.048108401480826346,
"acc_norm": 0.37254901960784315,
"acc_norm_stderr": 0.048108401480826346
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.78,
"acc_stderr": 0.04163331998932263,
"acc_norm": 0.78,
"acc_norm_stderr": 0.04163331998932263
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.7106382978723405,
"acc_stderr": 0.02964400657700962,
"acc_norm": 0.7106382978723405,
"acc_norm_stderr": 0.02964400657700962
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.45614035087719296,
"acc_stderr": 0.04685473041907789,
"acc_norm": 0.45614035087719296,
"acc_norm_stderr": 0.04685473041907789
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.6206896551724138,
"acc_stderr": 0.04043461861916746,
"acc_norm": 0.6206896551724138,
"acc_norm_stderr": 0.04043461861916746
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.47619047619047616,
"acc_stderr": 0.02572209706438853,
"acc_norm": 0.47619047619047616,
"acc_norm_stderr": 0.02572209706438853
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.5079365079365079,
"acc_stderr": 0.044715725362943486,
"acc_norm": 0.5079365079365079,
"acc_norm_stderr": 0.044715725362943486
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.47,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.47,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8096774193548387,
"acc_stderr": 0.022331707611823078,
"acc_norm": 0.8096774193548387,
"acc_norm_stderr": 0.022331707611823078
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5714285714285714,
"acc_stderr": 0.034819048444388045,
"acc_norm": 0.5714285714285714,
"acc_norm_stderr": 0.034819048444388045
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.78,
"acc_stderr": 0.04163331998932262,
"acc_norm": 0.78,
"acc_norm_stderr": 0.04163331998932262
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8545454545454545,
"acc_stderr": 0.027530196355066584,
"acc_norm": 0.8545454545454545,
"acc_norm_stderr": 0.027530196355066584
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.898989898989899,
"acc_stderr": 0.021469735576055343,
"acc_norm": 0.898989898989899,
"acc_norm_stderr": 0.021469735576055343
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9326424870466321,
"acc_stderr": 0.0180883938390789,
"acc_norm": 0.9326424870466321,
"acc_norm_stderr": 0.0180883938390789
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.7102564102564103,
"acc_stderr": 0.023000628243687968,
"acc_norm": 0.7102564102564103,
"acc_norm_stderr": 0.023000628243687968
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.337037037037037,
"acc_stderr": 0.028820884666253252,
"acc_norm": 0.337037037037037,
"acc_norm_stderr": 0.028820884666253252
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.7815126050420168,
"acc_stderr": 0.02684151432295893,
"acc_norm": 0.7815126050420168,
"acc_norm_stderr": 0.02684151432295893
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.4900662251655629,
"acc_stderr": 0.04081677107248436,
"acc_norm": 0.4900662251655629,
"acc_norm_stderr": 0.04081677107248436
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.9009174311926605,
"acc_stderr": 0.01280978008187893,
"acc_norm": 0.9009174311926605,
"acc_norm_stderr": 0.01280978008187893
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5833333333333334,
"acc_stderr": 0.033622774366080424,
"acc_norm": 0.5833333333333334,
"acc_norm_stderr": 0.033622774366080424
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.9019607843137255,
"acc_stderr": 0.0208711184555521,
"acc_norm": 0.9019607843137255,
"acc_norm_stderr": 0.0208711184555521
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8818565400843882,
"acc_stderr": 0.02101105265987847,
"acc_norm": 0.8818565400843882,
"acc_norm_stderr": 0.02101105265987847
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7847533632286996,
"acc_stderr": 0.027584066602208274,
"acc_norm": 0.7847533632286996,
"acc_norm_stderr": 0.027584066602208274
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8473282442748091,
"acc_stderr": 0.031545216720054725,
"acc_norm": 0.8473282442748091,
"acc_norm_stderr": 0.031545216720054725
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8760330578512396,
"acc_stderr": 0.030083098716035202,
"acc_norm": 0.8760330578512396,
"acc_norm_stderr": 0.030083098716035202
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8425925925925926,
"acc_stderr": 0.035207039905179635,
"acc_norm": 0.8425925925925926,
"acc_norm_stderr": 0.035207039905179635
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.8466257668711656,
"acc_stderr": 0.0283116014414386,
"acc_norm": 0.8466257668711656,
"acc_norm_stderr": 0.0283116014414386
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5714285714285714,
"acc_stderr": 0.04697113923010213,
"acc_norm": 0.5714285714285714,
"acc_norm_stderr": 0.04697113923010213
},
"harness|hendrycksTest-management|5": {
"acc": 0.8252427184466019,
"acc_stderr": 0.03760178006026621,
"acc_norm": 0.8252427184466019,
"acc_norm_stderr": 0.03760178006026621
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.9145299145299145,
"acc_stderr": 0.01831589168562585,
"acc_norm": 0.9145299145299145,
"acc_norm_stderr": 0.01831589168562585
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.75,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8697318007662835,
"acc_stderr": 0.012036729568216054,
"acc_norm": 0.8697318007662835,
"acc_norm_stderr": 0.012036729568216054
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7687861271676301,
"acc_stderr": 0.022698657167855713,
"acc_norm": 0.7687861271676301,
"acc_norm_stderr": 0.022698657167855713
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.646927374301676,
"acc_stderr": 0.01598420454526858,
"acc_norm": 0.646927374301676,
"acc_norm_stderr": 0.01598420454526858
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7516339869281046,
"acc_stderr": 0.024739981355113592,
"acc_norm": 0.7516339869281046,
"acc_norm_stderr": 0.024739981355113592
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7684887459807074,
"acc_stderr": 0.023956532766639133,
"acc_norm": 0.7684887459807074,
"acc_norm_stderr": 0.023956532766639133
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.8271604938271605,
"acc_stderr": 0.02103851777015737,
"acc_norm": 0.8271604938271605,
"acc_norm_stderr": 0.02103851777015737
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.599290780141844,
"acc_stderr": 0.029233465745573096,
"acc_norm": 0.599290780141844,
"acc_norm_stderr": 0.029233465745573096
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.5814863102998696,
"acc_stderr": 0.012599505608336482,
"acc_norm": 0.5814863102998696,
"acc_norm_stderr": 0.012599505608336482
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.7316176470588235,
"acc_stderr": 0.026917481224377204,
"acc_norm": 0.7316176470588235,
"acc_norm_stderr": 0.026917481224377204
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.7679738562091504,
"acc_stderr": 0.017077373377856933,
"acc_norm": 0.7679738562091504,
"acc_norm_stderr": 0.017077373377856933
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7454545454545455,
"acc_stderr": 0.041723430387053825,
"acc_norm": 0.7454545454545455,
"acc_norm_stderr": 0.041723430387053825
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.8081632653061225,
"acc_stderr": 0.025206963154225395,
"acc_norm": 0.8081632653061225,
"acc_norm_stderr": 0.025206963154225395
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8756218905472637,
"acc_stderr": 0.023335401790166323,
"acc_norm": 0.8756218905472637,
"acc_norm_stderr": 0.023335401790166323
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.86,
"acc_stderr": 0.03487350880197769,
"acc_norm": 0.86,
"acc_norm_stderr": 0.03487350880197769
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5301204819277109,
"acc_stderr": 0.03885425420866767,
"acc_norm": 0.5301204819277109,
"acc_norm_stderr": 0.03885425420866767
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8713450292397661,
"acc_stderr": 0.02567934272327692,
"acc_norm": 0.8713450292397661,
"acc_norm_stderr": 0.02567934272327692
},
"harness|truthfulqa:mc|0": {
"mc1": 0.4663402692778458,
"mc1_stderr": 0.017463793867168106,
"mc2": 0.6451679386365279,
"mc2_stderr": 0.014753028795637621
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
khalidalt/model-written-evals | 2023-07-02T20:24:29.000Z | [
"task_categories:multiple-choice",
"task_categories:zero-shot-classification",
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"task_ids:multiple-choice-coreference-resolution",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"gender bias",
"social bias",
"AI safety",
"personality",
"politics",
"arxiv:2212.09251",
"region:us"
] | khalidalt | This new dataset is designed to solve this great NLP task and is crafted with a lot of care. | @misc{perez2022discovering,
doi = {10.48550/ARXIV.2212.09251},
url = {https://arxiv.org/abs/2212.09251},
author = {Perez, Ethan and Ringer, Sam and Lukošiūtė, Kamilė and Nguyen, Karina and Chen, Edwin and Heiner, Scott and Pettit, Craig and Olsson, Catherine and Kundu, Sandipan and Kadavath, Saurav and Jones, Andy and Chen, Anna and Mann, Ben and Israel, Brian and Seethor, Bryan and McKinnon, Cameron and Olah, Christopher and Yan, Da and Amodei, Daniela and Amodei, Dario and Drain, Dawn and Li, Dustin and Tran-Johnson, Eli and Khundadze, Guro and Kernion, Jackson and Landis, James and Kerr, Jamie and Mueller, Jared and Hyun, Jeeyoon and Landau, Joshua and Ndousse, Kamal and Goldberg, Landon and Lovitt, Liane and Lucas, Martin and Sellitto, Michael and Zhang, Miranda and Kingsland, Neerav and Elhage, Nelson and Joseph, Nicholas and Mercado, Noemí and DasSarma, Nova and Rausch, Oliver and Larson, Robin and McCandlish, Sam and Johnston, Scott and Kravec, Shauna and {El Showk}, Sheer and Lanham, Tamera and Telleen-Lawton, Timothy and Brown, Tom and Henighan, Tom and Hume, Tristan and Bai, Yuntao and Hatfield-Dodds, Zac and Clark, Jack and Bowman, Samuel R. and Askell, Amanda and Grosse, Roger and Hernandez, Danny and Ganguli, Deep and Hubinger, Evan and Schiefer, Nicholas and Kaplan, Jared},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Discovering Language Model Behaviors with Model-Written Evaluations},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
} | null | 0 | 258 | ---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- machine-generated
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Evaluations from "Discovering Language Model Behaviors with Model-Written
Evaluations"
size_categories:
- 100K<n<1M
source_datasets:
- original
tags:
- gender bias
- social bias
- AI safety
- personality
- politics
task_categories:
- multiple-choice
- zero-shot-classification
- question-answering
task_ids:
- multiple-choice-qa
- multiple-choice-coreference-resolution
---
# Model-Written Evaluation Datasets
This repository includes datasets written by language models, used in the paper "Discovering Language Model Behaviors with Model-Written Evaluations."
The evaluations in this dataset were designed for dialogue agents, such as models fine-tuned to respond to user utterances or pretrained language models prompted to simulate a dialogue agent's behavior. However, the data can be adapted to test various other types of models as well.
The dataset consis of each of the following:
1. persona: Datasets designed to evaluate models on different aspects of their behavior, such as their political and religious views, personality traits, moral beliefs, and willingness to pursue potentially risky objectives (e.g., self-preservation or power-seeking).
2. sycophancy: Datasets created to assess models based on their tendency to echo a user's perspective when presented with various questions in fields like philosophy, NLP research, and politics.
3. winogenerated: An extended version of the Winogender Dataset (Rudinger et al., 2018) generated by models. The dataset includes occupation titles generated specifically for this dataset, alongside occupation gender statistics from the Bureau of Labor Statistics.
4. advanced-ai-risk: Datasets evaluating models on behaviors associated with potential catastrophic risks posed by advanced AI systems. These datasets were generated in a few-shot manner.
Please see the cited paper for additional details on the datasets.
**Disclaimer**: As discussed in the paper, some data contains content that includes social biases and stereotypes. The data may also contain other forms of harmful or offensive content. The views expressed in the data do not reflect the views of Anthropic or any of its employees.
## Bibtex Citation
If you would like to cite this work or data, you may use the following bibtex citation:
```
@misc{perez2022discovering,
doi = {10.48550/ARXIV.2212.09251},
url = {https://arxiv.org/abs/2212.09251},
author = {Perez, Ethan and Ringer, Sam and Lukošiūtė, Kamilė and Nguyen, Karina and Chen, Edwin and Heiner, Scott and Pettit, Craig and Olsson, Catherine and Kundu, Sandipan and Kadavath, Saurav and Jones, Andy and Chen, Anna and Mann, Ben and Israel, Brian and Seethor, Bryan and McKinnon, Cameron and Olah, Christopher and Yan, Da and Amodei, Daniela and Amodei, Dario and Drain, Dawn and Li, Dustin and Tran-Johnson, Eli and Khundadze, Guro and Kernion, Jackson and Landis, James and Kerr, Jamie and Mueller, Jared and Hyun, Jeeyoon and Landau, Joshua and Ndousse, Kamal and Goldberg, Landon and Lovitt, Liane and Lucas, Martin and Sellitto, Michael and Zhang, Miranda and Kingsland, Neerav and Elhage, Nelson and Joseph, Nicholas and Mercado, Noemí and DasSarma, Nova and Rausch, Oliver and Larson, Robin and McCandlish, Sam and Johnston, Scott and Kravec, Shauna and {El Showk}, Sheer and Lanham, Tamera and Telleen-Lawton, Timothy and Brown, Tom and Henighan, Tom and Hume, Tristan and Bai, Yuntao and Hatfield-Dodds, Zac and Clark, Jack and Bowman, Samuel R. and Askell, Amanda and Grosse, Roger and Hernandez, Danny and Ganguli, Deep and Hubinger, Evan and Schiefer, Nicholas and Kaplan, Jared},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Discovering Language Model Behaviors with Model-Written Evaluations},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.