| datasetId | card |
|---|---|
deepghs/anime_aesthetic | ---
task_categories:
- image-classification
tags:
- art
size_categories:
- 1M<n<10M
--- |
CyberHarem/vento_of_the_front_toarumajutsunoindex | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of Vento of the Front
This is the dataset of Vento of the Front, containing 89 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 89 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 198 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 89 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 89 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 89 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 89 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 89 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 198 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 198 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 198 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
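Assuming a local copy of one of the archives listed above (`dataset-512x512.zip` is used here as an illustrative file name), the zip can be unpacked with the Python standard library:

```python
import zipfile
from pathlib import Path


def extract_dataset(archive: str, dest: str) -> list[str]:
    """Extract a dataset zip into dest and return the contained file names."""
    Path(dest).mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(dest)
        return zf.namelist()


# e.g. extract_dataset("dataset-512x512.zip", "vento_512x512")
```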
|
rkdeva/DermnetSkinData-Test12 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: string
splits:
- name: train
num_bytes: 376841600.824
num_examples: 3937
download_size: 370136671
dataset_size: 376841600.824
---
# Dataset Card for "DermnetSkinData-Test12"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
liuyanchen1015/MULTI_VALUE_stsb_clefting | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: score
dtype: float64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 105192
num_examples: 623
- name: test
num_bytes: 85503
num_examples: 578
- name: train
num_bytes: 398951
num_examples: 2485
download_size: 336803
dataset_size: 589646
---
# Dataset Card for "MULTI_VALUE_stsb_clefting"
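The schema above pairs `sentence1`/`sentence2` with a float `score`; STS-B scores conventionally lie on a 0–5 scale (an assumption here, since the card does not state the range). A minimal sketch of rescaling them to [0, 1], using the column names from the schema:

```python
def normalize_scores(rows):
    """Rescale STS-B style 0-5 similarity scores to the [0, 1] range."""
    return [{**row, "score": row["score"] / 5.0} for row in rows]


example = [{"sentence1": "a", "sentence2": "b", "score": 4.0, "idx": 0, "value_score": 1}]
print(normalize_scores(example)[0]["score"])  # 0.8
```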
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
FaalSa/f7 | ---
dataset_info:
features:
- name: start
dtype: timestamp[s]
- name: target
sequence: float32
- name: item_id
dtype: string
- name: feat_static_cat
sequence: uint64
splits:
- name: train
num_bytes: 79710
num_examples: 1
- name: validation
num_bytes: 80190
num_examples: 1
- name: test
num_bytes: 80670
num_examples: 1
download_size: 43406
dataset_size: 240570
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
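The card above carries no description, but its feature set (`start` timestamp, `target` float sequence, `item_id`, `feat_static_cat`) matches the GluonTS-style time-series layout; that interpretation is an assumption. A minimal sketch of one record in that shape, with illustrative values, using only the standard library:

```python
from datetime import datetime


def make_record(start, target, item_id, feat_static_cat):
    """Build one GluonTS-style time-series record matching the schema above."""
    return {
        "start": start,                          # timestamp[s]
        "target": [float(x) for x in target],    # sequence of float32 values
        "item_id": str(item_id),
        "feat_static_cat": [int(c) for c in feat_static_cat],  # sequence of uint64
    }


rec = make_record(datetime(2024, 1, 1), [1, 2.5, 3], "series-0", [0])
```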
|
croissantllm/croissant_dataset | ---
task_categories:
- translation
- text-generation
- text2text-generation
- fill-mask
language:
- fr
- en
size_categories:
- 100B<n<1T
---
# CroissantLLM: A Truly Bilingual French-English Language Model
## Dataset
https://arxiv.org/abs/2402.00786
## Licenses
Data redistributed here is subject to the original license under which it was collected. All license information is detailed in the `Data` section of the Technical report.
## Citation
```
@misc{faysse2024croissantllm,
title={CroissantLLM: A Truly Bilingual French-English Language Model},
author={Manuel Faysse and Patrick Fernandes and Nuno M. Guerreiro and António Loison and Duarte M. Alves and Caio Corro and Nicolas Boizard and João Alves and Ricardo Rei and Pedro H. Martins and Antoni Bigata Casademunt and François Yvon and André F. T. Martins and Gautier Viaud and Céline Hudelot and Pierre Colombo},
year={2024},
eprint={2402.00786},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Note
Only the `english_660B_11` split is kept hidden for the moment (until release of the Canary paper), but it is available upon request!
|
UMCU/WikiDocPatientInformation_Dutch_translated_with_MariaNMT | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 2807464
num_examples: 5760
download_size: 1300914
dataset_size: 2807464
license: gpl-3.0
task_categories:
- sentence-similarity
- question-answering
language:
- nl
tags:
- medical
pretty_name: Dutch translation of WikiDoc
size_categories:
- 1K<n<10K
---
# Dataset Card for "WikiDocPatientInformation_Dutch_translated_with_MariaNMT"
Translation of the **English** version of the Hugging Face dataset [WikiDoc patient information](https://huggingface.co/datasets/medalpaca/medical_meadow_wikidoc_patient_information), based on [WikiDoc](https://www.wikidoc.org/index.php/Main_Page), a medical wiki, into **Dutch** using a [Marian NMT model](https://marian-nmt.github.io/) trained by [Helsinki NLP](https://huggingface.co/Helsinki-NLP/opus-mt-en-nl).
Note, for reference: Marian NMT is based on [BART](https://huggingface.co/docs/transformers/model_doc/bart), described [here](https://arxiv.org/abs/1910.13461).
# Attribution
If you use this dataset, please use the following to credit the creators of the OPUS-MT models:
```
@InProceedings{TiedemannThottingal:EAMT2020,
author = {J{\"o}rg Tiedemann and Santhosh Thottingal},
title = {{OPUS-MT} — {B}uilding open translation services for the {W}orld},
booktitle = {Proceedings of the 22nd Annual Conference of the European Association for Machine Translation (EAMT)},
year = {2020},
address = {Lisbon, Portugal}
}
```
and
```
@misc {van_es_2024,
author = { {Bram van Es} },
title = { WikiDocPatientInformation_Dutch_translated_with_MariaNMT (Revision 4490701) },
year = 2024,
url = { https://huggingface.co/datasets/UMCU/WikiDocPatientInformation_Dutch_translated_with_MariaNMT },
doi = { 10.57967/hf/1669 },
publisher = { Hugging Face }
}
```
# License
For both the Marian NMT model and the original [Helsinki NLP](https://twitter.com/HelsinkiNLP) [Opus MT model](https://huggingface.co/Helsinki-NLP)
we did **not** find a license. We also did not find a license for the MedQA corpus. For these reasons we use a permissive [CC BY](https://wellcome.org/grant-funding/guidance/open-access-guidance/creative-commons-attribution-licence-cc)
license. If this was in error, please let us know and we will add the appropriate licensing promptly. |
krishi/testing | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 8816519.0
num_examples: 5
download_size: 8818000
dataset_size: 8816519.0
---
# Dataset Card for "testing"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
havens2/apitext_multiple | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 697585
num_examples: 1055
download_size: 322725
dataset_size: 697585
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "apitext_multiple"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mkkumar/Cancer_Patients_Emotion | ---
license: apache-2.0
---
|
habixia1/habixia | ---
license: afl-3.0
---
|
ekolasky/ResultsIdSeperatedSet | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: labels
sequence: int64
- name: global_attention_mask
sequence: int64
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 4149897
num_examples: 747
- name: validation
num_bytes: 253036
num_examples: 49
download_size: 464308
dataset_size: 4402933
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
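The card is otherwise empty, but its feature set (`input_ids`, `labels`, a `global_attention_mask`, and an int8 `attention_mask`) resembles tokenised inputs for a Longformer-style model; that reading is an assumption, not stated by the card. A hypothetical sketch of building the two masks for a padded sequence (`pad_id=0` and the token values are illustrative):

```python
def build_masks(input_ids, pad_id=0, global_positions=(0,)):
    """Attention mask: 1 for real tokens, 0 for padding.
    Global attention mask: 1 only at the chosen non-padding global positions."""
    attention_mask = [0 if tok == pad_id else 1 for tok in input_ids]
    global_attention_mask = [
        1 if (i in global_positions and mask) else 0
        for i, mask in enumerate(attention_mask)
    ]
    return attention_mask, global_attention_mask


attn, glob = build_masks([101, 7592, 2088, 102, 0, 0])
# attn == [1, 1, 1, 1, 0, 0]; glob == [1, 0, 0, 0, 0, 0]
```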
|
mcanoglu/cwe-variant-group | ---
license: mit
---
|
p1atdev/JEDHRI | ---
license: cc-by-4.0
language:
- ja
tags:
- legal
- not-for-all-audiences
size_categories:
- n<1K
---
### Japanese Expressions Dataset from Human Rights Infringement on the Internet
Adapted for HuggingFace datasets from [権利侵害と不快さの間:日本語人権侵害表現データセット](https://zenodo.org/record/7960519) ("Between Rights Infringement and Offensiveness: a Japanese Dataset of Human-Rights-Infringing Expressions"). |
melissa-kang/learning_in_the_machine_audio | ---
license: afl-3.0
---
|
open-llm-leaderboard/details_PotatoOff__Michel-13B | ---
pretty_name: Evaluation run of PotatoOff/Michel-13B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [PotatoOff/Michel-13B](https://huggingface.co/PotatoOff/Michel-13B) on the [Open\
\ LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_PotatoOff__Michel-13B\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-02-01T22:05:33.263550](https://huggingface.co/datasets/open-llm-leaderboard/details_PotatoOff__Michel-13B/blob/main/results_2024-02-01T22-05-33.263550.json)(note\
\ that there might be results for other tasks in the repo if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5499141517864713,\n\
\ \"acc_stderr\": 0.034037566570250866,\n \"acc_norm\": 0.556324343200096,\n\
\ \"acc_norm_stderr\": 0.03477678629039932,\n \"mc1\": 0.35006119951040393,\n\
\ \"mc1_stderr\": 0.01669794942015103,\n \"mc2\": 0.5043477199409111,\n\
\ \"mc2_stderr\": 0.015764099492460493\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.5767918088737202,\n \"acc_stderr\": 0.014438036220848034,\n\
\ \"acc_norm\": 0.6126279863481229,\n \"acc_norm_stderr\": 0.01423587248790987\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6357299342760406,\n\
\ \"acc_stderr\": 0.004802413919932666,\n \"acc_norm\": 0.832105158334993,\n\
\ \"acc_norm_stderr\": 0.0037300899105375796\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.35,\n \"acc_stderr\": 0.0479372485441102,\n \
\ \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n\
\ \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.4666666666666667,\n\
\ \"acc_stderr\": 0.043097329010363554,\n \"acc_norm\": 0.4666666666666667,\n\
\ \"acc_norm_stderr\": 0.043097329010363554\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.5592105263157895,\n \"acc_stderr\": 0.04040311062490436,\n\
\ \"acc_norm\": 0.5592105263157895,\n \"acc_norm_stderr\": 0.04040311062490436\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.6,\n\
\ \"acc_stderr\": 0.04923659639173309,\n \"acc_norm\": 0.6,\n \
\ \"acc_norm_stderr\": 0.04923659639173309\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.5584905660377358,\n \"acc_stderr\": 0.030561590426731837,\n\
\ \"acc_norm\": 0.5584905660377358,\n \"acc_norm_stderr\": 0.030561590426731837\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6180555555555556,\n\
\ \"acc_stderr\": 0.040629907841466674,\n \"acc_norm\": 0.6180555555555556,\n\
\ \"acc_norm_stderr\": 0.040629907841466674\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.45,\n \"acc_stderr\": 0.04999999999999999,\n \
\ \"acc_norm\": 0.45,\n \"acc_norm_stderr\": 0.04999999999999999\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.46,\n \"acc_stderr\": 0.05009082659620332,\n \"acc_norm\": 0.46,\n\
\ \"acc_norm_stderr\": 0.05009082659620332\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \
\ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.4913294797687861,\n\
\ \"acc_stderr\": 0.038118909889404126,\n \"acc_norm\": 0.4913294797687861,\n\
\ \"acc_norm_stderr\": 0.038118909889404126\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.2549019607843137,\n \"acc_stderr\": 0.043364327079931785,\n\
\ \"acc_norm\": 0.2549019607843137,\n \"acc_norm_stderr\": 0.043364327079931785\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.66,\n \"acc_stderr\": 0.04760952285695237,\n \"acc_norm\": 0.66,\n\
\ \"acc_norm_stderr\": 0.04760952285695237\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.46808510638297873,\n \"acc_stderr\": 0.03261936918467382,\n\
\ \"acc_norm\": 0.46808510638297873,\n \"acc_norm_stderr\": 0.03261936918467382\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.3157894736842105,\n\
\ \"acc_stderr\": 0.043727482902780064,\n \"acc_norm\": 0.3157894736842105,\n\
\ \"acc_norm_stderr\": 0.043727482902780064\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.45517241379310347,\n \"acc_stderr\": 0.04149886942192117,\n\
\ \"acc_norm\": 0.45517241379310347,\n \"acc_norm_stderr\": 0.04149886942192117\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.35714285714285715,\n \"acc_stderr\": 0.024677862841332783,\n \"\
acc_norm\": 0.35714285714285715,\n \"acc_norm_stderr\": 0.024677862841332783\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.36507936507936506,\n\
\ \"acc_stderr\": 0.04306241259127153,\n \"acc_norm\": 0.36507936507936506,\n\
\ \"acc_norm_stderr\": 0.04306241259127153\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.38,\n \"acc_stderr\": 0.04878317312145633,\n \
\ \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.04878317312145633\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.6258064516129033,\n\
\ \"acc_stderr\": 0.0275289042998457,\n \"acc_norm\": 0.6258064516129033,\n\
\ \"acc_norm_stderr\": 0.0275289042998457\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.42857142857142855,\n \"acc_stderr\": 0.03481904844438803,\n\
\ \"acc_norm\": 0.42857142857142855,\n \"acc_norm_stderr\": 0.03481904844438803\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.56,\n \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\"\
: 0.56,\n \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.6121212121212121,\n \"acc_stderr\": 0.038049136539710114,\n\
\ \"acc_norm\": 0.6121212121212121,\n \"acc_norm_stderr\": 0.038049136539710114\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.6868686868686869,\n \"acc_stderr\": 0.033042050878136525,\n \"\
acc_norm\": 0.6868686868686869,\n \"acc_norm_stderr\": 0.033042050878136525\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.7772020725388601,\n \"acc_stderr\": 0.030031147977641538,\n\
\ \"acc_norm\": 0.7772020725388601,\n \"acc_norm_stderr\": 0.030031147977641538\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.4846153846153846,\n \"acc_stderr\": 0.025339003010106515,\n\
\ \"acc_norm\": 0.4846153846153846,\n \"acc_norm_stderr\": 0.025339003010106515\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.337037037037037,\n \"acc_stderr\": 0.028820884666253252,\n \
\ \"acc_norm\": 0.337037037037037,\n \"acc_norm_stderr\": 0.028820884666253252\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.5672268907563025,\n \"acc_stderr\": 0.03218358107742613,\n \
\ \"acc_norm\": 0.5672268907563025,\n \"acc_norm_stderr\": 0.03218358107742613\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3841059602649007,\n \"acc_stderr\": 0.03971301814719197,\n \"\
acc_norm\": 0.3841059602649007,\n \"acc_norm_stderr\": 0.03971301814719197\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.7247706422018348,\n \"acc_stderr\": 0.019149093743155203,\n \"\
acc_norm\": 0.7247706422018348,\n \"acc_norm_stderr\": 0.019149093743155203\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.48148148148148145,\n \"acc_stderr\": 0.03407632093854053,\n \"\
acc_norm\": 0.48148148148148145,\n \"acc_norm_stderr\": 0.03407632093854053\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.7450980392156863,\n \"acc_stderr\": 0.030587591351604243,\n \"\
acc_norm\": 0.7450980392156863,\n \"acc_norm_stderr\": 0.030587591351604243\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7172995780590717,\n \"acc_stderr\": 0.02931281415395593,\n \
\ \"acc_norm\": 0.7172995780590717,\n \"acc_norm_stderr\": 0.02931281415395593\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6233183856502242,\n\
\ \"acc_stderr\": 0.03252113489929188,\n \"acc_norm\": 0.6233183856502242,\n\
\ \"acc_norm_stderr\": 0.03252113489929188\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.5801526717557252,\n \"acc_stderr\": 0.043285772152629715,\n\
\ \"acc_norm\": 0.5801526717557252,\n \"acc_norm_stderr\": 0.043285772152629715\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7272727272727273,\n \"acc_stderr\": 0.04065578140908706,\n \"\
acc_norm\": 0.7272727272727273,\n \"acc_norm_stderr\": 0.04065578140908706\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.6851851851851852,\n\
\ \"acc_stderr\": 0.04489931073591312,\n \"acc_norm\": 0.6851851851851852,\n\
\ \"acc_norm_stderr\": 0.04489931073591312\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.6625766871165644,\n \"acc_stderr\": 0.03714908409935574,\n\
\ \"acc_norm\": 0.6625766871165644,\n \"acc_norm_stderr\": 0.03714908409935574\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.39285714285714285,\n\
\ \"acc_stderr\": 0.046355501356099754,\n \"acc_norm\": 0.39285714285714285,\n\
\ \"acc_norm_stderr\": 0.046355501356099754\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7184466019417476,\n \"acc_stderr\": 0.044532548363264673,\n\
\ \"acc_norm\": 0.7184466019417476,\n \"acc_norm_stderr\": 0.044532548363264673\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.782051282051282,\n\
\ \"acc_stderr\": 0.02704685763071668,\n \"acc_norm\": 0.782051282051282,\n\
\ \"acc_norm_stderr\": 0.02704685763071668\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.55,\n \"acc_stderr\": 0.04999999999999999,\n \
\ \"acc_norm\": 0.55,\n \"acc_norm_stderr\": 0.04999999999999999\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7675606641123882,\n\
\ \"acc_stderr\": 0.015104550008905716,\n \"acc_norm\": 0.7675606641123882,\n\
\ \"acc_norm_stderr\": 0.015104550008905716\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.5953757225433526,\n \"acc_stderr\": 0.02642481659400985,\n\
\ \"acc_norm\": 0.5953757225433526,\n \"acc_norm_stderr\": 0.02642481659400985\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3553072625698324,\n\
\ \"acc_stderr\": 0.016006989934803182,\n \"acc_norm\": 0.3553072625698324,\n\
\ \"acc_norm_stderr\": 0.016006989934803182\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.5816993464052288,\n \"acc_stderr\": 0.028245134024387292,\n\
\ \"acc_norm\": 0.5816993464052288,\n \"acc_norm_stderr\": 0.028245134024387292\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6237942122186495,\n\
\ \"acc_stderr\": 0.02751392568354943,\n \"acc_norm\": 0.6237942122186495,\n\
\ \"acc_norm_stderr\": 0.02751392568354943\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.6388888888888888,\n \"acc_stderr\": 0.02672586880910079,\n\
\ \"acc_norm\": 0.6388888888888888,\n \"acc_norm_stderr\": 0.02672586880910079\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.40070921985815605,\n \"acc_stderr\": 0.02923346574557309,\n \
\ \"acc_norm\": 0.40070921985815605,\n \"acc_norm_stderr\": 0.02923346574557309\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4152542372881356,\n\
\ \"acc_stderr\": 0.012585471793400659,\n \"acc_norm\": 0.4152542372881356,\n\
\ \"acc_norm_stderr\": 0.012585471793400659\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.5147058823529411,\n \"acc_stderr\": 0.03035969707904612,\n\
\ \"acc_norm\": 0.5147058823529411,\n \"acc_norm_stderr\": 0.03035969707904612\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.5424836601307189,\n \"acc_stderr\": 0.020154685712590898,\n \
\ \"acc_norm\": 0.5424836601307189,\n \"acc_norm_stderr\": 0.020154685712590898\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.5909090909090909,\n\
\ \"acc_stderr\": 0.04709306978661895,\n \"acc_norm\": 0.5909090909090909,\n\
\ \"acc_norm_stderr\": 0.04709306978661895\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.5959183673469388,\n \"acc_stderr\": 0.03141470802586589,\n\
\ \"acc_norm\": 0.5959183673469388,\n \"acc_norm_stderr\": 0.03141470802586589\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.736318407960199,\n\
\ \"acc_stderr\": 0.031157150869355558,\n \"acc_norm\": 0.736318407960199,\n\
\ \"acc_norm_stderr\": 0.031157150869355558\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.81,\n \"acc_stderr\": 0.03942772444036625,\n \
\ \"acc_norm\": 0.81,\n \"acc_norm_stderr\": 0.03942772444036625\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.42771084337349397,\n\
\ \"acc_stderr\": 0.038515976837185335,\n \"acc_norm\": 0.42771084337349397,\n\
\ \"acc_norm_stderr\": 0.038515976837185335\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.7777777777777778,\n \"acc_stderr\": 0.03188578017686398,\n\
\ \"acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.03188578017686398\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.35006119951040393,\n\
\ \"mc1_stderr\": 0.01669794942015103,\n \"mc2\": 0.5043477199409111,\n\
\ \"mc2_stderr\": 0.015764099492460493\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7521704814522494,\n \"acc_stderr\": 0.012134386019865348\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.20166793025018953,\n \
\ \"acc_stderr\": 0.01105229588954436\n }\n}\n```"
repo_url: https://huggingface.co/PotatoOff/Michel-13B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|arc:challenge|25_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|gsm8k|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hellaswag|10_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-01T22-05-33.263550.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-01T22-05-33.263550.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- '**/details_harness|winogrande|5_2024-02-01T22-05-33.263550.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-02-01T22-05-33.263550.parquet'
- config_name: results
data_files:
- split: 2024_02_01T22_05_33.263550
path:
- results_2024-02-01T22-05-33.263550.parquet
- split: latest
path:
- results_2024-02-01T22-05-33.263550.parquet
---
# Dataset Card for Evaluation run of PotatoOff/Michel-13B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [PotatoOff/Michel-13B](https://huggingface.co/PotatoOff/Michel-13B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_PotatoOff__Michel-13B",
"harness_winogrande_5",
	split="latest")
```
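Each per-task configuration name appears to be derived from the harness task identifier by replacing the `|`, `-`, and `:` separators with underscores (e.g. `harness|truthfulqa:mc|0` becomes `harness_truthfulqa_mc_0`). A minimal sketch of that mapping, assuming the naming convention holds for all tasks in this repo (`config_name` is a hypothetical helper, not part of the `datasets` API):

```python
def config_name(task_id: str) -> str:
    """Map a harness task id to this repo's config name.

    e.g. "harness|hendrycksTest-abstract_algebra|5"
      -> "harness_hendrycksTest_abstract_algebra_5"
    """
    # Config names replace every separator ("|", "-", ":") with "_".
    return task_id.replace("|", "_").replace("-", "_").replace(":", "_")


print(config_name("harness|winogrande|5"))  # harness_winogrande_5
```

The resulting name can then be passed as the second argument to `load_dataset` as shown above.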
## Latest results
These are the [latest results from run 2024-02-01T22:05:33.263550](https://huggingface.co/datasets/open-llm-leaderboard/details_PotatoOff__Michel-13B/blob/main/results_2024-02-01T22-05-33.263550.json) (note that there might be results for other tasks in the repository if successive evals didn't cover the same tasks; you can find each task's results in its own configuration, under that configuration's "latest" split):
```python
{
"all": {
"acc": 0.5499141517864713,
"acc_stderr": 0.034037566570250866,
"acc_norm": 0.556324343200096,
"acc_norm_stderr": 0.03477678629039932,
"mc1": 0.35006119951040393,
"mc1_stderr": 0.01669794942015103,
"mc2": 0.5043477199409111,
"mc2_stderr": 0.015764099492460493
},
"harness|arc:challenge|25": {
"acc": 0.5767918088737202,
"acc_stderr": 0.014438036220848034,
"acc_norm": 0.6126279863481229,
"acc_norm_stderr": 0.01423587248790987
},
"harness|hellaswag|10": {
"acc": 0.6357299342760406,
"acc_stderr": 0.004802413919932666,
"acc_norm": 0.832105158334993,
"acc_norm_stderr": 0.0037300899105375796
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.35,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.35,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.4666666666666667,
"acc_stderr": 0.043097329010363554,
"acc_norm": 0.4666666666666667,
"acc_norm_stderr": 0.043097329010363554
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.5592105263157895,
"acc_stderr": 0.04040311062490436,
"acc_norm": 0.5592105263157895,
"acc_norm_stderr": 0.04040311062490436
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.6,
"acc_stderr": 0.04923659639173309,
"acc_norm": 0.6,
"acc_norm_stderr": 0.04923659639173309
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.5584905660377358,
"acc_stderr": 0.030561590426731837,
"acc_norm": 0.5584905660377358,
"acc_norm_stderr": 0.030561590426731837
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.6180555555555556,
"acc_stderr": 0.040629907841466674,
"acc_norm": 0.6180555555555556,
"acc_norm_stderr": 0.040629907841466674
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.45,
"acc_stderr": 0.04999999999999999,
"acc_norm": 0.45,
"acc_norm_stderr": 0.04999999999999999
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.4913294797687861,
"acc_stderr": 0.038118909889404126,
"acc_norm": 0.4913294797687861,
"acc_norm_stderr": 0.038118909889404126
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.2549019607843137,
"acc_stderr": 0.043364327079931785,
"acc_norm": 0.2549019607843137,
"acc_norm_stderr": 0.043364327079931785
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.66,
"acc_stderr": 0.04760952285695237,
"acc_norm": 0.66,
"acc_norm_stderr": 0.04760952285695237
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.46808510638297873,
"acc_stderr": 0.03261936918467382,
"acc_norm": 0.46808510638297873,
"acc_norm_stderr": 0.03261936918467382
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.3157894736842105,
"acc_stderr": 0.043727482902780064,
"acc_norm": 0.3157894736842105,
"acc_norm_stderr": 0.043727482902780064
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.45517241379310347,
"acc_stderr": 0.04149886942192117,
"acc_norm": 0.45517241379310347,
"acc_norm_stderr": 0.04149886942192117
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.35714285714285715,
"acc_stderr": 0.024677862841332783,
"acc_norm": 0.35714285714285715,
"acc_norm_stderr": 0.024677862841332783
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.36507936507936506,
"acc_stderr": 0.04306241259127153,
"acc_norm": 0.36507936507936506,
"acc_norm_stderr": 0.04306241259127153
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.38,
"acc_stderr": 0.04878317312145633,
"acc_norm": 0.38,
"acc_norm_stderr": 0.04878317312145633
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.6258064516129033,
"acc_stderr": 0.0275289042998457,
"acc_norm": 0.6258064516129033,
"acc_norm_stderr": 0.0275289042998457
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.42857142857142855,
"acc_stderr": 0.03481904844438803,
"acc_norm": 0.42857142857142855,
"acc_norm_stderr": 0.03481904844438803
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.56,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.56,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.6121212121212121,
"acc_stderr": 0.038049136539710114,
"acc_norm": 0.6121212121212121,
"acc_norm_stderr": 0.038049136539710114
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.6868686868686869,
"acc_stderr": 0.033042050878136525,
"acc_norm": 0.6868686868686869,
"acc_norm_stderr": 0.033042050878136525
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.7772020725388601,
"acc_stderr": 0.030031147977641538,
"acc_norm": 0.7772020725388601,
"acc_norm_stderr": 0.030031147977641538
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.4846153846153846,
"acc_stderr": 0.025339003010106515,
"acc_norm": 0.4846153846153846,
"acc_norm_stderr": 0.025339003010106515
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.337037037037037,
"acc_stderr": 0.028820884666253252,
"acc_norm": 0.337037037037037,
"acc_norm_stderr": 0.028820884666253252
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.5672268907563025,
"acc_stderr": 0.03218358107742613,
"acc_norm": 0.5672268907563025,
"acc_norm_stderr": 0.03218358107742613
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3841059602649007,
"acc_stderr": 0.03971301814719197,
"acc_norm": 0.3841059602649007,
"acc_norm_stderr": 0.03971301814719197
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7247706422018348,
"acc_stderr": 0.019149093743155203,
"acc_norm": 0.7247706422018348,
"acc_norm_stderr": 0.019149093743155203
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.48148148148148145,
"acc_stderr": 0.03407632093854053,
"acc_norm": 0.48148148148148145,
"acc_norm_stderr": 0.03407632093854053
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7450980392156863,
"acc_stderr": 0.030587591351604243,
"acc_norm": 0.7450980392156863,
"acc_norm_stderr": 0.030587591351604243
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7172995780590717,
"acc_stderr": 0.02931281415395593,
"acc_norm": 0.7172995780590717,
"acc_norm_stderr": 0.02931281415395593
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6233183856502242,
"acc_stderr": 0.03252113489929188,
"acc_norm": 0.6233183856502242,
"acc_norm_stderr": 0.03252113489929188
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.5801526717557252,
"acc_stderr": 0.043285772152629715,
"acc_norm": 0.5801526717557252,
"acc_norm_stderr": 0.043285772152629715
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7272727272727273,
"acc_stderr": 0.04065578140908706,
"acc_norm": 0.7272727272727273,
"acc_norm_stderr": 0.04065578140908706
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.6851851851851852,
"acc_stderr": 0.04489931073591312,
"acc_norm": 0.6851851851851852,
"acc_norm_stderr": 0.04489931073591312
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.6625766871165644,
"acc_stderr": 0.03714908409935574,
"acc_norm": 0.6625766871165644,
"acc_norm_stderr": 0.03714908409935574
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.39285714285714285,
"acc_stderr": 0.046355501356099754,
"acc_norm": 0.39285714285714285,
"acc_norm_stderr": 0.046355501356099754
},
"harness|hendrycksTest-management|5": {
"acc": 0.7184466019417476,
"acc_stderr": 0.044532548363264673,
"acc_norm": 0.7184466019417476,
"acc_norm_stderr": 0.044532548363264673
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.782051282051282,
"acc_stderr": 0.02704685763071668,
"acc_norm": 0.782051282051282,
"acc_norm_stderr": 0.02704685763071668
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.55,
"acc_stderr": 0.04999999999999999,
"acc_norm": 0.55,
"acc_norm_stderr": 0.04999999999999999
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7675606641123882,
"acc_stderr": 0.015104550008905716,
"acc_norm": 0.7675606641123882,
"acc_norm_stderr": 0.015104550008905716
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.5953757225433526,
"acc_stderr": 0.02642481659400985,
"acc_norm": 0.5953757225433526,
"acc_norm_stderr": 0.02642481659400985
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.3553072625698324,
"acc_stderr": 0.016006989934803182,
"acc_norm": 0.3553072625698324,
"acc_norm_stderr": 0.016006989934803182
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.5816993464052288,
"acc_stderr": 0.028245134024387292,
"acc_norm": 0.5816993464052288,
"acc_norm_stderr": 0.028245134024387292
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6237942122186495,
"acc_stderr": 0.02751392568354943,
"acc_norm": 0.6237942122186495,
"acc_norm_stderr": 0.02751392568354943
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6388888888888888,
"acc_stderr": 0.02672586880910079,
"acc_norm": 0.6388888888888888,
"acc_norm_stderr": 0.02672586880910079
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.40070921985815605,
"acc_stderr": 0.02923346574557309,
"acc_norm": 0.40070921985815605,
"acc_norm_stderr": 0.02923346574557309
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4152542372881356,
"acc_stderr": 0.012585471793400659,
"acc_norm": 0.4152542372881356,
"acc_norm_stderr": 0.012585471793400659
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.5147058823529411,
"acc_stderr": 0.03035969707904612,
"acc_norm": 0.5147058823529411,
"acc_norm_stderr": 0.03035969707904612
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.5424836601307189,
"acc_stderr": 0.020154685712590898,
"acc_norm": 0.5424836601307189,
"acc_norm_stderr": 0.020154685712590898
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.5909090909090909,
"acc_stderr": 0.04709306978661895,
"acc_norm": 0.5909090909090909,
"acc_norm_stderr": 0.04709306978661895
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.5959183673469388,
"acc_stderr": 0.03141470802586589,
"acc_norm": 0.5959183673469388,
"acc_norm_stderr": 0.03141470802586589
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.736318407960199,
"acc_stderr": 0.031157150869355558,
"acc_norm": 0.736318407960199,
"acc_norm_stderr": 0.031157150869355558
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.81,
"acc_stderr": 0.03942772444036625,
"acc_norm": 0.81,
"acc_norm_stderr": 0.03942772444036625
},
"harness|hendrycksTest-virology|5": {
"acc": 0.42771084337349397,
"acc_stderr": 0.038515976837185335,
"acc_norm": 0.42771084337349397,
"acc_norm_stderr": 0.038515976837185335
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.03188578017686398,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.03188578017686398
},
"harness|truthfulqa:mc|0": {
"mc1": 0.35006119951040393,
"mc1_stderr": 0.01669794942015103,
"mc2": 0.5043477199409111,
"mc2_stderr": 0.015764099492460493
},
"harness|winogrande|5": {
"acc": 0.7521704814522494,
"acc_stderr": 0.012134386019865348
},
"harness|gsm8k|5": {
"acc": 0.20166793025018953,
"acc_stderr": 0.01105229588954436
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
CVasNLPExperiments/DTD_parition1_test_eachadea_vicuna_13b_1.1_mode_A_ns_1880 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: prompt
dtype: string
- name: true_label
dtype: string
- name: prediction
dtype: string
splits:
- name: fewshot_0__Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_clip_tags_LAION_ViT_H_14_2B_simple_specific_rices
num_bytes: 791743
num_examples: 1880
download_size: 175207
dataset_size: 791743
---
# Dataset Card for "DTD_parition1_test_eachadea_vicuna_13b_1.1_mode_A_ns_1880"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
affahrizain/jigsaw-toxic-comment | ---
dataset_info:
features:
- name: labels
dtype: int64
- name: comment_clean
dtype: string
splits:
- name: train
num_bytes: 57080609
num_examples: 159100
- name: dev
num_bytes: 7809213
num_examples: 22393
- name: test
num_bytes: 22245686
num_examples: 63978
download_size: 13050863
dataset_size: 87135508
---
# Dataset Card for "jigsaw-toxic-comment"
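As a quick sanity check on the split metadata in the YAML header above, the per-split `num_bytes` values should add up to `dataset_size` (the numbers below are transcribed from this card; `download_size` is smaller because it measures the compressed files on the Hub):

```python
# Per-split sizes transcribed from this card's YAML metadata.
split_bytes = {"train": 57080609, "dev": 7809213, "test": 22245686}

# In `datasets` metadata, dataset_size is the total size of all splits,
# so it should equal the sum of the per-split num_bytes values.
total = sum(split_bytes.values())
print(total)  # 87135508, matching dataset_size above
```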
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_habanoz__TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-4epochs-oasst1-top1-instruct-V1 | ---
pretty_name: Evaluation run of habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-4epochs-oasst1-top1-instruct-V1
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-4epochs-oasst1-top1-instruct-V1](https://huggingface.co/habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-4epochs-oasst1-top1-instruct-V1)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
  \nThe dataset is composed of 1 configuration, each one corresponding to one of the\
  \ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
  \ found as a specific split in each configuration, the split being named using the\
  \ timestamp of the run. The \"train\" split always points to the latest results.\n\
  \nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_habanoz__TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-4epochs-oasst1-top1-instruct-V1\"\
,\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese\
\ are the [latest results from run 2023-12-02T14:51:26.830602](https://huggingface.co/datasets/open-llm-leaderboard/details_habanoz__TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-4epochs-oasst1-top1-instruct-V1/blob/main/results_2023-12-02T14-51-26.830602.json)(note\
  \ that there might be results for other tasks in the repos if successive evals didn't\
  \ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.01288855193328279,\n\
\ \"acc_stderr\": 0.003106901266499671\n },\n \"harness|gsm8k|5\":\
\ {\n \"acc\": 0.01288855193328279,\n \"acc_stderr\": 0.003106901266499671\n\
\ }\n}\n```"
repo_url: https://huggingface.co/habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-4epochs-oasst1-top1-instruct-V1
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_gsm8k_5
data_files:
- split: 2023_12_02T14_51_26.830602
path:
- '**/details_harness|gsm8k|5_2023-12-02T14-51-26.830602.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-12-02T14-51-26.830602.parquet'
- config_name: results
data_files:
- split: 2023_12_02T14_51_26.830602
path:
- results_2023-12-02T14-51-26.830602.parquet
- split: latest
path:
- results_2023-12-02T14-51-26.830602.parquet
---
# Dataset Card for Evaluation run of habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-4epochs-oasst1-top1-instruct-V1
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-4epochs-oasst1-top1-instruct-V1
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-4epochs-oasst1-top1-instruct-V1](https://huggingface.co/habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-4epochs-oasst1-top1-instruct-V1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_habanoz__TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-4epochs-oasst1-top1-instruct-V1",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-02T14:51:26.830602](https://huggingface.co/datasets/open-llm-leaderboard/details_habanoz__TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-4epochs-oasst1-top1-instruct-V1/blob/main/results_2023-12-02T14-51-26.830602.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.01288855193328279,
"acc_stderr": 0.003106901266499671
},
"harness|gsm8k|5": {
"acc": 0.01288855193328279,
"acc_stderr": 0.003106901266499671
}
}
```
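A side note on the split-naming convention visible in this card's `configs` section: each dated split name is just the run timestamp with `-` and `:` normalized to `_` (plus a `latest` alias). A minimal sketch of that mapping — the helper name is ours, and the convention is inferred from this card rather than from official documentation:

```python
def run_timestamp_to_split(ts: str) -> str:
    """Map a results-file timestamp (e.g. '2023-12-02T14-51-26.830602')
    to the split name used in this card's configs."""
    return ts.replace("-", "_").replace(":", "_")

split = run_timestamp_to_split("2023-12-02T14-51-26.830602")
print(split)  # 2023_12_02T14_51_26.830602

# To load that exact run instead of "latest" (requires network access):
# from datasets import load_dataset
# data = load_dataset(
#     "open-llm-leaderboard/details_habanoz__TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-4epochs-oasst1-top1-instruct-V1",
#     "harness_gsm8k_5",
#     split=split,
# )
```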
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
autoevaluate/autoeval-eval-phpthinh__exampleem-raw-eb2c05-1728660344 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- phpthinh/exampleem
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-7b1
metrics: []
dataset_name: phpthinh/exampleem
dataset_config: raw
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-7b1
* Dataset: phpthinh/exampleem
* Config: raw
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. |
silk-road/Luotuo-QA-A-CoQA-Chinese | ---
extra_gated_prompt: 我们翻译了CoQA数据集,请仔细阅读Licensing Information中的信息。
extra_gated_heading: 您需要接受协议并提交信息以获取此数据集
extra_gated_fields:
姓名: text
邮箱: text
所在组织: text
使用目的: text
我同意仅将此数据集用于非商业用途: checkbox
extra_gated_button_content: 我已阅读Licensing Information中的信息并同意提供相关信息
license: other
task_categories:
- question-answering
language:
- zh
- en
size_categories:
- 10K<n<100K
---
# Dataset Card for luotuo-QA-A
## Dataset Description
- **Homepage:** https://github.com/LC1332/Luotuo-Chinese-LLM
- **Repository:** https://github.com/LC1332/Luotuo-QA
- **Point of Contact:** qinyu_luo@163.com
### Dataset Summary
CoQA(Conversational Question Answering)数据集是一个用于对话式问答任务的大规模数据集,包含超过127,000个问题及其对应的答案。这些文本来自七个不同领域的段落:儿童故事、文学作品、中学和高中英语考试、新闻、维基百科、Reddit和Science。
CoQA数据集经过简单清洗,共有7012个story,我们在此基础上将整个数据集翻译成了中文并进行了增广,其中每个story中包含5个左右的问题,每个问题进行了5次增广。
由于此数据集是我们Luotuo-QA项目的一部分,我们将它叫做luotuo-QA-A,旨在促进对话式问答在中文语境下的研究和应用。
您可以在这里查看Luotuo-QA项目:https://github.com/LC1332/Luotuo-QA
此数据集适用于训练和评估中文对话式问答模型。有益于推动中文自然语言处理领域的发展,同时也为研究人员和开发者提供了一个基准,用于比较不同模型的性能和探索新的方法。
我们希望这一工作能够促进全球范围内中文语境对话式问答任务的研究和进一步的创新。
The CoQA (Conversational Question Answering) dataset is a large-scale dataset for conversational question answering tasks, consisting of over 127,000 questions and their corresponding answers. These texts are derived from passages in seven different domains: children's stories, literature, middle and high school English exams, news, Wikipedia, Reddit, and Science.
The CoQA dataset has undergone simple cleaning and consists of 7,012 stories. Building upon this dataset, we have translated the entire collection into Chinese and performed augmentation. Each story contains around 5 questions, and each question has been augmented 5 times.
As this dataset is part of our Luotuo-QA project, we call it luotuo-QA-A. It aims to facilitate research and applications of conversational question answering in the Chinese language context.
You can find our Luotuo-QA project here: https://github.com/LC1332/Luotuo-QA
This dataset is suitable for training and evaluating Chinese conversational question answering models. It contributes to the advancement of Chinese natural language processing and provides researchers and developers with a benchmark to compare the performance of different models and explore new approaches.
We hope that this work will foster research and further innovation in conversational question answering tasks in the Chinese language context on a global scale.
### Languages
CHINESE
### Data Instances
```
文本:长妈妈曾经讲给我一个故事听:先前,有一个读书人住在古庙里用功,晚间, 在院子里纳凉的时候,突然听到有人在叫他。答应着,四面看时,却见一个美女的 脸露在墙头上,向他一笑,隐去了。他很高兴;但竟给那走来夜谈的老和尚识破了 机关。说他脸上有些妖气,一定遇见“美女蛇”了;这是人首蛇身的怪物,能唤人 名,倘一答应,夜间便要来吃这人的肉的。他自然吓得要死,而那老和尚却道无妨 ,给他一个小盒子,说只要放在枕边,便可高枕而卧。他虽然照样办,却总是睡不 着,——当然睡不着的。到半夜,果然来了,沙沙沙!门外象是风雨声。他正抖作 一团时,却听得豁的一声,一道金光从枕边飞出,外面便什么声音也没有了,那金 光也就飞回来,敛在盒子里。后来呢?后来,老和尚说,这是飞蜈蚣,它能吸蛇的 脑髓,美女蛇就被它治死了。
原始问题为:谁遇到了美女蛇?
问题转义为:谁被美女蛇所困扰?
答案为:读书人
问题转义为:美女蛇袭击了谁?
答案为:读书人
原始问题为:谁杀了美女蛇
问题转义为:谁杀死了美女蛇
答案为:飞蜈蚣
```
### Licensing Information
我们的协议与CoQA数据集原始协议保持一致,请阅读以下内容。
CoQA数据集包含来自七个领域的段落。我们将其中五个领域的段落以以下许可证公开:
文学和维基百科段落遵循CC BY-SA 4.0许可证共享。
儿童故事选自MCTest,该数据集附带MSR-LA许可证。
中学/高中考试段落选自RACE,该数据集有自己的许可证。
新闻段落选自DeepMind CNN数据集,该数据集有Apache许可证。
Our license aligns with the original licenses of the CoQA dataset. Please refer to the following information.
CoQA contains passages from seven domains. It makes five of these public under the following licenses.
We did translation and augmentation on the CoQA dataset. Therefore, the generated part of the data still complies with the original agreement of CoQA:
Literature and Wikipedia passages are shared under CC BY-SA 4.0 license.
Children's stories are collected from MCTest which comes with MSR-LA license.
Middle/High school exam passages are collected from RACE which comes with its own license.
News passages are collected from the DeepMind CNN dataset which comes with Apache license.
### Citation Information
如果您在项目中使用了我们的模型、代码或者数据,请引用我们。
Please cite us if you use the data or code in this repo.
```bibtex
@misc{luotuo-qa,
  author = {Jianshen Liao and Ao Sun and Qinyu Luo and Hongsen Huang and Cheng Li},
title = {Luotuo-QA: Better Conversational Question Answering Model with Answer Completion},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/LC1332/Luotuo-QA}},
}
```
### Contributions
Thanks to @XXX, @XXXXXX, @XXXX, @XXXXXX, @XXXXXX, @XXX for adding this dataset. |
nourheshamshaheen/ICPR_pipeline3_big | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': area
'1': heatmap
'2': horizontal_bar
'3': horizontal_interval
'4': line
'5': manhattan
'6': map
'7': pie
'8': scatter
'9': scatter-line
'10': surface
'11': venn
'12': vertical_bar
'13': vertical_box
'14': vertical_interval
- name: pipeline_label
dtype:
class_label:
names:
'0': line
'1': other
'2': scatter
'3': scatter_line
'4': vertical_bar
- name: true_label
dtype: string
splits:
- name: train
num_bytes: 1073939947.25
num_examples: 20630
download_size: 979370224
dataset_size: 1073939947.25
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ICPR_pipeline3_big"
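Since the `label` and `pipeline_label` features in the YAML above are `ClassLabel`s, they come back as integers when the dataset is loaded. The lists below are transcribed from this card's YAML so the ids can be decoded without downloading anything; once loaded, `ds.features["label"].int2str(i)` does the same job.

```python
# Class names transcribed from this card's YAML (index == integer label).
LABEL_NAMES = [
    "area", "heatmap", "horizontal_bar", "horizontal_interval", "line",
    "manhattan", "map", "pie", "scatter", "scatter-line", "surface",
    "venn", "vertical_bar", "vertical_box", "vertical_interval",
]
PIPELINE_LABEL_NAMES = ["line", "other", "scatter", "scatter_line", "vertical_bar"]

def decode_label(i: int) -> str:
    """Turn an integer `label` value into its chart-type name."""
    return LABEL_NAMES[i]

print(decode_label(7))           # pie
print(PIPELINE_LABEL_NAMES[3])   # scatter_line
```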
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
gh1407/clustered_causal_pairs_3 | ---
dataset_info:
features:
- name: political_leaning
dtype: string
- name: cause_split
dtype: string
- name: effect_split
dtype: string
splits:
- name: train
num_bytes: 653278
num_examples: 3646
download_size: 143269
dataset_size: 653278
---
# Dataset Card for "clustered_causal_pairs_3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
liuyanchen1015/MULTI_VALUE_sst2_analytic_whose_relativizer | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: score
dtype: int64
splits:
- name: dev
num_bytes: 1512
num_examples: 8
- name: test
num_bytes: 1634
num_examples: 10
- name: train
num_bytes: 21675
num_examples: 140
download_size: 16990
dataset_size: 24821
---
# Dataset Card for "MULTI_VALUE_sst2_analytic_whose_relativizer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_TheBloke__OpenOrca-Platypus2-13B-GPTQ | ---
pretty_name: Evaluation run of TheBloke/OpenOrca-Platypus2-13B-GPTQ
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [TheBloke/OpenOrca-Platypus2-13B-GPTQ](https://huggingface.co/TheBloke/OpenOrca-Platypus2-13B-GPTQ)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
  \nThe dataset is composed of 64 configurations, each one corresponding to one of the\
  \ evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be\
  \ found as a specific split in each configuration, the split being named using the\
  \ timestamp of the run. The \"train\" split always points to the latest results.\n\
  \nAn additional configuration \"results\" stores all the aggregated results of the\
  \ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_TheBloke__OpenOrca-Platypus2-13B-GPTQ\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-09-16T19:40:18.805309](https://huggingface.co/datasets/open-llm-leaderboard/details_TheBloke__OpenOrca-Platypus2-13B-GPTQ/blob/main/results_2023-09-16T19-40-18.805309.json)(note\
  \ that there might be results for other tasks in the repos if successive evals didn't\
  \ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.00776006711409396,\n\
\ \"em_stderr\": 0.0008986296432392665,\n \"f1\": 0.09614198825503374,\n\
\ \"f1_stderr\": 0.001960302320596267,\n \"acc\": 0.43098320760328224,\n\
\ \"acc_stderr\": 0.009951484755350203\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.00776006711409396,\n \"em_stderr\": 0.0008986296432392665,\n\
\ \"f1\": 0.09614198825503374,\n \"f1_stderr\": 0.001960302320596267\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.09401061410159212,\n \
\ \"acc_stderr\": 0.008038819818872464\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7679558011049724,\n \"acc_stderr\": 0.01186414969182794\n\
\ }\n}\n```"
repo_url: https://huggingface.co/TheBloke/OpenOrca-Platypus2-13B-GPTQ
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|arc:challenge|25_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_09_16T19_40_18.805309
path:
- '**/details_harness|drop|3_2023-09-16T19-40-18.805309.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-09-16T19-40-18.805309.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_09_16T19_40_18.805309
path:
- '**/details_harness|gsm8k|5_2023-09-16T19-40-18.805309.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-09-16T19-40-18.805309.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hellaswag|10_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-31T16:41:28.579874.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-31T16:41:28.579874.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-31T16:41:28.579874.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_09_16T19_40_18.805309
path:
- '**/details_harness|winogrande|5_2023-09-16T19-40-18.805309.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-09-16T19-40-18.805309.parquet'
- config_name: results
data_files:
- split: 2023_08_31T16_41_28.579874
path:
- results_2023-08-31T16:41:28.579874.parquet
- split: 2023_09_16T19_40_18.805309
path:
- results_2023-09-16T19-40-18.805309.parquet
- split: latest
path:
- results_2023-09-16T19-40-18.805309.parquet
---
# Dataset Card for Evaluation run of TheBloke/OpenOrca-Platypus2-13B-GPTQ
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/TheBloke/OpenOrca-Platypus2-13B-GPTQ
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [TheBloke/OpenOrca-Platypus2-13B-GPTQ](https://huggingface.co/TheBloke/OpenOrca-Platypus2-13B-GPTQ) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the runs (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_TheBloke__OpenOrca-Platypus2-13B-GPTQ",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2023-09-16T19:40:18.805309](https://huggingface.co/datasets/open-llm-leaderboard/details_TheBloke__OpenOrca-Platypus2-13B-GPTQ/blob/main/results_2023-09-16T19-40-18.805309.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split of each eval):
```python
{
"all": {
"em": 0.00776006711409396,
"em_stderr": 0.0008986296432392665,
"f1": 0.09614198825503374,
"f1_stderr": 0.001960302320596267,
"acc": 0.43098320760328224,
"acc_stderr": 0.009951484755350203
},
"harness|drop|3": {
"em": 0.00776006711409396,
"em_stderr": 0.0008986296432392665,
"f1": 0.09614198825503374,
"f1_stderr": 0.001960302320596267
},
"harness|gsm8k|5": {
"acc": 0.09401061410159212,
"acc_stderr": 0.008038819818872464
},
"harness|winogrande|5": {
"acc": 0.7679558011049724,
"acc_stderr": 0.01186414969182794
}
}
```
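As a quick sanity check, the aggregate "all" accuracy above appears to be the unweighted mean of the per-task accuracies (an assumption based on the numbers shown, not on leaderboard documentation):

```python
# Per-task accuracies copied from the latest-results JSON above.
task_accs = {
    "harness|gsm8k|5": 0.09401061410159212,
    "harness|winogrande|5": 0.7679558011049724,
}

# The unweighted mean over tasks reproduces the reported "all" accuracy.
mean_acc = sum(task_accs.values()) / len(task_accs)
print(mean_acc)  # ~0.430983, matching "acc" under "all"
```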
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
TrainingDataPro/silicone-masks-biometric-attacks | ---
language:
- en
license: cc-by-nc-nd-4.0
task_categories:
- video-classification
tags:
- code
- finance
dataset_info:
features:
- name: id
dtype: int32
- name: name
dtype: string
- name: video
dtype: string
- name: label
dtype:
class_label:
names:
'0': real
'1': silicone
'2': mask
splits:
- name: train
num_bytes: 2394
num_examples: 62
download_size: 156861504
dataset_size: 2394
---
# Silicone Masks Biometric Attacks
The dataset consists of videos of individuals and attacks with printed 2D masks and silicone masks. Videos are filmed in different lighting conditions (*in a dark room, daylight, a light room and nightlight*). The dataset includes videos of people with different attributes (*glasses, masks, hats, hoods, wigs, and mustaches for men*).
### Types of videos in the dataset:
- **real** - real video of the person
- **outline** - video of the person wearing a printed 2D mask
- **silicone** - video of the person wearing a silicone mask

## Full version of the dataset includes 5792 videos
### Types and number of videos in the full dataset:
- **2885** real videos of people
- **2859** videos of people wearing a silicone mask
- **48** videos of people wearing a 2D mask.
### Gender of people in the dataset:
- women: **2685**
- men: **3107**
The dataset serves as a valuable resource for computer vision, anti-spoofing tasks, video analysis, and security systems. It allows for the development of algorithms and models that can effectively detect attacks.
Studying the dataset may lead to the development of improved *security systems, surveillance technologies, and solutions to mitigate the risks associated with masked individuals carrying out attacks*.
# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=silicone-masks-biometric-attacks) to discuss your requirements, learn about the price and buy the dataset.
# Content
- **real** - contains real videos of people,
- **mask** - contains videos of people wearing a printed 2D mask,
- **silicone** - contains videos of people wearing a silicone mask,
- **dataset_info.csv** - includes information about the videos in the dataset
### File with the extension .csv
- **video**: link to the video,
- **type**: type of the video
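As a minimal sketch of working with that file, the .csv can be parsed with the Python standard library (the rows below are illustrative stand-ins, not real data):

```python
import csv
import io
from collections import Counter

# Illustrative stand-in for dataset_info.csv with the two columns
# described above ("video" and "type"); the rows are made up.
sample = io.StringIO(
    "video,type\n"
    "https://example.com/v1.mp4,real\n"
    "https://example.com/v2.mp4,silicone\n"
    "https://example.com/v3.mp4,mask\n"
)

# Count how many videos of each type the file lists.
counts = Counter(row["type"] for row in csv.DictReader(sample))
```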
# Attacks might be collected in accordance with your requirements.
## [TrainingData](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=silicone-masks-biometric-attacks) provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/trainingdata-pro** |
shachardon/midjourney-threads | ---
task_categories:
- text-to-image
language:
- en
pretty_name: Midjourney-Threads
size_categories:
- 100K<n<1M
configs:
- config_name: default
data_files:
- split: train
path:
- "threads_0.csv"
- "threads_20000.csv"
- "threads_40000.csv"
- "threads_60000.csv"
- "threads_80000.csv"
- "threads_100000.csv"
- "threads_120000.csv"
- "threads_140000.csv"
- "threads_160000.csv"
---
# Dataset Card for Midjourney-Threads 🧵💬
<!-- Provide a quick summary of the dataset. -->
This dataset contains user prompts from the Midjourney Discord channel, organized into "threads of interaction".
Each thread contains a user's attempts to create one target image.
The dataset was introduced as part of the paper: [Human Learning by Model Feedback: The Dynamics of Iterative Prompting with Midjourney][ourpaper].
[ourpaper]: https://aclanthology.org/2023.emnlp-main.253/ "markdown our paper"
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Repository:** https://github.com/shachardon/Mid-Journey-to-alignment
- **Paper:** https://aclanthology.org/2023.emnlp-main.253/
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
Main Columns:
- 'text' - the original prompt
- 'args' - predefined parameters (such as the aspect ratio, chaos and [more][myexample])
- 'channel_id' - the discord channel
- 'userid' - an anonymous user id
- 'timestamp' - a timestamp of the prompt creation
- 'label' - True if an image that was generated based on that prompt was upscaled, otherwise False.
- 'id' - unique id of the prompt
- 'url_png' - link to the generated images (a 4-grid version)
- 'main_content' - prefix of the prompt, without trailing magic-words
- 'concreteness' - concreteness score, based on [this paper][concpaper]
- 'word_len' - the number of words
- 'repeat_words' - the occurrences of each word that appears more than once in the prompt, excluding stop words.
- 'reapeat_words_ratio' - repeat_words / word_len
- 'perplexity' - the perplexity GPT-2 assigns to each prompt.
- 'caption_0-3' - captions that were generated by the BLIP-2 model, with the 4 created images as its inputs.
- 'phase' - train/test split, as was used to train image/text classifiers
- 'magic_ratio' - the percentage of words that were recognized as magic words in the prompt
- 'thread_id' - the id of the thread
- 'depth' - the max depth of a constituency parse tree of the prompt.
- 'num_sent_parser' - the number of sentences in the prompt.
- 'num_sent_parser_ratio' - num_sent_parser / word_len
- 'words_per_sent' - word_len / num_sent_parser
[myexample]: https://docs.midjourney.com/docs/parameter-list "markdown more"
[concpaper]: https://link.springer.com/article/10.3758/s13428-013-0403-5 "markdown this paper"
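To make the word-repetition columns concrete, here is a rough re-implementation of 'word_len', 'repeat_words', and the resulting ratio; the tokenization and tiny stop-word list are simplified stand-ins, not the exact pipeline from the paper:

```python
import re
from collections import Counter

# Toy stop-word list; the actual pipeline likely uses a standard one (e.g. NLTK's).
STOP_WORDS = {"a", "an", "the", "of", "in", "and"}

def repeat_word_features(prompt: str):
    # Lowercase and keep alphabetic tokens only (simplified tokenization).
    words = re.findall(r"[a-z']+", prompt.lower())
    word_len = len(words)
    counts = Counter(w for w in words if w not in STOP_WORDS)
    # Total occurrences of every non-stop word appearing more than once.
    repeat_words = sum(c for c in counts.values() if c > 1)
    return word_len, repeat_words, repeat_words / word_len

print(repeat_word_features("a castle in the clouds, clouds everywhere"))
# (7, 2, 0.2857142857142857)
```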
## Dataset Creation
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
We construct the dataset by scraping user-generated prompts from the Midjourney Discord server.
The server contains channels in which a user can type a prompt and arguments, and the Midjourney bot replies with 4 generated images combined into a single grid. Then, if the user is satisfied with one of the 4 images, they can send an 'upscale' command to the bot to get an upscaled version of the desired image.
We randomly choose one of the 'newbies' channels, where both new and experienced users experiment with general-domain prompts. We collect 693,528 prompts (from 23 January to 1 March 2023), together with their matching images and meta-data such as timestamps and user ids (which we anonymize).
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
We split the prompts into threads automatically, see the paper for more details.
In addition, we extract features (perplexity, sentence length, and more).
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
We fully anonymize the data by removing user names and other user-specific meta-data. If you recognize your prompts here and want to remove them, please send us an [email](mailto:shachar.don-yehiya@mail.huji.ac.il).
The Midjourney Discord is an open community that allows others to use images and prompts whenever they are posted in a public setting.
Paying users do own all assets they create, and therefore we do not include the image files in our dataset, but only links to them.
### Recommendations, Risks, and Limitations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
We split the prompts into threads automatically, and therefore there are some mistakes. For more about our annotation method, please see the paper.
Our manual sample did not find any offensive content in the prompts.
## Citation
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```
@inproceedings{don-yehiya-etal-2023-human,
title = "Human Learning by Model Feedback: The Dynamics of Iterative Prompting with Midjourney",
author = "Don-Yehiya, Shachar and
Choshen, Leshem and
Abend, Omri",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.253",
pages = "4146--4161",
abstract = "Generating images with a Text-to-Image model often requires multiple trials, where human users iteratively update their prompt based on feedback, namely the output image. Taking inspiration from cognitive work on reference games and dialogue alignment, this paper analyzes the dynamics of the user prompts along such iterations. We compile a dataset of iterative interactions of human users with Midjourney. Our analysis then reveals that prompts predictably converge toward specific traits along these iterations. We further study whether this convergence is due to human users, realizing they missed important details, or due to adaptation to the model{'}s {``}preferences{''}, producing better images for a specific language style. We show initial evidence that both possibilities are at play. The possibility that users adapt to the model{'}s preference raises concerns about reusing user data for further training. The prompts may be biased towards the preferences of a specific model, rather than align with human intentions and natural manner of expression.",
}
```
|
shunk031/JGLUE | ---
annotations_creators:
- crowdsourced
language:
- ja
language_creators:
- crowdsourced
- found
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: JGLUE
size_categories: []
source_datasets:
- original
tags:
- MARC
- CoLA
- STS
- NLI
- SQuAD
- CommonsenseQA
task_categories:
- multiple-choice
- question-answering
- sentence-similarity
- text-classification
task_ids:
- multiple-choice-qa
- open-domain-qa
- multi-class-classification
- sentiment-classification
---
# Dataset Card for JGLUE
[](https://github.com/shunk031/huggingface-datasets_JGLUE/actions/workflows/ci.yaml)
[](https://aclanthology.org/2022.lrec-1.317)
This dataset loading script is developed on [GitHub](https://github.com/shunk031/huggingface-datasets_JGLUE).
Please feel free to open an [issue](https://github.com/shunk031/huggingface-datasets_JGLUE/issues/new/choose) or [pull request](https://github.com/shunk031/huggingface-datasets_JGLUE/pulls).
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/yahoojapan/JGLUE
- **Repository:** https://github.com/shunk031/huggingface-datasets_JGLUE
### Dataset Summary
From [JGLUE's README.md](https://github.com/yahoojapan/JGLUE#jglue-japanese-general-language-understanding-evaluation):
> JGLUE, Japanese General Language Understanding Evaluation, is built to measure the general NLU ability in Japanese. JGLUE has been constructed from scratch without translation. We hope that JGLUE will facilitate NLU research in Japanese.
> JGLUE has been constructed by a joint research project of Yahoo Japan Corporation and Kawahara Lab at Waseda University.
### Supported Tasks and Leaderboards
From [JGLUE's README.md](https://github.com/yahoojapan/JGLUE#tasksdatasets):
> JGLUE consists of the tasks of text classification, sentence pair classification, and QA. Each task consists of multiple datasets.
#### Supported Tasks
##### MARC-ja
From [JGLUE's README.md](https://github.com/yahoojapan/JGLUE#marc-ja):
> MARC-ja is a dataset of the text classification task. This dataset is based on the Japanese portion of [Multilingual Amazon Reviews Corpus (MARC)](https://docs.opendata.aws/amazon-reviews-ml/readme.html) ([Keung+, 2020](https://aclanthology.org/2020.emnlp-main.369/)).
##### JCoLA
From [JCoLA's README.md](https://github.com/osekilab/JCoLA#jcola-japanese-corpus-of-linguistic-acceptability)
> JCoLA (Japanese Corpus of Linguistic Acceptability) is a novel dataset for targeted syntactic evaluations of language models in Japanese, which consists of 10,020 sentences with acceptability judgments by linguists. The sentences are manually extracted from linguistics journals, handbooks and textbooks. JCoLA is included in [JGLUE benchmark](https://github.com/yahoojapan/JGLUE) (Kurihara et al., 2022).
##### JSTS
From [JGLUE's README.md](https://github.com/yahoojapan/JGLUE#jsts):
> JSTS is a Japanese version of the STS (Semantic Textual Similarity) dataset. STS is a task to estimate the semantic similarity of a sentence pair. The sentences in JSTS and JNLI (described below) are extracted from the Japanese version of the MS COCO Caption Dataset, [the YJ Captions Dataset](https://github.com/yahoojapan/YJCaptions) ([Miyazaki and Shimizu, 2016](https://aclanthology.org/P16-1168/)).
##### JNLI
From [JGLUE's README.md](https://github.com/yahoojapan/JGLUE#jnli):
> JNLI is a Japanese version of the NLI (Natural Language Inference) dataset. NLI is a task to recognize the inference relation that a premise sentence has to a hypothesis sentence. The inference relations are entailment, contradiction, and neutral.
##### JSQuAD
From [JGLUE's README.md](https://github.com/yahoojapan/JGLUE#jsquad):
> JSQuAD is a Japanese version of [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) ([Rajpurkar+, 2018](https://aclanthology.org/P18-2124/)), one of the datasets of reading comprehension. Each instance in the dataset consists of a question regarding a given context (Wikipedia article) and its answer. JSQuAD is based on SQuAD 1.1 (there are no unanswerable questions). We used [the Japanese Wikipedia dump](https://dumps.wikimedia.org/jawiki/) as of 20211101.
##### JCommonsenseQA
From [JGLUE's README.md](https://github.com/yahoojapan/JGLUE#jcommonsenseqa):
> JCommonsenseQA is a Japanese version of [CommonsenseQA](https://www.tau-nlp.org/commonsenseqa) ([Talmor+, 2019](https://aclanthology.org/N19-1421/)), which is a multiple-choice question answering dataset that requires commonsense reasoning ability. It is built using crowdsourcing with seeds extracted from the knowledge base [ConceptNet](https://conceptnet.io/).
#### Leaderboard
From [JGLUE's README.md](https://github.com/yahoojapan/JGLUE#leaderboard):
> A leaderboard will be made public soon. The test set will be released at that time.
### Languages
The language data in JGLUE is in Japanese ([BCP-47 ja-JP](https://www.rfc-editor.org/info/bcp47)).
## Dataset Structure
### Data Instances
To load a specific configuration, pass its name via the `name` argument of `load_dataset`:
#### MARC-ja
```python
from datasets import load_dataset
dataset = load_dataset("shunk031/JGLUE", name="MARC-ja")
print(dataset)
# DatasetDict({
# train: Dataset({
# features: ['sentence', 'label', 'review_id'],
# num_rows: 187528
# })
# validation: Dataset({
# features: ['sentence', 'label', 'review_id'],
# num_rows: 5654
# })
# })
```
#### JCoLA
```python
from datasets import load_dataset
dataset = load_dataset("shunk031/JGLUE", name="JCoLA")
print(dataset)
# DatasetDict({
# train: Dataset({
# features: ['uid', 'source', 'label', 'diacritic', 'sentence', 'original', 'translation', 'gloss', 'simple', 'linguistic_phenomenon'],
# num_rows: 6919
# })
# validation: Dataset({
# features: ['uid', 'source', 'label', 'diacritic', 'sentence', 'original', 'translation', 'gloss', 'simple', 'linguistic_phenomenon'],
# num_rows: 865
# })
# validation_out_of_domain: Dataset({
# features: ['uid', 'source', 'label', 'diacritic', 'sentence', 'original', 'translation', 'gloss', 'simple', 'linguistic_phenomenon'],
# num_rows: 685
# })
# validation_out_of_domain_annotated: Dataset({
# features: ['uid', 'source', 'label', 'diacritic', 'sentence', 'original', 'translation', 'gloss', 'simple', 'linguistic_phenomenon'],
# num_rows: 685
# })
# })
```
An example of the JCoLA dataset (validation - out of domain annotated) looks as follows:
```json
{
"uid": 9109,
"source": "Asano_and_Ura_2010",
"label": 1,
"diacritic": "g",
"sentence": "太郎のゴミの捨て方について話した。",
"original": "太郎のゴミの捨て方",
"translation": "‘The way (for Taro) to throw out garbage’",
"gloss": true,
"linguistic_phenomenon": {
"argument_structure": true,
"binding": false,
"control_raising": false,
"ellipsis": false,
"filler_gap": false,
"island_effects": false,
"morphology": false,
"nominal_structure": false,
"negative_polarity_concord_items": false,
"quantifier": false,
"verbal_agreement": false,
"simple": false
}
}
```
#### JSTS
```python
from datasets import load_dataset
dataset = load_dataset("shunk031/JGLUE", name="JSTS")
print(dataset)
# DatasetDict({
# train: Dataset({
# features: ['sentence_pair_id', 'yjcaptions_id', 'sentence1', 'sentence2', 'label'],
# num_rows: 12451
# })
# validation: Dataset({
# features: ['sentence_pair_id', 'yjcaptions_id', 'sentence1', 'sentence2', 'label'],
# num_rows: 1457
# })
# })
```
An example of the JSTS dataset looks as follows:
```json
{
"sentence_pair_id": "691",
"yjcaptions_id": "127202-129817-129818",
"sentence1": "街中の道路を大きなバスが走っています。 (A big bus is running on the road in the city.)",
"sentence2": "道路を大きなバスが走っています。 (There is a big bus running on the road.)",
"label": 4.4
}
```
#### JNLI
```python
from datasets import load_dataset
dataset = load_dataset("shunk031/JGLUE", name="JNLI")
print(dataset)
# DatasetDict({
# train: Dataset({
# features: ['sentence_pair_id', 'yjcaptions_id', 'sentence1', 'sentence2', 'label'],
# num_rows: 20073
# })
# validation: Dataset({
# features: ['sentence_pair_id', 'yjcaptions_id', 'sentence1', 'sentence2', 'label'],
# num_rows: 2434
# })
# })
```
An example of the JNLI dataset looks as follows:
```json
{
"sentence_pair_id": "1157",
"yjcaptions_id": "127202-129817-129818",
"sentence1": "街中の道路を大きなバスが走っています。 (A big bus is running on the road in the city.)",
"sentence2": "道路を大きなバスが走っています。 (There is a big bus running on the road.)",
"label": "entailment"
}
```
#### JSQuAD
```python
from datasets import load_dataset
dataset = load_dataset("shunk031/JGLUE", name="JSQuAD")
print(dataset)
# DatasetDict({
# train: Dataset({
# features: ['id', 'title', 'context', 'question', 'answers', 'is_impossible'],
# num_rows: 62859
# })
# validation: Dataset({
# features: ['id', 'title', 'context', 'question', 'answers', 'is_impossible'],
# num_rows: 4442
# })
# })
```
An example of the JSQuAD dataset looks as follows:
```json
{
"id": "a1531320p0q0",
"title": "東海道新幹線",
"context": "東海道新幹線 [SEP] 1987 年(昭和 62 年)4 月 1 日の国鉄分割民営化により、JR 東海が運営を継承した。西日本旅客鉄道(JR 西日本)が継承した山陽新幹線とは相互乗り入れが行われており、東海道新幹線区間のみで運転される列車にも JR 西日本所有の車両が使用されることがある。2020 年(令和 2 年)3 月現在、東京駅 - 新大阪駅間の所要時間は最速 2 時間 21 分、最高速度 285 km/h で運行されている。",
"question": "2020 年(令和 2 年)3 月現在、東京駅 - 新大阪駅間の最高速度はどのくらいか。",
"answers": {
"text": ["285 km/h"],
"answer_start": [182]
},
"is_impossible": false
}
```
#### JCommonsenseQA
```python
from datasets import load_dataset
dataset = load_dataset("shunk031/JGLUE", name="JCommonsenseQA")
print(dataset)
# DatasetDict({
# train: Dataset({
# features: ['q_id', 'question', 'choice0', 'choice1', 'choice2', 'choice3', 'choice4', 'label'],
# num_rows: 8939
# })
# validation: Dataset({
# features: ['q_id', 'question', 'choice0', 'choice1', 'choice2', 'choice3', 'choice4', 'label'],
# num_rows: 1119
# })
# })
```
An example of the JCommonsenseQA dataset looks as follows:
```json
{
"q_id": 3016,
"question": "会社の最高責任者を何というか? (What do you call the chief executive officer of a company?)",
"choice0": "社長 (president)",
"choice1": "教師 (teacher)",
"choice2": "部長 (manager)",
"choice3": "バイト (part-time worker)",
"choice4": "部下 (subordinate)",
"label": 0
}
```
### Data Fields
#### MARC-ja
- `sentence`: review text
- `label`: sentiment label (`positive` or `negative`)
- `review_id`: ID of the review
#### JSTS
- `sentence_pair_id`: ID of the sentence pair
- `yjcaptions_id`: sentence ids in yjcaptions (explained below)
- `sentence1`: first sentence
- `sentence2`: second sentence
- `label`: sentence similarity: 5 (equivalent meaning) - 0 (completely different meaning)
##### Explanation for `yjcaptions_id`
From [JGLUE's README.md](https://github.com/yahoojapan/JGLUE#explanation-for-yjcaptions_id), there are the following two cases:
1. sentence pairs in one image: `(image id)-(sentence1 id)-(sentence2 id)`
- e.g., 723-844-847
- a sentence id starting with "g" means a sentence generated by a crowdworker (e.g., 69501-75698-g103): only for JNLI
2. sentence pairs in two images: `(image id of sentence1)_(image id of sentence2)-(sentence1 id)-(sentence2 id)`
- e.g., 91337_217583-96105-91680
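The two `yjcaptions_id` formats above can be split apart with a small helper. This is an illustrative sketch, not part of JGLUE itself:

```python
def parse_yjcaptions_id(yid: str) -> dict:
    """Split a yjcaptions_id into its image and sentence ids.

    Handles both formats: same-image pairs ("723-844-847") and two-image
    pairs ("91337_217583-96105-91680"). Sentence ids starting with "g"
    (crowdworker-generated, JNLI only) pass through unchanged.
    """
    image_part, sent1, sent2 = yid.split("-")
    if "_" in image_part:
        img1, img2 = image_part.split("_")  # two different images
    else:
        img1 = img2 = image_part            # both sentences from one image
    return {"image1": img1, "image2": img2, "sentence1": sent1, "sentence2": sent2}

print(parse_yjcaptions_id("91337_217583-96105-91680"))
print(parse_yjcaptions_id("723-844-847"))
```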
#### JCoLA
From [JCoLA's README.md](https://github.com/osekilab/JCoLA#data-description) and [JCoLA's paper](https://arxiv.org/abs/2309.12676)
- `uid`: unique id of the sentence
- `source`: author and the year of publication of the source article
- `label`: acceptability judgement label (0 for unacceptable, 1 for acceptable)
- `diacritic`: acceptability judgement as originally notated in the source article
- `sentence`: sentence (modified by the author if needed)
- `original`: original sentence as presented in the source article
- `translation`: English translation of the sentence as presented in the source article (if any)
- `gloss`: gloss of the sentence as presented in the source article (if any)
- `linguistic_phenomenon`
- `argument_structure`: acceptability judgements based on the order of arguments and case marking
- `binding`: acceptability judgements based on the binding of noun phrases
- `control_raising`: acceptability judgements based on predicates that are categorized as control or raising
- `ellipsis`: acceptability judgements based on the possibility of omitting elements in the sentences
- `filler_gap`: acceptability judgements based on the dependency between the moved element and the gap
  - `island_effects`: acceptability judgements based on the restrictions on filler-gap dependencies such as wh-movements
- `morphology`: acceptability judgements based on the morphology
- `nominal_structure`: acceptability judgements based on the internal structure of noun phrases
- `negative_polarity_concord_items`: acceptability judgements based on the restrictions on where negative polarity/concord items (NPIs/NCIs) can appear
  - `quantifier`: acceptability judgements based on the distribution of quantifiers such as floating quantifiers
- `verbal_agreement`: acceptability judgements based on the dependency between subjects and verbs
- `simple`: acceptability judgements that do not have marked syntactic structures
#### JNLI
- `sentence_pair_id`: ID of the sentence pair
- `yjcaptions_id`: sentence ids in the yjcaptions
- `sentence1`: premise sentence
- `sentence2`: hypothesis sentence
- `label`: inference relation
#### JSQuAD
- `title`: title of a Wikipedia article
- `paragraphs`: a set of paragraphs
- `qas`: a set of pairs of a question and its answer
- `question`: question
- `id`: id of a question
- `answers`: a set of answers
- `text`: answer text
- `answer_start`: start position (character index)
- `is_impossible`: all the values are false
- `context`: a concatenation of the title and paragraph
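`answer_start` is a character offset into `context`, so the answer string can be recovered by slicing. A quick sanity check, using a toy example rather than real JSQuAD data:

```python
def answer_span(example: dict) -> str:
    """Recover the answer string from context via its character offset."""
    start = example["answers"]["answer_start"][0]
    text = example["answers"]["text"][0]
    span = example["context"][start:start + len(text)]
    assert span == text, "answer_start should index the answer within context"
    return span

# Toy example mimicking the JSQuAD layout (title + "[SEP]" + paragraph).
toy = {
    "context": "Title [SEP] The train runs at 285 km/h on this line.",
    "answers": {"text": ["285 km/h"], "answer_start": [30]},
}
print(answer_span(toy))
```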
#### JCommonsenseQA
- `q_id`: ID of the question
- `question`: question
- `choice{0..4}`: choice
- `label`: correct choice id
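The integer `label` indexes into the `choice{0..4}` columns; a small helper (illustrative only) recovers the answer text:

```python
def answer_text(example: dict) -> str:
    """Map the integer label to the text of the correct choice column."""
    return example[f"choice{example['label']}"]

# Example instance from the card (English glosses dropped for brevity).
example = {
    "q_id": 3016,
    "question": "会社の最高責任者を何というか?",
    "choice0": "社長", "choice1": "教師", "choice2": "部長",
    "choice3": "バイト", "choice4": "部下",
    "label": 0,
}
print(answer_text(example))  # 社長 (president)
```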
### Data Splits
From [JGLUE's README.md](https://github.com/yahoojapan/JGLUE/blob/main/README.md#tasksdatasets):
> Only train/dev sets are available now, and the test set will be available after the leaderboard is made public.
From [JCoLA's paper](https://arxiv.org/abs/2309.12676):
> The in-domain data is split into training data (6,919 instances), development data (865 instances), and test data (865 instances). On the other hand, the out-of-domain data is only used for evaluation, and divided into development data (685 instances) and test data (686 instances).
| Task | Dataset | Train | Dev | Test |
|------------------------------|----------------|--------:|------:|------:|
| Text Classification | MARC-ja | 187,528 | 5,654 | 5,639 |
| | JCoLA | 6,919 | 865† / 685‡ | 865† / 685‡ |
| Sentence Pair Classification | JSTS | 12,451 | 1,457 | 1,589 |
| | JNLI | 20,073 | 2,434 | 2,508 |
| Question Answering | JSQuAD | 62,859 | 4,442 | 4,420 |
| | JCommonsenseQA | 8,939 | 1,119 | 1,118 |
> JCoLA: † in domain. ‡ out of domain.
## Dataset Creation
### Curation Rationale
From [JGLUE's paper](https://aclanthology.org/2022.lrec-1.317/):
> JGLUE is designed to cover a wide range of GLUE and SuperGLUE tasks and consists of three kinds of tasks: text classification, sentence pair classification, and question answering.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
- The source language producers are users of Amazon (MARC-ja), crowd-workers of [Yahoo! Crowdsourcing](https://crowdsourcing.yahoo.co.jp/) (JSTS, JNLI and JCommonsenseQA), writers of the Japanese Wikipedia (JSQuAD), and crowd-workers of [Lancers](https://www.lancers.jp/).
### Annotations
#### Annotation process
##### MARC-ja
From [JGLUE's paper](https://aclanthology.org/2022.lrec-1.317/):
> As one of the text classification datasets, we build a dataset based on the Multilingual Amazon Reviews Corpus (MARC) (Keung et al., 2020). MARC is a multilingual corpus of product reviews with 5-level star ratings (1-5) on the Amazon shopping site. This corpus covers six languages, including English and Japanese. For JGLUE, we use the Japanese part of MARC and to make it easy for both humans and computers to judge a class label, we cast the text classification task as a binary classification task, where 1- and 2-star ratings are converted to “negative”, and 4 and 5 are converted to “positive”. We do not use reviews with a 3-star rating.
> One of the problems with MARC is that it sometimes contains data where the rating diverges from the review text. This happens, for example, when a review with positive content is given a rating of 1 or 2. These data degrade the quality of our dataset. To improve the quality of the dev/test instances used for evaluation, we crowdsource a positive/negative judgment task for approximately 12,000 reviews. We adopt only reviews with the same votes from 7 or more out of 10 workers and assign a label of the maximum votes to these reviews. We divide the resulting reviews into dev/test data.
> We obtained 5,654 and 5,639 instances for the dev and test data, respectively, through the above procedure. For the training data, we extracted 187,528 instances directly from MARC without performing the cleaning procedure because of the large number of training instances. The statistics of MARC-ja are listed in Table 2. For the evaluation metric for MARC-ja, we use accuracy because it is a binary classification task of texts.
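The star-to-label conversion described above can be sketched as follows (illustrative only; MARC itself stores the raw 1-5 star ratings):

```python
def star_to_label(star_rating):
    """Binarize a 1-5 star rating as in MARC-ja; 3-star reviews are dropped."""
    if star_rating in (1, 2):
        return "negative"
    if star_rating in (4, 5):
        return "positive"
    return None  # 3-star reviews are excluded from the dataset

reviews = [(1, "bad"), (3, "meh"), (5, "great")]
labeled = [(text, star_to_label(star))
           for star, text in reviews
           if star_to_label(star) is not None]
print(labeled)
```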
##### JCoLA
From [JCoLA's paper](https://arxiv.org/abs/2309.12676):
> ### 3 JCoLA
> In this study, we introduce JCoLA (Japanese Corpus of Linguistic Acceptability), which will be the first large-scale acceptability judgment task dataset focusing on Japanese. JCoLA consists of sentences from textbooks and handbooks on Japanese syntax, as well as from journal articles on Japanese syntax that are published in JEAL (Journal of East Asian Linguistics), one of the prestigious journals in theoretical linguistics.
> #### 3.1 Data Collection
> Sentences in JCoLA were collected from prominent textbooks and handbooks focusing on Japanese syntax. In addition to the main text, example sentences included in the footnotes were also considered for collection. We also collected acceptability judgments from journal articles on Japanese syntax published in JEAL (Journal of East Asian Linguistics): one of the prestigious journals in the-oretical linguistics. Specifically, we examined all the articles published in JEAL between 2006 and 2015 (133 papers in total), and extracted 2,252 acceptability judgments from 26 papers on Japanese syntax (Table 2). Acceptability judgments include sentences in appendices and footnotes, but not sentences presented for analyses of syntactic structures (e.g. sentences with brackets to show their syntactic structures). As a result, a total of 11,984 example. sentences were collected. Using this as a basis, JCoLA was constructed through the methodology explained in the following sections.
##### JSTS and JNLI
From [JGLUE's paper](https://aclanthology.org/2022.lrec-1.317/):
> For the sentence pair classification datasets, we construct a semantic textual similarity (STS) dataset, JSTS, and a natural language inference (NLI) dataset, JNLI.
> ### Overview
> STS is a task of estimating the semantic similarity of a sentence pair. Gold similarity is usually assigned as an average of the integer values 0 (completely different meaning) to 5 (equivalent meaning) assigned by multiple workers through crowdsourcing.
> NLI is a task of recognizing the inference relation that a premise sentence has to a hypothesis sentence. Inference relations are generally defined by three labels: “entailment”, “contradiction”, and “neutral”. Gold inference relations are often assigned by majority voting after collecting answers from multiple workers through crowdsourcing.
> For the STS and NLI tasks, STS-B (Cer et al., 2017) and MultiNLI (Williams et al., 2018) are included in GLUE, respectively. As Japanese datasets, JSNLI (Yoshikoshi et al., 2020) is a machine translated dataset of the NLI dataset SNLI (Stanford NLI), and JSICK (Yanaka and Mineshima, 2021) is a human translated dataset of the STS/NLI dataset SICK (Marelli et al., 2014). As mentioned in Section 1, these have problems originating from automatic/manual translations. To solve this problem, we construct STS/NLI datasets in Japanese from scratch. We basically extract sentence pairs in JSTS and JNLI from the Japanese version of the MS COCO Caption Dataset (Chen et al., 2015), the YJ Captions Dataset (Miyazaki and Shimizu, 2016). Most of the sentence pairs in JSTS and JNLI overlap, allowing us to analyze the relationship between similarities and inference relations for the same sentence pairs like SICK and JSICK.
> The similarity value in JSTS is assigned a real number from 0 to 5 as in STS-B. The inference relation in JNLI is assigned from the above three labels as in SNLI and MultiNLI. The definitions of the inference relations are also based on SNLI.
> ### Method of Construction
> Our construction flow for JSTS and JNLI is shown in Figure 1. Basically, two captions for the same image of YJ Captions are used as sentence pairs. For these sentence pairs, similarities and NLI relations of entailment and neutral are obtained by crowdsourcing. However, it is difficult to collect sentence pairs with low similarity and contradiction relations from captions for the same image. To solve this problem, we collect sentence pairs with low similarity from captions for different images. We collect contradiction relations by asking workers to write contradictory sentences for a given caption.
> The detailed construction procedure for JSTS and JNLI is described below.
> 1. We crowdsource an STS task using two captions for the same image from YJ Captions. We ask five workers to answer the similarity between two captions and take the mean value as the gold similarity. We delete sentence pairs with a large variance in the answers because such pairs have poor answer quality. We performed this task on 16,000 sentence pairs and deleted sentence pairs with a similarity variance of 1.0 or higher, resulting in the collection of 10,236 sentence pairs with gold similarity. We refer to this collected data as JSTS-A.
> 2. To collect sentence pairs with low similarity, we crowdsource the same STS task as Step 1 using sentence pairs of captions for different images. We conducted this task on 4,000 sentence pairs and collected 2,970 sentence pairs with gold similarity. We refer to this collected data as JSTS-B.
> 3. For JSTS-A, we crowdsource an NLI task. Since inference relations are directional, we obtain inference relations in both directions for sentence pairs. As mentioned earlier, it is difficult to collect instances of contradiction from JSTS-A, which was collected from the captions of the same images, and thus we collect instances of entailment and neutral in this step. We collect inference relation answers from 10 workers. If six or more people give the same answer, we adopt it as the gold label if it is entailment or neutral. To obtain inference relations in both directions for JSTS-A, we performed this task on 20,472 sentence pairs, twice as many as JSTS-A. As a result, we collected inference relations for 17,501 sentence pairs. We refer to this collected data as JNLI-A. We do not use JSTS-B for the NLI task because it is difficult to define and determine the inference relations between captions of different images.
> 4. To collect NLI instances of contradiction, we crowdsource a task of writing four contradictory sentences for each caption in YJ Captions. From the written sentences, we remove sentence pairs with an edit distance of 0.75 or higher to remove low-quality sentences, such as short sentences and sentences with low relevance to the original sentence. Furthermore, we perform a one-way NLI task with 10 workers to verify whether the created sentence pairs are contradictory. Only the sentence pairs answered as contradiction by at least six workers are adopted. Finally, since the contradiction relation has no direction, we automatically assign contradiction in the opposite direction of the adopted sentence pairs. Using 1,800 captions, we acquired 7,200 sentence pairs, from which we collected 3,779 sentence pairs to which we assigned the one-way contradiction relation. By automatically assigning the contradiction relation in the opposite direction, we doubled the number of instances to 7,558. We refer to this collected data as JNLI-C.
> 5. For the 3,779 sentence pairs collected in Step 4, we crowdsource an STS task, assigning similarity and filtering in the same way as in Steps 1 and 2. In this way, we collected 2,303 sentence pairs with gold similarity from 3,779 pairs. We refer to this collected data as JSTS-C.
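Steps 1 and 2 above aggregate five worker ratings into a gold similarity and drop high-variance pairs. A minimal sketch; whether the paper used population or sample variance is an assumption here:

```python
from statistics import mean, pvariance

def gold_similarity(worker_scores, max_variance=1.0):
    """Aggregate worker similarity ratings into a gold score, or drop the pair.

    Pairs whose ratings have variance >= 1.0 are discarded, per the paper.
    Population variance (pvariance) is used here as an assumption.
    """
    if pvariance(worker_scores) >= max_variance:
        return None  # too much worker disagreement: pair is removed
    return mean(worker_scores)

print(gold_similarity([4, 4, 5, 5, 4]))  # low variance: pair kept
print(gold_similarity([0, 2, 5, 1, 4]))  # high variance: pair dropped
```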
##### JSQuAD
From [JGLUE's paper](https://aclanthology.org/2022.lrec-1.317/):
> As QA datasets, we build a Japanese version of SQuAD (Rajpurkar et al., 2016), one of the datasets of reading comprehension, and a Japanese version of CommonsenseQA, which is explained in the next section.
> Reading comprehension is the task of reading a document and answering questions about it. Many reading comprehension evaluation sets have been built in English, followed by those in other languages or multilingual ones.
> In Japanese, reading comprehension datasets for quizzes (Suzuki et al., 2018) and those in the driving domain (Takahashi et al., 2019) have been built, but none are in the general domain. We use Wikipedia to build a dataset for the general domain. The construction process is basically based on SQuAD 1.1 (Rajpurkar et al., 2016).
> First, to extract high-quality articles from Wikipedia, we use Nayuki, which estimates the quality of articles on the basis of hyperlinks in Wikipedia. We randomly chose 822 articles from the top-ranked 10,000 articles. For example, the articles include “熊本県 (Kumamoto Prefecture)” and “フランス料理 (French cuisine)”. Next, we divide an article into paragraphs, present each paragraph to crowdworkers, and ask them to write questions and answers that can be answered if one understands the paragraph. Figure 2 shows an example of JSQuAD. We ask workers to write two additional answers for the dev and test sets to make the system evaluation robust.
##### JCommonsenseQA
From [JGLUE's paper](https://aclanthology.org/2022.lrec-1.317/):
> ### Overview
> JCommonsenseQA is a Japanese version of CommonsenseQA (Talmor et al., 2019), which consists of five choice QA to evaluate commonsense reasoning ability. Figure 3 shows examples of JCommonsenseQA. In the same way as CommonsenseQA, JCommonsenseQA is built using crowdsourcing with seeds extracted from the knowledge base ConceptNet (Speer et al., 2017). ConceptNet is a multilingual knowledge base that consists of triplets of two concepts and their relation. The triplets are directional and represented as (source concept, relation, target concept), for example (bullet train, AtLocation, station).
> ### Method of Construction
> The construction flow for JCommonsenseQA is shown in Figure 4. First, we collect question sets (QSs) from ConceptNet, each of which consists of a source concept and three target concepts that have the same relation to the source concept. Next, for each QS, we crowdsource a task of writing a question with only one target concept as the answer and a task of adding two distractors. We describe the detailed construction procedure for JCommonsenseQA below, showing how it differs from CommonsenseQA.
> 1. We collect Japanese QSs from ConceptNet. CommonsenseQA uses only forward relations (source concept, relation, target concept) excluding general ones such as “RelatedTo” and “IsA”. JCommonsenseQA similarly uses a set of 22 relations, excluding general ones, but the direction of the relations is bidirectional to make the questions more diverse. In other words, we also use relations in the opposite direction (source concept, relation⁻¹, target concept). With this setup, we extracted 43,566 QSs with Japanese source/target concepts and randomly selected 7,500 from them.
> 2. Some low-quality questions in CommonsenseQA contain distractors that can be considered to be an answer. To improve the quality of distractors, we add the following two processes that are not adopted in CommonsenseQA. First, if three target concepts of a QS include a spelling variation or a synonym of one another, this QS is removed. To identify spelling variations, we use the word ID of the morphological dictionary Juman Dic. Second, we crowdsource a task of judging whether target concepts contain a synonym. As a result, we adopted 5,920 QSs from 7,500.
> 3. For each QS, we crowdsource a task of writing a question sentence in which only one from the three target concepts is an answer. In the example shown in Figure 4, “駅 (station)” is an answer, and the others are distractors. To remove low quality question sentences, we remove the following question sentences.
> - Question sentences that contain a choice word (this is because such a question is easily solved).
> - Question sentences that contain the expression “XX characters” (XX is a number).
> - Improperly formatted question sentences that do not end with “?”.
> - As a result, 5,920 × 3 = 17,760 question sentences were created, from which we adopted 15,310 by removing inappropriate question sentences.
> 4. In CommonsenseQA, when adding distractors, one is selected from ConceptNet, and the other is created by crowdsourcing. In JCommonsenseQA, to have a wider variety of distractors, two distractors are created by crowdsourcing instead of selecting from ConceptNet. To improve the quality of the questions, we remove questions whose added distractors fall into one of the following categories:
> - Distractors are included in a question sentence.
> - Distractors overlap with one of the existing choices.
> - As a result, distractors were added to the 15,310 questions, of which we adopted 13,906.
> 5. We asked three crowdworkers to answer each question and adopted only those answered correctly by at least two workers. As a result, we adopted 11,263 out of the 13,906 questions.
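The question-set (QS) collection in step 1 above can be sketched as grouping ConceptNet triplets by source concept and relation, keeping sources with at least three targets. The triplets below are invented placeholders, not real ConceptNet data, so this is only an illustration of the grouping logic.

```python
from collections import defaultdict

# Toy triplets standing in for the Japanese portion of ConceptNet.
triplets = [
    ("station", "AtLocation", "bullet train"),
    ("station", "AtLocation", "ticket gate"),
    ("station", "AtLocation", "platform"),
    ("kitchen", "AtLocation", "pan"),
]

# Group targets by (source concept, relation).
groups = defaultdict(list)
for src, rel, tgt in triplets:
    groups[(src, rel)].append(tgt)

# A QS needs a source concept with three targets sharing the relation.
question_sets = {k: v[:3] for k, v in groups.items() if len(v) >= 3}
print(question_sets)
```

One of the three targets later becomes the answer to a crowdsourced question, and the other two serve as distractor candidates.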
#### Who are the annotators?
From [JGLUE's README.md](https://github.com/yahoojapan/JGLUE/blob/main/README.md#tasksdatasets):
> We use Yahoo! Crowdsourcing for all crowdsourcing tasks in constructing the datasets.
From [JCoLA's paper](https://arxiv.org/abs/2309.12676):
> As a reference for the upper limit of accuracy in JCoLA, human acceptability judgment experiments were conducted on Lancers2 with a subset of the JCoLA data.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
From [JGLUE's paper](https://aclanthology.org/2022.lrec-1.317/):
> We build a Japanese NLU benchmark, JGLUE, from scratch without translation to measure the general NLU ability in Japanese. We hope that JGLUE will facilitate NLU research in Japanese.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
From [JCoLA's paper](https://arxiv.org/abs/2309.12676):
> All the sentences included in JCoLA have been extracted from textbooks, handbooks and journal articles on theoretical syntax. Therefore, those sentences are guaranteed to be theoretically meaningful, making JCoLA a challenging dataset. However, the distribution of linguistic phenomena directly reflects that of the source literature and thus turns out to be extremely skewed. Indeed, as can be seen in Table 3, while the number of sentences exceeds 100 for most linguistic phenomena, there are several linguistic phenomena for which there are only about 10 sentences. In addition, since it is difficult to force language models to interpret sentences given specific contexts, those sentences whose unacceptability depends on contexts were inevitably removed from JCoLA. This removal process resulted in the deletion of unacceptable sentences from some linguistic phenomena (such as ellipsis), consequently skewing the balance between acceptable and unacceptable sentences (with a higher proportion of acceptable sentences).
## Additional Information
- 日本語言語理解ベンチマーク JGLUE の構築 〜 自然言語処理モデルの評価用データセットを公開しました (Building the Japanese language understanding benchmark JGLUE: releasing evaluation datasets for NLP models) - Yahoo! JAPAN Tech Blog https://techblog.yahoo.co.jp/entry/2022122030379907/
### Dataset Curators
#### MARC-ja
- Keung, Phillip, et al. "The Multilingual Amazon Reviews Corpus." Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). 2020.
#### JCoLA
- Someya, Sugimoto, and Oseki. "JCoLA: Japanese Corpus of Linguistic Acceptability." arXiv preprint arXiv:2309.12676 (2023).
#### JSTS and JNLI
- Miyazaki, Takashi, and Nobuyuki Shimizu. "Cross-lingual image caption generation." Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2016.
#### JSQuAD
The JGLUE authors curated the original data for JSQuAD from the Japanese Wikipedia dump.
#### JCommonsenseQA
In the same way as CommonsenseQA, JCommonsenseQA is built using crowdsourcing with seeds extracted from the knowledge base ConceptNet.
### Licensing Information
#### JGLUE
From [JGLUE's README.md'](https://github.com/yahoojapan/JGLUE#license):
> This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
#### JCoLA
From [JCoLA's README.md'](https://github.com/osekilab/JCoLA#license):
> The text in this corpus is excerpted from the published works, and copyright (where applicable) remains with the original authors or publishers. We expect that research use within Japan is legal under fair use, but make no guarantee of this.
### Citation Information
#### JGLUE
```bibtex
@inproceedings{kurihara-lrec-2022-jglue,
title={JGLUE: Japanese general language understanding evaluation},
author={Kurihara, Kentaro and Kawahara, Daisuke and Shibata, Tomohide},
booktitle={Proceedings of the Thirteenth Language Resources and Evaluation Conference},
pages={2957--2966},
year={2022},
url={https://aclanthology.org/2022.lrec-1.317/}
}
```
```bibtex
@inproceedings{kurihara-nlp-2022-jglue,
title={JGLUE: 日本語言語理解ベンチマーク},
author={栗原健太郎 and 河原大輔 and 柴田知秀},
booktitle={言語処理学会第 28 回年次大会},
pages={2023--2028},
year={2022},
url={https://www.anlp.jp/proceedings/annual_meeting/2022/pdf_dir/E8-4.pdf},
note={in Japanese}
}
```
#### MARC-ja
```bibtex
@inproceedings{marc_reviews,
title={The Multilingual Amazon Reviews Corpus},
author={Keung, Phillip and Lu, Yichao and Szarvas, György and Smith, Noah A.},
booktitle={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing},
year={2020}
}
```
#### JCoLA
```bibtex
@article{someya-arxiv-2023-jcola,
title={JCoLA: Japanese Corpus of Linguistic Acceptability},
author={Taiga Someya and Yushi Sugimoto and Yohei Oseki},
year={2023},
eprint={2309.12676},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@inproceedings{someya-nlp-2022-jcola,
title={日本語版 CoLA の構築},
author={染谷 大河 and 大関 洋平},
booktitle={言語処理学会第 28 回年次大会},
pages={1872--1877},
year={2022},
url={https://www.anlp.jp/proceedings/annual_meeting/2022/pdf_dir/E7-1.pdf},
note={in Japanese}
}
```
#### JSTS and JNLI
```bibtex
@inproceedings{miyazaki2016cross,
title={Cross-lingual image caption generation},
author={Miyazaki, Takashi and Shimizu, Nobuyuki},
booktitle={Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
pages={1780--1790},
year={2016}
}
```
### Contributions
Thanks to [Kentaro Kurihara](https://twitter.com/kkurihara_cs), [Daisuke Kawahara](https://twitter.com/daisukekawahar1), and [Tomohide Shibata](https://twitter.com/stomohide) for creating JGLUE dataset.
Thanks to [Taiga Someya](https://twitter.com/T0a8i0g9a) for creating JCoLA dataset.
|
ba188/NHS_HES | ---
tags:
- medical
- healthcare
- NHS
language:
- en
---
# Dataset Card for NHS_HES Data
<!-- Provide a quick summary of the dataset. -->
This dataset consists of data taken from three CSV files containing Hospital Episode Statistics (HES) for Admitted Patient Care and Outpatient Data supplied by the National Health Service (NHS) England, covering 2018 to 2023.
## Dataset Details
### Dataset Description
The data includes monthly counts of hospital visits and admissions of different types in England from April 2018 to December 2023. It includes both total counts for every category of visit/appointment considered and a breakdown of those visits/admissions by treatment specialty and age group.
<!-- Provide a longer summary of what this dataset is. -->
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
More information and the original CSV files can be found at: https://digital.nhs.uk/data-and-information/publications/statistical/provisional-monthly-hospital-episode-statistics-for-admitted-patient-care-outpatient-and-accident-and-emergency-data/april-2023---december-2023.
The incorporated CSVs are:
- 'Provisional Monthly Hospital Episode Statistics for Admitted Patient Care and Outpatients, December 2023: Open Data - Totals'
- 'Provisional Monthly Hospital Episode Statistics for Admitted Patient Care and Outpatients, December 2023: Open Data - Treatment Specialties'
- 'Provisional Monthly Hospital Episode Statistics for Admitted Patient Care and Outpatients, December 2023: Open Data - Age Groups'
## Uses
The linked Google Colab file shows one possible use for a subset of this data: examining the pattern in hospital admissions episodes before, during, and after the COVID-19 pandemic and analysing whether there is a seasonal trend in those admissions and whether or not that changed during the pandemic.
Ex.) https://colab.research.google.com/drive/1u7jNC-CFnoVBCCDnNUIEM7zmt9nJLmF2?usp=sharing
<!-- Address questions about how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
The dataset is a CSV with 69 rows and 73 columns. Each row contains the data for a single month from April 2018 to December 2023. The columns contain data on each of the variables counts were collected for (e.g. Finished Consultant Episodes, Finished Consultant Episodes with Procedure), split across the three original datasets, with separate columns for the total counts, the age bands, and the specialties. Within these columns, there are lists of dictionaries containing the data.
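A minimal sketch of reading such list-of-dictionary columns is shown below. The column names and counts are invented placeholders, not the actual NHS_HES headers, so treat this only as an illustration of the parsing step.

```python
import ast
import csv
import io

# Toy CSV mimicking the structure described above; headers and counts
# are hypothetical, not taken from the real NHS_HES files.
raw = io.StringIO(
    "month,age_bands\n"
    "2018-04,\"[{'band': '0-4', 'count': 120}, {'band': '5-9', 'count': 95}]\"\n"
)

totals = {}
for row in csv.DictReader(raw):
    # Columns holding lists of dictionaries are stored as strings in the
    # CSV, so parse them back into Python objects before aggregating.
    bands = ast.literal_eval(row["age_bands"])
    totals[row["month"]] = sum(b["count"] for b in bands)

print(totals)
```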
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
#### Personal and Sensitive Information
While this data is related to healthcare, the units of interest are the months, rather than individual patients, so patient privacy is not an issue here. There are also no identifiable features of the patients themselves, and the data was originally released by the NHS for public use.
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.
## Dataset Card Contact
[More Information Needed] |
yleo/openhermes2.5-dpo-binarized | ---
dataset_info:
features:
- name: skip_prompt_formatting
dtype: bool
- name: category
dtype: string
- name: system_prompt
dtype: 'null'
- name: views
dtype: 'null'
- name: id
dtype: 'null'
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: weight
dtype: 'null'
- name: language
dtype: 'null'
- name: avatarUrl
dtype: 'null'
- name: custom_instruction
dtype: 'null'
- name: source
dtype: string
- name: model
dtype: 'null'
- name: hash
dtype: 'null'
- name: model_name
dtype: 'null'
- name: topic
dtype: 'null'
- name: idx
dtype: 'null'
- name: title
dtype: 'null'
- name: input
dtype: string
- name: generation_model
sequence: string
- name: generation_prompt
sequence: string
- name: raw_generation_responses
sequence: string
- name: generations
sequence: string
- name: rating
sequence: float32
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: chosen_model
dtype: string
- name: rejected_model
dtype: string
- name: rejected_score
dtype: float64
- name: chosen_score
dtype: float64
splits:
- name: train
num_bytes: 930348
num_examples: 100
download_size: 593836
dataset_size: 930348
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
panchub/preference_dataset | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen_response
dtype: string
- name: rejected_response
dtype: string
- name: chosen_rank
dtype: int32
- name: rejected_rank
dtype: int32
splits:
- name: train
num_bytes: 57903
num_examples: 50
download_size: 30814
dataset_size: 57903
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Sijuade/diffusion_latent_test | ---
license: mit
dataset_info:
features:
- name: latent
sequence:
sequence:
sequence:
sequence: float64
- name: noised_latents
sequence:
sequence:
sequence:
sequence: float64
- name: noise
sequence:
sequence:
sequence:
sequence: float64
- name: timesteps
dtype: int64
- name: label
dtype: int64
splits:
- name: train
num_bytes: 2543200
num_examples: 100
download_size: 3122286
dataset_size: 2543200
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
gsh3729/sw_v1 | ---
dataset_info:
features:
- name: filename
dtype: string
- name: tif
dtype: binary
- name: tfw
dtype: binary
splits:
- name: val
num_bytes: 273238365
num_examples: 20000
download_size: 270945126
dataset_size: 273238365
configs:
- config_name: default
data_files:
- split: val
path: data/val-*
---
|
jinghan23/DatasetofPEMCompostition | ---
license: cc-by-nc-4.0
---
Training dataset for Alpaca-LoRA negation of [PEM composition](https://arxiv.org/abs/2306.14870).
Instructions for model evaluation on helpfulness and toxicity.
A more concise description can be found in the [Git repo](https://github.com/hkust-nlp/PEM_composition).
**Misuse of toxic datasets is dangerous for the AI community, so this repo is gated and users have to request approval.**
mikehemberger/planet-earth | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': planet-earth-s01-e01-from-pole-to-pole
'1': planet-earth-s01-e02-mountains
'2': planet-earth-s01-e03-freshwater
'3': planet-earth-s01-e04-caves
'4': planet-earth-s01-e05-deserts
'5': planet-earth-s01-e06-ice-worlds
'6': planet-earth-s01-e07-great-plains
'7': planet-earth-s01-e08-jungles
'8': planet-earth-s01-e09-shallow-seas
'9': planet-earth-s01-e10-seasonal-forests
'10': planet-earth-s01-e11-ocean-deep
- name: file_name
dtype: string
- name: show_name
dtype: string
- name: relative_path
dtype: string
splits:
- name: train
num_bytes: 976527400.0
num_examples: 77296
download_size: 968089912
dataset_size: 976527400.0
---
# Dataset Card for "planet-earth"
TODO: upload blip2 captions
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
DRAWTHECOINGO/samuelvictor | ---
license: openrail
---
|
AdapterOcean/python3-standardized_cluster_8 | ---
dataset_info:
features:
- name: text
dtype: string
- name: conversation_id
dtype: int64
- name: embedding
sequence: float64
- name: cluster
dtype: int64
splits:
- name: train
num_bytes: 51861710
num_examples: 4685
download_size: 0
dataset_size: 51861710
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "python3-standardized_cluster_8"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tobydrew/GhostBr | ---
license: openrail
---
|
Vezora/SmallQuick | ---
license: apache-2.0
---
|
codeparrot/xlcost-text-to-code | ---
annotations_creators: []
language_creators:
- crowdsourced
- expert-generated
language:
- code
license:
- cc-by-sa-4.0
multilinguality:
- multilingual
size_categories:
- unknown
source_datasets: []
task_categories:
- text-generation
task_ids:
- language-modeling
pretty_name: xlcost-text-to-code
---
# XLCost for text-to-code synthesis
## Dataset Description
This is a subset of [XLCoST benchmark](https://github.com/reddy-lab-code-research/XLCoST), for text-to-code generation at snippet level and program level for **7** programming languages: `Python, C, C#, C++, Java, Javascript and PHP`.
## Languages
The dataset contains text in English and its corresponding code translation. Each program is divided into several code snippets, so the snippet-level subsets contain these code snippets with their corresponding comments; for the program-level subsets, the comments were concatenated into one long description. Moreover, programs in all the languages are aligned at the snippet level, and the comment for a particular snippet is the same across all the languages.
## Dataset Structure
To load the dataset you need to specify a subset among the **14 existing instances**: `LANGUAGE-snippet-level`/`LANGUAGE-program-level` for `LANGUAGE` in `[Python, C, Csharp, C++, Java, Javascript, PHP]`. By default `Python-snippet-level` is loaded.
```python
from datasets import load_dataset
load_dataset("codeparrot/xlcost-text-to-code", "Python-program-level")
DatasetDict({
train: Dataset({
features: ['text', 'code'],
num_rows: 9263
})
test: Dataset({
features: ['text', 'code'],
num_rows: 887
})
validation: Dataset({
features: ['text', 'code'],
num_rows: 472
})
})
```
```python
next(iter(data["train"]))
{'text': 'Maximum Prefix Sum possible by merging two given arrays | Python3 implementation of the above approach ; Stores the maximum prefix sum of the array A [ ] ; Traverse the array A [ ] ; Stores the maximum prefix sum of the array B [ ] ; Traverse the array B [ ] ; Driver code',
'code': 'def maxPresum ( a , b ) : NEW_LINE INDENT X = max ( a [ 0 ] , 0 ) NEW_LINE for i in range ( 1 , len ( a ) ) : NEW_LINE INDENT a [ i ] += a [ i - 1 ] NEW_LINE X = max ( X , a [ i ] ) NEW_LINE DEDENT Y = max ( b [ 0 ] , 0 ) NEW_LINE for i in range ( 1 , len ( b ) ) : NEW_LINE INDENT b [ i ] += b [ i - 1 ] NEW_LINE Y = max ( Y , b [ i ] ) NEW_LINE DEDENT return X + Y NEW_LINE DEDENT A = [ 2 , - 1 , 4 , - 5 ] NEW_LINE B = [ 4 , - 3 , 12 , 4 , - 3 ] NEW_LINE print ( maxPresum ( A , B ) ) NEW_LINE'}
```
Note that the data underwent some tokenization, hence the additional whitespaces and the use of NEW_LINE instead of `\n`, INDENT instead of `\t`, and DEDENT to cancel indentation.
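A minimal detokenization sketch (not part of the official XLCoST tooling) that reverses this scheme might look like the following; it tracks INDENT/DEDENT as an indentation level and emits a line on each NEW_LINE.

```python
def detokenize(code: str, indent: str = "    ") -> str:
    """Turn NEW_LINE/INDENT/DEDENT-tokenized code back into plain text."""
    lines, level, current = [], 0, []
    for tok in code.split():
        if tok == "NEW_LINE":
            lines.append(indent * level + " ".join(current))
            current = []
        elif tok == "INDENT":
            level += 1
        elif tok == "DEDENT":
            level = max(0, level - 1)
        else:
            current.append(tok)
    if current:  # flush a trailing line with no NEW_LINE marker
        lines.append(indent * level + " ".join(current))
    return "\n".join(lines)

print(detokenize("def f ( ) : NEW_LINE INDENT return 1 NEW_LINE DEDENT"))
```

Note that tokens stay space-separated (`def f ( ) :`), so further cleanup would be needed to recover fully idiomatic source code.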
## Data Fields
* text: natural language description/comment
* code: code at snippet/program level
## Data Splits
Each subset has three splits: train, test and validation.
## Citation Information
```
@misc{zhu2022xlcost,
title = {XLCoST: A Benchmark Dataset for Cross-lingual Code Intelligence},
url = {https://arxiv.org/abs/2206.08474},
author = {Zhu, Ming and Jain, Aneesh and Suresh, Karthik and Ravindran, Roshan and Tipirneni, Sindhu and Reddy, Chandan K.},
year = {2022},
eprint={2206.08474},
archivePrefix={arXiv}
}
``` |
helliun/happychat-dataset-tenth-split | ---
dataset_info:
features:
- name: convo
sequence: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 1400318
num_examples: 1010
download_size: 728652
dataset_size: 1400318
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "happychat-dataset-tenth-split"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Uchenna/BLOOM-Tutorial | ---
license: mit
task_categories:
- text-generation
language:
- en
--- |
liuyanchen1015/MULTI_VALUE_qqp_possessives_for_post | ---
dataset_info:
features:
- name: question1
dtype: string
- name: question2
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 1825480
num_examples: 9983
- name: test
num_bytes: 18192433
num_examples: 98443
- name: train
num_bytes: 16766881
num_examples: 91164
download_size: 22183214
dataset_size: 36784794
---
# Dataset Card for "MULTI_VALUE_qqp_possessives_for_post"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Back-up/test_2 | ---
dataset_info:
features:
- name: title
dtype: string
- name: url
dtype: string
- name: date
dtype: string
- name: view
struct:
- name: number_of_response
dtype: string
- name: number_of_view
dtype: string
- name: content
list:
- name: date_comment
dtype: string
- name: res
dtype: string
splits:
- name: train
num_bytes: 160595202
num_examples: 2935
download_size: 58208648
dataset_size: 160595202
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
FourthBrainGenAI/Product-Descriptions-and-Ads | ---
dataset_info:
features:
- name: product
dtype: string
- name: description
dtype: string
- name: ad
dtype: string
splits:
- name: train
num_bytes: 27511.2
num_examples: 90
- name: test
num_bytes: 3056.8
num_examples: 10
download_size: 24914
dataset_size: 30568
license: openrail
task_categories:
- text-generation
language:
- en
tags:
- art
pretty_name: Product Descriptions and Ads
size_categories:
- n<1K
---
# Synthetic Dataset for Product Descriptions and Ads
The basic process was as follows:
1. Prompt GPT-4 to create a list of 100 sample clothing items and descriptions for those items.
2. Split the output into the desired format: `{"product" : "<PRODUCT NAME>", "description" : "<DESCRIPTION>"}`
3. Prompt GPT-4 to create adverts for each of the 100 samples based on their name and description.
This data was not cleaned or verified manually. |
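Step 2 above might be sketched as follows. The `"<name>: <description>"` line format assumed for the raw GPT-4 output is an illustration, not the model's actual transcript.

```python
# Hypothetical raw model output: one "name: description" pair per line.
raw = """Linen Shirt: A breathable summer shirt.
Denim Jacket: A classic mid-weight layer."""

records = []
for line in raw.splitlines():
    # Split only on the first colon, so descriptions may contain colons.
    name, desc = line.split(":", 1)
    records.append({"product": name.strip(), "description": desc.strip()})

print(records[0])
```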
ssbuild/alpaca_thoughtsource | ---
license: apache-2.0
---
|
lylcst/key2text_essays | ---
license: mit
---
|
txx99999/test_dataset | ---
license: apache-2.0
---
|
filevich/fact2019 | ---
license: mit
---
|
seonglae/wikipedia-256 | ---
language:
- en
task_categories:
- question-answering
dataset_info:
config_name: gpt-4
features:
- name: id
dtype: string
- name: title
dtype: string
- name: url
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 24166736905
num_examples: 21462234
download_size: 12274801108
dataset_size: 24166736905
configs:
- config_name: gpt-4
data_files:
- split: train
path: gpt-4/train-*
tags:
- wikipedia
---
This is a Wikipedia passages dataset for ODQA retrievers.
Each passage has around 256 tokens, split by the GPT-4 tokenizer using tiktoken.
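The splitting step can be sketched as below. A whitespace tokenizer stands in for tiktoken's GPT-4 encoding here, so the chunk boundaries are illustrative only.

```python
# Chunk text into ~256-token passages. text.split() is a stand-in for
# tiktoken.encoding_for_model("gpt-4").encode(text), which the dataset
# actually uses.
def chunk_text(text: str, max_tokens: int = 256) -> list[str]:
    tokens = text.split()
    return [
        " ".join(tokens[i:i + max_tokens])
        for i in range(0, len(tokens), max_tokens)
    ]

sizes = [len(p.split()) for p in chunk_text("token " * 600)]
print(sizes)  # chunk sizes in tokens
```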
Token count
```ts
{'~128': 1415068, '128~256': 1290011,
'256~512': 18756476, '512~1024': 667,
'1024~2048': 12, '2048~4096': 0, '4096~8192': 0,
'8192~16384': 0, '16384~32768': 0, '32768~65536': 0,
'65536~128000': 0, '128000~': 0}
```
Text count
```ts
{'~512': 1556876,'512~1024': 6074975, '1024~2048': 13830329,
'2048~4096': 49, '4096~8192': 2, '8192~16384': 3, '16384~32768': 0,
'32768~65536': 0, '65536~': 0}
```
Token percent
```ts
{'~128': '6.59%', '128~256': '6.01%', '256~512': '87.39%',
'512~1024': '0.00%', '1024~2048': '0.00%', '2048~4096': '0.00%',
'4096~8192': '0.00%', '8192~16384': '0.00%', '16384~32768': '0.00%',
'32768~65536': '0.00%', '65536~128000': '0.00%', '128000~': '0.00%'}
```
Text percent
```ts
{'~512': '7.25%', '512~1024': '28.31%', '1024~2048': '64.44%',
'2048~4096': '0.00%', '4096~8192': '0.00%', '8192~16384': '0.00%',
'16384~32768': '0.00%', '32768~65536': '0.00%', '65536~': '0.00%'}
```
|
SINAI/CONAN-SP | ---
license: cc-by-nc-sa-4.0
language:
- es
tags:
- counternarrative
- counter-speech
pretty_name: CONAN-SP
configs:
- config_name: default
data_files:
- split: exp1
path: CONAN-SP/GPT3-exp1.csv
- split: exp2
path: CONAN-SP/GPT3-exp2.csv
- split: exp3
path: CONAN-SP/GPT3-exp3.csv
---
### Dataset Description
**Paper**: [Automatic counter-narrative generation for hate speech in Spanish](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/download/6556/3956)
**Point of Contact**: mevallec@ujaen.es
CONAN-SP is a new dataset for Spanish counter-narratives. Each instance includes a hate-speech comment (HS) and the corresponding counter-narrative (CN).
#### How is it constructed?
CONAN-SP is based on CONAN-KN ([Yi-Ling Chung et al., 2021](https://aclanthology.org/2021.findings-acl.79.pdf)). CONAN-KN consists of 195 HS-CN pairs covering multiple hate targets (islamophobia, misogyny, antisemitism, racism, and homophobia), provided along with the relevant knowledge automatically retrieved. Since CONAN-KN is in English, we use DeepL, an automatic translation tool, to translate the English pairs into Spanish.
To construct CONAN-SP, we remove the pairs that contain duplicates of hate-speech texts and the examples used to calculate the agreement between annotators. The structure of CONAN-SP is the hate speech provided by CONAN-KN and the counter-narrative texts generated by the GPT-3.5 model. We do not apply any filter to the CN generated by GPT-3. Furthermore, we associated the target of the offensive comment with the hate-speech and counter-narrative pair.
To obtain the CN generated by GPT-3.5, we follow three different prompt strategies:
- **Exp1: General prompt.** Task definition + 5 examples (1 for each target).
- **Exp2: 5 specific prompts (1 per target).** Task definition + 3 examples for the same target.
- **Exp3: General prompt.** 5 examples (1 for each target).
|Experiment | #Instances|
|--|--|
|Experiment 1| 84|
|Experiment 2| 70|
|Experiment 3| 84|
Finally, we obtained 238 pairs of hate speech and counter-narrative across the 3 experiments. All of these pairs are labeled by human annotators on different proposed metrics (Offensiveness, Stance, and Informativeness).
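The three prompt strategies can be sketched as templates like the one below. The task wording and example pairs are invented placeholders, not the exact prompts from the paper.

```python
# Placeholder task definition; the paper's actual wording is not reproduced here.
TASK = "Write a counter-narrative that responds to the hate-speech comment."

def build_prompt(strategy: str, examples: list[tuple[str, str]], hs: str) -> str:
    parts = []
    # Exp1 and Exp2 prepend the task definition; Exp3, as described,
    # lists only the examples.
    if strategy in ("exp1", "exp2"):
        parts.append(TASK)
    parts += [f"HS: {h}\nCN: {c}" for h, c in examples]
    parts.append(f"HS: {hs}\nCN:")
    return "\n\n".join(parts)

prompt = build_prompt("exp1", [("placeholder HS", "placeholder CN")], "new HS")
print(prompt.splitlines()[0])  # the task definition comes first for exp1/exp2
```

For Exp2, the `examples` list would hold three pairs for the same hate target; for Exp1 and Exp3, one pair per target.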
### Licensing Information
CONAN-SP is released under the [Apache-2.0 License](http://www.apache.org/licenses/LICENSE-2.0).
### Citation Information
```bibtex
@article{Vallecillo2023,
author = "Vallecillo, E. and Montejo, A. and Martín-Valdivia, M.T.",
title = "{Automatic counter-narrative generation for hate speech in Spanish}",
journal = "Procesamiento del Lenguaje Natural",
year = 2023,
volume = "71",
number = "",
pages = "",
note = "",
month = ""
}
``` |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/e107145a | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 178
num_examples: 10
download_size: 1330
dataset_size: 178
---
# Dataset Card for "e107145a"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Konthee/en_thai_small | ---
dataset_info:
features:
- name: src_input_ids
sequence: int64
- name: src_attention_mask
sequence: int64
- name: trg_input_ids
sequence: int64
- name: trg_attention_mask
sequence: int64
splits:
- name: train
num_bytes: 2747840
num_examples: 1108
download_size: 89577
dataset_size: 2747840
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "en_thai_small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/kawashiro_nitori_touhou | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of kawashiro_nitori/河城にとり/카와시로니토리 (Touhou)
This is the dataset of kawashiro_nitori/河城にとり/카와시로니토리 (Touhou), containing 500 images and their tags.
The core tags of this character are `blue_hair, two_side_up, hair_ornament, blue_eyes, hat, short_hair, twintails`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 529.50 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kawashiro_nitori_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 359.26 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kawashiro_nitori_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1068 | 683.60 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kawashiro_nitori_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 492.00 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kawashiro_nitori_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1068 | 874.41 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kawashiro_nitori_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/kawashiro_nitori_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
Below are the tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 5 |  |  |  |  |  | 1girl, backpack, hair_bobbles, key, open_mouth, smile, solo |
| 1 | 6 |  |  |  |  |  | 1girl, backpack, hair_bobbles, key, open_mouth, smile, solo, rubber_boots, skirt_set, water |
| 2 | 14 |  |  |  |  |  | 1girl, hair_bobbles, key, solo, backpack, underwater, air_bubble, skirt, smile, boots, open_mouth |
| 3 | 15 |  |  |  |  |  | 1girl, backpack, bangs, green_headwear, hair_bobbles, solo, blue_shirt, flat_cap, key, blue_footwear, blue_skirt, looking_at_viewer, pocket, long_sleeves, rubber_boots, full_body, green_bag, skirt_set, smile, blush, closed_mouth, frilled_shirt_collar, open_mouth |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | backpack | hair_bobbles | key | open_mouth | smile | solo | rubber_boots | skirt_set | water | underwater | air_bubble | skirt | boots | bangs | green_headwear | blue_shirt | flat_cap | blue_footwear | blue_skirt | looking_at_viewer | pocket | long_sleeves | full_body | green_bag | blush | closed_mouth | frilled_shirt_collar |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------|:---------------|:------|:-------------|:--------|:-------|:---------------|:------------|:--------|:-------------|:-------------|:--------|:--------|:--------|:-----------------|:-------------|:-----------|:----------------|:-------------|:--------------------|:---------|:---------------|:------------|:------------|:--------|:---------------|:-----------------------|
| 0 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | |
| 1 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | |
| 2 | 14 |  |  |  |  |  | X | X | X | X | X | X | X | | | | X | X | X | X | | | | | | | | | | | | | | |
| 3 | 15 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/6p62_girlsfrontline | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of 6P62 (Girls' Frontline)
This is the dataset of 6P62 (Girls' Frontline), containing 11 images and their tags.
The core tags of this character are `long_hair, red_hair, blue_eyes, hat, breasts, large_breasts, bangs, glasses, ponytail`; these tags have been pruned from the per-image tag lists in this dataset.
Images are crawled from many sites (e.g. Danbooru, Pixiv, Zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:---------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 11 | 14.27 MiB | [Download](https://huggingface.co/datasets/CyberHarem/6p62_girlsfrontline/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 11 | 9.10 MiB | [Download](https://huggingface.co/datasets/CyberHarem/6p62_girlsfrontline/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 24 | 16.68 MiB | [Download](https://huggingface.co/datasets/CyberHarem/6p62_girlsfrontline/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 11 | 13.25 MiB | [Download](https://huggingface.co/datasets/CyberHarem/6p62_girlsfrontline/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 24 | 22.35 MiB | [Download](https://huggingface.co/datasets/CyberHarem/6p62_girlsfrontline/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/6p62_girlsfrontline',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
Below are the tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------|
| 0 | 11 |  |  |  |  |  | 1girl, solo, pantyhose, looking_at_viewer, gun, long_sleeves, shirt, smile, thighhighs, boots, full_body, jacket, white_background |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | pantyhose | looking_at_viewer | gun | long_sleeves | shirt | smile | thighhighs | boots | full_body | jacket | white_background |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:------------|:--------------------|:------|:---------------|:--------|:--------|:-------------|:--------|:------------|:---------|:-------------------|
| 0 | 11 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
jhu-clsp/bernice-pretrain-data | ---
annotations_creators:
- no-annotation
language:
- en
- es
- pt
- ja
- ar
- in
- ko
- tr
- fr
- tl
- ru
- it
- th
- de
- hi
- pl
- nl
- fa
- et
- ht
- ur
- sv
- ca
- el
- fi
- cs
- iw
- da
- vi
- zh
- ta
- ro
- no
- uk
- cy
- ne
- hu
- eu
- sl
- lv
- lt
- bn
- sr
- bg
- mr
- ml
- is
- te
- gu
- kn
- ps
- ckb
- si
- hy
- or
- pa
- am
- sd
- my
- ka
- km
- dv
- lo
- ug
- bo
language_creators:
- found
license:
- mit
multilinguality:
- multilingual
pretty_name: Bernice Pretrain Data
size_categories:
- 1B<n<10B
source_datasets:
- original
tags:
- twitter
- slang
- code switch
- social
- social media
task_categories:
- other
task_ids: []
---
# Dataset Card for Bernice Pre-train Data
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** N/A
- **Repository:** https://github.com/JHU-CLSP/Bernice-Twitter-encoder
- **Paper:** _Bernice: A Multilingual Pre-trained Encoder for Twitter_ at [EMNLP 2022](https://preview.aclanthology.org/emnlp-22-ingestion/2022.emnlp-main.415)
- **Leaderboard:** N/A
- **Point of Contact:** Alexandra DeLucia aadelucia (at) jhu.edu
### Dataset Summary
Tweet IDs for the 2.5 billion multilingual tweets used to train Bernice, a Twitter encoder.
Read the paper [here](https://preview.aclanthology.org/emnlp-22-ingestion/2022.emnlp-main.415).
The tweets are from the public 1% Twitter API stream from January 2016 to December 2021.
Twitter-provided language metadata accompanies each tweet ID. The data covers 66 unique languages, identified by [ISO 639 language codes](https://www.wikiwand.com/en/List_of_ISO_639-1_codes), including `und` for undefined languages.
Tweets need to be re-gathered via the Twitter API. We suggest [Hydrator](https://github.com/DocNow/hydrator) or [tweepy](https://www.tweepy.org/).
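Lookups must be batched: the Twitter API v2 tweet-lookup endpoint (`GET /2/tweets`) accepts at most 100 IDs per request. A minimal sketch of the batching step (the client call itself, e.g. tweepy's `Client.get_tweets`, is left out):
```python
def batch_ids(tweet_ids, batch_size=100):
    """Yield successive chunks of tweet IDs, one per lookup request."""
    for i in range(0, len(tweet_ids), batch_size):
        yield tweet_ids[i:i + batch_size]

# with tweepy, each batch would go to client.get_tweets(ids=batch)
batches = list(batch_ids(list(range(250))))
```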
To load with Hugging Face `datasets`:
```python
from datasets import load_dataset
dataset = load_dataset("jhu-clsp/bernice-pretrain-data")
for i, row in enumerate(dataset["train"]):
print(row)
if i > 10:
break
```
If you only want Indic languages, use
```python
dataset = load_dataset("jhu-clsp/bernice-pretrain-data", "indic")
```
### Supported Tasks and Leaderboards
N/A
### Languages
65 languages (ISO 639 codes shown below), plus an `und` (undefined) category.
All language identification is provided by the Twitter API.
| | | | | | | |
|----|-----|----|----|----|-----|----|
| en | ru | ht | zh | bn | ps | lt |
| es | bo | ur | ta | sr | ckb | km |
| pt | it | sv | ro | bg | si | dv |
| ja | th | ca | no | mr | hy | lo |
| ar | de | el | uk | ml | or | ug |
| in | hi | fi | cy | is | pa | |
| ko | pl | cs | ne | te | am | |
| tr | nl | iw | hu | gu | sd | |
| fr | fa | da | eu | kn | my | |
| tl | et | vi | sl | lv | ka | |
## Dataset Structure
### Data Instances
Data is provided in gzip'd files organized by year and month of tweet origin.
Tweets are one per line, with fields separated by tabs.
### Data Fields
* `tweet ID`: ID of tweet
* `lang`: ISO 639 code of language, provided by Twitter metadata. Accuracy of label is not known.
* `year`: Year tweet was created. Year is also provided in the file names.
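As a sketch of reading these files, assuming the fields appear in the order listed above (the sample record below is invented for illustration):
```python
import gzip
import io

def parse_records(fileobj):
    """Yield (tweet_id, lang, year) tuples from a tab-separated stream."""
    for line in fileobj:
        tweet_id, lang, year = line.rstrip("\n").split("\t")
        yield tweet_id, lang, int(year)

# demo on an in-memory gzip'd sample; a real file would be opened
# with gzip.open(path, "rt") instead
buf = io.BytesIO()
with gzip.open(buf, "wt") as f:
    f.write("1234567890\ten\t2016\n")
buf.seek(0)
with gzip.open(buf, "rt") as f:
    records = list(parse_records(f))
```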
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
Data was gathered to support the training of Bernice, a multilingual pre-trained Twitter encoder.
### Source Data
#### Initial Data Collection and Normalization
Data was gathered via the Twitter API public 1% stream from January 2016 through December 2021.
Tweets with fewer than three space-delimited words (excluding usernames and URLs) were removed.
All usernames and URLs were replaced with `@USER` and `HTTPURL`, respectively.
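The filtering and replacement steps can be sketched as follows; these regexes are illustrative assumptions, not the exact patterns used for the paper:
```python
import re

def normalize(text):
    # illustrative patterns; the paper's exact matching rules may differ
    text = re.sub(r"@\w+", "@USER", text)
    text = re.sub(r"https?://\S+", "HTTPURL", text)
    return text

def keep(text):
    # keep tweets with at least three space-delimited words that are
    # neither usernames nor URLs
    words = [w for w in text.split()
             if not (w.startswith("@") or w.startswith("http"))]
    return len(words) >= 3

normalized = normalize("so cool @alice check https://t.co/abc")
```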
#### Who are the source language producers?
Data was produced by users on Twitter.
### Annotations
N/A
### Personal and Sensitive Information
As per Twitter guidelines, only tweet IDs and not full tweets are shared.
Tweets will not be recoverable if the user has deleted their account (or been banned), the tweets were deleted or removed, or the user changed their account access to private.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Dataset gathered and processed by Mark Dredze, Alexandra DeLucia, Shijie Wu, Aaron Mueller, Carlos Aguirre, and Philip Resnik.
### Licensing Information
MIT
### Citation Information
Please cite the Bernice paper if you use this dataset:
> Alexandra DeLucia, Shijie Wu, Aaron Mueller, Carlos Aguirre, Philip Resnik, and Mark Dredze. 2022. Bernice: A Multilingual Pre-trained Encoder for Twitter. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 6191–6205, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
### Contributions
Dataset uploaded by [@AADeLucia](https://github.com/AADeLucia).
|
open-llm-leaderboard/details_jambroz__sixtyoneeighty-7b-chat | ---
pretty_name: Evaluation run of jambroz/sixtyoneeighty-7b-chat
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [jambroz/sixtyoneeighty-7b-chat](https://huggingface.co/jambroz/sixtyoneeighty-7b-chat)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_jambroz__sixtyoneeighty-7b-chat\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-04-05T12:46:53.046967](https://huggingface.co/datasets/open-llm-leaderboard/details_jambroz__sixtyoneeighty-7b-chat/blob/main/results_2024-04-05T12-46-53.046967.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6529073614839261,\n\
\ \"acc_stderr\": 0.03214334252439312,\n \"acc_norm\": 0.6544409415660778,\n\
\ \"acc_norm_stderr\": 0.032791861097330115,\n \"mc1\": 0.5140758873929009,\n\
\ \"mc1_stderr\": 0.01749656371704279,\n \"mc2\": 0.6757680170745927,\n\
\ \"mc2_stderr\": 0.014861299253949599\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6655290102389079,\n \"acc_stderr\": 0.013787460322441374,\n\
\ \"acc_norm\": 0.6911262798634812,\n \"acc_norm_stderr\": 0.013501770929344\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6801433977295359,\n\
\ \"acc_stderr\": 0.004654675606841551,\n \"acc_norm\": 0.8636725751842262,\n\
\ \"acc_norm_stderr\": 0.0034243464481037156\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.047609522856952365,\n \
\ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.047609522856952365\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6148148148148148,\n\
\ \"acc_stderr\": 0.04203921040156279,\n \"acc_norm\": 0.6148148148148148,\n\
\ \"acc_norm_stderr\": 0.04203921040156279\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.7105263157894737,\n \"acc_stderr\": 0.03690677986137283,\n\
\ \"acc_norm\": 0.7105263157894737,\n \"acc_norm_stderr\": 0.03690677986137283\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.6,\n\
\ \"acc_stderr\": 0.04923659639173309,\n \"acc_norm\": 0.6,\n \
\ \"acc_norm_stderr\": 0.04923659639173309\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6943396226415094,\n \"acc_stderr\": 0.028353298073322666,\n\
\ \"acc_norm\": 0.6943396226415094,\n \"acc_norm_stderr\": 0.028353298073322666\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7361111111111112,\n\
\ \"acc_stderr\": 0.03685651095897532,\n \"acc_norm\": 0.7361111111111112,\n\
\ \"acc_norm_stderr\": 0.03685651095897532\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.47,\n \"acc_stderr\": 0.05016135580465919,\n \
\ \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.05016135580465919\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.56,\n \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\": 0.56,\n\
\ \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.35,\n \"acc_stderr\": 0.04793724854411018,\n \
\ \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.04793724854411018\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6473988439306358,\n\
\ \"acc_stderr\": 0.03643037168958548,\n \"acc_norm\": 0.6473988439306358,\n\
\ \"acc_norm_stderr\": 0.03643037168958548\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.45098039215686275,\n \"acc_stderr\": 0.049512182523962625,\n\
\ \"acc_norm\": 0.45098039215686275,\n \"acc_norm_stderr\": 0.049512182523962625\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.78,\n \"acc_stderr\": 0.04163331998932263,\n \"acc_norm\": 0.78,\n\
\ \"acc_norm_stderr\": 0.04163331998932263\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5702127659574469,\n \"acc_stderr\": 0.03236214467715564,\n\
\ \"acc_norm\": 0.5702127659574469,\n \"acc_norm_stderr\": 0.03236214467715564\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.47368421052631576,\n\
\ \"acc_stderr\": 0.046970851366478626,\n \"acc_norm\": 0.47368421052631576,\n\
\ \"acc_norm_stderr\": 0.046970851366478626\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5724137931034483,\n \"acc_stderr\": 0.04122737111370333,\n\
\ \"acc_norm\": 0.5724137931034483,\n \"acc_norm_stderr\": 0.04122737111370333\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.40476190476190477,\n \"acc_stderr\": 0.025279850397404904,\n \"\
acc_norm\": 0.40476190476190477,\n \"acc_norm_stderr\": 0.025279850397404904\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5238095238095238,\n\
\ \"acc_stderr\": 0.04467062628403273,\n \"acc_norm\": 0.5238095238095238,\n\
\ \"acc_norm_stderr\": 0.04467062628403273\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695236,\n \
\ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695236\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7935483870967742,\n\
\ \"acc_stderr\": 0.02302589961718871,\n \"acc_norm\": 0.7935483870967742,\n\
\ \"acc_norm_stderr\": 0.02302589961718871\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.5270935960591133,\n \"acc_stderr\": 0.03512819077876106,\n\
\ \"acc_norm\": 0.5270935960591133,\n \"acc_norm_stderr\": 0.03512819077876106\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\"\
: 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7818181818181819,\n \"acc_stderr\": 0.03225078108306289,\n\
\ \"acc_norm\": 0.7818181818181819,\n \"acc_norm_stderr\": 0.03225078108306289\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.8080808080808081,\n \"acc_stderr\": 0.028057791672989017,\n \"\
acc_norm\": 0.8080808080808081,\n \"acc_norm_stderr\": 0.028057791672989017\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8963730569948186,\n \"acc_stderr\": 0.02199531196364424,\n\
\ \"acc_norm\": 0.8963730569948186,\n \"acc_norm_stderr\": 0.02199531196364424\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6743589743589744,\n \"acc_stderr\": 0.02375966576741229,\n \
\ \"acc_norm\": 0.6743589743589744,\n \"acc_norm_stderr\": 0.02375966576741229\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.337037037037037,\n \"acc_stderr\": 0.028820884666253255,\n \
\ \"acc_norm\": 0.337037037037037,\n \"acc_norm_stderr\": 0.028820884666253255\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.7016806722689075,\n \"acc_stderr\": 0.029719142876342853,\n\
\ \"acc_norm\": 0.7016806722689075,\n \"acc_norm_stderr\": 0.029719142876342853\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.36423841059602646,\n \"acc_stderr\": 0.03929111781242742,\n \"\
acc_norm\": 0.36423841059602646,\n \"acc_norm_stderr\": 0.03929111781242742\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8330275229357799,\n \"acc_stderr\": 0.01599015488507338,\n \"\
acc_norm\": 0.8330275229357799,\n \"acc_norm_stderr\": 0.01599015488507338\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5555555555555556,\n \"acc_stderr\": 0.03388857118502325,\n \"\
acc_norm\": 0.5555555555555556,\n \"acc_norm_stderr\": 0.03388857118502325\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8382352941176471,\n \"acc_stderr\": 0.025845017986926917,\n \"\
acc_norm\": 0.8382352941176471,\n \"acc_norm_stderr\": 0.025845017986926917\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7974683544303798,\n \"acc_stderr\": 0.026160568246601436,\n \
\ \"acc_norm\": 0.7974683544303798,\n \"acc_norm_stderr\": 0.026160568246601436\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.695067264573991,\n\
\ \"acc_stderr\": 0.030898610882477515,\n \"acc_norm\": 0.695067264573991,\n\
\ \"acc_norm_stderr\": 0.030898610882477515\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7709923664122137,\n \"acc_stderr\": 0.036853466317118506,\n\
\ \"acc_norm\": 0.7709923664122137,\n \"acc_norm_stderr\": 0.036853466317118506\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.768595041322314,\n \"acc_stderr\": 0.03849856098794088,\n \"acc_norm\"\
: 0.768595041322314,\n \"acc_norm_stderr\": 0.03849856098794088\n },\n\
\ \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7962962962962963,\n\
\ \"acc_stderr\": 0.03893542518824847,\n \"acc_norm\": 0.7962962962962963,\n\
\ \"acc_norm_stderr\": 0.03893542518824847\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7791411042944786,\n \"acc_stderr\": 0.03259177392742178,\n\
\ \"acc_norm\": 0.7791411042944786,\n \"acc_norm_stderr\": 0.03259177392742178\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4732142857142857,\n\
\ \"acc_stderr\": 0.047389751192741546,\n \"acc_norm\": 0.4732142857142857,\n\
\ \"acc_norm_stderr\": 0.047389751192741546\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7572815533980582,\n \"acc_stderr\": 0.04245022486384495,\n\
\ \"acc_norm\": 0.7572815533980582,\n \"acc_norm_stderr\": 0.04245022486384495\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8931623931623932,\n\
\ \"acc_stderr\": 0.02023714900899091,\n \"acc_norm\": 0.8931623931623932,\n\
\ \"acc_norm_stderr\": 0.02023714900899091\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.73,\n \"acc_stderr\": 0.044619604333847394,\n \
\ \"acc_norm\": 0.73,\n \"acc_norm_stderr\": 0.044619604333847394\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8326947637292464,\n\
\ \"acc_stderr\": 0.013347327202920332,\n \"acc_norm\": 0.8326947637292464,\n\
\ \"acc_norm_stderr\": 0.013347327202920332\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7312138728323699,\n \"acc_stderr\": 0.02386800326250011,\n\
\ \"acc_norm\": 0.7312138728323699,\n \"acc_norm_stderr\": 0.02386800326250011\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.41899441340782123,\n\
\ \"acc_stderr\": 0.016501579306861677,\n \"acc_norm\": 0.41899441340782123,\n\
\ \"acc_norm_stderr\": 0.016501579306861677\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7287581699346405,\n \"acc_stderr\": 0.02545775669666788,\n\
\ \"acc_norm\": 0.7287581699346405,\n \"acc_norm_stderr\": 0.02545775669666788\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.707395498392283,\n\
\ \"acc_stderr\": 0.02583989833487798,\n \"acc_norm\": 0.707395498392283,\n\
\ \"acc_norm_stderr\": 0.02583989833487798\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.75,\n \"acc_stderr\": 0.02409347123262133,\n \
\ \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.02409347123262133\n \
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\"\
: 0.48936170212765956,\n \"acc_stderr\": 0.029820747191422473,\n \"\
acc_norm\": 0.48936170212765956,\n \"acc_norm_stderr\": 0.029820747191422473\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4654498044328553,\n\
\ \"acc_stderr\": 0.012739711554045704,\n \"acc_norm\": 0.4654498044328553,\n\
\ \"acc_norm_stderr\": 0.012739711554045704\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6801470588235294,\n \"acc_stderr\": 0.028332959514031208,\n\
\ \"acc_norm\": 0.6801470588235294,\n \"acc_norm_stderr\": 0.028332959514031208\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6683006535947712,\n \"acc_stderr\": 0.019047485239360378,\n \
\ \"acc_norm\": 0.6683006535947712,\n \"acc_norm_stderr\": 0.019047485239360378\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6727272727272727,\n\
\ \"acc_stderr\": 0.0449429086625209,\n \"acc_norm\": 0.6727272727272727,\n\
\ \"acc_norm_stderr\": 0.0449429086625209\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7428571428571429,\n \"acc_stderr\": 0.02797982353874455,\n\
\ \"acc_norm\": 0.7428571428571429,\n \"acc_norm_stderr\": 0.02797982353874455\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.845771144278607,\n\
\ \"acc_stderr\": 0.025538433368578323,\n \"acc_norm\": 0.845771144278607,\n\
\ \"acc_norm_stderr\": 0.025538433368578323\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.85,\n \"acc_stderr\": 0.0358870281282637,\n \
\ \"acc_norm\": 0.85,\n \"acc_norm_stderr\": 0.0358870281282637\n },\n\
\ \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5301204819277109,\n\
\ \"acc_stderr\": 0.03885425420866767,\n \"acc_norm\": 0.5301204819277109,\n\
\ \"acc_norm_stderr\": 0.03885425420866767\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8421052631578947,\n \"acc_stderr\": 0.02796678585916089,\n\
\ \"acc_norm\": 0.8421052631578947,\n \"acc_norm_stderr\": 0.02796678585916089\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5140758873929009,\n\
\ \"mc1_stderr\": 0.01749656371704279,\n \"mc2\": 0.6757680170745927,\n\
\ \"mc2_stderr\": 0.014861299253949599\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8050513022888713,\n \"acc_stderr\": 0.011134099415938278\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6194086429112965,\n \
\ \"acc_stderr\": 0.013373971277729817\n }\n}\n```"
repo_url: https://huggingface.co/jambroz/sixtyoneeighty-7b-chat
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|arc:challenge|25_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|arc:challenge|25_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|gsm8k|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|gsm8k|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hellaswag|10_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hellaswag|10_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-05T10-41-03.654782.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-05T12-46-53.046967.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-05T12-46-53.046967.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- '**/details_harness|winogrande|5_2024-04-05T10-41-03.654782.parquet'
- split: 2024_04_05T12_46_53.046967
path:
- '**/details_harness|winogrande|5_2024-04-05T12-46-53.046967.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-04-05T12-46-53.046967.parquet'
- config_name: results
data_files:
- split: 2024_04_05T10_41_03.654782
path:
- results_2024-04-05T10-41-03.654782.parquet
- split: 2024_04_05T12_46_53.046967
path:
- results_2024-04-05T12-46-53.046967.parquet
- split: latest
path:
- results_2024-04-05T12-46-53.046967.parquet
---
# Dataset Card for Evaluation run of jambroz/sixtyoneeighty-7b-chat
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [jambroz/sixtyoneeighty-7b-chat](https://huggingface.co/jambroz/sixtyoneeighty-7b-chat) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_jambroz__sixtyoneeighty-7b-chat",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2024-04-05T12:46:53.046967](https://huggingface.co/datasets/open-llm-leaderboard/details_jambroz__sixtyoneeighty-7b-chat/blob/main/results_2024-04-05T12-46-53.046967.json) (note that there might be results for other tasks in the repository if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6529073614839261,
"acc_stderr": 0.03214334252439312,
"acc_norm": 0.6544409415660778,
"acc_norm_stderr": 0.032791861097330115,
"mc1": 0.5140758873929009,
"mc1_stderr": 0.01749656371704279,
"mc2": 0.6757680170745927,
"mc2_stderr": 0.014861299253949599
},
"harness|arc:challenge|25": {
"acc": 0.6655290102389079,
"acc_stderr": 0.013787460322441374,
"acc_norm": 0.6911262798634812,
"acc_norm_stderr": 0.013501770929344
},
"harness|hellaswag|10": {
"acc": 0.6801433977295359,
"acc_stderr": 0.004654675606841551,
"acc_norm": 0.8636725751842262,
"acc_norm_stderr": 0.0034243464481037156
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.34,
"acc_stderr": 0.047609522856952365,
"acc_norm": 0.34,
"acc_norm_stderr": 0.047609522856952365
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6148148148148148,
"acc_stderr": 0.04203921040156279,
"acc_norm": 0.6148148148148148,
"acc_norm_stderr": 0.04203921040156279
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7105263157894737,
"acc_stderr": 0.03690677986137283,
"acc_norm": 0.7105263157894737,
"acc_norm_stderr": 0.03690677986137283
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.6,
"acc_stderr": 0.04923659639173309,
"acc_norm": 0.6,
"acc_norm_stderr": 0.04923659639173309
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6943396226415094,
"acc_stderr": 0.028353298073322666,
"acc_norm": 0.6943396226415094,
"acc_norm_stderr": 0.028353298073322666
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7361111111111112,
"acc_stderr": 0.03685651095897532,
"acc_norm": 0.7361111111111112,
"acc_norm_stderr": 0.03685651095897532
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.47,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.47,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.56,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.56,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.35,
"acc_stderr": 0.04793724854411018,
"acc_norm": 0.35,
"acc_norm_stderr": 0.04793724854411018
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6473988439306358,
"acc_stderr": 0.03643037168958548,
"acc_norm": 0.6473988439306358,
"acc_norm_stderr": 0.03643037168958548
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.45098039215686275,
"acc_stderr": 0.049512182523962625,
"acc_norm": 0.45098039215686275,
"acc_norm_stderr": 0.049512182523962625
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.78,
"acc_stderr": 0.04163331998932263,
"acc_norm": 0.78,
"acc_norm_stderr": 0.04163331998932263
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5702127659574469,
"acc_stderr": 0.03236214467715564,
"acc_norm": 0.5702127659574469,
"acc_norm_stderr": 0.03236214467715564
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.47368421052631576,
"acc_stderr": 0.046970851366478626,
"acc_norm": 0.47368421052631576,
"acc_norm_stderr": 0.046970851366478626
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5724137931034483,
"acc_stderr": 0.04122737111370333,
"acc_norm": 0.5724137931034483,
"acc_norm_stderr": 0.04122737111370333
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.40476190476190477,
"acc_stderr": 0.025279850397404904,
"acc_norm": 0.40476190476190477,
"acc_norm_stderr": 0.025279850397404904
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.5238095238095238,
"acc_stderr": 0.04467062628403273,
"acc_norm": 0.5238095238095238,
"acc_norm_stderr": 0.04467062628403273
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695236,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695236
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7935483870967742,
"acc_stderr": 0.02302589961718871,
"acc_norm": 0.7935483870967742,
"acc_norm_stderr": 0.02302589961718871
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5270935960591133,
"acc_stderr": 0.03512819077876106,
"acc_norm": 0.5270935960591133,
"acc_norm_stderr": 0.03512819077876106
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7818181818181819,
"acc_stderr": 0.03225078108306289,
"acc_norm": 0.7818181818181819,
"acc_norm_stderr": 0.03225078108306289
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8080808080808081,
"acc_stderr": 0.028057791672989017,
"acc_norm": 0.8080808080808081,
"acc_norm_stderr": 0.028057791672989017
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8963730569948186,
"acc_stderr": 0.02199531196364424,
"acc_norm": 0.8963730569948186,
"acc_norm_stderr": 0.02199531196364424
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6743589743589744,
"acc_stderr": 0.02375966576741229,
"acc_norm": 0.6743589743589744,
"acc_norm_stderr": 0.02375966576741229
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.337037037037037,
"acc_stderr": 0.028820884666253255,
"acc_norm": 0.337037037037037,
"acc_norm_stderr": 0.028820884666253255
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.7016806722689075,
"acc_stderr": 0.029719142876342853,
"acc_norm": 0.7016806722689075,
"acc_norm_stderr": 0.029719142876342853
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.36423841059602646,
"acc_stderr": 0.03929111781242742,
"acc_norm": 0.36423841059602646,
"acc_norm_stderr": 0.03929111781242742
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8330275229357799,
"acc_stderr": 0.01599015488507338,
"acc_norm": 0.8330275229357799,
"acc_norm_stderr": 0.01599015488507338
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5555555555555556,
"acc_stderr": 0.03388857118502325,
"acc_norm": 0.5555555555555556,
"acc_norm_stderr": 0.03388857118502325
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8382352941176471,
"acc_stderr": 0.025845017986926917,
"acc_norm": 0.8382352941176471,
"acc_norm_stderr": 0.025845017986926917
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7974683544303798,
"acc_stderr": 0.026160568246601436,
"acc_norm": 0.7974683544303798,
"acc_norm_stderr": 0.026160568246601436
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.695067264573991,
"acc_stderr": 0.030898610882477515,
"acc_norm": 0.695067264573991,
"acc_norm_stderr": 0.030898610882477515
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7709923664122137,
"acc_stderr": 0.036853466317118506,
"acc_norm": 0.7709923664122137,
"acc_norm_stderr": 0.036853466317118506
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.768595041322314,
"acc_stderr": 0.03849856098794088,
"acc_norm": 0.768595041322314,
"acc_norm_stderr": 0.03849856098794088
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7962962962962963,
"acc_stderr": 0.03893542518824847,
"acc_norm": 0.7962962962962963,
"acc_norm_stderr": 0.03893542518824847
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7791411042944786,
"acc_stderr": 0.03259177392742178,
"acc_norm": 0.7791411042944786,
"acc_norm_stderr": 0.03259177392742178
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4732142857142857,
"acc_stderr": 0.047389751192741546,
"acc_norm": 0.4732142857142857,
"acc_norm_stderr": 0.047389751192741546
},
"harness|hendrycksTest-management|5": {
"acc": 0.7572815533980582,
"acc_stderr": 0.04245022486384495,
"acc_norm": 0.7572815533980582,
"acc_norm_stderr": 0.04245022486384495
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8931623931623932,
"acc_stderr": 0.02023714900899091,
"acc_norm": 0.8931623931623932,
"acc_norm_stderr": 0.02023714900899091
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.73,
"acc_stderr": 0.044619604333847394,
"acc_norm": 0.73,
"acc_norm_stderr": 0.044619604333847394
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8326947637292464,
"acc_stderr": 0.013347327202920332,
"acc_norm": 0.8326947637292464,
"acc_norm_stderr": 0.013347327202920332
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7312138728323699,
"acc_stderr": 0.02386800326250011,
"acc_norm": 0.7312138728323699,
"acc_norm_stderr": 0.02386800326250011
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.41899441340782123,
"acc_stderr": 0.016501579306861677,
"acc_norm": 0.41899441340782123,
"acc_norm_stderr": 0.016501579306861677
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7287581699346405,
"acc_stderr": 0.02545775669666788,
"acc_norm": 0.7287581699346405,
"acc_norm_stderr": 0.02545775669666788
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.707395498392283,
"acc_stderr": 0.02583989833487798,
"acc_norm": 0.707395498392283,
"acc_norm_stderr": 0.02583989833487798
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.75,
"acc_stderr": 0.02409347123262133,
"acc_norm": 0.75,
"acc_norm_stderr": 0.02409347123262133
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.48936170212765956,
"acc_stderr": 0.029820747191422473,
"acc_norm": 0.48936170212765956,
"acc_norm_stderr": 0.029820747191422473
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4654498044328553,
"acc_stderr": 0.012739711554045704,
"acc_norm": 0.4654498044328553,
"acc_norm_stderr": 0.012739711554045704
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6801470588235294,
"acc_stderr": 0.028332959514031208,
"acc_norm": 0.6801470588235294,
"acc_norm_stderr": 0.028332959514031208
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6683006535947712,
"acc_stderr": 0.019047485239360378,
"acc_norm": 0.6683006535947712,
"acc_norm_stderr": 0.019047485239360378
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6727272727272727,
"acc_stderr": 0.0449429086625209,
"acc_norm": 0.6727272727272727,
"acc_norm_stderr": 0.0449429086625209
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7428571428571429,
"acc_stderr": 0.02797982353874455,
"acc_norm": 0.7428571428571429,
"acc_norm_stderr": 0.02797982353874455
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.845771144278607,
"acc_stderr": 0.025538433368578323,
"acc_norm": 0.845771144278607,
"acc_norm_stderr": 0.025538433368578323
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.85,
"acc_stderr": 0.0358870281282637,
"acc_norm": 0.85,
"acc_norm_stderr": 0.0358870281282637
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5301204819277109,
"acc_stderr": 0.03885425420866767,
"acc_norm": 0.5301204819277109,
"acc_norm_stderr": 0.03885425420866767
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8421052631578947,
"acc_stderr": 0.02796678585916089,
"acc_norm": 0.8421052631578947,
"acc_norm_stderr": 0.02796678585916089
},
"harness|truthfulqa:mc|0": {
"mc1": 0.5140758873929009,
"mc1_stderr": 0.01749656371704279,
"mc2": 0.6757680170745927,
"mc2_stderr": 0.014861299253949599
},
"harness|winogrande|5": {
"acc": 0.8050513022888713,
"acc_stderr": 0.011134099415938278
},
"harness|gsm8k|5": {
"acc": 0.6194086429112965,
"acc_stderr": 0.013373971277729817
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
xekri/audio_letters_eo | ---
license: cc-by-4.0
task_categories:
- automatic-speech-recognition
language:
- eo
size_categories:
- n<1K
---
Audio files sampled at 48 kHz of an American male pronouncing the names of the Esperanto letters in three ways. The retroflex r and trilled r are both included. |
einfy/floor | ---
license: apache-2.0
---
|
johnny9210/instruction_001 | ---
license: apache-2.0
task_categories:
- question-answering
--- |
adsabs/FOCAL | ---
annotations_creators:
- expert-generated
license: cc-by-4.0
task_categories:
- token-classification
language:
- en
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
tags:
- astronomy
dataset_info:
features:
- name: Identifier
dtype: string
- name: Paragraph
dtype: string
- name: Citation Text
sequence: string
- name: Functions Text
sequence: string
- name: Functions Label
sequence: string
- name: Citation Start End
sequence:
sequence: int64
- name: Functions Start End
sequence:
sequence: int64
splits:
- name: train
num_bytes: 7096500
num_examples: 2421
- name: validation
num_bytes: 1761751
num_examples: 606
- name: test
num_bytes: 2512022
num_examples: 821
download_size: 5649484
dataset_size: 11370273
---
# Function Of Citation in Astrophysics Literature (FOCAL): Dataset and Task
*Can you explain why the authors made a given citation?*
This dataset was created as a [shared task](https://ui.adsabs.harvard.edu/WIESP/2023/shared_task_1) for [WIESP @ AACL-IJCNLP 2023](https://ui.adsabs.harvard.edu/WIESP/2023/).
## Dataset Description
Datasets are in JSON Lines format (each line is a JSON dictionary).
Each entry consists of a dictionary with the following keys:
- `"Identifier"`: unique string to identify the entry
- `"Paragraph"`: text string from an astrophysics paper
- `"Citation Text"`: list of strings forming the citation (most often a single string, but sometimes the citation text is split up)
- `"Citation Start End"`: list of integer pairs denoting where the citation starts and end in `"Paragraph"` (most often a single pair, sometimes the citation text is split up, if so follows the order in `"Citation Text"`)
- `"Functions Text"`: list of strings highlighting parts of the paragraph that explain the function of the citation
- `"Functions Label"`: list of strings with the label for each text element in `"Functions Text"` (in same order)
- `"Functions Start End"`: list of integer pairs denoting where the elements in `"Functions Text"` start and end in `"Paragraph"`(in same order)
Start and end are defined by character positions in the `"Paragraph"` string.
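Because every span is given as character offsets into `"Paragraph"`, the labeled text can be recovered by plain string slicing. A minimal sketch with a toy entry (the field values below are illustrative, not taken from the dataset):

```python
# Toy FOCAL-style entry; the paragraph, offsets, and label are illustrative only.
entry = {
    "Paragraph": "We adopt the method of Smith et al. (2020) for calibration.",
    "Citation Text": ["Smith et al. (2020)"],
    "Citation Start End": [[23, 42]],
    "Functions Text": ["for calibration"],
    "Functions Label": ["Uses"],
    "Functions Start End": [[43, 58]],
}

def recover_spans(entry, text_key, span_key):
    """Slice the paragraph at each (start, end) pair and check it matches the stored text."""
    paragraph = entry["Paragraph"]
    spans = [paragraph[start:end] for start, end in entry[span_key]]
    assert spans == entry[text_key], (spans, entry[text_key])
    return spans

print(recover_spans(entry, "Citation Text", "Citation Start End"))    # ['Smith et al. (2020)']
print(recover_spans(entry, "Functions Text", "Functions Start End"))  # ['for calibration']
```

The same slicing works for multi-element spans: when a citation is split up, `"Citation Start End"` holds one pair per element, in the same order as `"Citation Text"`.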
## Instructions for Workshop Participants:
How to load the data using the Huggingface library:
```python
from datasets import load_dataset
dataset = load_dataset("adsabs/FOCAL")
```
How to load the data if you cloned the repository locally:
(assuming `./FOCAL-TRAINING.jsonl` is in the current directory, change as needed)
- python (as list of dictionaries):
```python
import json
with open("./FOCAL-TRAINING.jsonl", 'r') as f:
    focal_training_from_json = [json.loads(line) for line in f]
```
- into Huggingface (as a Huggingface Dataset):
```python
from datasets import Dataset
focal_training_from_json = Dataset.from_json(path_or_paths="./FOCAL-TRAINING.jsonl")
```
## File List
```
├── FOCAL-TRAINING.jsonl (2421 samples for training)
├── FOCAL-VALIDATION.jsonl (606 samples for validating your training methods)
├── FOCAL-TESTING.jsonl (821 samples for testing)
├── FOCAL-VALIDATION-NO-LABELS.jsonl (606 samples for validation without the labels. Used during the shared task of [WIESP-2023](https://ui.adsabs.harvard.edu/WIESP/2023/))
├── FOCAL-TESTING-NO-LABELS.jsonl (821 samples for testing without the labels. Used during the shared task of [WIESP-2023](https://ui.adsabs.harvard.edu/WIESP/2023/))
├── /scoring_scripts/score_focal_seqeval.py (scoring script used during the shared task of [WIESP-2023](https://ui.adsabs.harvard.edu/WIESP/2023/))
├── /scoring_scripts/score_focal_labels_only.py (scoring script used during the shared task of [WIESP-2023](https://ui.adsabs.harvard.edu/WIESP/2023/))
├── /data/*.parquet (files used when loading the dataset through Huggingface's API)
├── README.MD (this file)
└──
```
Maintainer: Felix Grezes (ORCID: 0000-0001-8714-7774)
Data annotator: Tom Allen (ORCID: 0000-0002-5532-4809) |
shidowake/augmxnt_ultra-orca-boros-en-ja-v1_split_17 | ---
dataset_info:
features:
- name: id
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: weight
dtype: float64
- name: source
dtype: string
splits:
- name: train
num_bytes: 20639999.933149945
num_examples: 9397
download_size: 10587436
dataset_size: 20639999.933149945
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
cyrilzhang/TinyStories-ascii | ---
license: cdla-sharing-1.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1885104075.974957
num_examples: 2105240
- name: validation
num_bytes: 19045764.092269212
num_examples: 21839
download_size: 985544567
dataset_size: 1904149840.0672262
---
- `TinyStories-{train,validation}.txt` from [roneneldan/TinyStories](https://huggingface.co/datasets/roneneldan/TinyStories)
- ad-hoc Unicode -> ASCII normalization
- remove empty/incomplete stories
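The exact normalization script is not documented here, but a rough sketch of such an "ad-hoc Unicode -> ASCII" pass (NFKD decomposition, then dropping any byte outside ASCII) might look like the following; it is an approximation, not the actual preprocessing code:

```python
import unicodedata

def to_ascii(text: str) -> str:
    """Decompose accented characters, then drop anything outside ASCII.
    Illustrative approximation of a Unicode -> ASCII normalization pass."""
    decomposed = unicodedata.normalize("NFKD", text)
    return decomposed.encode("ascii", "ignore").decode("ascii")

print(to_ascii("Café naïve"))  # 'Cafe naive'
```

Note that characters with no ASCII decomposition (e.g. curly quotes or CJK text) are simply dropped by this approach, which is one reason a follow-up filter for empty or incomplete stories is useful.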
|
NickKolok/regs-anythingv3 | ---
license: agpl-3.0
---
|
zolak/twitter_dataset_78_1713230032 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 93032
num_examples: 259
download_size: 53850
dataset_size: 93032
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
MaxReynolds/cifar10_TextLabels | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 113699226.0
num_examples: 50000
download_size: 119693021
dataset_size: 113699226.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "cifar10_TextLabels"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
duarteocarmo/lusiadas | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: input
dtype: string
splits:
- name: train
num_bytes: 382140
num_examples: 1102
download_size: 225500
dataset_size: 382140
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
DynamicSuperbPrivate/PronounciationEvaluationProsodic_Speechocean762 | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype: audio
- name: instruction
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 320842769.0
num_examples: 2000
- name: validation
num_bytes: 65555751.0
num_examples: 500
download_size: 315736699
dataset_size: 386398520.0
---
# Dataset Card for "PronounciationEvaluationProsodic_Speechocean762"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Marcus12/bulunamayanlar | ---
task_categories:
- token-classification
language:
- tr
- en
tags:
- not-for-all-audiences
size_categories:
- n<1K
--- |
CyberHarem/utage_arknights | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of utage/ウタゲ/宴 (Arknights)
This is the dataset of utage/ウタゲ/宴 (Arknights), containing 500 images and their tags.
The core tags of this character are `animal_ears, purple_eyes, animal_ear_fluff, breasts, hair_ornament, hairclip, large_breasts, short_hair, tail, brown_hair, blonde_hair, hat, fang, skin_fang`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 809.02 MiB | [Download](https://huggingface.co/datasets/CyberHarem/utage_arknights/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 387.38 MiB | [Download](https://huggingface.co/datasets/CyberHarem/utage_arknights/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1324 | 912.33 MiB | [Download](https://huggingface.co/datasets/CyberHarem/utage_arknights/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 677.88 MiB | [Download](https://huggingface.co/datasets/CyberHarem/utage_arknights/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1324 | 1.40 GiB | [Download](https://huggingface.co/datasets/CyberHarem/utage_arknights/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/utage_arknights',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 6 |  |  |  |  |  | 1girl, beret, black_headwear, black_jacket, black_skirt, collared_shirt, glasses, long_sleeves, looking_at_viewer, miniskirt, neck_ribbon, official_alternate_costume, open_jacket, pleated_skirt, red_ribbon, simple_background, solo, white_background, black_shirt, garter_straps, smile, yellow_thighhighs, medium_hair, open_mouth, sitting, yellow_pantyhose, blush, holding_pen |
| 1 | 9 |  |  |  |  |  | 1girl, beret, black_jacket, looking_at_viewer, neck_ribbon, official_alternate_costume, red_ribbon, smile, solo, collared_shirt, glasses, long_sleeves, open_jacket, simple_background, upper_body, black_headwear, purple_shirt, closed_mouth, white_background, black_shirt, holding, blush, medium_hair |
| 2 | 6 |  |  |  |  |  | 1girl, blush, hair_between_eyes, long_sleeves, open_jacket, simple_background, solo, white_jacket, looking_at_viewer, upper_body, grey_shirt, open_mouth, white_background, :d, x_hair_ornament |
| 3 | 12 |  |  |  |  |  | 1girl, long_sleeves, open_jacket, smile, solo, white_jacket, katana, looking_at_viewer, holding_sword, scabbard, simple_background, black_thighhighs, grey_shirt, x_hair_ornament, hair_between_eyes, open_mouth, white_background, cowboy_shot, dress |
| 4 | 5 |  |  |  |  |  | 1girl, cowboy_shot, grey_shirt, long_sleeves, looking_at_viewer, open_jacket, smile, solo, white_jacket, short_dress, x_hair_ornament, black_nails, black_thighhighs, closed_mouth, collarbone, grey_dress, pink_eyes, simple_background, white_background, zettai_ryouiki, nail_polish |
| 5 | 6 |  |  |  |  |  | 1girl, cleavage, eyewear_on_head, looking_at_viewer, necklace, official_alternate_costume, purple-tinted_eyewear, smile, solo, straw_hat, sunglasses, upper_body, vertical-striped_clothes, collarbone, outdoors, beach, day, grey_bikini, huge_breasts, long_hair, off_shoulder, vertical-striped_bikini, bare_shoulders, closed_mouth, navel, open_jacket, round_eyewear, shorts, stomach, sun_hat, swimsuit_cover-up, twin_braids |
| 6 | 9 |  |  |  |  |  | 1girl, cleavage, eyewear_on_head, looking_at_viewer, official_alternate_costume, outdoors, solo, sunglasses, vertical-striped_bikini, blue_sky, braid, day, necklace, short_shorts, smile, vertical-striped_clothes, aqua_nails, bare_shoulders, beach, nail_polish, off_shoulder, purple-tinted_eyewear, straw_hat, crazy_straw, drinking_glass, holding_cup, huge_breasts, long_hair, open_clothes, sun_hat, swimsuit_cover-up, blue_nails, sitting, x_hair_ornament, closed_mouth, collarbone, jacket, ocean |
| 7 | 5 |  |  |  |  |  | 1boy, 1girl, hetero, looking_at_viewer, mosaic_censoring, nipples, penis, pov, solo_focus, blush, completely_nude, open_mouth, paizuri, breasts_squeezed_together, dark-skinned_male, smile, braid, collarbone, ejaculation, huge_breasts, interracial, nail_polish, simple_background, steaming_body |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | beret | black_headwear | black_jacket | black_skirt | collared_shirt | glasses | long_sleeves | looking_at_viewer | miniskirt | neck_ribbon | official_alternate_costume | open_jacket | pleated_skirt | red_ribbon | simple_background | solo | white_background | black_shirt | garter_straps | smile | yellow_thighhighs | medium_hair | open_mouth | sitting | yellow_pantyhose | blush | holding_pen | upper_body | purple_shirt | closed_mouth | holding | hair_between_eyes | white_jacket | grey_shirt | :d | x_hair_ornament | katana | holding_sword | scabbard | black_thighhighs | cowboy_shot | dress | short_dress | black_nails | collarbone | grey_dress | pink_eyes | zettai_ryouiki | nail_polish | cleavage | eyewear_on_head | necklace | purple-tinted_eyewear | straw_hat | sunglasses | vertical-striped_clothes | outdoors | beach | day | grey_bikini | huge_breasts | long_hair | off_shoulder | vertical-striped_bikini | bare_shoulders | navel | round_eyewear | shorts | stomach | sun_hat | swimsuit_cover-up | twin_braids | blue_sky | braid | short_shorts | aqua_nails | crazy_straw | drinking_glass | holding_cup | open_clothes | blue_nails | jacket | ocean | 1boy | hetero | mosaic_censoring | nipples | penis | pov | solo_focus | completely_nude | paizuri | breasts_squeezed_together | dark-skinned_male | ejaculation | interracial | steaming_body |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:-----------------|:---------------|:--------------|:-----------------|:----------|:---------------|:--------------------|:------------|:--------------|:-----------------------------|:--------------|:----------------|:-------------|:--------------------|:-------|:-------------------|:--------------|:----------------|:--------|:--------------------|:--------------|:-------------|:----------|:-------------------|:--------|:--------------|:-------------|:---------------|:---------------|:----------|:--------------------|:---------------|:-------------|:-----|:------------------|:---------|:----------------|:-----------|:-------------------|:--------------|:--------|:--------------|:--------------|:-------------|:-------------|:------------|:-----------------|:--------------|:-----------|:------------------|:-----------|:------------------------|:------------|:-------------|:---------------------------|:-----------|:--------|:------|:--------------|:---------------|:------------|:---------------|:--------------------------|:-----------------|:--------|:----------------|:---------|:----------|:----------|:--------------------|:--------------|:-----------|:--------|:---------------|:-------------|:--------------|:-----------------|:--------------|:---------------|:-------------|:---------|:--------|:-------|:---------|:-------------------|:----------|:--------|:------|:-------------|:------------------|:----------|:----------------------------|:--------------------|:--------------|:--------------|:----------------|
| 0 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 9 |  |  |  |  |  | X | X | X | X | | X | X | X | X | | X | X | X | | X | X | X | X | X | | X | | X | | | | X | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 6 |  |  |  |  |  | X | | | | | | | X | X | | | | X | | | X | X | X | | | | | | X | | | X | | X | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 12 |  |  |  |  |  | X | | | | | | | X | X | | | | X | | | X | X | X | | | X | | | X | | | | | | | | | X | X | X | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 5 |  |  |  |  |  | X | | | | | | | X | X | | | | X | | | X | X | X | | | X | | | | | | | | | | X | | | X | X | | X | | | | X | X | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 6 |  |  |  |  |  | X | | | | | | | | X | | | X | X | | | | X | | | | X | | | | | | | | X | | X | | | | | | | | | | | | | | | X | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 9 |  |  |  |  |  | X | | | | | | | | X | | | X | | | | | X | | | | X | | | | X | | | | | | X | | | | | | X | | | | | | | | | X | | | | X | X | X | X | X | X | X | X | X | X | X | | X | X | X | X | X | | | | | X | X | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | |
| 7 | 5 |  |  |  |  |  | X | | | | | | | | X | | | | | | | X | | | | | X | | | X | | | X | | | | | | | | | | | | | | | | | | | X | | | | X | | | | | | | | | | | | X | | | | | | | | | | | | | X | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
ceyda/test-privacy | ---
license: other
---
|
liuyanchen1015/MULTI_VALUE_mnli_proximal_distal_demonstratives | ---
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: score
dtype: int64
splits:
- name: dev_matched
num_bytes: 293851
num_examples: 1159
- name: dev_mismatched
num_bytes: 374907
num_examples: 1540
- name: test_matched
num_bytes: 306786
num_examples: 1226
- name: test_mismatched
num_bytes: 382598
num_examples: 1558
- name: train
num_bytes: 12036102
num_examples: 47739
download_size: 8133528
dataset_size: 13394244
---
# Dataset Card for "MULTI_VALUE_mnli_proximal_distal_demonstratives"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
SilpaCS/Augmented_alzheimer | ---
task_categories:
- image-classification
language:
- en
tags:
- medical
size_categories:
- 10K<n<100K
--- |
IndianaUniversityDatasetsModels/MIMIC-medical-report | ---
dataset_info:
features:
- name: FileName
dtype: string
- name: INDICATION
dtype: string
- name: IMPRESSION
dtype: string
- name: FINDINGS
dtype: string
splits:
- name: train
num_bytes: 45203432.183416
num_examples: 83971
- name: test
num_bytes: 461341.9082919998
num_examples: 857
- name: validation
num_bytes: 461341.9082919998
num_examples: 857
download_size: 20175619
dataset_size: 46126116.00000001
---
# Dataset Card for "MIMIC-medical-report"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
shashank-indukuri/TinyStories | ---
license: mit
---
|
itinerai/restaurants | ---
license: apache-2.0
---
|
catlove007/multilingual-text-matching | ---
license: apache-2.0
---
|
Tonyhacker/Narrador_Combo_FPS | ---
license: openrail
---
|
Mariszka/whisper_in_cs | ---
dataset_info:
features:
- name: input_features
sequence:
sequence: float32
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 21281114736
num_examples: 22155
- name: test
num_bytes: 7409573720
num_examples: 7714
download_size: 4782714641
dataset_size: 28690688456
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
heliosprime/twitter_dataset_1713192750 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 23019
num_examples: 63
download_size: 20097
dataset_size: 23019
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "twitter_dataset_1713192750"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mateuzim/minhasvozes | ---
license: openrail
---
|
TeeA/ChartQA | ---
dataset_info:
features:
- name: id_image
dtype: string
- name: image
dtype: image
- name: table
dtype: string
- name: chart_type
dtype: string
- name: qa
list:
- name: label
dtype: string
- name: query
dtype: string
- name: vi_qa
list:
- name: label
dtype: string
- name: query
dtype: string
splits:
- name: train
num_bytes: 862926495.214
num_examples: 18317
- name: validation
num_bytes: 50417499.392
num_examples: 1056
- name: test
num_bytes: 70065736.487
num_examples: 1509
download_size: 963220308
dataset_size: 983409731.0929999
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
This dataset was converted from https://github.com/vis-nlp/ChartQA. |
liuyanchen1015/MULTI_VALUE_wnli_volition_changes | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 753
num_examples: 3
- name: test
num_bytes: 546
num_examples: 2
- name: train
num_bytes: 1355
num_examples: 5
download_size: 11158
dataset_size: 2654
---
# Dataset Card for "MULTI_VALUE_wnli_volition_changes"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Pranavkpba2000/skin_cancer_complete_dataset_resized_123 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': AK
'1': BCC
'2': BKL
'3': DF
'4': MEL
'5': NV
'6': SCC
'7': VASC
splits:
- name: train
num_bytes: 170043159.063
num_examples: 28449
- name: test
num_bytes: 46642856.68
num_examples: 7112
download_size: 204564103
dataset_size: 216686015.743
---
# Dataset Card for "skin_cancer_complete_dataset_resized_123"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
TokenBender/roleplay_alpaca | ---
license: artistic-2.0
---
|
lleticiasilvaa/test-CNPJ-sample-PT-simple | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: tables
sequence: string
- name: schema
dtype: string
- name: example_values
dtype: string
- name: schema_with_example_values
dtype: string
splits:
- name: train
num_bytes: 20629
num_examples: 10
download_size: 18708
dataset_size: 20629
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
tyzhu/rareid_find_last_sent_train_30_eval_10 | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: title
dtype: string
- name: context
dtype: string
splits:
- name: train
num_bytes: 89861
num_examples: 70
- name: validation
num_bytes: 9933
num_examples: 10
download_size: 65206
dataset_size: 99794
---
# Dataset Card for "rareid_find_last_sent_train_30_eval_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Intel/neural-chat-dataset-v2 | ---
license: apache-2.0
---
Here is a collective list of the instruction datasets used for Neural Chat fine-tuning. In total, there are about 1.5M instruction samples and 5M tokens.
| Type | Language | Dataset | Number |
|--| ---- |--------|----|
| HC3 | en | [HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3) | 24K |
| dolly | en | [databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) | 15K |
| alpaca-zh | zh | [tigerbot-alpaca-zh-0.5m](https://huggingface.co/datasets/TigerResearch/tigerbot-alpaca-zh-0.5m) | 500K |
| alpaca-en | en | [TigerResearch/tigerbot-alpaca-en-50k](https://huggingface.co/datasets/TigerResearch/tigerbot-alpaca-en-50k) | 50K |
| math | en | [tigerbot-gsm-8k-en](https://huggingface.co/datasets/TigerResearch/tigerbot-gsm-8k-en) | 8K |
| general | en | [tigerbot-stackexchange-qa-en-0.5m](https://huggingface.co/datasets/TigerResearch/tigerbot-stackexchange-qa-en-0.5m) | 500K |
| OpenOrca | en | [Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) | 400K (sampled) |
The collective dataset has been validated on multiple LLMs (such as MPT, LLaMA, and Llama 2) by the NeuralChat team (Kaokao Lv, Wenxin Zhang, Xuhui Ren, and Haihao Shen) from Intel/SATG/AIA/AIPT. Thanks to [Hello-SimpleAI](https://huggingface.co/Hello-SimpleAI), [databricks](https://huggingface.co/databricks), [TigerResearch/TigerBot](https://github.com/TigerResearch/TigerBot), and [Open-Orca](https://huggingface.co/Open-Orca) for releasing these open-source instruction datasets.
|
togethercomputer/RedPajama-Data-Instruct | ---
license: apache-2.0
---
# Dataset Summary
RedPajama-Instruct-Data is curated from a diverse collection of NLP tasks from both [P3 (BigScience)](https://huggingface.co/datasets/bigscience/P3) and [Natural Instructions (AI2)](https://github.com/allenai/natural-instructions),
with aggressive decontamination against [HELM](https://crfm.stanford.edu/helm/latest/?group=core_scenarios) conducted in two steps:
(1) We first run a semantic search using each validation example in HELM as the query, retrieve the top-100 most similar instances from the instruct dataset, and check for tasks whose returned instances have any 10-gram overlap with the validation example.
We remove the entire task if the returned instance and the validation example correspond to the same task
(we keep the task when the returned instance merely uses the same Wikipedia article as the validation example but asks different questions);
(2) we then remove all instances that have any 10-gram overlap with any HELM validation example.
In total, we filtered out 137 tasks and 5.2M instances (out of 1,069 tasks and 93.3M instances).
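The 10-gram overlap filter from step (2) can be sketched as follows. This is a minimal illustration assuming whitespace tokenization, not the authors' exact pipeline:

```python
def ngrams(text, n=10):
    """Return the set of word-level n-grams in a text."""
    tokens = text.split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def build_validation_index(validation_texts, n=10):
    """Union of all n-grams appearing in any validation example."""
    index = set()
    for text in validation_texts:
        index |= ngrams(text, n)
    return index

def decontaminate(instances, validation_texts, n=10):
    """Drop every instance sharing at least one n-gram with validation data."""
    index = build_validation_index(validation_texts, n)
    return [inst for inst in instances if not (ngrams(inst, n) & index)]
```

Instances shorter than `n` tokens produce no n-grams and are therefore always kept by this sketch.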
# QuickStart
The materialized version of P3 includes three main fields: the `inputs` field contains task instructions and data inputs, the `targets` field denotes the labels, and the third field, `meta`, provides meta information.
```python
from datasets import load_dataset

data = load_dataset('togethercomputer/RedPajama-Instruct-Data', data_files='data/P3_decontaminated.jsonl.zst', split='train')
```
For NI, the `definition` field contains the task instructions, `inputs` the input data, `targets` the labels, and `meta` the relevant meta information.
```python
from datasets import load_dataset

data = load_dataset('togethercomputer/RedPajama-Instruct-Data', data_files='data/NI_decontaminated.jsonl.zst', split='train')
```
# Source Data
RedPajama-Instruct-Data is sourced from two prominent datasets:
- [Public Pool of Prompts](https://huggingface.co/datasets/bigscience/P3): A large dataset featuring various creative tasks obtained from crowdsourcing efforts.
- [Natural-Instructions](https://github.com/allenai/natural-instructions): An instruction-tuning dataset comprising a diverse set of tasks in natural languages.
# Languages
Primarily English.
# Licensing Information
This dataset is released under the Apache 2.0 license.
|
heliosprime/twitter_dataset_1713000159 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 12396
num_examples: 27
download_size: 10612
dataset_size: 12396
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "twitter_dataset_1713000159"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
josedonoso/oranges-dataset-v1 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 3039274.0
num_examples: 192
- name: test
num_bytes: 734602.0
num_examples: 48
download_size: 3764144
dataset_size: 3773876.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Cubpaw/test_bce_dataset_voxelgym_3c_42x42_200 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: image
- name: rgb_label
dtype: image
- name: path_label
dtype: image
- name: path_rgb_label
dtype: image
splits:
- name: train
num_bytes: 144820.0
num_examples: 160
- name: validation
num_bytes: 36018.0
num_examples: 40
download_size: 156017
dataset_size: 180838.0
---
# Dataset Card for "test_bce_dataset_voxelgym_3c_100_42x42"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Deojoandco/capstone_hal_with_gold | ---
dataset_info:
features:
- name: dialog_id
dtype: int32
- name: source
sequence: string
- name: tags
sequence:
class_label:
names:
'0': C
'1': M
'2': N
'3': O
'4': OB
'5': W
splits:
- name: train
num_bytes: 268989
num_examples: 76
- name: validation
num_bytes: 53862
num_examples: 12
- name: test
num_bytes: 31570
num_examples: 12
download_size: 39058
dataset_size: 354421
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
# Dataset Card for "capstone_hal_with_gold"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
adyarpit/mini-platypus | ---
license: mit
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 4186564
num_examples: 1000
download_size: 2245921
dataset_size: 4186564
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
blockplacer4/hobby-dataset-v5 | ---
task_categories:
- text-generation
language:
- de
pretty_name: Troll
size_categories:
- n<1K
--- |
autoevaluate/autoeval-staging-eval-project-squad_v2-b7567fd1-11675555 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- squad_v2
eval_info:
task: extractive_question_answering
model: deepset/roberta-base-squad2-distilled
metrics: []
dataset_name: squad_v2
dataset_config: squad_v2
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/roberta-base-squad2-distilled
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@yjernite](https://huggingface.co/yjernite) for evaluating this model. |
tasksource/jigsaw | ---
license: apache-2.0
---
|
novaia/srtm-1-arc-second-global | ---
task_categories:
- image-classification
- unconditional-image-generation
size_categories:
- 10K<n<100K
---
# SRTM 1 Arc-Second Global
GeoTIFF heightmaps of the Earth's surface labelled according to latitude and longitude.
## Mission Description
The Shuttle Radar Topography Mission (SRTM) was flown aboard the space shuttle Endeavour February 11-22, 2000. The National Aeronautics and Space Administration (NASA) and the National Geospatial-Intelligence Agency (NGA) participated in an international project to acquire radar data which were used to create the first near-global set of land elevations.
The radars used during the SRTM mission were originally developed and flown on two Endeavour missions in 1994. The C-band Spaceborne Imaging Radar and the X-Band Synthetic Aperture Radar (X-SAR) hardware were used on board the space shuttle in April and October 1994 to gather data about Earth's environment. The technology was modified for the SRTM mission to collect interferometric radar data, which compares two radar images or signals taken at slightly different angles. This mission used single-pass interferometry, acquiring two signals at the same time with two different radar antennas. One antenna was located on board the space shuttle; the other was mounted at the end of a 60-meter mast extending from the shuttle. Differences between the two signals allowed for the calculation of surface elevation.
Endeavour orbited Earth 16 times each day during the 11-day mission, completing 176 orbits. SRTM successfully collected radar data over 80% of the Earth's land surface between 60° north and 56° south latitude with data points posted every 1 arc-second (approximately 30 meters).
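The quoted ~30 m posting follows directly from geometry. A spherical-Earth sketch, an approximation for illustration rather than part of any SRTM tooling:

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres (spherical approximation)

def arcsecond_spacing_m(latitude_deg):
    """Approximate ground distance of one arc-second of latitude and longitude.

    Latitude spacing is roughly constant (~30.9 m on a spherical Earth);
    longitude spacing shrinks with cos(latitude) toward the poles.
    """
    arcsec_rad = math.radians(1 / 3600)
    lat_spacing = EARTH_RADIUS_M * arcsec_rad
    lon_spacing = lat_spacing * math.cos(math.radians(latitude_deg))
    return lat_spacing, lon_spacing
```

At the equator both spacings are about 30.9 m, matching the mission's quoted ~30 m posting; at 56° latitude the longitude spacing drops to roughly 17 m.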
## Original Dataset
The original dataset as well as the [SRTM Non-Void Filled](https://doi.org/10.5066/F7K072R7) and [SRTM Void Filled](https://doi.org/10.5066/F7F76B1X) variants can be accessed on [EarthExplorer](https://earthexplorer.usgs.gov/).
## Digital Object Identifier (DOI)
[Shuttle Radar Topography Mission 1 Arc-Second Global (DOI: 10.5066/F7PR7TFT)](https://doi.org/10.5066/F7PR7TFT) |
RussianNLP/rucola | ---
license: apache-2.0
task_categories:
- text-classification
language:
- ru
size_categories:
- 10K<n<100K
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:** https://rucola-benchmark.com
- **Repository:** https://github.com/RussianNLP/RuCoLA
- **Paper:** https://aclanthology.org/2022.emnlp-main.348/
- **ArXiv:** https://arxiv.org/abs/2210.12814
- **Leaderboard:** https://rucola-benchmark.com/leaderboard
- **Point of Contact:** vmikhailovhse@gmail.com
- **Language:** Russian
### Dataset Summary

Russian Corpus of Linguistic Acceptability (RuCoLA) is a novel benchmark of 13.4k sentences labeled as acceptable or not. RuCoLA combines in-domain sentences manually collected from linguistic literature and out-of-domain sentences produced by nine machine translation and paraphrase generation models.
The motivation behind the out-of-domain set is to facilitate the practical use of acceptability judgments for improving language generation.
Each unacceptable sentence is additionally labeled with four standard and machine-specific coarse-grained categories: morphology, syntax, semantics, and hallucinations.
## Dataset Structure
### Supported Tasks and Leaderboards
- **Task:** binary classification.
- **Metrics:** MCC/Acc.
- **Leaderboard:** https://rucola-benchmark.com/leaderboard
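For reference, the MCC metric reported on the leaderboard can be computed for binary labels as follows. This is a self-contained sketch; the official evaluation may rely on scikit-learn's `matthews_corrcoef`:

```python
import math

def matthews_corrcoef(y_true, y_pred):
    """Matthews correlation coefficient for binary labels (0/1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # Convention: return 0 when any confusion-matrix margin is empty
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom
```

MCC ranges from -1 (total disagreement) through 0 (chance level) to +1 (perfect prediction), which makes it more informative than accuracy on imbalanced acceptability data.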
### Languages
Russian.
### Data Instances
```
{
  "id": 19,
  "sentence": "Люк останавливает удачу от этого.",
  "label": 0,
  "error_type": "Hallucination",
  "detailed_source": "WikiMatrix"
}
```
The example in English for illustration purposes:
```
{
  "id": 19,
  "sentence": "Luke stops luck from doing this.",
  "label": 0,
  "error_type": "Hallucination",
  "detailed_source": "WikiMatrix"
}
```
### Data Fields
- ```id (int64)```: the sentence's id.
- ```sentence (str)```: the sentence.
- ```label (str)```: the target class. "1" refers to "acceptable", while "0" corresponds to "unacceptable".
- ```error_type (str)```: the coarse-grained violation category (Morphology, Syntax, Semantics, or Hallucination); "0" if the sentence is acceptable.
- ```detailed_source```: the data source.
### Data Splits
RuCoLA consists of the training, development, and private test sets organised under two subsets: in-domain (linguistic publications) and out-of-domain (texts produced by natural language generation models).
- ```train```: 7869 in-domain samples (```"data/in_domain_train.csv"```).
- ```validation```: 2787 in-domain and out-of-domain samples. The in-domain (```"data/in_domain_dev.csv"```) and out-of-domain (```"data/out_of_domain_dev.csv"```) validation sets are merged into ```"data/dev.csv"``` for convenience.
- ```test```: 2789 in-domain and out-of-domain samples (```"data/test.csv"```).
## Dataset Creation
### Curation Rationale
- **In-domain Subset:** The in-domain sentences and the corresponding authors’ acceptability judgments are *manually* drawn from fundamental linguistic textbooks, academic publications, and methodological materials.
- **Out-of-domain Subset:** The out-of-domain sentences are produced by nine open-source MT and paraphrase generation models.
### Source Data
<details>
<summary>Linguistic publications and resources</summary>
|Original source |Transliterated source |Source id |
|---|---|---|
|[Проект корпусного описания русской грамматики](http://rusgram.ru) | [Proekt korpusnogo opisaniya russkoj grammatiki](http://rusgram.ru/)|Rusgram |
|Тестелец, Я.Г., 2001. *Введение в общий синтаксис*. Федеральное государственное бюджетное образовательное учреждение высшего образования Российский государственный гуманитарный университет.|Yakov Testelets. 2001. Vvedeniye v obschiy sintaksis. Russian State University for the Humanities. |Testelets |
|Лютикова, Е.А., 2010. *К вопросу о категориальном статусе именных групп в русском языке*. Вестник Московского университета. Серия 9. Филология, (6), pp.36-76. |Ekaterina Lutikova. 2010. K voprosu o kategorial’nom statuse imennykh grup v russkom yazyke. Moscow University Philology Bulletin. |Lutikova |
|Митренина, О.В., Романова, Е.Е. and Слюсарь, Н.А., 2017. *Введение в генеративную грамматику*. Общество с ограниченной ответственностью "Книжный дом ЛИБРОКОМ". |Olga Mitrenina et al. 2017. Vvedeniye v generativnuyu grammatiku. Limited Liability Company “LIBROCOM”. |Mitrenina |
|Падучева, Е.В., 2004. *Динамические модели в семантике лексики*. М.: Языки славянской культуры.| Elena Paducheva. 2004. Dinamicheskiye modeli v semantike leksiki. Languages of Slavonic culture. |Paducheva2004 |
|Падучева, Е.В., 2010. *Семантические исследования: Семантика времени и вида в русском языке; Семантика нарратива*. М.: Языки славянской культуры. | Elena Paducheva. 2010. Semanticheskiye issledovaniya: Semantika vremeni i vida v russkom yazyke; Semantika narrativa. Languages of Slavonic culture.|Paducheva2010 |
|Падучева, Е.В., 2013. *Русское отрицательное предложение*. М.: Языки славянской культуры |Elena Paducheva. 2013. Russkoye otritsatel’noye predlozheniye. Languages of Slavonic culture. |Paducheva2013 |
|Селиверстова, О.Н., 2004. *Труды по семантике*. М.: Языки славянской культуры | Olga Seliverstova. 2004. Trudy po semantike. Languages of Slavonic culture.|Seliverstova |
| Набор данных ЕГЭ по русскому языку | Shavrina et al. 2020. [Humans Keep It One Hundred: an Overview of AI Journey](https://aclanthology.org/2020.lrec-1.277/) |USE5, USE7, USE8 |
</details>
<details>
<summary>Machine-generated sentences</summary>
<br>
**Datasets**
|Original source |Source id|
|---|---|
|Mikel Artetxe and Holger Schwenk. 2019. [Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond](https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00288/43523/Massively-Multilingual-Sentence-Embeddings-for)|Tatoeba |
|Holger Schwenk et al. 2021. [WikiMatrix: Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia](https://aclanthology.org/2021.eacl-main.115/)|WikiMatrix |
|Ye Qi et al. 2018. [When and Why Are Pre-Trained Word Embeddings Useful for Neural Machine Translation?](https://aclanthology.org/N18-2084/)|TED |
|Alexandra Antonova and Alexey Misyurev. 2011. [Building a Web-Based Parallel Corpus and Filtering Out Machine-Translated Text](https://aclanthology.org/W11-1218/)|YandexCorpus |
**Models**
[EasyNMT models](https://github.com/UKPLab/EasyNMT):
1. OPUS-MT. Jörg Tiedemann and Santhosh Thottingal. 2020. [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/)
2. M-BART50. Yuqing Tang et al. 2020. [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401)
3. M2M-100. Angela Fan et al. 2021. [Beyond English-Centric Multilingual Machine Translation](https://jmlr.org/papers/volume22/20-1307/20-1307.pdf)
[Paraphrase generation models](https://github.com/RussianNLP/russian_paraphrasers):
1. [ruGPT2-Large](https://huggingface.co/sberbank-ai/rugpt2large)
2. [ruT5](https://huggingface.co/cointegrated/rut5-base-paraphraser)
3. mT5. Linting Xue et al. 2021. [mT5: A Massively Multilingual Pre-trained Text-to-Text Transformer](https://aclanthology.org/2021.naacl-main.41/)
</details>
### Annotations
#### Annotation process
The out-of-domain sentences undergo a two-stage annotation procedure on [Toloka](https://toloka.ai), a crowd-sourcing platform for data labeling.
Each stage includes an unpaid training phase with explanations, control tasks for tracking annotation quality, and the main annotation task. Before starting, the worker is given detailed instructions describing the task, explaining the labels, and showing plenty of examples.
The instructions remain available at any time during both the training and main annotation phases. To get access to the main phase, the worker should first complete the training phase by labeling more than 70% of its examples correctly. Each trained worker receives a page with five sentences, one of which is a control one.
We collect the majority vote labels via a dynamic overlap from three to five workers after filtering them by response time and performance on control tasks.
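The dynamic-overlap aggregation can be sketched as follows. This is a simplified illustration, not the project's actual aggregation code: votes are collected one worker at a time, and collection stops early once a strict majority emerges among at least three votes, otherwise continuing up to five.

```python
from collections import Counter

def majority_vote(votes, min_overlap=3, max_overlap=5):
    """Aggregate per-sentence worker labels with a dynamic overlap:
    stop as soon as a strict majority appears among >= min_overlap votes,
    otherwise collect up to max_overlap votes and take the most frequent label."""
    collected = []
    for vote in votes:  # votes arrive one worker at a time
        collected.append(vote)
        if len(collected) >= min_overlap:
            label, count = Counter(collected).most_common(1)[0]
            if count > len(collected) / 2:  # strict majority reached
                return label
        if len(collected) == max_overlap:
            break
    # No early majority: fall back to the most frequent label collected.
    label, _ = Counter(collected).most_common(1)[0]
    return label
```

For binary acceptability labels a strict majority always exists after three votes; the overlap only grows past three when votes are spread over more than two labels, as in the multi-category second stage.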
- **Stage 1: Acceptability Judgments**
The first annotation stage defines whether a given sentence is acceptable or not. Access to the project is granted to workers certified as native speakers of Russian by Toloka and ranked top-60% workers according to the Toloka rating system.
Each worker answers 30 examples in the training phase. Each training example is accompanied by an explanation that is shown in case of an incorrect answer.
The main annotation phase covers 3.6k machine-generated sentences. The pay rate is on average $2.55/hr, which is twice the hourly minimum wage in Russia. Each of the 1.3k trained workers gets paid, but we keep votes only from the 960 workers whose accuracy on the control sentences exceeds 50%.
- **Stage 2: Violation Categories**
The second stage includes validation and annotation of sentences labeled unacceptable on Stage 1 according to five answer options: “Morphology”, “Syntax”, “Semantics”, “Hallucinations” and “Other”. The task is framed as a multi-label classification, i.e., the sentence may contain more than one violation in some rare cases or be re-labeled as acceptable.
We create a team of 30 annotators who are BA and MA students in philology and linguistics from several Russian universities. The students are asked to study the works on CoLA, TGEA, and hallucinations. We also hold an online seminar to discuss the works and clarify the task specifics. Each student undergoes platform-based training on 15 examples before moving on to the main phase of 1.3k sentences.
The students are paid on average $5.42/hr and are eligible to get credits for an academic course or an internship. This stage provides direct interaction between authors and students in a group chat. We keep submissions with more than 30 seconds of response time per page and collect the majority vote labels for each answer independently.
Sentences having more than one violation category or labeled as “Other” by the majority are filtered out.
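The Stage 2 filtering rule can be illustrated as follows. This is a simplified sketch with hypothetical names: each violation category gets an independent majority vote, and a sentence survives only if exactly one category wins and that category is not "Other".

```python
def keep_sentence(category_votes, n_workers):
    """Apply the Stage 2 filter.

    category_votes maps each answer option ("Morphology", "Syntax",
    "Semantics", "Hallucinations", "Other") to its number of positive votes;
    each option is decided by an independent strict-majority vote."""
    majority = [c for c, n in category_votes.items() if n > n_workers / 2]
    return len(majority) == 1 and majority[0] != "Other"
```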
### Personal and Sensitive Information
The annotators are warned about potentially sensitive topics in data (e.g., politics, culture, and religion).
## Considerations for Using the Data
### Social Impact of Dataset
RuCoLA may serve as training data for acceptability classifiers, which may benefit the quality of generated texts.
We recognize that such improvements in text generation may lead to misuse of LMs for malicious purposes. However, our corpus can be used to train adversarial defense and artificial text detection models.
We introduce a novel dataset for **research and development needs**, and the potential negative uses are not lost on us.
### Discussion of Biases
Although we aim to control the number of high-frequency tokens in the RuCoLA’s sentences, a potential word-frequency distribution shift between LMs’ pretraining corpora and our corpus may bias the evaluation.
Furthermore, linguistic publications represent a specific domain as the primary source of acceptability judgments. On the one hand, it can lead to a domain shift when using RuCoLA for practical purposes.
On the other hand, we observe moderate acceptability classification performance on the out-of-domain test, which spans multiple domains, ranging from subtitles to Wikipedia.
### Other Known Limitations
- **Data Collection**
Acceptability judgments datasets require a source of unacceptable sentences.
Collecting judgments from linguistic literature has become a standard practice replicated in multiple languages. However, this approach has several limitations. First, many studies raise concerns about the reliability and reproducibility of acceptability judgments. Second, the linguists’ judgments may limit data representativeness, as they may not reflect the errors that speakers tend to produce. Third, enriching acceptability judgments datasets is time-consuming, while creating new ones can be challenging due to limited resources, e.g., in low-resource languages.
- **Expert vs. Non-expert**
One of the open methodological questions on acceptability judgments is whether they should be collected from expert or non-expert speakers.
On the one hand, prior linguistic knowledge can introduce bias in reporting judgments. On the other hand, expertise may increase the quality of the linguists’ judgments over the ones of non-linguists. At the same time, the latter tend to be influenced by an individual’s exposure to ungrammatical language use.
The objective of involving students with a linguistic background is to maximize the annotation quality.
- **Fine-grained Annotation**
The coarse-grained annotation scheme of the RuCoLA’s unacceptable sentences relies on four major categories. While the annotation can be helpful for model error analysis, it limits the scope of LMs’ diagnostic evaluation concerning linguistic and machine-specific phenomena.
## Additional Information
### Dataset Curators
Correspondence: ```vmikhailovhse@gmail.com```
### Licensing Information
Our baseline code and acceptability labels are available under the Apache 2.0 license. The copyright (where applicable) of texts from the linguistic publications and resources remains with the original authors or publishers.
### Citation Information
```
@inproceedings{mikhailov-etal-2022-rucola,
title = "{R}u{C}o{LA}: {R}ussian Corpus of Linguistic Acceptability",
author = "Mikhailov, Vladislav and
Shamardina, Tatiana and
Ryabinin, Max and
Pestova, Alena and
Smurov, Ivan and
Artemova, Ekaterina",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.emnlp-main.348",
pages = "5207--5227",
abstract = "Linguistic acceptability (LA) attracts the attention of the research community due to its many uses, such as testing the grammatical knowledge of language models and filtering implausible texts with acceptability classifiers.However, the application scope of LA in languages other than English is limited due to the lack of high-quality resources.To this end, we introduce the Russian Corpus of Linguistic Acceptability (RuCoLA), built from the ground up under the well-established binary LA approach. RuCoLA consists of 9.8k in-domain sentences from linguistic publications and 3.6k out-of-domain sentences produced by generative models. The out-of-domain set is created to facilitate the practical use of acceptability for improving language generation.Our paper describes the data collection protocol and presents a fine-grained analysis of acceptability classification experiments with a range of baseline approaches.In particular, we demonstrate that the most widely used language models still fall behind humans by a large margin, especially when detecting morphological and semantic errors. We release RuCoLA, the code of experiments, and a public leaderboard to assess the linguistic competence of language models for Russian.",
}
```
### Other
Please refer to our [paper](https://aclanthology.org/2022.emnlp-main.348/) for more details.