| id | lastModified | tags | author | description | citation | cardData | likes | downloads | card |
|---|---|---|---|---|---|---|---|---|---|
| open-llm-leaderboard/details_meta-llama__Llama-2-70b-hf | 2023-09-18T06:46:57.000Z | ["region:us"] | open-llm-leaderboard | null | null | null | 0 | 781 | ---
pretty_name: Evaluation run of meta-llama/Llama-2-70b-hf
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [meta-llama/Llama-2-70b-hf](https://huggingface.co/meta-llama/Llama-2-70b-hf)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 124 configurations, each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 10 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run. The \"train\" split always points to the latest\
\ results.\n\nAn additional configuration \"results\" stores all the aggregated results\
\ of the run (and is used to compute and display the aggregated metrics on the [Open\
\ LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_meta-llama__Llama-2-70b-hf\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-09-18T06:46:44.905361](https://huggingface.co/datasets/open-llm-leaderboard/details_meta-llama__Llama-2-70b-hf/blob/main/results_2023-09-18T06-46-44.905361.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0017827181208053692,\n\
\ \"em_stderr\": 0.00043200973460388544,\n \"f1\": 0.06615562080536916,\n\
\ \"f1_stderr\": 0.0013739852117668813,\n \"acc\": 0.5885312292623206,\n\
\ \"acc_stderr\": 0.011707750309504293\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.0017827181208053692,\n \"em_stderr\": 0.00043200973460388544,\n\
\ \"f1\": 0.06615562080536916,\n \"f1_stderr\": 0.0013739852117668813\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.33965125094768767,\n \
\ \"acc_stderr\": 0.01304504506766526\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8374112075769534,\n \"acc_stderr\": 0.010370455551343326\n\
\ }\n}\n```"
repo_url: https://huggingface.co/meta-llama/Llama-2-70b-hf
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|arc:challenge|25_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|arc:challenge|25_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|arc:challenge|25_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|arc:challenge|25_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_09_08T23_38_08.931556
path:
- '**/details_harness|drop|3_2023-09-08T23-38-08.931556.parquet'
- split: 2023_09_18T06_46_44.905361
path:
- '**/details_harness|drop|3_2023-09-18T06-46-44.905361.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-09-18T06-46-44.905361.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_09_08T23_38_08.931556
path:
- '**/details_harness|gsm8k|5_2023-09-08T23-38-08.931556.parquet'
- split: 2023_09_18T06_46_44.905361
path:
- '**/details_harness|gsm8k|5_2023-09-18T06-46-44.905361.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-09-18T06-46-44.905361.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hellaswag|10_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hellaswag|10_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hellaswag|10_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hellaswag|10_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_0
data_files:
- split: 2023_08_21T11_06_07.240233
path:
- '**/details_harness|hendrycksTest-abstract_algebra|0_2023-08-21T11:06:07.240233.parquet'
- split: 2023_08_21T11_28_25.684618
path:
- '**/details_harness|hendrycksTest-abstract_algebra|0_2023-08-21T11:28:25.684618.parquet'
- split: 2023_08_21T20_33_55.417483
path:
- '**/details_harness|hendrycksTest-abstract_algebra|0_2023-08-21T20:33:55.417483.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|0_2023-08-21T20:33:55.417483.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_0
data_files:
- split: 2023_08_21T11_06_07.240233
path:
- '**/details_harness|hendrycksTest-abstract_algebra|0_2023-08-21T11:06:07.240233.parquet'
- split: 2023_08_21T11_28_25.684618
path:
- '**/details_harness|hendrycksTest-abstract_algebra|0_2023-08-21T11:28:25.684618.parquet'
- split: 2023_08_21T20_33_55.417483
path:
- '**/details_harness|hendrycksTest-abstract_algebra|0_2023-08-21T20:33:55.417483.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|0_2023-08-21T20:33:55.417483.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_09_08T23_38_08.931556
path:
- '**/details_harness|winogrande|5_2023-09-08T23-38-08.931556.parquet'
- split: 2023_09_18T06_46_44.905361
path:
- '**/details_harness|winogrande|5_2023-09-18T06-46-44.905361.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-09-18T06-46-44.905361.parquet'
- config_name: original_mmlu_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:abstract_algebra|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:anatomy|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:astronomy|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:business_ethics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:clinical_knowledge|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:college_biology|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:college_chemistry|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:college_computer_science|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:college_mathematics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:college_medicine|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:college_physics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:computer_security|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:conceptual_physics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:econometrics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:electrical_engineering|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:elementary_mathematics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:formal_logic|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:global_facts|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_biology|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_chemistry|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_computer_science|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_european_history|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_geography|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_macroeconomics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_mathematics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_microeconomics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_physics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_psychology|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_statistics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_us_history|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_world_history|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:human_aging|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:human_sexuality|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:international_law|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:jurisprudence|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:logical_fallacies|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:machine_learning|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:management|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:marketing|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:medical_genetics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:miscellaneous|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:moral_disputes|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:moral_scenarios|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:nutrition|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:philosophy|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:prehistory|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:professional_accounting|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:professional_law|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:professional_medicine|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:professional_psychology|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:public_relations|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:security_studies|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:sociology|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:us_foreign_policy|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:virology|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:world_religions|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:abstract_algebra|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:anatomy|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:astronomy|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:business_ethics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:clinical_knowledge|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:college_biology|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:college_chemistry|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:college_computer_science|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:college_mathematics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:college_medicine|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:college_physics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:computer_security|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:conceptual_physics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:econometrics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:electrical_engineering|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:elementary_mathematics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:formal_logic|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:global_facts|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_biology|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_chemistry|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_computer_science|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_european_history|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_geography|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_macroeconomics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_mathematics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_microeconomics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_physics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_psychology|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_statistics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_us_history|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_world_history|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:human_aging|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:human_sexuality|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:international_law|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:jurisprudence|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:logical_fallacies|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:machine_learning|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:management|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:marketing|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:medical_genetics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:miscellaneous|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:moral_disputes|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:moral_scenarios|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:nutrition|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:philosophy|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:prehistory|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:professional_accounting|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:professional_law|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:professional_medicine|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:professional_psychology|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:public_relations|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:security_studies|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:sociology|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:us_foreign_policy|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:virology|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:world_religions|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_abstract_algebra_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:abstract_algebra|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:abstract_algebra|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_anatomy_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:anatomy|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:anatomy|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_astronomy_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:astronomy|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:astronomy|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_business_ethics_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:business_ethics|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:business_ethics|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_clinical_knowledge_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:clinical_knowledge|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:clinical_knowledge|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_college_biology_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:college_biology|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:college_biology|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_college_chemistry_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:college_chemistry|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:college_chemistry|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_college_computer_science_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:college_computer_science|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:college_computer_science|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_college_mathematics_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:college_mathematics|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:college_mathematics|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_college_medicine_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:college_medicine|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:college_medicine|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_college_physics_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:college_physics|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:college_physics|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_computer_security_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:computer_security|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:computer_security|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_conceptual_physics_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:conceptual_physics|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:conceptual_physics|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_econometrics_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:econometrics|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:econometrics|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_electrical_engineering_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:electrical_engineering|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:electrical_engineering|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_elementary_mathematics_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:elementary_mathematics|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:elementary_mathematics|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_formal_logic_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:formal_logic|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:formal_logic|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_global_facts_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:global_facts|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:global_facts|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_high_school_biology_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:high_school_biology|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_biology|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_high_school_chemistry_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:high_school_chemistry|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_chemistry|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_high_school_computer_science_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:high_school_computer_science|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_computer_science|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_high_school_european_history_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:high_school_european_history|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_european_history|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_high_school_geography_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:high_school_geography|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_geography|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_high_school_government_and_politics_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_high_school_macroeconomics_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:high_school_macroeconomics|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_macroeconomics|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_high_school_mathematics_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:high_school_mathematics|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_mathematics|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_high_school_microeconomics_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:high_school_microeconomics|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_microeconomics|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_high_school_physics_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:high_school_physics|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_physics|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_high_school_psychology_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:high_school_psychology|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_psychology|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_high_school_statistics_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:high_school_statistics|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_statistics|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_high_school_us_history_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:high_school_us_history|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_us_history|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_high_school_world_history_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:high_school_world_history|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_world_history|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_human_aging_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:human_aging|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:human_aging|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_human_sexuality_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:human_sexuality|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:human_sexuality|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_international_law_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:international_law|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:international_law|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_jurisprudence_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:jurisprudence|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:jurisprudence|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_logical_fallacies_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:logical_fallacies|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:logical_fallacies|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_machine_learning_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:machine_learning|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:machine_learning|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_management_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:management|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:management|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_marketing_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:marketing|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:marketing|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_medical_genetics_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:medical_genetics|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:medical_genetics|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_miscellaneous_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:miscellaneous|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:miscellaneous|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_moral_disputes_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:moral_disputes|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:moral_disputes|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_moral_scenarios_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:moral_scenarios|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:moral_scenarios|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_nutrition_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:nutrition|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:nutrition|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_philosophy_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:philosophy|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:philosophy|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_prehistory_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:prehistory|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:prehistory|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_professional_accounting_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:professional_accounting|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:professional_accounting|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_professional_law_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:professional_law|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:professional_law|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_professional_medicine_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:professional_medicine|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:professional_medicine|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_professional_psychology_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:professional_psychology|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:professional_psychology|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_public_relations_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:public_relations|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:public_relations|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_security_studies_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:security_studies|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:security_studies|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_sociology_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:sociology|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:sociology|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_us_foreign_policy_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:us_foreign_policy|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:us_foreign_policy|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_virology_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:virology|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:virology|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_world_religions_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:world_religions|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:world_religions|5_2023-08-28T20:36:26.123850.parquet'
- config_name: results
data_files:
- split: 2023_08_21T11_06_07.240233
path:
- results_2023-08-21T11:06:07.240233.parquet
- split: 2023_08_21T11_28_25.684618
path:
- results_2023-08-21T11:28:25.684618.parquet
- split: 2023_08_21T20_33_55.417483
path:
- results_2023-08-21T20:33:55.417483.parquet
- split: 2023_08_22T09_05_23.035851
path:
- results_2023-08-22T09:05:23.035851.parquet
- split: 2023_08_22T10_47_05.866748
path:
- results_2023-08-22T10:47:05.866748.parquet
- split: 2023_08_22T13_42_09.433095
path:
- results_2023-08-22T13:42:09.433095.parquet
- split: 2023_08_22T13_47_53.141854
path:
- results_2023-08-22T13:47:53.141854.parquet
- split: 2023_08_28T20_36_26.123850
path:
- results_2023-08-28T20:36:26.123850.parquet
- split: 2023_09_08T23_38_08.931556
path:
- results_2023-09-08T23-38-08.931556.parquet
- split: 2023_09_18T06_46_44.905361
path:
- results_2023-09-18T06-46-44.905361.parquet
- split: latest
path:
- results_2023-09-18T06-46-44.905361.parquet
---
# Dataset Card for Evaluation run of meta-llama/Llama-2-70b-hf
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/meta-llama/Llama-2-70b-hf
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [meta-llama/Llama-2-70b-hf](https://huggingface.co/meta-llama/Llama-2-70b-hf) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 124 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 10 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_meta-llama__Llama-2-70b-hf",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-09-18T06:46:44.905361](https://huggingface.co/datasets/open-llm-leaderboard/details_meta-llama__Llama-2-70b-hf/blob/main/results_2023-09-18T06-46-44.905361.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.0017827181208053692,
"em_stderr": 0.00043200973460388544,
"f1": 0.06615562080536916,
"f1_stderr": 0.0013739852117668813,
"acc": 0.5885312292623206,
"acc_stderr": 0.011707750309504293
},
"harness|drop|3": {
"em": 0.0017827181208053692,
"em_stderr": 0.00043200973460388544,
"f1": 0.06615562080536916,
"f1_stderr": 0.0013739852117668813
},
"harness|gsm8k|5": {
"acc": 0.33965125094768767,
"acc_stderr": 0.01304504506766526
},
"harness|winogrande|5": {
"acc": 0.8374112075769534,
"acc_stderr": 0.010370455551343326
}
}
```
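The aggregate `acc` in the `all` block appears to be the unweighted mean of the per-task accuracies; a quick sanity check, using only the values copied from the JSON above:

```python
# Per-task accuracies copied from the results JSON above
gsm8k_acc = 0.33965125094768767      # harness|gsm8k|5
winogrande_acc = 0.8374112075769534  # harness|winogrande|5

# The "all" block reports acc = 0.5885312292623206, which matches
# the unweighted mean of the two accuracy-based tasks.
mean_acc = (gsm8k_acc + winogrande_acc) / 2
assert abs(mean_acc - 0.5885312292623206) < 1e-12
```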
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
result-kand2-sdxl-wuerst-karlo/323c0619 | 2023-09-15T06:43:16.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 779 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 236
num_examples: 10
download_size: 1424
dataset_size: 236
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "323c0619"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
clarin-pl/polemo2-official | 2022-08-29T16:40:01.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:8K",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:pl",
"license:cc-by-sa-4.0",
"region:us"
] | clarin-pl | PolEmo 2.0: Corpus of Multi-Domain Consumer Reviews, evaluation data for article presented at CoNLL. | @inproceedings{kocon-etal-2019-multi,
title = "Multi-Level Sentiment Analysis of {P}ol{E}mo 2.0: Extended Corpus of Multi-Domain Consumer Reviews",
author = "Koco{\'n}, Jan and
Mi{\l}kowski, Piotr and
Za{\'s}ko-Zieli{\'n}ska, Monika",
booktitle = "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/K19-1092",
doi = "10.18653/v1/K19-1092",
pages = "980--991",} | null | 4 | 778 | ---
annotations_creators:
- expert-generated
language_creators:
- other
language:
- pl
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: 'Polemo2'
size_categories:
- 8K
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
---
# Polemo2
## Description
PolEmo 2.0 is a dataset of online consumer reviews from four domains: medicine, hotels, products, and university. It is human-annotated at the level of both full reviews and individual sentences. The current version (PolEmo 2.0) contains 8,216 reviews comprising 57,466 sentences. Each text and sentence was manually annotated with sentiment in the 2+1 scheme, giving a total of 197,046 annotations. About 85% of the reviews come from the medicine and hotel domains. Each review is annotated with one of four labels: positive, negative, neutral, or ambiguous.
## Tasks (input, output and metrics)
The task is to predict the correct label of the review.
**Input** ('*text*' column): sentence
**Output** ('*target*' column): label for sentence sentiment ('zero': neutral, 'minus': negative, 'plus': positive, 'amb': ambiguous)
**Domain**: Online reviews
**Measurements**: Accuracy, F1 Macro
**Example**:
Input: `Na samym wejściu hotel śmierdzi . W pokojach jest pleśń na ścianach , brudny dywan . W łazience śmierdzi chemią , hotel nie grzeje w pokojach panuje chłód . Wyposażenie pokoju jest stare , kran się rusza , drzwi na balkon nie domykają się . Jedzenie jest w małych ilościach i nie smaczne . Nie polecam nikomu tego hotelu .`
Input (translated by DeepL): `At the very entrance the hotel stinks . In the rooms there is mold on the walls , dirty carpet . The bathroom smells of chemicals , the hotel does not heat in the rooms are cold . The room furnishings are old , the faucet moves , the door to the balcony does not close . The food is in small quantities and not tasty . I would not recommend this hotel to anyone .`
Output: `1` (negative)
## Data splits
| Subset | Cardinality |
|--------|------------:|
| train | 6573 |
| val | 823 |
| test | 820 |
## Class distribution
| Class | train | dev | test |
|:--------|--------:|-------------:|-------:|
| minus | 0.3756 | 0.3694 | 0.4134 |
| plus | 0.2775 | 0.2868 | 0.2768 |
| amb | 0.1991 | 0.1883 | 0.1659 |
| zero | 0.1477 | 0.1555 | 0.1439 |
## Citation
```
@inproceedings{kocon-etal-2019-multi,
title = "Multi-Level Sentiment Analysis of {P}ol{E}mo 2.0: Extended Corpus of Multi-Domain Consumer Reviews",
author = "Koco{\'n}, Jan and
Mi{\l}kowski, Piotr and
Za{\'s}ko-Zieli{\'n}ska, Monika",
booktitle = "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/K19-1092",
doi = "10.18653/v1/K19-1092",
pages = "980--991",
abstract = "In this article we present an extended version of PolEmo {--} a corpus of consumer reviews from 4 domains: medicine, hotels, products and school. Current version (PolEmo 2.0) contains 8,216 reviews having 57,466 sentences. Each text and sentence was manually annotated with sentiment in 2+1 scheme, which gives a total of 197,046 annotations. We obtained a high value of Positive Specific Agreement, which is 0.91 for texts and 0.88 for sentences. PolEmo 2.0 is publicly available under a Creative Commons copyright license. We explored recent deep learning approaches for the recognition of sentiment, such as Bi-directional Long Short-Term Memory (BiLSTM) and Bidirectional Encoder Representations from Transformers (BERT).",
}
```
## License
```
Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
```
## Links
[HuggingFace](https://huggingface.co/datasets/clarin-pl/polemo2-official)
[Source](https://clarin-pl.eu/dspace/handle/11321/710)
[Paper](https://aclanthology.org/K19-1092/)
## Examples
### Loading
```python
from pprint import pprint
from datasets import load_dataset
dataset = load_dataset("clarin-pl/polemo2-official")
pprint(dataset['train'][0])
# {'target': 1,
# 'text': 'Na samym wejściu hotel śmierdzi . W pokojach jest pleśń na ścianach '
# ', brudny dywan . W łazience śmierdzi chemią , hotel nie grzeje w '
# 'pokojach panuje chłód . Wyposażenie pokoju jest stare , kran się '
# 'rusza , drzwi na balkon nie domykają się . Jedzenie jest w małych '
# 'ilościach i nie smaczne . Nie polecam nikomu tego hotelu .'}
```
### Evaluation
```python
import random
from pprint import pprint
from datasets import load_dataset, load_metric
dataset = load_dataset("clarin-pl/polemo2-official")
references = dataset["test"]["target"]
# generate random predictions
predictions = [random.randrange(max(references) + 1) for _ in range(len(references))]
acc = load_metric("accuracy")
f1 = load_metric("f1")
acc_score = acc.compute(predictions=predictions, references=references)
f1_score = f1.compute(predictions=predictions, references=references, average='macro')
pprint(acc_score)
pprint(f1_score)
# {'accuracy': 0.2475609756097561}
# {'f1': 0.23747048177471738}
```
|
result-kand2-sdxl-wuerst-karlo/f0cdf5c4 | 2023-09-15T09:18:20.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 776 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 207
num_examples: 10
download_size: 1427
dataset_size: 207
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "f0cdf5c4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dkoterwa/kor-sts | 2023-07-25T09:52:30.000Z | [
"license:cc-by-sa-4.0",
"region:us"
] | dkoterwa | null | null | null | 0 | 775 | ---
license: cc-by-sa-4.0
dataset_info:
features:
- name: id
dtype: int64
- name: genre
dtype: string
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: score
dtype: float64
splits:
- name: train
num_bytes: 1034815
num_examples: 5691
- name: valid
num_bytes: 297254
num_examples: 1465
- name: test
num_bytes: 247409
num_examples: 1376
download_size: 837346
dataset_size: 1579478
---
# Korean Semantic Textual Similarity (KorSTS) Dataset
For a better dataset description, please visit this GitHub repository prepared by the authors of the article: [LINK](https://github.com/kakaobrain/kor-nlu-datasets) <br>
<br>
**This dataset was prepared by converting the TSV files from this repository.** The idea was to share the dataset with a broader audience. I am not the original author of it. <br>
Because of the specificity of the `read_csv` method from the pandas library, a couple of observations had to be deleted due to their formatting (54 in train, 35 in valid, and 1 in test).
Additionally, **None values have been removed from the dataset** (5 from train, 1 from valid, and 3 from test).
**How to download**
```
from datasets import load_dataset
data = load_dataset("dkoterwa/kor-sts")
```
**If you use this dataset for research, please cite this paper:**
```
@article{ham2020kornli,
title={KorNLI and KorSTS: New Benchmark Datasets for Korean Natural Language Understanding},
author={Ham, Jiyeon and Choe, Yo Joong and Park, Kyubyong and Choi, Ilji and Soh, Hyungjoon},
journal={arXiv preprint arXiv:2004.03289},
year={2020}
}
``` |
SetFit/mrpc | 2022-02-28T13:18:30.000Z | [
"region:us"
] | SetFit | null | null | null | 4 | 774 | # Glue MRPC
This dataset is a port of the official [`mrpc` dataset](https://huggingface.co/datasets/glue/viewer/mrpc/train) on the Hub.
Note that the sentence1 and sentence2 columns have been renamed to text1 and text2 respectively.
Also, the test split is not labeled; the label column values are always -1.
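Because of the renaming, code written against the official `glue/mrpc` column names needs a small adapter; a minimal sketch in plain Python (the record below is a made-up example, not a real row):

```python
# Column renaming applied by this port (sentence1/sentence2 -> text1/text2)
RENAMES = {"sentence1": "text1", "sentence2": "text2"}

def to_setfit_schema(example: dict) -> dict:
    """Rename glue/mrpc-style columns to this dataset's text1/text2 schema."""
    return {RENAMES.get(key, key): value for key, value in example.items()}

glue_row = {"sentence1": "He said hi.", "sentence2": "He greeted us.", "label": 1}
print(to_setfit_schema(glue_row))
# {'text1': 'He said hi.', 'text2': 'He greeted us.', 'label': 1}
```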
|
result-kand2-sdxl-wuerst-karlo/d6e12779 | 2023-09-15T09:41:14.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 774 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 208
num_examples: 10
download_size: 1403
dataset_size: 208
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "d6e12779"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
result-kand2-sdxl-wuerst-karlo/a350d62a | 2023-09-15T11:08:21.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 774 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 179
num_examples: 10
download_size: 1365
dataset_size: 179
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "a350d62a"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
rewoo/planner_instruction_tuning_2k | 2023-05-22T04:54:20.000Z | [
"license:mit",
"region:us"
] | rewoo | null | null | null | 15 | 771 | ---
license: mit
---
*Bootstrapped 2k-example Planner finetuning dataset for ReWOO.*
It is a mixture of "correct" HotpotQA and TriviaQA task-planning trajectories in the ReWOO framework. |
result-kand2-sdxl-wuerst-karlo/e395fcfb | 2023-09-15T15:42:19.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 771 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 152
num_examples: 10
download_size: 1308
dataset_size: 152
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "e395fcfb"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Alanox/stanford-dogs | 2023-09-08T13:51:01.000Z | [
"license:mit",
"region:us"
] | Alanox | The Stanford Dogs dataset contains images of 120 breeds of dogs from around the world. This dataset has been built using images and annotation from ImageNet for the task of fine-grained image categorization. | null | null | 1 | 769 | ---
pretty_name: "Stanford Dogs"
license: "mit"
task_category: "Classification"
---
# Dataset
This dataset is extracted from [Stanford Dogs Dataset](http://vision.stanford.edu/aditya86/ImageNetDogs/)
# Load
```python
import datasets
dataset = datasets.load_dataset("Alanox/stanford-dogs", split="full")
print(dataset)
"""
Dataset({
features: ['name', 'annotations', 'target', 'image'],
num_rows: 20580
})
"""
print(dataset.features)
"""
{
'name': Value(dtype='string', id=None),
'annotations': Array2D(shape=(None, 4), dtype='int32', id=None),
# ["xmin", "ymin", "xmax", "ymax"]
'target': Value(dtype='string', id=None),
'image': Image(decode=True, id=None)
}
"""
```
This dataset was created by the scripts from [this github repo](https://github.com/AlanBlanchet/ClassezDesImagesAvecDesAlgorithmesDeDeeplearning)
# Fixes
- `n02105855_2933.jpg` was not a `.jpg`. Converted all images to `.jpg`
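Each row's `annotations` array stores one `[xmin, ymin, xmax, ymax]` box per annotated dog, so box width and height follow directly; a small sketch (the coordinate values below are made up for illustration):

```python
# Boxes follow the [xmin, ymin, xmax, ymax] convention noted above.
# Hypothetical annotation values, for illustration only:
boxes = [[25, 30, 180, 200], [10, 5, 60, 90]]

def box_size(box):
    """Return (width, height) of an [xmin, ymin, xmax, ymax] box."""
    xmin, ymin, xmax, ymax = box
    return xmax - xmin, ymax - ymin

print([box_size(b) for b in boxes])
# [(155, 170), (50, 85)]
```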
|
result-kand2-sdxl-wuerst-karlo/94daaaa5 | 2023-09-15T16:14:38.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 769 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 198
num_examples: 10
download_size: 1363
dataset_size: 198
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "94daaaa5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Dahoas/hf_cot_gsm8k | 2023-10-01T14:40:46.000Z | [
"region:us"
] | Dahoas | null | null | null | 0 | 768 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: prompt
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 8663589
num_examples: 7217
- name: val
num_bytes: 301562
num_examples: 256
- name: test
num_bytes: 1610805
num_examples: 1319
download_size: 5575205
dataset_size: 10575956
---
# Dataset Card for "hf_cot_gsm8k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
result-kand2-sdxl-wuerst-karlo/e06f76e8 | 2023-09-15T18:17:10.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 766 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 169
num_examples: 10
download_size: 1323
dataset_size: 169
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "e06f76e8"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tner/bionlp2004 | 2022-08-10T01:01:51.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"license:other",
"region:us"
] | tner | [BioNLP2004 NER dataset](https://aclanthology.org/W04-1213.pdf) | @inproceedings{collier-kim-2004-introduction,
title = "Introduction to the Bio-entity Recognition Task at {JNLPBA}",
author = "Collier, Nigel and
Kim, Jin-Dong",
booktitle = "Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and its Applications ({NLPBA}/{B}io{NLP})",
month = aug # " 28th and 29th",
year = "2004",
address = "Geneva, Switzerland",
publisher = "COLING",
url = "https://aclanthology.org/W04-1213",
pages = "73--78",
} | null | 2 | 764 | ---
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: BioNLP2004
---
# Dataset Card for "tner/bionlp2004"
## Dataset Description
- **Repository:** [T-NER](https://github.com/asahi417/tner)
- **Paper:** [https://aclanthology.org/W04-1213.pdf](https://aclanthology.org/W04-1213.pdf)
- **Dataset:** BioNLP2004
- **Domain:** Biochemical
- **Number of Entity:** 5
### Dataset Summary
BioNLP2004 NER dataset formatted as part of the [TNER](https://github.com/asahi417/tner) project.
The BioNLP2004 dataset contains only training and test splits, so we randomly sampled half as many instances as the test set from the training set to create a validation set.
- Entity Types: `DNA`, `protein`, `cell_type`, `cell_line`, `RNA`
## Dataset Structure
### Data Instances
An example of `train` looks as follows.
```
{
'tags': [0, 0, 0, 0, 3, 0, 9, 10, 0, 0, 0, 0, 0, 7, 8, 0, 3, 0, 0, 9, 10, 10, 0, 0],
'tokens': ['In', 'the', 'presence', 'of', 'Epo', ',', 'c-myb', 'mRNA', 'declined', 'and', '20', '%', 'of', 'K562', 'cells', 'synthesized', 'Hb', 'regardless', 'of', 'antisense', 'myb', 'RNA', 'expression', '.']
}
```
### Label ID
The label2id dictionary can be found [here](https://huggingface.co/datasets/tner/fin/raw/main/dataset/label.json).
```python
{
"O": 0,
"B-DNA": 1,
"I-DNA": 2,
"B-protein": 3,
"I-protein": 4,
"B-cell_type": 5,
"I-cell_type": 6,
"B-cell_line": 7,
"I-cell_line": 8,
"B-RNA": 9,
"I-RNA": 10
}
```
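Inverting this mapping lets you decode the integer `tags` back into entity spans; a minimal sketch, applied to the `train` example shown earlier in this card:

```python
# The label2id mapping from the card, inverted for decoding
label2id = {
    "O": 0, "B-DNA": 1, "I-DNA": 2, "B-protein": 3, "I-protein": 4,
    "B-cell_type": 5, "I-cell_type": 6, "B-cell_line": 7, "I-cell_line": 8,
    "B-RNA": 9, "I-RNA": 10,
}
id2label = {i: label for label, i in label2id.items()}

def decode_entities(tokens, tags):
    """Collect (entity_type, surface_text) spans from IOB-tagged tokens."""
    entities, span, span_type = [], [], None
    for token, tag in zip(tokens, tags):
        label = id2label[tag]
        if label.startswith("B-"):             # a new entity starts here
            if span:
                entities.append((span_type, " ".join(span)))
            span, span_type = [token], label[2:]
        elif label.startswith("I-") and span:  # current entity continues
            span.append(token)
        else:                                  # outside any entity
            if span:
                entities.append((span_type, " ".join(span)))
            span, span_type = [], None
    if span:
        entities.append((span_type, " ".join(span)))
    return entities

tokens = ['In', 'the', 'presence', 'of', 'Epo', ',', 'c-myb', 'mRNA',
          'declined', 'and', '20', '%', 'of', 'K562', 'cells', 'synthesized',
          'Hb', 'regardless', 'of', 'antisense', 'myb', 'RNA', 'expression', '.']
tags = [0, 0, 0, 0, 3, 0, 9, 10, 0, 0, 0, 0, 0, 7, 8, 0, 3, 0, 0, 9, 10, 10, 0, 0]
print(decode_entities(tokens, tags))
# [('protein', 'Epo'), ('RNA', 'c-myb mRNA'), ('cell_line', 'K562 cells'),
#  ('protein', 'Hb'), ('RNA', 'antisense myb RNA')]
```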
### Data Splits
| name |train|validation|test|
|---------|----:|---------:|---:|
|bionlp2004 |16619 | 1927| 3856|
### Citation Information
```
@inproceedings{collier-kim-2004-introduction,
title = "Introduction to the Bio-entity Recognition Task at {JNLPBA}",
author = "Collier, Nigel and
Kim, Jin-Dong",
booktitle = "Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and its Applications ({NLPBA}/{B}io{NLP})",
month = aug # " 28th and 29th",
year = "2004",
address = "Geneva, Switzerland",
publisher = "COLING",
url = "https://aclanthology.org/W04-1213",
pages = "73--78",
}
``` |
result-kand2-sdxl-wuerst-karlo/bbe01f48 | 2023-09-15T18:27:46.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 764 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 217
num_examples: 10
download_size: 1377
dataset_size: 217
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "bbe01f48"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
cambridgeltl/vsr_zeroshot | 2023-03-22T17:27:58.000Z | [
"task_categories:text-classification",
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-4.0",
"multimodal",
"vision-and-language",
"arxiv:2205.00363",
"region:us"
] | cambridgeltl | null | null | null | 1 | 763 | ---
license: cc-by-4.0
task_categories:
- text-classification
- question-answering
language:
- en
tags:
- multimodal
- vision-and-language
pretty_name: VSR (zeroshot)
size_categories:
- 1K<n<10K
---
# VSR: Visual Spatial Reasoning
This is the **zero-shot set** of **VSR**: *Visual Spatial Reasoning* (TACL 2023) [[paper]](https://arxiv.org/abs/2205.00363).
### Usage
```python
from datasets import load_dataset
data_files = {"train": "train.jsonl", "dev": "dev.jsonl", "test": "test.jsonl"}
dataset = load_dataset("cambridgeltl/vsr_zeroshot", data_files=data_files)
```
Note that the image files still need to be downloaded separately. See [`data/`](https://github.com/cambridgeltl/visual-spatial-reasoning/tree/master/data) for details.
Go to our [github repo](https://github.com/cambridgeltl/visual-spatial-reasoning) for more details.
### Citation
If you find VSR useful, please cite:
```bibtex
@article{Liu2022VisualSR,
title={Visual Spatial Reasoning},
author={Fangyu Liu and Guy Edward Toh Emerson and Nigel Collier},
journal={Transactions of the Association for Computational Linguistics},
year={2023},
}
```
|
C-MTEB/T2Retrieval | 2023-07-28T10:11:06.000Z | [
"region:us"
] | C-MTEB | null | null | null | 0 | 761 | ---
configs:
- config_name: default
data_files:
- split: corpus
path: data/corpus-*
- split: queries
path: data/queries-*
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
splits:
- name: corpus
num_bytes: 265607316
num_examples: 118605
- name: queries
num_bytes: 1000130
num_examples: 22812
download_size: 157606535
dataset_size: 266607446
---
# Dataset Card for "T2Retrieval"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
teven/enwiki_100k | 2023-04-03T17:16:55.000Z | [
"region:us"
] | teven | null | null | null | 1 | 755 | ---
dataset_info:
features:
- name: metadata
dtype: string
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 2570893740
num_examples: 1000000
download_size: 1550572660
dataset_size: 2570893740
---
# Dataset Card for "enwiki_100k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
fantasyfish/laion-art | 2023-06-30T08:55:13.000Z | [
"region:us"
] | fantasyfish | null | null | null | 0 | 755 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
- name: aesthetic
dtype: float64
splits:
- name: train
num_bytes: 11640624315.8
num_examples: 20072
- name: test
num_bytes: 538961083.0
num_examples: 855
download_size: 12347056207
dataset_size: 12179585398.8
---
# Dataset Card for "laion-art"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mstz/heart_failure | 2023-04-16T17:31:15.000Z | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"heart failure",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] | mstz | null | null | null | 2 | 754 | ---
language:
- en
tags:
- heart failure
- tabular_classification
- binary_classification
- UCI
pretty_name: Heart failure
size_categories:
- n<1K
task_categories:
- tabular-classification
configs:
- death
license: cc
---
# Heart failure
The [Heart failure dataset](https://www.kaggle.com/datasets/andrewmvd/heart-failure-clinical-data) from Kaggle.
Predict patient death from heart failure given personal medical data.
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|-----------------------------------------------------------------|
| death | Binary classification | Did the patient die? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/heart_failure", "death")["train"]
```
# Features
|**Feature** |**Type** |
|----------------------------------------------------|-----------|
|`age` |`int8` |
|`has_anaemia` |`int8` |
|`creatinine_phosphokinase_concentration_in_blood` |`float64` |
|`has_diabetes` |`int8` |
|`heart_ejection_fraction` |`float64` |
|`has_high_blood_pressure` |`int8` |
|`platelets_concentration_in_blood` |`float64` |
|`serum_creatinine_concentration_in_blood` |`float64` |
|`serum_sodium_concentration_in_blood` |`float64` |
|`sex` |`int8` |
|`is_smoker` |`int8` |
|`days_in_study` |`int64` | |
result-kand2-sdxl-wuerst-karlo/1d35978a | 2023-09-16T01:30:03.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 754 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 163
num_examples: 10
download_size: 1301
dataset_size: 163
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "1d35978a"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mteb/biosses-sts | 2022-09-27T19:13:38.000Z | [
"language:en",
"region:us"
] | mteb | null | null | null | 0 | 752 | ---
language:
- en
--- |
wdc/products-2017 | 2022-10-23T05:50:24.000Z | [
"task_categories:text-classification",
"annotations_creators:weak supervision",
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | wdc | Many e-shops have started to mark-up product data within their HTML pages using the schema.org vocabulary. The Web Data Commons project regularly extracts such data from the Common Crawl, a large public web crawl. The Web Data Commons Training and Test Sets for Large-Scale Product Matching contain product offers from different e-shops in the form of binary product pairs (with corresponding label "match" or "no match")
In order to support the evaluation of machine learning-based matching methods, the data is split into training, validation and test set. We provide training and validation sets in four different sizes for four product categories. The labels of the test sets were manually checked while those of the training sets were derived using shared product identifiers from the Web via weak supervision.
The data stems from the WDC Product Data Corpus for Large-Scale Product Matching - Version 2.0 which consists of 26 million product offers originating from 79 thousand websites. | @inproceedings{primpeli2019wdc,
title={The WDC training dataset and gold standard for large-scale product matching},
author={Primpeli, Anna and Peeters, Ralph and Bizer, Christian},
booktitle={Companion Proceedings of The 2019 World Wide Web Conference},
pages={381--386},
year={2019}
} | null | 1 | 751 | ---
annotations_creators:
- weak supervision
- expert-generated
language:
- en
language_bcp47:
- en-US
license:
- unknown
multilinguality:
- monolingual
pretty_name: products-2017
size_categories:
- 1K<n<10K
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
- data-integration
task_ids:
- entity-matching
- identity-resolution
- product-matching
paperswithcode_id: wdc-products
---
# Dataset Card for [products-2017]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [LSPCv2 Homepage](http://webdatacommons.org/largescaleproductcorpus/v2/index.html)
- **Point of Contact:** [Ralph Peeters](mailto:ralph.peeters@uni-mannheim.de)
### Dataset Summary
Many e-shops have started to mark up product data within their HTML pages using the schema.org vocabulary. The Web Data Commons project regularly extracts such data from the Common Crawl, a large public web crawl. The Web Data Commons Training and Test Sets for Large-Scale Product Matching contain product offers from different e-shops in the form of binary product pairs (with corresponding label "match" or "no match").
In order to support the evaluation of machine learning-based matching methods, the data is split into training, validation and test set. We provide training and validation sets in four different sizes for four product categories. The labels of the test sets were manually checked while those of the training sets were derived using shared product identifiers from the Web via weak supervision.
The data stems from the WDC Product Data Corpus for Large-Scale Product Matching - Version 2.0 which consists of 26 million product offers originating from 79 thousand websites.
### Supported Tasks and Leaderboards
Entity Matching, Product Matching
### Languages
English
## Dataset Structure
### Data Instances
The data is structured as pairs of product offers with the corresponding match/non-match label. This is an example instance from the computers category:
```
{"pair_id":"581109#16637861","label":0,"id_left":581109,"category_left":"Computers_and_Accessories","cluster_id_left":1324529,"brand_left":"\"Gigabyte\"@en","title_left":" \"Gigabyte Radeon RX 480 G1 Gaming 4096MB GDDR5 PCI-Express Graphics Card\"@en \"Gigabyte Gr| OcUK\"@en","description_left":"\"GV-RX480G1 GAMING-4GD, Core Clock: 1202MHz, Boost Clock: 1290MHz, Memory: 4096MB 7000MHz GDDR5, Stream Processors: 2304, Crossfire Ready, VR Ready, FreeSync Ready, 3 Years Warranty\"@en ","price_left":null,"specTableContent_left":null,"id_right":16637861,"category_right":"Computers_and_Accessories","cluster_id_right":107415,"brand_right":"\"Gigabyte\"@en","title_right":" \"Gigabyte Radeon RX 550 Gaming OC 2048MB GDDR5 PCI-Express Graphics Card\"@en \"Gigabyte Gr| OcUK\"@en","description_right":"\"GV-RX550GAMING OC-2GD, Boost: 1219MHz, Memory: 2048MB 7000MHz GDDR5, Stream Processors: 512, DirectX 12 Support, 3 Years Warranty\"@en ","price_right":null,"specTableContent_right":null}
```
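Each line of the raw files is one such JSON object, with the left and right offer attributes suffixed `_left` and `_right`. As a sketch (toy values, field names as documented below; `split_pair` is a hypothetical helper, not part of the dataset tooling), a pair can be separated back into its two offers:

```python
import json

# Toy pair record following the documented schema; the values are illustrative.
pair = json.loads("""{"pair_id": "581109#16637861", "label": 0,
  "id_left": 581109, "title_left": "Gigabyte Radeon RX 480",
  "id_right": 16637861, "title_right": "Gigabyte Radeon RX 550"}""")

def split_pair(record):
    """Separate a pair record into (left offer, right offer, label)."""
    left = {k[:-5]: v for k, v in record.items() if k.endswith("_left")}
    right = {k[:-6]: v for k, v in record.items() if k.endswith("_right")}
    return left, right, record["label"]

left, right, label = split_pair(pair)
print(left["title"], "-> match" if label else "-> no match")
```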
### Data Fields
- pair_id: unique identifier of a pair (string)
- label: binary label, match or non-match (int)
The following attributes are contained twice, once for the first and once for the second product offer:
- id: unique id of the product offer (int)
- category: product category (string)
- cluster_id: id of the product cluster from the original corpus this offer belongs to (int)
- brand: brand of the product (string)
- title: product title (string)
- description: longer product description (string)
- price: price of the product offer (string)
- specTableContent: additional data found in specification tables on the webpage that contains the product offer (string)
### Data Splits
- Computers
- Test set - 1100 pairs
- Small Train set - 2267 pairs
- Small Validation set - 567 pairs
- Medium Train set - 6475 pairs
- Medium Validation set - 1619 pairs
- Large Train set - 26687 pairs
- Large Validation set - 6672 pairs
- XLarge Train set - 54768 pairs
- Xlarge Validation set - 13693 pairs
- Cameras
- Test set - 1100 pairs
- Small Train set - 1508 pairs
- Small Validation set - 378 pairs
- Medium Train set - 4204 pairs
- Medium Validation set - 1051 pairs
- Large Train set - 16028 pairs
- Large Validation set - 4008 pairs
- XLarge Train set - 33821 pairs
- Xlarge Validation set - 8456 pairs
- Watches
- Test set - 1100 pairs
- Small Train set - 1804 pairs
- Small Validation set - 451 pairs
- Medium Train set - 5130 pairs
- Medium Validation set - 1283 pairs
- Large Train set - 21621 pairs
- Large Validation set - 5406 pairs
- XLarge Train set - 49255 pairs
- Xlarge Validation set - 12314 pairs
- Shoes
- Test set - 1100 pairs
- Small Train set - 1650 pairs
- Small Validation set - 413 pairs
- Medium Train set - 4644 pairs
- Medium Validation set - 1161 pairs
- Large Train set - 18391 pairs
- Large Validation set - 4598 pairs
- XLarge Train set - 33943 pairs
- Xlarge Validation set - 8486 pairs
## Dataset Creation
### Annotations
#### Annotation process
- Training and Validation sets: distant supervision via shared schema.org product IDs
- Test sets: Single expert annotator
#### Who are the annotators?
[Ralph Peeters](https://www.uni-mannheim.de/dws/people/researchers/phd-students/ralph-peeters/)
## Additional Information
### Citation Information
```
@inproceedings{primpeli2019wdc,
title={The WDC training dataset and gold standard for large-scale product matching},
author={Primpeli, Anna and Peeters, Ralph and Bizer, Christian},
booktitle={Companion Proceedings of The 2019 World Wide Web Conference},
pages={381--386},
year={2019}
}
```
|
madao33/new-title-chinese | 2022-07-01T06:26:15.000Z | [
"region:us"
] | madao33 | null | null | null | 1 | 751 | Entry not found |
BeIR/nfcorpus | 2022-10-23T06:01:44.000Z | [
"task_categories:text-retrieval",
"task_ids:entity-linking-retrieval",
"task_ids:fact-checking-retrieval",
"multilinguality:monolingual",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | BeIR | null | null | null | 0 | 745 | ---
annotations_creators: []
language_creators: []
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
paperswithcode_id: beir
pretty_name: BEIR Benchmark
size_categories:
msmarco:
- 1M<n<10M
trec-covid:
- 100k<n<1M
nfcorpus:
- 1K<n<10K
nq:
- 1M<n<10M
hotpotqa:
- 1M<n<10M
fiqa:
- 10K<n<100K
arguana:
- 1K<n<10K
touche-2020:
- 100K<n<1M
cqadupstack:
- 100K<n<1M
quora:
- 100K<n<1M
dbpedia:
- 1M<n<10M
scidocs:
- 10K<n<100K
fever:
- 1M<n<10M
climate-fever:
- 1M<n<10M
scifact:
- 1K<n<10K
source_datasets: []
task_categories:
- text-retrieval
- zero-shot-retrieval
- information-retrieval
- zero-shot-information-retrieval
task_ids:
- passage-retrieval
- entity-linking-retrieval
- fact-checking-retrieval
- tweet-retrieval
- citation-prediction-retrieval
- duplication-question-retrieval
- argument-retrieval
- news-retrieval
- biomedical-information-retrieval
- question-answering-retrieval
---
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** nandan.thakur@uwaterloo.ca
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used for your experiments.
```python
# Loading sketch (not verified here): BeIR dataset repositories on the Hugging Face
# Hub conventionally expose "corpus" and "queries" configurations; the exact config
# names are an assumption.
from datasets import load_dataset

corpus = load_dataset("BeIR/nfcorpus", "corpus")
queries = load_dataset("BeIR/nfcorpus", "queries")
```
### Supported Tasks and Leaderboards
BEIR is used for zero-shot evaluation of information retrieval models, with nDCG@10 as the primary reported metric (Recall@100 is also commonly reported).
The current best performing models can be found on the [leaderboard](https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns).
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:
- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score` in this order. Keep the first row as a header. For example: `q1 doc1 1`
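As a sanity check, all three files can be written and parsed with the standard library alone (toy content here; the formats follow the description above):

```python
import csv
import json
import pathlib
import tempfile

tmp = pathlib.Path(tempfile.mkdtemp())

# corpus.jsonl: one document per line
(tmp / "corpus.jsonl").write_text(
    json.dumps({"_id": "doc1", "title": "Albert Einstein",
                "text": "Albert Einstein was a German-born...."}) + "\n")

# queries.jsonl: one query per line
(tmp / "queries.jsonl").write_text(
    json.dumps({"_id": "q1",
                "text": "Who developed the mass-energy equivalence formula?"}) + "\n")

# qrels.tsv: header row, then query-id / corpus-id / score
(tmp / "qrels.tsv").write_text("query-id\tcorpus-id\tscore\nq1\tdoc1\t1\n")

with (tmp / "corpus.jsonl").open() as f:
    corpus = {d["_id"]: d for d in map(json.loads, f)}
with (tmp / "qrels.tsv").open() as f:
    qrels = list(csv.DictReader(f, delimiter="\t"))

print(corpus["doc1"]["title"])  # Albert Einstein
print(qrels[0]["score"])        # 1 (csv reads it back as a string)
```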
### Data Instances
A high-level example of a BEIR dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
"text": "Albert Einstein was a German-born theoretical physicist. who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
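Given such qrels, a minimal evaluation sketch (Precision@1 over hypothetical rankings; BEIR itself reports metrics like nDCG@10 via its own tooling, so this is only an illustration of how the structures fit together):

```python
qrels = {"q1": {"doc1": 1}, "q2": {"doc2": 1}}

# Hypothetical ranked retrieval results per query, best first.
ranked = {"q1": ["doc1", "doc2"], "q2": ["doc1", "doc2"]}

def precision_at_1(qrels, ranked):
    """Fraction of queries whose top-ranked document is judged relevant."""
    hits = sum(1 for q, docs in ranked.items()
               if docs and qrels.get(q, {}).get(docs[0], 0) > 0)
    return hits / len(ranked)

print(precision_at_1(qrels, ranked))  # 0.5: q1's top hit is relevant, q2's is not
```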
### Data Fields
Examples from all configurations have the following features:
### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
- `_id`: a `string` feature representing the unique document id
- `title`: a `string` feature, denoting the title of the document.
- `text`: a `string` feature, denoting the text of the document.
### Queries
- `queries`: a `dict` feature representing the query, made up of:
- `_id`: a `string` feature representing the unique query id
- `text`: a `string` feature, denoting the text of the query.
### Qrels
- `qrels`: a `dict` feature representing the query document relevance judgements, made up of:
- `_id`: a `string` feature representing the query id
- `_id`: a `string` feature, denoting the document id.
- `score`: a `int32` feature, denoting the relevance judgement between query and document.
### Data Splits
| Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Download | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset. |
C-MTEB/T2Retrieval-qrels | 2023-07-28T10:11:11.000Z | [
"region:us"
] | C-MTEB | null | null | null | 0 | 744 | ---
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
dataset_info:
features:
- name: qid
dtype: string
- name: pid
dtype: string
- name: score
dtype: int64
splits:
- name: dev
num_bytes: 3133383
num_examples: 118932
download_size: 1146734
dataset_size: 3133383
---
# Dataset Card for "T2Retrieval-qrels"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
result-kand2-sdxl-wuerst-karlo/a48196ad | 2023-09-16T10:13:26.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 744 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 155
num_examples: 10
download_size: 1306
dataset_size: 155
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "a48196ad"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
result-kand2-sdxl-wuerst-karlo/8e18a25b | 2023-09-16T15:18:38.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 739 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 191
num_examples: 10
download_size: 1358
dataset_size: 191
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "8e18a25b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
result-kand2-sdxl-wuerst-karlo/a2d1bcf0 | 2023-09-16T15:16:31.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 738 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 220
num_examples: 10
download_size: 1379
dataset_size: 220
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "a2d1bcf0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
aqua_rat | 2022-11-18T18:20:44.000Z | [
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"arxiv:1705.04146",
"region:us"
] | null | A large-scale dataset consisting of approximately 100,000 algebraic word problems.
The solution to each question is explained step-by-step using natural language.
This data is used to train a program generation model that learns to generate the explanation,
while generating the program that solves the question. | @InProceedings{ACL,
title = {Program induction by rationale generation: Learning to solve and explain algebraic word problems},
authors={Ling, Wang and Yogatama, Dani and Dyer, Chris and Blunsom, Phil},
year={2017}
} | null | 7 | 734 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
- expert-generated
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- multiple-choice-qa
paperswithcode_id: aqua-rat
pretty_name: Algebra Question Answering with Rationales
dataset_info:
- config_name: raw
features:
- name: question
dtype: string
- name: options
sequence: string
- name: rationale
dtype: string
- name: correct
dtype: string
splits:
- name: train
num_bytes: 42333259
num_examples: 97467
- name: test
num_bytes: 116779
num_examples: 254
- name: validation
num_bytes: 118636
num_examples: 254
download_size: 47833135
dataset_size: 42568674
- config_name: tokenized
features:
- name: question
dtype: string
- name: options
sequence: string
- name: rationale
dtype: string
- name: correct
dtype: string
splits:
- name: train
num_bytes: 46493843
num_examples: 97467
- name: test
num_bytes: 126283
num_examples: 254
- name: validation
num_bytes: 128873
num_examples: 254
download_size: 52003894
dataset_size: 46748999
---
# Dataset Card for AQUA-RAT
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/deepmind/AQuA](https://github.com/deepmind/AQuA)
- **Repository:** [https://github.com/deepmind/AQuA](https://github.com/deepmind/AQuA)
- **Paper:** [https://arxiv.org/pdf/1705.04146.pdf](https://arxiv.org/pdf/1705.04146.pdf)
### Dataset Summary
A large-scale dataset consisting of approximately 100,000 algebraic word problems.
The solution to each question is explained step-by-step using natural language.
This data is used to train a program generation model that learns to generate the explanation,
while generating the program that solves the question.
### Supported Tasks and Leaderboards
### Languages
en
## Dataset Structure
### Data Instances
```
{
"question": "A grocery sells a bag of ice for $1.25, and makes 20% profit. If it sells 500 bags of ice, how much total profit does it make?",
"options": ["A)125", "B)150", "C)225", "D)250", "E)275"],
"rationale": "Profit per bag = 1.25 * 0.20 = 0.25\nTotal profit = 500 * 0.25 = 125\nAnswer is A.",
"correct": "A"
}
```
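The `correct` letter indexes into `options` via its `A)`/`B)`/... prefix. A small sketch of recovering the answer value from the instance above (`answer_value` is a hypothetical helper, not part of the dataset):

```python
instance = {
    "question": "A grocery sells a bag of ice for $1.25, and makes 20% profit. "
                "If it sells 500 bags of ice, how much total profit does it make?",
    "options": ["A)125", "B)150", "C)225", "D)250", "E)275"],
    "rationale": "Profit per bag = 1.25 * 0.20 = 0.25\n"
                 "Total profit = 500 * 0.25 = 125\nAnswer is A.",
    "correct": "A",
}

def answer_value(example):
    """Return the option text matching the `correct` letter."""
    prefix = example["correct"] + ")"
    for option in example["options"]:
        if option.startswith(prefix):
            return option[len(prefix):]
    raise ValueError("correct letter not found in options")

print(answer_value(instance))  # "125", consistent with 500 * 1.25 * 0.20
```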
### Data Fields
- `question` : (str) A natural language definition of the problem to solve
- `options` : (list(str)) 5 possible options (A, B, C, D and E), among which one is correct
- `rationale` : (str) A natural language description of the solution to the problem
- `correct` : (str) The correct option
### Data Splits
| | Train | Valid | Test |
| ----- | ------ | ----- | ---- |
| Examples | 97467 | 254 | 254 |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
Copyright 2017 Google Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
### Citation Information
```
@article{ling2017program,
title={Program induction by rationale generation: Learning to solve and explain algebraic word problems},
author={Ling, Wang and Yogatama, Dani and Dyer, Chris and Blunsom, Phil},
journal={ACL},
year={2017}
}
```
### Contributions
Thanks to [@arkhalid](https://github.com/arkhalid) for adding this dataset. |
covost2 | 2022-11-18T19:46:56.000Z | [
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:extended|other-common-voice",
"language:ar",
"language:ca",
"language:cy",
"language:de",
"language:es",
"language:et",
"language:fa",
"language:fr",
"language:id",
"language:it",
"language:ja",
"language:lv",
"language:mn",
"language:nl",
"language:pt",
"language:ru",
"language:sl",
"language:sv",
"language:ta",
"language:tr",
"language:zh",
"license:cc-by-nc-4.0",
"arxiv:2007.10310",
"region:us"
] | null | CoVoST 2, a large-scale multilingual speech translation corpus covering translations from 21 languages into English and from English into 15 languages. The dataset is created using Mozilla’s open source Common Voice database of crowdsourced voice recordings.
Note that in order to limit the required storage for preparing this dataset, the audio
is stored in the .mp3 format and is not converted to a float32 array. To convert the audio
file to a float32 array, please make use of the `.map()` function as follows:
```python
import torchaudio
def map_to_array(batch):
speech_array, _ = torchaudio.load(batch["file"])
batch["speech"] = speech_array.numpy()
return batch
# `dataset` is assumed to have been loaded beforehand, e.g. via `load_dataset`.
dataset = dataset.map(map_to_array, remove_columns=["file"])
``` | @misc{wang2020covost,
title={CoVoST 2: A Massively Multilingual Speech-to-Text Translation Corpus},
author={Changhan Wang and Anne Wu and Juan Pino},
year={2020},
eprint={2007.10310},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | null | 6 | 734 | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
- expert-generated
language:
- ar
- ca
- cy
- de
- es
- et
- fa
- fr
- id
- it
- ja
- lv
- mn
- nl
- pt
- ru
- sl
- sv
- ta
- tr
- zh
language_bcp47:
- sv-SE
- zh-CN
license:
- cc-by-nc-4.0
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|other-common-voice
task_categories:
- automatic-speech-recognition
task_ids: []
paperswithcode_id: null
pretty_name: CoVoST 2
dataset_info:
- config_name: en_de
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 110716293
num_examples: 289430
- name: validation
num_bytes: 5971731
num_examples: 15531
- name: test
num_bytes: 5689684
num_examples: 15531
download_size: 25779505
dataset_size: 122377708
- config_name: en_tr
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 109474265
num_examples: 289430
- name: validation
num_bytes: 5914622
num_examples: 15531
- name: test
num_bytes: 5619271
num_examples: 15531
download_size: 23659131
dataset_size: 121008158
- config_name: en_fa
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 119490720
num_examples: 289430
- name: validation
num_bytes: 6423535
num_examples: 15531
- name: test
num_bytes: 6103617
num_examples: 15531
download_size: 26148420
dataset_size: 132017872
- config_name: en_sv-SE
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 108557530
num_examples: 289430
- name: validation
num_bytes: 5845918
num_examples: 15531
- name: test
num_bytes: 5580039
num_examples: 15531
download_size: 23671482
dataset_size: 119983487
- config_name: en_mn
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 123950136
num_examples: 289430
- name: validation
num_bytes: 6693044
num_examples: 15531
- name: test
num_bytes: 6293633
num_examples: 15531
download_size: 27527436
dataset_size: 136936813
- config_name: en_zh-CN
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 106490939
num_examples: 289430
- name: validation
num_bytes: 5735331
num_examples: 15531
- name: test
num_bytes: 5487808
num_examples: 15531
download_size: 24280932
dataset_size: 117714078
- config_name: en_cy
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 109317182
num_examples: 289430
- name: validation
num_bytes: 5894579
num_examples: 15531
- name: test
num_bytes: 5626428
num_examples: 15531
download_size: 24224499
dataset_size: 120838189
- config_name: en_ca
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 109922455
num_examples: 289430
- name: validation
num_bytes: 5924345
num_examples: 15531
- name: test
num_bytes: 5623227
num_examples: 15531
download_size: 24167201
dataset_size: 121470027
- config_name: en_sl
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 107987860
num_examples: 289430
- name: validation
num_bytes: 5838299
num_examples: 15531
- name: test
num_bytes: 5537805
num_examples: 15531
download_size: 23421999
dataset_size: 119363964
- config_name: en_et
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 107707024
num_examples: 289430
- name: validation
num_bytes: 5810185
num_examples: 15531
- name: test
num_bytes: 5543309
num_examples: 15531
download_size: 23223843
dataset_size: 119060518
- config_name: en_id
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 109456930
num_examples: 289430
- name: validation
num_bytes: 5896953
num_examples: 15531
- name: test
num_bytes: 5634939
num_examples: 15531
download_size: 22904065
dataset_size: 120988822
- config_name: en_ar
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 116732296
num_examples: 289430
- name: validation
num_bytes: 6280190
num_examples: 15531
- name: test
num_bytes: 5947069
num_examples: 15531
download_size: 25301304
dataset_size: 128959555
- config_name: en_ta
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 146318684
num_examples: 289430
- name: validation
num_bytes: 7944020
num_examples: 15531
- name: test
num_bytes: 7411400
num_examples: 15531
download_size: 30037790
dataset_size: 161674104
- config_name: en_lv
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 109532576
num_examples: 289430
- name: validation
num_bytes: 5905197
num_examples: 15531
- name: test
num_bytes: 5625189
num_examples: 15531
download_size: 24573927
dataset_size: 121062962
- config_name: en_ja
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 114741253
num_examples: 289430
- name: validation
num_bytes: 6161930
num_examples: 15531
- name: test
num_bytes: 5883608
num_examples: 15531
download_size: 26664247
dataset_size: 126786791
- config_name: fr_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 75792665
num_examples: 207374
- name: validation
num_bytes: 5487082
num_examples: 14760
- name: test
num_bytes: 5525498
num_examples: 14760
download_size: 7282129
dataset_size: 86805245
- config_name: de_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 47678171
num_examples: 127834
- name: validation
num_bytes: 5106253
num_examples: 13511
- name: test
num_bytes: 5066500
num_examples: 13511
download_size: 9926797
dataset_size: 57850924
- config_name: es_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 29152515
num_examples: 79015
- name: validation
num_bytes: 4974593
num_examples: 13221
- name: test
num_bytes: 4983920
num_examples: 13221
download_size: 3202080
dataset_size: 39111028
- config_name: ca_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 35902579
num_examples: 95854
- name: validation
num_bytes: 4798435
num_examples: 12730
- name: test
num_bytes: 4804941
num_examples: 12730
download_size: 5021926
dataset_size: 45505955
- config_name: it_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 11952709
num_examples: 31698
- name: validation
num_bytes: 3393315
num_examples: 8940
- name: test
num_bytes: 3412207
num_examples: 8951
download_size: 1691247
dataset_size: 18758231
- config_name: ru_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 5610194
num_examples: 12112
- name: validation
num_bytes: 2819414
num_examples: 6110
- name: test
num_bytes: 2923961
num_examples: 6300
download_size: 1443078
dataset_size: 11353569
- config_name: zh-CN_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 2791288
num_examples: 7085
- name: validation
num_bytes: 1918796
num_examples: 4843
- name: test
num_bytes: 1908633
num_examples: 4898
download_size: 587550
dataset_size: 6618717
- config_name: pt_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 3095722
num_examples: 9158
- name: validation
num_bytes: 1133404
num_examples: 3318
- name: test
num_bytes: 1384251
num_examples: 4023
download_size: 476419
dataset_size: 5613377
- config_name: fa_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 18015738
num_examples: 53949
- name: validation
num_bytes: 1241531
num_examples: 3445
- name: test
num_bytes: 1263271
num_examples: 3445
download_size: 3864623
dataset_size: 20520540
- config_name: et_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 808508
num_examples: 1782
- name: validation
num_bytes: 690694
num_examples: 1576
- name: test
num_bytes: 685375
num_examples: 1571
download_size: 246569
dataset_size: 2184577
- config_name: mn_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 900588
num_examples: 2067
- name: validation
num_bytes: 765543
num_examples: 1761
- name: test
num_bytes: 762577
num_examples: 1759
download_size: 189710
dataset_size: 2428708
- config_name: nl_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 2468140
num_examples: 7108
- name: validation
num_bytes: 594458
num_examples: 1699
- name: test
num_bytes: 594979
num_examples: 1699
download_size: 543795
dataset_size: 3657577
- config_name: tr_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 1391148
num_examples: 3966
- name: validation
num_bytes: 566458
num_examples: 1624
- name: test
num_bytes: 570760
num_examples: 1629
download_size: 280904
dataset_size: 2528366
- config_name: ar_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 743065
num_examples: 2283
- name: validation
num_bytes: 575077
num_examples: 1758
- name: test
num_bytes: 552356
num_examples: 1695
download_size: 109802
dataset_size: 1870498
- config_name: sv-SE_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 698800
num_examples: 2160
- name: validation
num_bytes: 438319
num_examples: 1349
- name: test
num_bytes: 517738
num_examples: 1595
download_size: 96161
dataset_size: 1654857
- config_name: lv_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 747290
num_examples: 2337
- name: validation
num_bytes: 360941
num_examples: 1125
- name: test
num_bytes: 519183
num_examples: 1629
download_size: 88836
dataset_size: 1627414
- config_name: sl_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 602420
num_examples: 1843
- name: validation
num_bytes: 165977
num_examples: 509
- name: test
num_bytes: 115414
num_examples: 360
download_size: 58445
dataset_size: 883811
- config_name: ta_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 534564
num_examples: 1358
- name: validation
num_bytes: 150428
num_examples: 384
- name: test
num_bytes: 303843
num_examples: 786
download_size: 55659
dataset_size: 988835
- config_name: ja_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 396334
num_examples: 1119
- name: validation
num_bytes: 226054
num_examples: 635
- name: test
num_bytes: 241310
num_examples: 684
download_size: 54666
dataset_size: 863698
- config_name: id_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 406989
num_examples: 1243
- name: validation
num_bytes: 259134
num_examples: 792
- name: test
num_bytes: 277053
num_examples: 844
download_size: 51755
dataset_size: 943176
- config_name: cy_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 432071
num_examples: 1241
- name: validation
num_bytes: 236107
num_examples: 690
- name: test
num_bytes: 236713
num_examples: 690
download_size: 875557
dataset_size: 904891
---
# Dataset Card for covost2
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/facebookresearch/covost
- **Repository:** https://github.com/facebookresearch/covost
- **Paper:** https://arxiv.org/abs/2007.10310
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** Changhan Wang (changhan@fb.com), Juan Miguel Pino (juancarabina@fb.com), Jiatao Gu (jgu@fb.com)
### Dataset Summary
CoVoST 2 is a large-scale multilingual speech translation corpus covering translations from 21 languages into English
and from English into 15 languages. The dataset is created using Mozilla's open-source Common Voice database of
crowdsourced voice recordings. There are 2,900 hours of speech represented in the corpus.
### Supported Tasks and Leaderboards
`speech-translation`: The dataset can be used for speech-to-text translation (ST), in which a model is presented with an audio file in one language and asked to produce a written translation of it in another language. The most common evaluation metric is the BLEU score. Examples can be found at https://github.com/pytorch/fairseq/blob/master/examples/speech_to_text/docs/covost_example.md.
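As a rough illustration of the metric, here is a minimal, self-contained sentence-level BLEU sketch (brevity penalty times the geometric mean of modified n-gram precisions, with crude smoothing). Real evaluations should use a standard implementation such as `sacrebleu`; this toy version exists only to show the mechanics.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    # Multiset of all n-grams in a token list.
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def sentence_bleu(reference, hypothesis, max_n=4):
    """Toy BLEU: brevity penalty * geometric mean of modified n-gram precisions."""
    ref, hyp = reference.split(), hypothesis.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        hyp_ngrams = ngrams(hyp, n)
        ref_ngrams = ngrams(ref, n)
        # Clip each hypothesis n-gram count by its count in the reference.
        overlap = sum(min(count, ref_ngrams[g]) for g, count in hyp_ngrams.items())
        total = max(sum(hyp_ngrams.values()), 1)
        # Crude smoothing so a zero match does not zero out the whole score.
        log_precisions.append(math.log(max(overlap, 1e-9) / total))
    brevity = min(1.0, math.exp(1 - len(ref) / max(len(hyp), 1)))
    return brevity * math.exp(sum(log_precisions) / max_n)

# A hypothesis identical to the reference scores 1.0.
score = sentence_bleu("wenn wasser knapp ist verschwenden sie es nicht",
                      "wenn wasser knapp ist verschwenden sie es nicht")
```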
### Languages
The dataset contains the audio, transcriptions, and translations in the following languages: French, German, Dutch, Russian, Spanish, Italian, Turkish, Persian, Swedish, Mongolian, Chinese, Welsh, Catalan, Slovenian, Estonian, Indonesian, Arabic, Tamil, Portuguese, Latvian, and Japanese.
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file (usually called `file`), its transcription (called `sentence`), and its translation in the target language (called `translation`).
```
{'client_id': 'd277a1f3904ae00b09b73122b87674e7c2c78e08120721f37b5577013ead08d1ea0c053ca5b5c2fb948df2c81f27179aef2c741057a17249205d251a8fe0e658',
'file': '/home/suraj/projects/fairseq_s2t/covst/dataset/en/clips/common_voice_en_18540003.mp3',
'audio': {'path': '/home/suraj/projects/fairseq_s2t/covst/dataset/en/clips/common_voice_en_18540003.mp3',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 48000},
'id': 'common_voice_en_18540003',
'sentence': 'When water is scarce, avoid wasting it.',
'translation': 'Wenn Wasser knapp ist, verschwenden Sie es nicht.'}
```
### Data Fields
- file: The path to the downloaded audio file in .mp3 format.
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column (`dataset[0]["audio"]`), the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling a large number of audio files can take a significant amount of time, so it is important to query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- sentence: The transcription of the audio file in the source language.
- translation: The translation of the sentence in the target language.
- id: A unique id for the data sample.
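The cost difference behind the indexing advice above can be mimicked with a small stand-in class (purely illustrative, not the actual `datasets` implementation): indexing a row first decodes one file, while slicing the whole column decodes every file before the row index is applied.

```python
class MockAudioDataset:
    """Stand-in for a dataset column with lazy, per-access audio decoding."""

    def __init__(self, files):
        self.files = files
        self.decode_count = 0  # how many files have been "decoded" so far

    def _decode(self, path):
        self.decode_count += 1
        return {"path": path, "array": [0.0], "sampling_rate": 48000}

    def __getitem__(self, key):
        if isinstance(key, int):           # dataset[0] -> decode only that row
            return {"file": self.files[key], "audio": self._decode(self.files[key])}
        if key == "audio":                 # dataset["audio"] -> decode every row
            return [self._decode(f) for f in self.files]
        raise KeyError(key)

ds = MockAudioDataset([f"clip_{i}.mp3" for i in range(1000)])
ds[0]["audio"]                 # decodes a single file
fast = ds.decode_count
ds["audio"][0]                 # decodes all 1000 files just to read one
slow = ds.decode_count - fast
```

The same principle is what makes `dataset[0]["audio"]` cheap and `dataset["audio"][0]` expensive on the real dataset.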
### Data Splits
| config | train | validation | test |
|----------|--------|------------|-------|
| en_de | 289430 | 15531 | 15531 |
| en_tr | 289430 | 15531 | 15531 |
| en_fa | 289430 | 15531 | 15531 |
| en_sv-SE | 289430 | 15531 | 15531 |
| en_mn | 289430 | 15531 | 15531 |
| en_zh-CN | 289430 | 15531 | 15531 |
| en_cy | 289430 | 15531 | 15531 |
| en_ca | 289430 | 15531 | 15531 |
| en_sl | 289430 | 15531 | 15531 |
| en_et | 289430 | 15531 | 15531 |
| en_id | 289430 | 15531 | 15531 |
| en_ar | 289430 | 15531 | 15531 |
| en_ta | 289430 | 15531 | 15531 |
| en_lv | 289430 | 15531 | 15531 |
| en_ja | 289430 | 15531 | 15531 |
| fr_en | 207374 | 14760 | 14760 |
| de_en | 127834 | 13511 | 13511 |
| es_en | 79015 | 13221 | 13221 |
| ca_en | 95854 | 12730 | 12730 |
| it_en | 31698 | 8940 | 8951 |
| ru_en | 12112 | 6110 | 6300 |
| zh-CN_en | 7085 | 4843 | 4898 |
| pt_en | 9158 | 3318 | 4023 |
| fa_en | 53949 | 3445 | 3445 |
| et_en | 1782 | 1576 | 1571 |
| mn_en | 2067 | 1761 | 1759 |
| nl_en | 7108 | 1699 | 1699 |
| tr_en | 3966 | 1624 | 1629 |
| ar_en | 2283 | 1758 | 1695 |
| sv-SE_en | 2160 | 1349 | 1595 |
| lv_en | 2337 | 1125 | 1629 |
| sl_en | 1843 | 509 | 360 |
| ta_en | 1358 | 384 | 786 |
| ja_en | 1119 | 635 | 684 |
| id_en | 1243 | 792 | 844 |
| cy_en | 1241 | 690 | 690 |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of recordings from people who have donated their voices online. You agree not to attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[CC BY-NC 4.0](https://github.com/facebookresearch/covost/blob/main/LICENSE)
### Citation Information
```
@misc{wang2020covost,
title={CoVoST 2: A Massively Multilingual Speech-to-Text Translation Corpus},
author={Changhan Wang and Anne Wu and Juan Pino},
year={2020},
eprint={2007.10310},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset. |
health_fact | 2023-01-25T14:32:02.000Z | [
"task_categories:text-classification",
"task_ids:fact-checking",
"task_ids:multi-class-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:mit",
"arxiv:2010.09926",
"region:us"
] | null | PUBHEALTH is a comprehensive dataset for explainable automated fact-checking of
public health claims. Each instance in the PUBHEALTH dataset has an associated
veracity label (true, false, unproven, mixture). Furthermore, each instance in the
dataset has an explanation text field. The explanation is a justification for why
the claim has been assigned a particular veracity label.
The dataset was created to explore fact-checking of difficult-to-verify claims, i.e.,
those which require expertise from outside of the journalistic domain, in this case
biomedical and public health expertise.
It was also created in response to the lack of fact-checking datasets which provide
gold standard natural language explanations for verdicts/labels.
NOTE: There are missing labels in the dataset and we have replaced them with -1. | @inproceedings{kotonya-toni-2020-explainable,
title = "Explainable Automated Fact-Checking for Public Health Claims",
author = "Kotonya, Neema and Toni, Francesca",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods
in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-main.623",
pages = "7740--7754",
} | null | 14 | 734 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- fact-checking
- multi-class-classification
paperswithcode_id: pubhealth
pretty_name: PUBHEALTH
dataset_info:
features:
- name: claim_id
dtype: string
- name: claim
dtype: string
- name: date_published
dtype: string
- name: explanation
dtype: string
- name: fact_checkers
dtype: string
- name: main_text
dtype: string
- name: sources
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'false'
'1': mixture
'2': 'true'
'3': unproven
- name: subjects
dtype: string
splits:
- name: train
num_bytes: 53985377
num_examples: 9832
- name: test
num_bytes: 6825221
num_examples: 1235
- name: validation
num_bytes: 6653044
num_examples: 1225
download_size: 24892660
dataset_size: 67463642
train-eval-index:
- config: default
task: text-classification
task_id: multi_class_classification
splits:
train_split: train
eval_split: test
col_mapping:
claim: text
label: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 macro
args:
average: macro
- type: f1
name: F1 micro
args:
average: micro
- type: f1
name: F1 weighted
args:
average: weighted
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
---
# Dataset Card for PUBHEALTH
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [PUBHEALTH homepage](https://github.com/neemakot/Health-Fact-Checking)
- **Repository:** [PUBHEALTH repository](https://github.com/neemakot/Health-Fact-Checking/blob/master/data/DATASHEET.md)
- **Paper:** [Explainable Automated Fact-Checking for Public Health Claims](https://arxiv.org/abs/2010.09926)
- **Point of Contact:** [Neema Kotonya](mailto:nk2418@ic.ac.uk)
### Dataset Summary
PUBHEALTH is a comprehensive dataset for explainable automated fact-checking of public health claims. Each instance in the PUBHEALTH dataset has an associated veracity label (true, false, unproven, mixture). Furthermore, each instance in the dataset has an explanation text field. The explanation is a justification for why the claim has been assigned a particular veracity label.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
The following is an example instance of the PUBHEALTH dataset:
| Field | Example |
| ----------------- | -------------------------------------------------------------|
| __claim__ | Expired boxes of cake and pancake mix are dangerously toxic. |
| __explanation__ | What's True: Pancake and cake mixes that contain mold can cause life-threatening allergic reactions. What's False: Pancake and cake mixes that have passed their expiration dates are not inherently dangerous to ordinarily healthy people, and the yeast in packaged baking products does not "over time develops spores." |
| __label__ | mixture |
| __author(s)__ | David Mikkelson |
| __date published__ | April 19, 2006 |
| __tags__ | food, allergies, baking, cake |
| __main_text__ | In April 2006, the experience of a 14-year-old who had eaten pancakes made from a mix that had gone moldy was described in the popular newspaper column Dear Abby. The account has since been circulated widely on the Internet as scores of concerned homemakers ponder the safety of the pancake and other baking mixes lurking in their larders [...] |
| __evidence sources__ | [1] Bennett, Allan and Kim Collins. “An Unusual Case of Anaphylaxis: Mold in Pancake Mix.” American Journal of Forensic Medicine & Pathology. September 2001 (pp. 292-295). [2] Phillips, Jeanne. “Dear Abby.” 14 April 2006 [syndicated column]. |
### Data Fields
Mentioned above in data instances.
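The `label` field is a `class_label`, and the dataset description notes that missing labels are stored as `-1`. Below is a hedged sketch of decoding label ids to names and dropping unlabeled rows; the label order is taken from the `class_label` definition in this card's YAML header, and the example rows are illustrative.

```python
# Label order from the class_label definition in this card's YAML header.
LABEL_NAMES = ["false", "mixture", "true", "unproven"]

def decode_label(label_id):
    """Map an integer label id to its name; -1 marks a missing label."""
    return None if label_id == -1 else LABEL_NAMES[label_id]

# Illustrative rows, not actual dataset content.
rows = [
    {"claim": "Claim A", "label": 2},
    {"claim": "Claim B", "label": -1},   # missing label: should be dropped
    {"claim": "Claim C", "label": 1},
]

labeled = [
    {**row, "label_name": decode_label(row["label"])}
    for row in rows
    if row["label"] != -1
]
```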
### Data Splits
| | # Instances |
|-----------|-------------|
| train.tsv | 9832 |
| dev.tsv | 1221 |
| test.tsv | 1235 |
| total | 12288 |
## Dataset Creation
### Curation Rationale
The dataset was created to explore fact-checking of difficult-to-verify claims, i.e., those which require expertise from outside of the journalistic domain, in this case biomedical and public health expertise.
It was also created in response to the lack of fact-checking datasets which provide gold standard natural language explanations for verdicts/labels.
### Source Data
#### Initial Data Collection and Normalization
The dataset was retrieved from the following fact-checking, news review, and news websites:
| URL | Type |
|-----------------------------------|--------------------|
| http://snopes.com/ | fact-checking |
| http://politifact.com/ | fact-checking |
| http://truthorfiction.com/ | fact-checking |
| https://www.factcheck.org/ | fact-checking |
| https://fullfact.org/ | fact-checking |
| https://apnews.com/ | news |
| https://uk.reuters.com/ | news |
| https://www.healthnewsreview.org/ | health news review |
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
Not to our knowledge, but if it is brought to our attention that we are mistaken, we will make the appropriate corrections to the dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The dataset was created by Neema Kotonya and Francesca Toni for their research paper "Explainable Automated Fact-Checking for Public Health Claims" presented at EMNLP 2020.
### Licensing Information
MIT License
### Citation Information
```
@inproceedings{kotonya-toni-2020-explainable,
title = "Explainable Automated Fact-Checking for Public Health Claims",
author = "Kotonya, Neema and
Toni, Francesca",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-main.623",
pages = "7740--7754",
}
```
### Contributions
Thanks to [@bhavitvyamalik](https://github.com/bhavitvyamalik) for adding this dataset. |
teknium/GPT4-LLM-Cleaned | 2023-05-04T01:48:35.000Z | [
"region:us"
] | teknium | null | null | null | 84 | 734 | This is the GPT4-LLM dataset from : https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM
It has been filtered of all OpenAI disclaimers and refusals. (Disclaimer: It may have removed some additional things besides just OAI disclaimers, as I used the following script, which is a bit broader: https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered/blob/main/wizardlm_clean.py)
A modified version of that script, used specifically for this dataset, is included in the repo.
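The cleaning described above is essentially phrase-based filtering. Here is a minimal sketch of that idea; the marker list and the example records are illustrative, not the exact phrases or data the linked script uses.

```python
# Illustrative refusal/disclaimer markers; the real cleaning script uses a
# broader, different list.
REFUSAL_MARKERS = [
    "as an ai language model",
    "i cannot fulfill",
    "openai",
]

def is_clean(example):
    """Keep an instruction/output pair only if no refusal marker appears in it."""
    text = (example.get("instruction", "") + " " + example.get("output", "")).lower()
    return not any(marker in text for marker in REFUSAL_MARKERS)

# Toy examples to show the filter in action.
examples = [
    {"instruction": "Summarize the text.", "output": "The text argues that..."},
    {"instruction": "Tell me a secret.",
     "output": "As an AI language model, I cannot share that."},
]
cleaned = [ex for ex in examples if is_clean(ex)]
```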
marsyas/gtzan | 2022-11-06T20:34:20.000Z | [
"region:us"
] | marsyas | GTZAN is a dataset for musical genre classification of audio signals. The dataset consists of 1,000 audio tracks, each 30 seconds long. It contains 10 genres, each represented by 100 tracks. The tracks are all 22,050Hz Mono 16-bit audio files in WAV format. The genres are: blues, classical, country, disco, hiphop, jazz, metal, pop, reggae, and rock. | @misc{tzanetakis_essl_cook_2001,
author = "Tzanetakis, George and Essl, Georg and Cook, Perry",
title = "Automatic Musical Genre Classification Of Audio Signals",
url = "http://ismir2001.ismir.net/pdf/tzanetakis.pdf",
publisher = "The International Society for Music Information Retrieval",
year = "2001"
} | null | 5 | 732 | ---
pretty_name: GTZAN
---
# Dataset Card for GTZAN
## Table of Contents
- [Dataset Card for GTZAN](#dataset-card-for-gtzan)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://marsyas.info/downloads/datasets.html](http://marsyas.info/downloads/datasets.html)
- **Paper:** [http://ismir2001.ismir.net/pdf/tzanetakis.pdf](http://ismir2001.ismir.net/pdf/tzanetakis.pdf)
- **Point of Contact:**
### Dataset Summary
GTZAN is a dataset for musical genre classification of audio signals. The dataset consists of 1,000 audio tracks, each 30 seconds long. It contains 10 genres, each represented by 100 tracks. The tracks are all 22,050 Hz mono 16-bit audio files in WAV format. The genres are: blues, classical, country, disco, hiphop, jazz, metal, pop, reggae, and rock.
### Languages
English
## Dataset Structure
GTZAN is distributed as a single dataset without a predefined training and test split. The information below refers to the single `train` split that is assigned by default.
### Data Instances
An example of GTZAN looks as follows:
```python
{
"file": "/path/to/cache/genres/blues/blues.00000.wav",
"audio": {
"path": "/path/to/cache/genres/blues/blues.00000.wav",
"array": array(
[
0.00732422,
0.01660156,
0.00762939,
...,
-0.05560303,
-0.06106567,
-0.06417847,
],
dtype=float32,
),
"sampling_rate": 22050,
},
"genre": 0,
}
```
### Data Fields
The types associated with each of the data fields are as follows:
* `file`: a `string` feature.
* `audio`: an `Audio` feature containing the `path` of the sound file, the decoded waveform in the `array` field, and the `sampling_rate`.
* `genre`: a `ClassLabel` feature.
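Together, the `array` and `sampling_rate` fields determine a clip's duration; a full 30-second GTZAN clip at 22,050 Hz decodes to 661,500 samples. A minimal sketch (illustrative, not part of the loading script):

```python
def duration_seconds(num_samples: int, sampling_rate: int) -> float:
    """Length in seconds of a decoded waveform with the given sample count."""
    return num_samples / sampling_rate

def expected_samples(duration_s: float, sampling_rate: int) -> int:
    """Expected sample count for a clip of the given duration."""
    return round(duration_s * sampling_rate)
```

For a loaded example, `duration_seconds(len(example["audio"]["array"]), example["audio"]["sampling_rate"])` should be close to 30.0.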
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@misc{tzanetakis_essl_cook_2001,
author = "Tzanetakis, George and Essl, Georg and Cook, Perry",
title = "Automatic Musical Genre Classification Of Audio Signals",
url = "http://ismir2001.ismir.net/pdf/tzanetakis.pdf",
publisher = "The International Society for Music Information Retrieval",
year = "2001"
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun) for adding this dataset. |
result-kand2-sdxl-wuerst-karlo/fbc48c23 | 2023-09-16T20:33:59.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 732 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 169
num_examples: 10
download_size: 1322
dataset_size: 169
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "fbc48c23"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
svhn | 2023-01-25T14:45:04.000Z | [
"task_categories:image-classification",
"task_categories:object-detection",
"annotations_creators:machine-generated",
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:other",
"region:us"
] | null | SVHN is a real-world image dataset for developing machine learning and object recognition algorithms with minimal requirement on data preprocessing and formatting.
It can be seen as similar in flavor to MNIST (e.g., the images are of small cropped digits), but incorporates an order of magnitude more labeled data (over 600,000 digit images)
and comes from a significantly harder, unsolved, real world problem (recognizing digits and numbers in natural scene images). SVHN is obtained from house numbers in Google Street View images. | @article{netzer2011reading,
title={Reading digits in natural images with unsupervised feature learning},
author={Netzer, Yuval and Wang, Tao and Coates, Adam and Bissacco, Alessandro and Wu, Bo and Ng, Andrew Y},
year={2011}
} | null | 9 | 731 | ---
annotations_creators:
- machine-generated
- expert-generated
language_creators:
- machine-generated
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- image-classification
- object-detection
task_ids: []
paperswithcode_id: svhn
pretty_name: Street View House Numbers
dataset_info:
- config_name: full_numbers
features:
- name: image
dtype: image
- name: digits
sequence:
- name: bbox
sequence: int32
length: 4
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
'2': '2'
'3': '3'
'4': '4'
'5': '5'
'6': '6'
'7': '7'
'8': '8'
'9': '9'
splits:
- name: train
num_bytes: 390404309
num_examples: 33402
- name: test
num_bytes: 271503052
num_examples: 13068
- name: extra
num_bytes: 1868720340
num_examples: 202353
download_size: 2636187279
dataset_size: 2530627701
- config_name: cropped_digits
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
'2': '2'
'3': '3'
'4': '4'
'5': '5'
'6': '6'
'7': '7'
'8': '8'
'9': '9'
splits:
- name: train
num_bytes: 128364360
num_examples: 73257
- name: test
num_bytes: 44464040
num_examples: 26032
- name: extra
num_bytes: 967853504
num_examples: 531131
download_size: 1575594780
dataset_size: 1140681904
---
# Dataset Card for Street View House Numbers
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://ufldl.stanford.edu/housenumbers
- **Repository:**
- **Paper:** [Reading Digits in Natural Images with Unsupervised Feature Learning](http://ufldl.stanford.edu/housenumbers/nips2011_housenumbers.pdf)
- **Leaderboard:** https://paperswithcode.com/sota/image-classification-on-svhn
- **Point of Contact:** streetviewhousenumbers@gmail.com
### Dataset Summary
SVHN is a real-world image dataset for developing machine learning and object recognition algorithms with minimal requirement on data preprocessing and formatting. It can be seen as similar in flavor to MNIST (e.g., the images are of small cropped digits), but incorporates an order of magnitude more labeled data (over 600,000 digit images) and comes from a significantly harder, unsolved, real world problem (recognizing digits and numbers in natural scene images). SVHN is obtained from house numbers in Google Street View images. The dataset comes in two formats:
1. Original images with character level bounding boxes.
2. MNIST-like 32-by-32 images centered around a single character (many of the images do contain some distractors at the sides).
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for digit detection.
- `image-classification`: The dataset can be used to train a model for Image Classification where the task is to predict a correct digit on the image. The leaderboard for this task is available at:
https://paperswithcode.com/sota/image-classification-on-svhn
### Languages
English
## Dataset Structure
### Data Instances
#### full_numbers
The original, variable-resolution, color house-number images with character level bounding boxes.
```
{
'image': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=98x48 at 0x259E3F01780>,
'digits': {
'bbox': [
[36, 7, 13, 32],
[50, 7, 12, 32]
],
'label': [6, 9]
}
}
```
#### cropped_digits
Character level ground truth in an MNIST-like format. All digits have been resized to a fixed resolution of 32-by-32 pixels. The original character bounding boxes are extended in the appropriate dimension to become square windows, so that resizing them to 32-by-32 pixels does not introduce aspect ratio distortions. Nevertheless this preprocessing introduces some distracting digits to the sides of the digit of interest.
```
{
'image': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=32x32 at 0x25A89494780>,
'label': 1
}
```
### Data Fields
#### full_numbers
- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column (`dataset[0]["image"]`), the image file is automatically decoded. Decoding a large number of image files can take a significant amount of time, so it is important to query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `digits`: a dictionary containing digits' bounding boxes and labels
- `bbox`: a list of bounding boxes (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format) corresponding to the digits present on the image
- `label`: a list of integers between 0 and 9 representing the digits.
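The `bbox` entries follow the COCO `[x_min, y_min, width, height]` convention linked above; a minimal sketch for converting them to corner coordinates, which some detection libraries expect (an illustration, not part of the dataset script):

```python
def coco_to_corners(bbox: list[int]) -> list[int]:
    """Convert a COCO-style [x_min, y_min, width, height] box to
    [x_min, y_min, x_max, y_max] corner coordinates."""
    x, y, w, h = bbox
    return [x, y, x + w, y + h]
```

Applied to the `full_numbers` example above, the boxes `[36, 7, 13, 32]` and `[50, 7, 12, 32]` become `[36, 7, 49, 39]` and `[50, 7, 62, 39]`.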
#### cropped_digits
- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `label`: an integer between 0 and 9 representing the digit.
### Data Splits
#### full_numbers
The data is split into training, test, and extra sets. The training set contains 33,402 images, the test set 13,068, and the extra set 202,353 images.
#### cropped_digits
The data is split into training, test, and extra sets. The training set contains 73,257 images, the test set 26,032, and the extra set 531,131 images.
The extra set can be used as additional training data. It was obtained in a similar manner to the training and test sets, but with an increased detection threshold in order to generate this large amount of labeled data. The SVHN extra subset is thus somewhat biased toward less difficult detections, and is therefore easier than SVHN train/SVHN test.
## Dataset Creation
### Curation Rationale
From the paper:
> As mentioned above, the venerable MNIST dataset has been a valuable goal post for researchers seeking to build better learning systems whose benchmark performance could be expected to translate into improved performance on realistic applications. However, computers have now reached essentially human levels of performance on this problem—a testament to progress in machine learning and computer vision. The Street View House Numbers (SVHN) digit database that we provide can be seen as similar in flavor to MNIST (e.g., the images are of small cropped characters), but the SVHN dataset incorporates an order of magnitude more labeled data and comes from a significantly harder, unsolved, real world problem. Here the gap between human performance and state of the art feature representations is significant. Going forward, we expect that this dataset may fulfill a similar role for modern feature learning algorithms: it provides a new and difficult benchmark where increased performance can be expected to translate into tangible gains on a realistic application.
### Source Data
#### Initial Data Collection and Normalization
From the paper:
> The SVHN dataset was obtained from a large number of Street View images using a combination
of automated algorithms and the Amazon Mechanical Turk (AMT) framework, which was
used to localize and transcribe the single digits. We downloaded a very large set of images from
urban areas in various countries.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
From the paper:
> From these randomly selected images, the house-number patches were extracted using a dedicated sliding window house-numbers detector using a low threshold on the detector’s confidence score in order to get a varied, unbiased dataset of house-number signs. These low precision detections were screened and transcribed by AMT workers.
#### Who are the annotators?
The AMT workers.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu and Andrew Y. Ng
### Licensing Information
Non-commercial use only.
### Citation Information
```
@article{netzer2011reading,
title={Reading digits in natural images with unsupervised feature learning},
author={Netzer, Yuval and Wang, Tao and Coates, Adam and Bissacco, Alessandro and Wu, Bo and Ng, Andrew Y},
year={2011}
}
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. |
OxAISH-AL-LLM/wiki_toxic | 2022-09-19T15:53:19.000Z | [
"task_categories:text-classification",
"task_ids:hate-speech-detection",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|other",
"language:en",
"license:cc0-1.0",
"wikipedia",
"toxicity",
"toxic comments",
"region:us"
] | OxAISH-AL-LLM | Jigsaw Toxic Comment Challenge dataset. This dataset was the basis of a Kaggle competition run by Jigsaw | null | null | 8 | 729 | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- found
license:
- cc0-1.0
multilinguality:
- monolingual
pretty_name: Toxic Wikipedia Comments
size_categories:
- 100K<n<1M
source_datasets:
- extended|other
tags:
- wikipedia
- toxicity
- toxic comments
task_categories:
- text-classification
task_ids:
- hate-speech-detection
---
# Dataset Card for Wiki Toxic
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The Wiki Toxic dataset is a modified, cleaned version of the dataset used in the [Kaggle Toxic Comment Classification challenge](https://www.kaggle.com/competitions/jigsaw-toxic-comment-classification-challenge/overview) from 2017/18. The dataset contains comments collected from Wikipedia forums and classifies them into two categories, `toxic` and `non-toxic`.
The Kaggle dataset was cleaned using the included `clean.py` file.
### Supported Tasks and Leaderboards
- Text Classification: the dataset can be used for training a model to recognise toxicity in sentences and classify them accordingly.
### Languages
The sole language used in the dataset is English.
## Dataset Structure
### Data Instances
For each data point, there is an id, the comment_text itself, and a label (0 for non-toxic, 1 for toxic).
```
{'id': 'a123a58f610cffbc',
'comment_text': '"This article SUCKS. It may be poorly written, poorly formatted, or full of pointless crap that no one cares about, and probably all of the above. If it can be rewritten into something less horrible, please, for the love of God, do so, before the vacuum caused by its utter lack of quality drags the rest of Wikipedia down into a bottomless pit of mediocrity."',
'label': 1}
```
### Data Fields
- `id`: A unique identifier string for each comment
- `comment_text`: A string containing the text of the comment
- `label`: An integer, either 0 if the comment is non-toxic, or 1 if the comment is toxic
### Data Splits
The Wiki Toxic dataset has three splits: *train*, *validation*, and *test*. The statistics for each split are below:
| Dataset Split | Number of data points in split |
| ----------- | ----------- |
| Train | 127,656 |
| Validation | 31,915 |
| Test | 63,978 |
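A quick sanity check of the split sizes quoted above — a minimal sketch, not part of the dataset script:

```python
# Split sizes as quoted in the table above.
SPLIT_SIZES = {"train": 127_656, "validation": 31_915, "test": 63_978}

def split_fractions(sizes: dict) -> dict:
    """Fraction of the full dataset held by each split."""
    total = sum(sizes.values())
    return {name: n / total for name, n in sizes.items()}
```

The three splits sum to 223,549 comments, with roughly 57% in train.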
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. |
wiki_snippets | 2023-04-05T13:43:20.000Z | [
"task_categories:text-generation",
"task_categories:other",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:10M<n<100M",
"source_datasets:extended|wiki40b",
"source_datasets:extended|wikipedia",
"language:en",
"license:unknown",
"text-search",
"region:us"
] | null | Wikipedia version split into plain text snippets for dense semantic indexing. | @ONLINE {wikidump,
author = {Wikimedia Foundation},
title = {Wikimedia Downloads},
url = {https://dumps.wikimedia.org}
} | null | 0 | 728 | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- en
license:
- unknown
multilinguality:
- multilingual
pretty_name: WikiSnippets
size_categories:
- 10M<n<100M
source_datasets:
- extended|wiki40b
- extended|wikipedia
task_categories:
- text-generation
- other
task_ids:
- language-modeling
paperswithcode_id: null
tags:
- text-search
dataset_info:
- config_name: wiki40b_en_100_0
features:
- name: _id
dtype: string
- name: datasets_id
dtype: int32
- name: wiki_id
dtype: string
- name: start_paragraph
dtype: int32
- name: start_character
dtype: int32
- name: end_paragraph
dtype: int32
- name: end_character
dtype: int32
- name: article_title
dtype: string
- name: section_title
dtype: string
- name: passage_text
dtype: string
splits:
- name: train
num_bytes: 12938641686
num_examples: 17553713
download_size: 0
dataset_size: 12938641686
- config_name: wikipedia_en_100_0
features:
- name: _id
dtype: string
- name: datasets_id
dtype: int32
- name: wiki_id
dtype: string
- name: start_paragraph
dtype: int32
- name: start_character
dtype: int32
- name: end_paragraph
dtype: int32
- name: end_character
dtype: int32
- name: article_title
dtype: string
- name: section_title
dtype: string
- name: passage_text
dtype: string
splits:
- name: train
num_bytes: 26407884393
num_examples: 33849898
download_size: 0
dataset_size: 26407884393
---
# Dataset Card for "wiki_snippets"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://dumps.wikimedia.org](https://dumps.wikimedia.org)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
Wikipedia version split into plain text snippets for dense semantic indexing.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
We show detailed information for 2 configurations of the dataset (with a snippet passage length of 100 and an overlap of 0) in English:
- wiki40b_en_100_0: Wiki-40B
- wikipedia_en_100_0: Wikipedia
### Data Instances
#### wiki40b_en_100_0
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 12.94 GB
- **Total amount of disk used:** 12.94 GB
An example of 'train' looks as follows:
```
{'_id': '{"datasets_id": 0, "wiki_id": "Q1294448", "sp": 2, "sc": 0, "ep": 6, "ec": 610}',
'datasets_id': 0,
'wiki_id': 'Q1294448',
'start_paragraph': 2,
'start_character': 0,
'end_paragraph': 6,
'end_character': 610,
'article_title': 'Ági Szalóki',
'section_title': 'Life',
'passage_text': "Ági Szalóki Life She started singing as a toddler, considering Márta Sebestyén a role model. Her musical background is traditional folk music; she first won recognition for singing with Ökrös in a traditional folk style, and Besh o droM, a Balkan gypsy brass band. With these ensembles she toured around the world from the Montreal Jazz Festival, through Glastonbury Festival to the Théatre de la Ville in Paris, from New York to Beijing.\nSince 2005, she began to pursue her solo career and explore various genres, such as jazz, thirties ballads, or children's songs.\nUntil now, three of her six released albums"}
```
#### wikipedia_en_100_0
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 26.41 GB
- **Total amount of disk used:** 26.41 GB
An example of 'train' looks as follows:
```
{'_id': '{"datasets_id": 0, "wiki_id": "Anarchism", "sp": 0, "sc": 0, "ep": 2, "ec": 129}',
'datasets_id': 0,
'wiki_id': 'Anarchism',
'start_paragraph': 0,
'start_character': 0,
'end_paragraph': 2,
'end_character': 129,
'article_title': 'Anarchism',
'section_title': 'Start',
'passage_text': 'Anarchism is a political philosophy and movement that is sceptical of authority and rejects all involuntary, coercive forms of hierarchy. Anarchism calls for the abolition of the state, which it holds to be unnecessary, undesirable, and harmful. As a historically left-wing movement, placed on the farthest left of the political spectrum, it is usually described alongside communalism and libertarian Marxism as the libertarian wing (libertarian socialism) of the socialist movement, and has a strong historical association with anti-capitalism and socialism. Humans lived in societies without formal hierarchies long before the establishment of formal states, realms, or empires. With the'}
```
### Data Fields
The data fields are the same for all configurations:
- `_id`: a `string` feature.
- `datasets_id`: a `int32` feature.
- `wiki_id`: a `string` feature.
- `start_paragraph`: a `int32` feature.
- `start_character`: a `int32` feature.
- `end_paragraph`: a `int32` feature.
- `end_character`: a `int32` feature.
- `article_title`: a `string` feature.
- `section_title`: a `string` feature.
- `passage_text`: a `string` feature.
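The `_id` field packs the other columns into a JSON string; a minimal sketch for unpacking it (note that the expansion of the abbreviated keys `sp`/`sc`/`ep`/`ec` to `start_paragraph`/`start_character`/`end_paragraph`/`end_character` is an inference from the examples above, not documented in the card):

```python
import json

def parse_snippet_id(_id: str) -> dict:
    """Unpack the JSON-encoded `_id` string into a plain dict."""
    return json.loads(_id)
```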
### Data Splits
| name | train |
|:-------------------|---------:|
| wiki40b_en_100_0 | 17553713 |
| wikipedia_en_100_0 | 33849898 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
See licensing information of source datasets.
### Citation Information
Cite source datasets:
- Wiki-40B:
```
@inproceedings{49029,
title = {Wiki-40B: Multilingual Language Model Dataset},
author = {Mandy Guo and Zihang Dai and Denny Vrandecic and Rami Al-Rfou},
year = {2020},
booktitle = {LREC 2020}
}
```
- Wikipedia:
```
@ONLINE{wikidump,
author = "Wikimedia Foundation",
title = "Wikimedia Downloads",
url = "https://dumps.wikimedia.org"
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@mariamabarham](https://github.com/mariamabarham), [@yjernite](https://github.com/yjernite) for adding this dataset. |
result-kand2-sdxl-wuerst-karlo/0dc6521d | 2023-09-16T23:57:15.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 727 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 166
num_examples: 10
download_size: 1304
dataset_size: 166
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "0dc6521d"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
amitness/sentiment-mt | 2023-08-15T10:39:03.000Z | [
"language:mt",
"region:us"
] | amitness | null | null | null | 0 | 726 | ---
language: mt
dataset_info:
features:
- name: label
dtype:
class_label:
names:
'0': negative
'1': positive
- name: text
dtype: string
splits:
- name: train
num_bytes: 83382
num_examples: 595
- name: validation
num_bytes: 11602
num_examples: 85
- name: test
num_bytes: 25749
num_examples: 171
download_size: 0
dataset_size: 120733
---
# Dataset Card for "sentiment-mt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
thesistranslation/wmt14 | 2023-08-09T13:08:40.000Z | [
"region:us"
] | thesistranslation | null | @InProceedings{bojar-EtAl:2014:W14-33,
author = {Bojar, Ondrej and Buck, Christian and Federmann, Christian and Haddow, Barry and Koehn, Philipp and Leveling, Johannes and Monz, Christof and Pecina, Pavel and Post, Matt and Saint-Amand, Herve and Soricut, Radu and Specia, Lucia and Tamchyna, Ale\v{s}},
title = {Findings of the 2014 Workshop on Statistical Machine Translation},
booktitle = {Proceedings of the Ninth Workshop on Statistical Machine Translation},
month = {June},
year = {2014},
address = {Baltimore, Maryland, USA},
publisher = {Association for Computational Linguistics},
pages = {12--58},
url = {http://www.aclweb.org/anthology/W/W14/W14-3302}
} | null | 0 | 722 | # Aim of this dataset
The code used to retrieve and build this dataset is almost identical to that of [wmt14](https://huggingface.co/datasets/wmt14).
We only added the option to retrieve the "es-en" translation pairs from WMT13. Keep in mind that for this language pair the validation and test sets are newstest2012 and newstest2013, respectively.
**Pay attention**: some es-en sentence pairs in the validation set contain a backslash followed by a double-quote character (\\").
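If those stray escape sequences matter for your use case, a simple post-processing pass can normalize them. This is a minimal sketch (the helper name is ours, not part of the dataset tooling):

```python
def unescape_quotes(text: str) -> str:
    """Replace a literal backslash-double-quote sequence with a plain double quote."""
    return text.replace('\\"', '"')

# Example: a validation sentence containing the escaped quote character
raw = 'He said \\"hola\\" to the crowd.'
print(unescape_quotes(raw))  # He said "hola" to the crowd.
```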
Thanks to the Huggingface team for all the work they have done! |
sem_eval_2010_task_8 | 2023-04-05T13:39:59.000Z | [
"language:en",
"region:us"
] | null | The SemEval-2010 Task 8 focuses on Multi-way classification of semantic relations between pairs of nominals.
The task was designed to compare different approaches to semantic relation classification
and to provide a standard testbed for future research. | @inproceedings{hendrickx-etal-2010-semeval,
title = "{S}em{E}val-2010 Task 8: Multi-Way Classification of Semantic Relations between Pairs of Nominals",
author = "Hendrickx, Iris and
Kim, Su Nam and
Kozareva, Zornitsa and
Nakov, Preslav and
{\'O} S{\'e}aghdha, Diarmuid and
Pad{\'o}, Sebastian and
Pennacchiotti, Marco and
Romano, Lorenza and
Szpakowicz, Stan",
booktitle = "Proceedings of the 5th International Workshop on Semantic Evaluation",
month = jul,
year = "2010",
address = "Uppsala, Sweden",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/S10-1006",
pages = "33--38",
} | null | 4 | 721 | ---
language:
- en
paperswithcode_id: semeval-2010-task-8
pretty_name: SemEval-2010 Task 8
dataset_info:
features:
- name: sentence
dtype: string
- name: relation
dtype:
class_label:
names:
'0': Cause-Effect(e1,e2)
'1': Cause-Effect(e2,e1)
'2': Component-Whole(e1,e2)
'3': Component-Whole(e2,e1)
'4': Content-Container(e1,e2)
'5': Content-Container(e2,e1)
'6': Entity-Destination(e1,e2)
'7': Entity-Destination(e2,e1)
'8': Entity-Origin(e1,e2)
'9': Entity-Origin(e2,e1)
'10': Instrument-Agency(e1,e2)
'11': Instrument-Agency(e2,e1)
'12': Member-Collection(e1,e2)
'13': Member-Collection(e2,e1)
'14': Message-Topic(e1,e2)
'15': Message-Topic(e2,e1)
'16': Product-Producer(e1,e2)
'17': Product-Producer(e2,e1)
'18': Other
splits:
- name: train
num_bytes: 1054352
num_examples: 8000
- name: test
num_bytes: 357075
num_examples: 2717
download_size: 1964087
dataset_size: 1411427
train-eval-index:
- config: default
task: text-classification
task_id: multi_class_classification
splits:
train_split: train
eval_split: test
col_mapping:
sentence: text
relation: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 macro
args:
average: macro
- type: f1
name: F1 micro
args:
average: micro
- type: f1
name: F1 weighted
args:
average: weighted
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
---
# Dataset Card for "sem_eval_2010_task_8"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://semeval2.fbk.eu/semeval2.php?location=tasks&taskid=11](https://semeval2.fbk.eu/semeval2.php?location=tasks&taskid=11)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 1.96 MB
- **Size of the generated dataset:** 1.42 MB
- **Total amount of disk used:** 3.38 MB
### Dataset Summary
The SemEval-2010 Task 8 focuses on Multi-way classification of semantic relations between pairs of nominals.
The task was designed to compare different approaches to semantic relation classification
and to provide a standard testbed for future research.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 1.96 MB
- **Size of the generated dataset:** 1.42 MB
- **Total amount of disk used:** 3.38 MB
An example of 'train' looks as follows.
```
{
"relation": 3,
"sentence": "The system as described above has its greatest application in an arrayed <e1>configuration</e1> of antenna <e2>elements</e2>."
}
```
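The `<e1>`/`<e2>` markers embed the two nominals directly in the sentence. A minimal sketch of pulling them out with a regular expression (the helper function is ours, not part of the dataset):

```python
import re

def extract_entities(sentence: str) -> dict:
    """Extract the <e1>...</e1> and <e2>...</e2> spans from a SemEval-2010 Task 8 sentence."""
    return {
        tag: re.search(rf"<{tag}>(.*?)</{tag}>", sentence).group(1)
        for tag in ("e1", "e2")
    }

sentence = ("The system as described above has its greatest application in an arrayed "
            "<e1>configuration</e1> of antenna <e2>elements</e2>.")
print(extract_entities(sentence))  # {'e1': 'configuration', 'e2': 'elements'}
```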
### Data Fields
The data fields are the same among all splits.
#### default
- `sentence`: a `string` feature.
- `relation`: a classification label, with possible values including `Cause-Effect(e1,e2)` (0), `Cause-Effect(e2,e1)` (1), `Component-Whole(e1,e2)` (2), `Component-Whole(e2,e1)` (3), `Content-Container(e1,e2)` (4).
### Data Splits
| name |train|test|
|-------|----:|---:|
|default| 8000|2717|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{hendrickx-etal-2010-semeval,
title = "{S}em{E}val-2010 Task 8: Multi-Way Classification of Semantic Relations between Pairs of Nominals",
author = "Hendrickx, Iris and
Kim, Su Nam and
Kozareva, Zornitsa and
Nakov, Preslav and
      {\'O} S{\'e}aghdha, Diarmuid  and
      Pad{\'o}, Sebastian  and
Pennacchiotti, Marco and
Romano, Lorenza and
Szpakowicz, Stan",
booktitle = "Proceedings of the 5th International Workshop on Semantic Evaluation",
month = jul,
year = "2010",
address = "Uppsala, Sweden",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/S10-1006",
pages = "33--38",
}
```
### Contributions
Thanks to [@JoelNiklaus](https://github.com/JoelNiklaus) for adding this dataset. |
result-kand2-sdxl-wuerst-karlo/76e05263 | 2023-09-17T02:45:19.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 721 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 197
num_examples: 10
download_size: 1361
dataset_size: 197
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "76e05263"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
deepset/prompt-injections | 2023-07-31T15:04:06.000Z | [
"region:us"
] | deepset | null | null | null | 15 | 720 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 71720
num_examples: 546
- name: test
num_bytes: 15981
num_examples: 116
download_size: 51215
dataset_size: 87701
license: cc-by-4.0
---
# Dataset Card for "deberta-v3-base-injection-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
liar | 2023-01-25T14:34:21.000Z | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"fake-news-detection",
"arxiv:1705.00648",
"region:us"
] | null | LIAR is a dataset for fake news detection with 12.8K human labeled short statements from politifact.com's API, and each statement is evaluated by a politifact.com editor for its truthfulness. The distribution of labels in the LIAR dataset is relatively well-balanced: except for 1,050 pants-fire cases, the instances for all other labels range from 2,063 to 2,638. In each case, the labeler provides a lengthy analysis report to ground each judgment. | @inproceedings{wang-2017-liar,
title = "{``}Liar, Liar Pants on Fire{''}: A New Benchmark Dataset for Fake News Detection",
author = "Wang, William Yang",
booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
month = jul,
year = "2017",
address = "Vancouver, Canada",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/P17-2067",
doi = "10.18653/v1/P17-2067",
pages = "422--426",
abstract = "Automatic fake news detection is a challenging problem in deception detection, and it has tremendous real-world political and social impacts. However, statistical approaches to combating fake news has been dramatically limited by the lack of labeled benchmark datasets. In this paper, we present LIAR: a new, publicly available dataset for fake news detection. We collected a decade-long, 12.8K manually labeled short statements in various contexts from PolitiFact.com, which provides detailed analysis report and links to source documents for each case. This dataset can be used for fact-checking research as well. Notably, this new dataset is an order of magnitude larger than previously largest public fake news datasets of similar type. Empirically, we investigate automatic fake news detection based on surface-level linguistic patterns. We have designed a novel, hybrid convolutional neural network to integrate meta-data with text. We show that this hybrid approach can improve a text-only deep learning model.",
} | null | 4 | 717 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids: []
paperswithcode_id: liar
pretty_name: LIAR
tags:
- fake-news-detection
dataset_info:
features:
- name: id
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'false'
'1': half-true
'2': mostly-true
'3': 'true'
'4': barely-true
'5': pants-fire
- name: statement
dtype: string
- name: subject
dtype: string
- name: speaker
dtype: string
- name: job_title
dtype: string
- name: state_info
dtype: string
- name: party_affiliation
dtype: string
- name: barely_true_counts
dtype: float32
- name: false_counts
dtype: float32
- name: half_true_counts
dtype: float32
- name: mostly_true_counts
dtype: float32
- name: pants_on_fire_counts
dtype: float32
- name: context
dtype: string
splits:
- name: train
num_bytes: 2730651
num_examples: 10269
- name: test
num_bytes: 341414
num_examples: 1283
- name: validation
num_bytes: 341592
num_examples: 1284
download_size: 1013571
dataset_size: 3413657
train-eval-index:
- config: default
task: text-classification
task_id: multi_class_classification
splits:
train_split: train
eval_split: test
col_mapping:
statement: text
label: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 macro
args:
average: macro
- type: f1
name: F1 micro
args:
average: micro
- type: f1
name: F1 weighted
args:
average: weighted
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
---
# Dataset Card for LIAR
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://sites.cs.ucsb.edu/~william/
- **Repository:**
- **Paper:** https://arxiv.org/abs/1705.00648
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
LIAR is a dataset for fake news detection with 12.8K human labeled short statements from politifact.com's API, and each statement is evaluated by a politifact.com editor for its truthfulness. The distribution of labels in the LIAR dataset is relatively well-balanced: except for 1,050 pants-fire cases, the instances for all other labels range from 2,063 to 2,638. In each case, the labeler provides a lengthy analysis report to ground each judgment.
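The integer labels map to the truthfulness classes listed in the metadata above. A small self-contained sketch of decoding them:

```python
# Label mapping taken from the dataset's class_label metadata
LIAR_LABELS = {
    0: "false",
    1: "half-true",
    2: "mostly-true",
    3: "true",
    4: "barely-true",
    5: "pants-fire",
}

def decode_label(label_id: int) -> str:
    """Map a LIAR integer label to its class name."""
    return LIAR_LABELS[label_id]

print(decode_label(5))  # pants-fire
```

When loading with the `datasets` library, the same mapping should also be available via `dataset.features["label"].int2str`.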
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@hugoabonizio](https://github.com/hugoabonizio) for adding this dataset. |
EleutherAI/fever | 2023-04-30T00:09:28.000Z | [
"task_categories:text-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|wikipedia",
"language:en",
"license:cc-by-sa-3.0",
"license:gpl-3.0",
"knowledge-verification",
"region:us"
] | EleutherAI | null | null | null | 1 | 717 | ---
language:
- en
paperswithcode_id: fever
annotations_creators:
- crowdsourced
language_creators:
- found
license:
- cc-by-sa-3.0
- gpl-3.0
multilinguality:
- monolingual
pretty_name: FEVER
size_categories:
- 100K<n<1M
source_datasets:
- extended|wikipedia
task_categories:
- text-classification
task_ids: []
tags:
- knowledge-verification
dataset_info:
- config_name: v1.0
features:
- name: id
dtype: int32
- name: label
dtype: string
- name: claim
dtype: string
- name: evidence_annotation_id
dtype: int32
- name: evidence_id
dtype: int32
- name: evidence_wiki_url
dtype: string
- name: evidence_sentence_id
dtype: int32
splits:
- name: train
num_bytes: 24147163
num_examples: 263822
- name: dev
num_bytes: 2696375
num_examples: 28625
- name: paper_dev
num_bytes: 1348943
num_examples: 14475
- name: paper_test
num_bytes: 1347432
num_examples: 14150
download_size: 44853972
dataset_size: 40043693
- config_name: v2.0
features:
- name: id
dtype: int32
- name: label
dtype: string
- name: claim
dtype: string
- name: evidence_annotation_id
dtype: int32
- name: evidence_id
dtype: int32
- name: evidence_wiki_url
dtype: string
- name: evidence_sentence_id
dtype: int32
splits:
- name: validation
num_bytes: 306243
num_examples: 2384
download_size: 392466
dataset_size: 306243
- config_name: wiki_pages
features:
- name: id
dtype: string
- name: text
dtype: string
- name: lines
dtype: string
splits:
- name: wikipedia_pages
num_bytes: 7254115038
num_examples: 5416537
download_size: 1713485474
dataset_size: 7254115038
---
# Dataset Card for "fever"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://fever.ai/](https://fever.ai/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
With billions of individual pages on the web providing information on almost every conceivable topic, we should have
the ability to collect facts that answer almost every conceivable question. However, only a small fraction of this
information is contained in structured sources (Wikidata, Freebase, etc.) – we are therefore limited by our ability to
transform free-form text to structured knowledge. There is, however, another problem that has become the focus of a lot
of recent research and media coverage: false information coming from unreliable sources.
The FEVER workshops are a venue for work in verifiable knowledge extraction and to stimulate progress in this direction.
- FEVER Dataset: FEVER (Fact Extraction and VERification) consists of 185,445 claims generated by altering sentences
extracted from Wikipedia and subsequently verified without knowledge of the sentence they were derived from. The claims
are classified as Supported, Refuted or NotEnoughInfo. For the first two classes, the annotators also recorded the
sentence(s) forming the necessary evidence for their judgment.
- FEVER 2.0 Adversarial Attacks Dataset: The FEVER 2.0 Dataset consists of 1174 claims created by the submissions of
participants in the Breaker phase of the 2019 shared task. Participants (Breakers) were tasked with generating
adversarial examples that induce classification errors for the existing systems. Breakers submitted a dataset of up to
1000 instances, with an equal number of instances for each of the three classes (Supported, Refuted, NotEnoughInfo). Only
novel claims (i.e. not contained in the original FEVER dataset) were considered as valid entries to the shared task.
The submissions were then manually evaluated for Correctness (grammatical, appropriately labeled, and meeting the FEVER
annotation guidelines' requirements).
### Supported Tasks and Leaderboards
The task is verification of textual claims against textual sources.
When compared to textual entailment (TE)/natural language inference, the key difference is that in these tasks the
passage to verify each claim is given, and in recent years it typically consists of a single sentence, while in
verification systems it is retrieved from a large set of documents in order to form the evidence.
### Languages
The dataset is in English.
## Dataset Structure
### Data Instances
#### v1.0
- **Size of downloaded dataset files:** 44.86 MB
- **Size of the generated dataset:** 40.05 MB
- **Total amount of disk used:** 84.89 MB
An example of 'train' looks as follows.
```
{'claim': 'Nikolaj Coster-Waldau worked with the  Fox Broadcasting Company.',
'evidence_wiki_url': 'Nikolaj_Coster-Waldau',
'label': 'SUPPORTS',
'id': 75397,
'evidence_id': 104971,
'evidence_sentence_id': 7,
'evidence_annotation_id': 92206}
```
#### v2.0
- **Size of downloaded dataset files:** 0.39 MB
- **Size of the generated dataset:** 0.30 MB
- **Total amount of disk used:** 0.70 MB
#### wiki_pages
- **Size of downloaded dataset files:** 1.71 GB
- **Size of the generated dataset:** 7.25 GB
- **Total amount of disk used:** 8.97 GB
An example of 'wikipedia_pages' looks as follows.
```
{'text': 'The following are the football -LRB- soccer -RRB- events of the year 1928 throughout the world . ',
'lines': '0\tThe following are the football -LRB- soccer -RRB- events of the year 1928 throughout the world .\n1\t',
'id': '1928_in_association_football'}
```
### Data Fields
The data fields are the same among all splits.
#### v1.0
- `id`: a `int32` feature.
- `label`: a `string` feature.
- `claim`: a `string` feature.
- `evidence_annotation_id`: a `int32` feature.
- `evidence_id`: a `int32` feature.
- `evidence_wiki_url`: a `string` feature.
- `evidence_sentence_id`: a `int32` feature.
#### v2.0
- `id`: a `int32` feature.
- `label`: a `string` feature.
- `claim`: a `string` feature.
- `evidence_annotation_id`: a `int32` feature.
- `evidence_id`: a `int32` feature.
- `evidence_wiki_url`: a `string` feature.
- `evidence_sentence_id`: a `int32` feature.
#### wiki_pages
- `id`: a `string` feature.
- `text`: a `string` feature.
- `lines`: a `string` feature.
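Based on the `wikipedia_pages` example above, the `lines` field packs sentence indices and sentences into a single tab- and newline-delimited string. A minimal sketch of splitting it back into indexed sentences (the helper is an assumption, not part of the dataset tooling, and ignores any extra tab-separated annotations a row may carry):

```python
def parse_lines(lines: str) -> dict:
    """Split a FEVER wiki_pages `lines` string into {sentence_id: sentence}."""
    parsed = {}
    for row in lines.split("\n"):
        if "\t" not in row:
            continue
        idx, sentence = row.split("\t", 1)
        if sentence:  # trailing entries may be empty, as in the example above
            parsed[int(idx)] = sentence
    return parsed

lines = ("0\tThe following are the football -LRB- soccer -RRB- events of the year 1928 "
         "throughout the world .\n1\t")
print(parse_lines(lines))
```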
### Data Splits
#### v1.0
| | train | dev | paper_dev | paper_test |
|------|-------:|------:|----------:|-----------:|
| v1.0 | 311431 | 37566 | 18999 | 18567 |
#### v2.0
| | validation |
|------|-----------:|
| v2.0 | 2384 |
#### wiki_pages
| | wikipedia_pages |
|------------|----------------:|
| wiki_pages | 5416537 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
FEVER license:
```
These data annotations incorporate material from Wikipedia, which is licensed pursuant to the Wikipedia Copyright Policy. These annotations are made available under the license terms described on the applicable Wikipedia article pages, or, where Wikipedia license terms are unavailable, under the Creative Commons Attribution-ShareAlike License (version 3.0), available at http://creativecommons.org/licenses/by-sa/3.0/ (collectively, the "License Terms"). You may not use these files except in compliance with the applicable License Terms.
```
### Citation Information
If you use "FEVER Dataset", please cite:
```bibtex
@inproceedings{Thorne18Fever,
author = {Thorne, James and Vlachos, Andreas and Christodoulopoulos, Christos and Mittal, Arpit},
title = {{FEVER}: a Large-scale Dataset for Fact Extraction and {VERification}},
booktitle = {NAACL-HLT},
year = {2018}
}
```
If you use "FEVER 2.0 Adversarial Attacks Dataset", please cite:
```bibtex
@inproceedings{Thorne19FEVER2,
author = {Thorne, James and Vlachos, Andreas and Cocarascu, Oana and Christodoulopoulos, Christos and Mittal, Arpit},
title = {The {FEVER2.0} Shared Task},
booktitle = {Proceedings of the Second Workshop on {Fact Extraction and VERification (FEVER)}},
year = {2018}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq),
[@mariamabarham](https://github.com/mariamabarham), [@lewtun](https://github.com/lewtun),
[@albertvillanova](https://github.com/albertvillanova) for adding this dataset. |
cdminix/libritts-aligned | 2023-09-19T06:13:05.000Z | [
"task_categories:automatic-speech-recognition",
"task_categories:text-to-speech",
"annotations_creators:crowdsourced",
"language:en",
"license:cc-by-4.0",
"speech",
"audio",
"automatic-speech-recognition",
"text-to-speech",
"arxiv:1904.02882",
"arxiv:2211.16049",
"region:us"
] | cdminix | Dataset used for loading TTS spectrograms and waveform audio with alignments and a number of configurable "measures", which are extracted from the raw audio. | @article{zen2019libritts,
title={LibriTTS: A Corpus Derived from LibriSpeech for Text-to-Speech},
author={Zen, Heiga and Dang, Viet and Clark, Rob and Zhang, Yu and Weiss, Ron J and Jia, Ye and Chen, Zhifeng and Wu, Yonghui},
journal={Interspeech},
year={2019}
}
@article{https://doi.org/10.48550/arxiv.2211.16049,
author = {Minixhofer, Christoph and Klejch, Ondřej and Bell, Peter},
title = {Evaluating and reducing the distance between synthetic and real speech distributions},
year = {2022}
} | null | 3 | 717 | ---
pretty_name: LibriTTS Corpus with Forced Alignments
annotations_creators:
- crowdsourced
language: en
tags:
- speech
- audio
- automatic-speech-recognition
- text-to-speech
license:
- cc-by-4.0
task_categories:
- automatic-speech-recognition
- text-to-speech
extra_gated_prompt: "When using this dataset to download LibriTTS, you agree to the terms on https://www.openslr.org"
---
> There is also an identical dataset for the new libritts-r dataset at [cdminix/libritts-r-aligned](https://huggingface.co/datasets/cdminix/libritts-r-aligned)
# Dataset Card for LibriTTS with Forced Alignments (and Measures)
This dataset downloads LibriTTS and preprocesses it on your machine to create alignments using [montreal forced aligner](https://montreal-forced-aligner.readthedocs.io/en/latest/).
You need to run ``pip install alignments phones`` before using this dataset.
When running this the first time, it can take an hour or two, but subsequent runs will be lightning fast.
## Requirements
- ``pip install alignments phones`` **(required)**
- ``pip install speech-collator`` (optional)
## Example Item
```json
{
'id': '100_122655_000073_000002.wav',
'speaker': '100',
'text': 'the day after, diana and mary quitted it for distant b.',
'start': 0.0,
'end': 3.6500000953674316,
'phones': ['[SILENCE]', 'ð', 'ʌ', '[SILENCE]', 'd', 'eɪ', '[SILENCE]', 'æ', 'f', 't', 'ɜ˞', '[COMMA]', 'd', 'aɪ', 'æ', 'n', 'ʌ', '[SILENCE]', 'æ', 'n', 'd', '[SILENCE]', 'm', 'ɛ', 'ɹ', 'i', '[SILENCE]', 'k', 'w', 'ɪ', 't', 'ɪ', 'd', '[SILENCE]', 'ɪ', 't', '[SILENCE]', 'f', 'ɜ˞', '[SILENCE]', 'd', 'ɪ', 's', 't', 'ʌ', 'n', 't', '[SILENCE]', 'b', 'i', '[FULL STOP]'],
'phone_durations': [5, 2, 4, 0, 5, 13, 0, 16, 7, 5, 20, 2, 6, 9, 15, 4, 2, 0, 11, 3, 5, 0, 3, 8, 9, 8, 0, 13, 3, 5, 3, 6, 4, 0, 8, 5, 0, 9, 5, 0, 7, 5, 6, 7, 4, 5, 10, 0, 3, 35, 9],
'audio': '/dev/shm/metts/train-clean-360-alignments/100/100_122655_000073_000002.wav'
}
```
The phones are IPA phones, and the phone durations are in frames (assuming a hop length of 256, sample rate of 22050 and window length of 1024). These attributes can be changed using the ``hop_length``, ``sample_rate`` and ``window_length`` arguments to ``LibriTTSAlign``.
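Given those defaults, a frame count converts to seconds as `frames * hop_length / sample_rate`. As a sanity check, the example item's `phone_durations` sum to roughly its `end - start` span:

```python
def frames_to_seconds(frames: int, hop_length: int = 256, sample_rate: int = 22050) -> float:
    """Convert a frame count to seconds using the dataset's default hop length and sample rate."""
    return frames * hop_length / sample_rate

# phone_durations from the example item above
phone_durations = [5, 2, 4, 0, 5, 13, 0, 16, 7, 5, 20, 2, 6, 9, 15, 4, 2, 0, 11, 3,
                   5, 0, 3, 8, 9, 8, 0, 13, 3, 5, 3, 6, 4, 0, 8, 5, 0, 9, 5, 0,
                   7, 5, 6, 7, 4, 5, 10, 0, 3, 35, 9]
total = frames_to_seconds(sum(phone_durations))
print(f"{total:.2f} s")  # ~3.65 s, matching end - start in the example item
```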
## Data Collator
This dataset comes with a data collator which can be used to create batches of data for training.
It can be installed using ``pip install speech-collator`` ([MiniXC/speech-collator](https://www.github.com/MiniXC/speech-collator)) and can be used as follows:
```python
import json
from datasets import load_dataset
from speech_collator import SpeechCollator
from torch.utils.data import DataLoader
dataset = load_dataset('cdminix/libritts-aligned', split="train")
speaker2idx = json.load(open("speaker2idx.json"))
phone2idx = json.load(open("phone2idx.json"))
collator = SpeechCollator(
    speaker2idx=speaker2idx,
    phone2idx=phone2idx,
)
dataloader = DataLoader(dataset, collate_fn=collator.collate_fn, batch_size=8)
```
You can either download the ``speaker2idx.json`` and ``phone2idx.json`` files from [here](https://huggingface.co/datasets/cdminix/libritts-aligned/tree/main/data) or create them yourself using the following code:
```python
import json
from datasets import load_dataset
from speech_collator import SpeechCollator, create_speaker2idx, create_phone2idx
dataset = load_dataset("cdminix/libritts-aligned", split="train")
# Create speaker2idx and phone2idx
speaker2idx = create_speaker2idx(dataset, unk_idx=0)
phone2idx = create_phone2idx(dataset, unk_idx=0)
# save to json
with open("speaker2idx.json", "w") as f:
json.dump(speaker2idx, f)
with open("phone2idx.json", "w") as f:
json.dump(phone2idx, f)
```
### Measures
When using ``speech-collator`` you can also use the ``measures`` argument to specify which measures to use. The following example extracts Pitch and Energy on the fly.
```python
import json
from torch.utils.data import DataLoader
from datasets import load_dataset
from speech_collator import SpeechCollator, create_speaker2idx, create_phone2idx
from speech_collator.measures import PitchMeasure, EnergyMeasure
dataset = load_dataset("cdminix/libritts-aligned", split="train")
speaker2idx = json.load(open("data/speaker2idx.json"))
phone2idx = json.load(open("data/phone2idx.json"))
# Create SpeechCollator
speech_collator = SpeechCollator(
speaker2idx=speaker2idx,
phone2idx=phone2idx,
measures=[PitchMeasure(), EnergyMeasure()],
return_keys=["measures"]
)
# Create DataLoader
dataloader = DataLoader(
dataset,
batch_size=8,
collate_fn=speech_collator.collate_fn,
)
```
COMING SOON: Detailed documentation on how to use the measures at [MiniXC/speech-collator](https://www.github.com/MiniXC/speech-collator).
## Splits
This dataset has the following splits:
- ``train``: All the training data, except one sample per speaker which is used for validation.
- ``dev``: The validation data, one sample per speaker.
- ``train.clean.100``: Training set derived from the original materials of the train-clean-100 subset of LibriSpeech.
- ``train.clean.360``: Training set derived from the original materials of the train-clean-360 subset of LibriSpeech.
- ``train.other.500``: Training set derived from the original materials of the train-other-500 subset of LibriSpeech.
- ``dev.clean``: Validation set derived from the original materials of the dev-clean subset of LibriSpeech.
- ``dev.other``: Validation set derived from the original materials of the dev-other subset of LibriSpeech.
- ``test.clean``: Test set derived from the original materials of the test-clean subset of LibriSpeech.
- ``test.other``: Test set derived from the original materials of the test-other subset of LibriSpeech.
## Environment Variables
There are a few environment variables that can be set.
- ``LIBRITTS_VERBOSE``: If set, will print out more information about the dataset creation process.
- ``LIBRITTS_MAX_WORKERS``: The number of workers to use when creating the alignments. Defaults to ``cpu_count()``.
- ``LIBRITTS_PATH``: The path to download LibriTTS to. Defaults to the value of ``HF_DATASETS_CACHE``.
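A minimal sketch of setting these variables before loading (the target path is a hypothetical example; the variables must be set before the dataset script runs):

```python
import os

# Must be set before `datasets.load_dataset` is called, since the
# dataset script reads them during preprocessing.
os.environ["LIBRITTS_VERBOSE"] = "1"                      # print extra progress info
os.environ["LIBRITTS_MAX_WORKERS"] = str(os.cpu_count())  # alignment worker count
os.environ["LIBRITTS_PATH"] = "/data/libritts"            # hypothetical download path

# from datasets import load_dataset
# dataset = load_dataset("cdminix/libritts-aligned", split="train")
```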
# Citation
When using LibriTTS please cite the following papers:
- [LibriTTS: A Corpus Derived from LibriSpeech for Text-to-Speech](https://arxiv.org/abs/1904.02882)
- [Montreal Forced Aligner: Trainable text-speech alignment using Kaldi](https://www.researchgate.net/publication/319185277_Montreal_Forced_Aligner_Trainable_Text-Speech_Alignment_Using_Kaldi)
When using the Measures please cite the following paper (ours):
- [Evaluating and reducing the distance between synthetic and real speech distributions](https://arxiv.org/abs/2211.16049) |
asset | 2023-06-01T14:59:51.000Z | [
"task_categories:text-classification",
"task_categories:text2text-generation",
"task_ids:text-simplification",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"source_datasets:extended|other-turkcorpus",
"language:en",
"license:cc-by-sa-4.0",
"simplification-evaluation",
"region:us"
] | null | ASSET is a dataset for evaluating Sentence Simplification systems with multiple rewriting transformations,
as described in "ASSET: A Dataset for Tuning and Evaluation of Sentence Simplification Models with Multiple Rewriting Transformations".
The corpus is composed of 2000 validation and 359 test original sentences that were each simplified 10 times by different annotators.
The corpus also contains human judgments of meaning preservation, fluency and simplicity for the outputs of several automatic text simplification systems. | @inproceedings{alva-manchego-etal-2020-asset,
title = "{ASSET}: {A} Dataset for Tuning and Evaluation of Sentence Simplification Models with Multiple Rewriting Transformations",
author = "Alva-Manchego, Fernando and
Martin, Louis and
Bordes, Antoine and
Scarton, Carolina and
Sagot, Benoit and
Specia, Lucia",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.424",
pages = "4668--4679",
} | null | 9 | 716 | ---
annotations_creators:
- machine-generated
language_creators:
- found
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
- extended|other-turkcorpus
task_categories:
- text-classification
- text2text-generation
task_ids:
- text-simplification
paperswithcode_id: asset
pretty_name: ASSET
tags:
- simplification-evaluation
dataset_info:
- config_name: simplification
features:
- name: original
dtype: string
- name: simplifications
sequence: string
splits:
- name: validation
num_bytes: 2303496
num_examples: 2000
- name: test
num_bytes: 411031
num_examples: 359
download_size: 3639353
dataset_size: 2714527
- config_name: ratings
features:
- name: original
dtype: string
- name: simplification
dtype: string
- name: original_sentence_id
dtype: int32
- name: aspect
dtype:
class_label:
names:
'0': meaning
'1': fluency
'2': simplicity
- name: worker_id
dtype: int32
- name: rating
dtype: int32
splits:
- name: full
num_bytes: 1036853
num_examples: 4500
download_size: 3639353
dataset_size: 1036853
config_names:
- ratings
- simplification
---
# Dataset Card for ASSET
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [ASSET Github repository](https://github.com/facebookresearch/asset)
- **Paper:** [ASSET: A Dataset for Tuning and Evaluation of Sentence Simplification Models with Multiple Rewriting Transformations](https://www.aclweb.org/anthology/2020.acl-main.424/)
- **Point of Contact:** [Louis Martin](louismartincs@gmail.com)
### Dataset Summary
[ASSET](https://github.com/facebookresearch/asset) [(Alva-Manchego et al., 2020)](https://www.aclweb.org/anthology/2020.acl-main.424.pdf) is a multi-reference dataset for the evaluation of sentence simplification in English. The dataset uses the same 2,359 sentences from [TurkCorpus](https://github.com/cocoxu/simplification/) [(Xu et al., 2016)](https://www.aclweb.org/anthology/Q16-1029.pdf), and each sentence is associated with 10 crowdsourced simplifications. Unlike previous simplification datasets, which contain a single transformation (e.g., lexical paraphrasing in TurkCorpus or sentence
splitting in [HSplit](https://www.aclweb.org/anthology/D18-1081.pdf)), the simplifications in ASSET encompass a variety of rewriting transformations.
### Supported Tasks and Leaderboards
The dataset supports the evaluation of `text-simplification` systems. Success in this task is typically measured using the [SARI](https://huggingface.co/metrics/sari) and [FKBLEU](https://huggingface.co/metrics/fkbleu) metrics described in the paper [Optimizing Statistical Machine Translation for Text Simplification](https://www.aclweb.org/anthology/Q16-1029.pdf).
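SARI scores a system output against both the source sentence and the references, combining F1 scores for words that are correctly kept, added, and deleted. The toy unigram-level sketch below illustrates only that decomposition (the function name is ours); for real evaluation use the `sari` metric from the Hugging Face `evaluate` package:

```python
def unigram_sari_sketch(source, prediction, references):
    """Toy illustration of SARI's keep/add/delete decomposition at unigram level."""
    src = set(source.split())
    pred = set(prediction.split())
    ref_union = set().union(*(set(r.split()) for r in references))

    # keep: source words the references also keep
    keep_gold, keep_pred = src & ref_union, src & pred
    # add: words absent from the source that the references introduce
    add_gold, add_pred = ref_union - src, pred - src
    # delete: source words the references drop
    del_gold, del_pred = src - ref_union, src - pred

    def f1(pred_set, gold_set):
        if not pred_set and not gold_set:
            return 1.0
        if not pred_set or not gold_set:
            return 0.0
        p = len(pred_set & gold_set) / len(pred_set)
        r = len(pred_set & gold_set) / len(gold_set)
        return 2 * p * r / (p + r) if p + r else 0.0

    return (f1(keep_pred, keep_gold) + f1(add_pred, add_gold) + f1(del_pred, del_gold)) / 3
```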
### Languages
The text in this dataset is in English (`en`).
## Dataset Structure
### Data Instances
- `simplification` configuration: an instance consists of an original sentence and 10 possible reference simplifications.
- `ratings` configuration: an instance consists of an original sentence, a simplification obtained by an automated system, and a judgment of quality along one of three axes by a crowd worker.
### Data Fields
- `original`: an original sentence from the source datasets
- `simplifications`: in the `simplification` config, a set of reference simplifications produced by crowd workers.
- `simplification`: in the `ratings` config, a simplification of the original obtained by an automated system
- `aspect`: in the `ratings` config, the aspect on which the simplification is evaluated, one of `meaning`, `fluency`, `simplicity`
- `rating`: a quality rating between 0 and 100
### Data Splits
ASSET does not contain a training set; many models use [WikiLarge](https://github.com/XingxingZhang/dress) (Zhang and Lapata, 2017) for training.
Each input sentence has 10 associated reference simplified sentences. The statistics of ASSET are given below.
| | Dev | Test | Total |
| ----- | ------ | ---- | ----- |
| Input Sentences | 2000 | 359 | 2359 |
| Reference Simplifications | 20000 | 3590 | 23590 |
The test and validation sets are the same as those of TurkCorpus. The split was random.
There are 19.04 tokens per reference on average (lower than 21.29 and 25.49 for TurkCorpus and HSplit, respectively). Most (17,245) of the reference sentences do not involve sentence splitting.
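The reference counts in the table follow directly from the 10-references-per-sentence design, and can be cross-checked against a downloaded copy (the `load_dataset` lines are left commented because they need network access):

```python
# from datasets import load_dataset
# asset = load_dataset("asset", "simplification")

expected_sentences = {"validation": 2000, "test": 359}
references_per_sentence = 10

expected_references = {
    split: n * references_per_sentence for split, n in expected_sentences.items()
}
# expected_references matches the "Reference Simplifications" row of the table.

# for split, n in expected_sentences.items():
#     assert len(asset[split]) == n
#     assert all(len(ex["simplifications"]) == references_per_sentence
#                for ex in asset[split])
```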
## Dataset Creation
### Curation Rationale
ASSET was created in order to improve the evaluation of sentence simplification. It uses the same input sentences as the [TurkCorpus]( https://github.com/cocoxu/simplification/) dataset from [(Xu et al., 2016)](https://www.aclweb.org/anthology/Q16-1029.pdf). The 2,359 input sentences of TurkCorpus are a sample of "standard" (not simple) sentences from the [Parallel Wikipedia Simplification (PWKP)](https://www.informatik.tu-darmstadt.de/ukp/research_6/data/sentence_simplification/simple_complex_sentence_pairs/index.en.jsp) dataset [(Zhu et al., 2010)](https://www.aclweb.org/anthology/C10-1152.pdf), which come from the August 22, 2009 version of Wikipedia. The sentences of TurkCorpus were chosen to be of similar length [(Xu et al., 2016)](https://www.aclweb.org/anthology/Q16-1029.pdf). No further information is provided on the sampling strategy.
The TurkCorpus dataset was developed in order to overcome some of the problems with sentence pairs from Standard and Simple Wikipedia: a large fraction of sentences were misaligned, or not actually simpler [(Xu et al., 2016)](https://www.aclweb.org/anthology/Q16-1029.pdf). However, TurkCorpus mainly focused on *lexical paraphrasing*, and so cannot be used to evaluate simplifications involving *compression* (deletion) or *sentence splitting*. HSplit [(Sulem et al., 2018)](https://www.aclweb.org/anthology/D18-1081.pdf), on the other hand, can only be used to evaluate sentence splitting. The reference sentences in ASSET include a wider variety of sentence rewriting strategies, combining splitting, compression and paraphrasing. Annotators were given examples of each kind of transformation individually, as well as all three transformations used at once, but were allowed to decide which transformations to use for any given sentence.
An example illustrating the differences between TurkCorpus, HSplit and ASSET is given below:
> **Original:** He settled in London, devoting himself chiefly to practical teaching.
>
> **TurkCorpus:** He rooted in London, devoting himself mainly to practical teaching.
>
> **HSplit:** He settled in London. He devoted himself chiefly to practical teaching.
>
> **ASSET:** He lived in London. He was a teacher.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
The input sentences are from English Wikipedia (August 22, 2009 version). No demographic information is available for the writers of these sentences. However, most Wikipedia editors are male (Lam, 2011; Graells-Garrido, 2015), which has an impact on the topics covered (see also [the Wikipedia page on Wikipedia gender bias](https://en.wikipedia.org/wiki/Gender_bias_on_Wikipedia)). In addition, Wikipedia editors are mostly white, young, and from the Northern Hemisphere [(Wikipedia: Systemic bias)](https://en.wikipedia.org/wiki/Wikipedia:Systemic_bias).
Reference sentences were written by 42 workers on Amazon Mechanical Turk (AMT). The requirements for being an annotator were:
- Passing a Qualification Test (appropriately simplifying sentences). Out of 100 workers, 42 passed the test.
- Being a resident of the United States, United Kingdom or Canada.
- Having a HIT approval rate over 95%, and over 1000 HITs approved.
No other demographic or compensation information is provided in the ASSET paper.
### Annotations
#### Annotation process
The instructions given to the annotators are available [here](https://github.com/facebookresearch/asset/blob/master/crowdsourcing/AMT_AnnotationInstructions.pdf).
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
The dataset may contain some social biases, as the input sentences are based on Wikipedia. Studies have shown that the English Wikipedia contains both gender biases (Schmahl et al., 2020) and racial biases (Adams et al., 2019).
> Adams, Julia, Hannah Brückner, and Cambria Naslund. "Who Counts as a Notable Sociologist on Wikipedia? Gender, Race, and the “Professor Test”." Socius 5 (2019): 2378023118823946.
> Schmahl, Katja Geertruida, et al. "Is Wikipedia succeeding in reducing gender bias? Assessing changes in gender bias in Wikipedia using word embeddings." Proceedings of the Fourth Workshop on Natural Language Processing and Computational Social Science. 2020.
### Other Known Limitations
Dataset provided for research purposes only. Please check dataset license for additional information.
## Additional Information
### Dataset Curators
ASSET was developed by researchers at the University of Sheffield, Inria,
Facebook AI Research, and Imperial College London. The work was partly supported by Benoît Sagot's chair in the PRAIRIE institute, funded by the French National Research Agency (ANR) as part of the "Investissements d’avenir" program (reference ANR-19-P3IA-0001).
### Licensing Information
[Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/)
### Citation Information
```
@inproceedings{alva-manchego-etal-2020-asset,
title = "{ASSET}: {A} Dataset for Tuning and Evaluation of Sentence Simplification Models with Multiple Rewriting Transformations",
author = "Alva-Manchego, Fernando and
Martin, Louis and
Bordes, Antoine and
Scarton, Carolina and
Sagot, Beno{\^\i}t and
Specia, Lucia",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.424",
pages = "4668--4679",
}
```
This dataset card uses material written by [Juan Diego Rodriguez](https://github.com/juand-r).
### Contributions
Thanks to [@yjernite](https://github.com/yjernite) for adding this dataset. |
shibing624/nli_zh | 2022-10-30T06:30:56.000Z | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"task_ids:semantic-similarity-scoring",
"task_ids:text-scoring",
"annotations_creators:shibing624",
"language_creators:shibing624",
"multilinguality:monolingual",
"size_categories:100K<n<20M",
"source_datasets:https://github.com/shibing624/text2vec",
"source_datasets:https://github.com/IceFlameWorm/NLP_Datasets/tree/master/ATEC",
"source_datasets:http://icrc.hitsz.edu.cn/info/1037/1162.htm",
"source_datasets:http://icrc.hitsz.edu.cn/Article/show/171.html",
"source_datasets:https://arxiv.org/abs/1908.11828",
"source_datasets:https://github.com/pluto-junzeng/CNSD",
"language:zh",
"license:cc-by-4.0",
"arxiv:1908.11828",
"region:us"
] | shibing624 | Plain-text data in the format (sentence1, sentence2, label). A collection of common Chinese semantic matching datasets, covering five tasks: ATEC, BQ, LCQMC, PAWSX, and STS-B. | null | null | 32 | 714 | ---
annotations_creators:
- shibing624
language_creators:
- shibing624
language:
- zh
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<20M
source_datasets:
- https://github.com/shibing624/text2vec
- https://github.com/IceFlameWorm/NLP_Datasets/tree/master/ATEC
- http://icrc.hitsz.edu.cn/info/1037/1162.htm
- http://icrc.hitsz.edu.cn/Article/show/171.html
- https://arxiv.org/abs/1908.11828
- https://github.com/pluto-junzeng/CNSD
task_categories:
- text-classification
task_ids:
- natural-language-inference
- semantic-similarity-scoring
- text-scoring
pretty_name: NLI_zh
---
# Dataset Card for NLI_zh
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [Chinese NLI dataset](https://github.com/shibing624/text2vec)
- **Leaderboard:** [NLI_zh leaderboard](https://github.com/shibing624/text2vec) (located on the homepage)
- **Size of downloaded dataset files:** 16 MB
- **Total amount of disk used:** 42 MB
### Dataset Summary
A collection of common Chinese semantic matching datasets, covering five tasks: [ATEC](https://github.com/IceFlameWorm/NLP_Datasets/tree/master/ATEC), [BQ](http://icrc.hitsz.edu.cn/info/1037/1162.htm), [LCQMC](http://icrc.hitsz.edu.cn/Article/show/171.html), [PAWSX](https://arxiv.org/abs/1908.11828), and [STS-B](https://github.com/pluto-junzeng/CNSD).
Data sources:
- ATEC: https://github.com/IceFlameWorm/NLP_Datasets/tree/master/ATEC
- BQ: http://icrc.hitsz.edu.cn/info/1037/1162.htm
- LCQMC: http://icrc.hitsz.edu.cn/Article/show/171.html
- PAWSX: https://arxiv.org/abs/1908.11828
- STS-B: https://github.com/pluto-junzeng/CNSD
### Supported Tasks and Leaderboards
Supported Tasks: Chinese text matching, text similarity scoring, and related tasks.
Results on Chinese matching tasks rarely appear in top-conference papers so far, so I list the results of models I trained myself:
**Leaderboard:** [NLI_zh leaderboard](https://github.com/shibing624/text2vec)
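For the similarity-scoring side of these tasks, predictions are typically made by thresholding the cosine similarity of sentence embeddings (e.g. produced by text2vec). A self-contained sketch of that scoring step, with the embedding model left out and the threshold chosen as an illustrative assumption:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    if norm_a == 0.0 or norm_b == 0.0:
        return 0.0
    return dot / (norm_a * norm_b)

def predict_label(emb1, emb2, threshold=0.5):
    """Map a similarity score to the binary label used by these datasets.

    The 0.5 threshold is a placeholder; in practice it is tuned on the
    validation split of each task.
    """
    return 1 if cosine_similarity(emb1, emb2) >= threshold else 0
```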
### Languages
All of the datasets consist of Simplified Chinese text.
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
{
"sentence1": "刘诗诗杨幂谁漂亮",
"sentence2": "刘诗诗和杨幂谁漂亮",
"label": 1,
}
{
"sentence1": "汇理财怎么样",
"sentence2": "怎么样去理财",
"label": 0,
}
```
### Data Fields
The data fields are the same among all splits.
- `sentence1`: a `string` feature.
- `sentence2`: a `string` feature.
- `label`: a classification label, with possible values `1` (similar) and `0` (dissimilar).
### Data Splits
#### ATEC
```shell
$ wc -l ATEC/*
20000 ATEC/ATEC.test.data
62477 ATEC/ATEC.train.data
20000 ATEC/ATEC.valid.data
102477 total
```
#### BQ
```shell
$ wc -l BQ/*
10000 BQ/BQ.test.data
100000 BQ/BQ.train.data
10000 BQ/BQ.valid.data
120000 total
```
#### LCQMC
```shell
$ wc -l LCQMC/*
12500 LCQMC/LCQMC.test.data
238766 LCQMC/LCQMC.train.data
8802 LCQMC/LCQMC.valid.data
260068 total
```
#### PAWSX
```shell
$ wc -l PAWSX/*
2000 PAWSX/PAWSX.test.data
49401 PAWSX/PAWSX.train.data
2000 PAWSX/PAWSX.valid.data
53401 total
```
#### STS-B
```shell
$ wc -l STS-B/*
1361 STS-B/STS-B.test.data
5231 STS-B/STS-B.train.data
1458 STS-B/STS-B.valid.data
8050 total
```
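The per-task totals in the `wc -l` listings above can be cross-checked in a few lines:

```python
# Line counts copied from the `wc -l` listings above.
split_counts = {
    "ATEC":  {"train": 62477,  "valid": 20000, "test": 20000},
    "BQ":    {"train": 100000, "valid": 10000, "test": 10000},
    "LCQMC": {"train": 238766, "valid": 8802,  "test": 12500},
    "PAWSX": {"train": 49401,  "valid": 2000,  "test": 2000},
    "STS-B": {"train": 5231,   "valid": 1458,  "test": 1361},
}

# Sum each task's splits; the results match the `total` lines above.
totals = {task: sum(counts.values()) for task, counts in split_counts.items()}
```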
## Dataset Creation
### Curation Rationale
As these are Chinese NLI (natural language inference) datasets, they are uploaded to the Hugging Face `datasets` hub here to make them easy for everyone to use.
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
The copyright of each dataset belongs to its original authors; please respect the original datasets' copyright terms when using them.
BQ: Jing Chen, Qingcai Chen, Xin Liu, Haijun Yang, Daohe Lu, Buzhou Tang, The BQ Corpus: A Large-scale Domain-specific Chinese Corpus For Sentence Semantic Equivalence Identification EMNLP2018.
### Annotations
#### Annotation process
#### Who are the annotators?
The original authors.
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
This dataset was developed as a benchmark for evaluating representational systems for text, especially including those induced by representation learning methods, in the task of predicting truth conditions in a given context.
Systems that are successful at such a task may be more successful in modeling semantic representations.
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
- Su Jianlin (苏剑林) organized the file naming.
- I uploaded the data to the Hugging Face `datasets` hub.
### Licensing Information
For academic research only.
The BQ corpus is free to the public for academic research.
### Contributions
Thanks to [@shibing624](https://github.com/shibing624) for adding this dataset.
|
open-llm-leaderboard/details_lmsys__vicuna-7b-v1.3 | 2023-08-27T12:30:19.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | null | 0 | 713 | ---
pretty_name: Evaluation run of lmsys/vicuna-7b-v1.3
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [lmsys/vicuna-7b-v1.3](https://huggingface.co/lmsys/vicuna-7b-v1.3) on the [Open\
\ LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 61 configurations, each one corresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_lmsys__vicuna-7b-v1.3\"\
,\n\t\"harness_truthfulqa_mc_0\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\
\nThese are the [latest results from run 2023-07-19T16:22:02.219224](https://huggingface.co/datasets/open-llm-leaderboard/details_lmsys__vicuna-7b-v1.3/blob/main/results_2023-07-19T16%3A22%3A02.219224.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.4829612438141863,\n\
\ \"acc_stderr\": 0.035041389482858204,\n \"acc_norm\": 0.48663764074426863,\n\
\ \"acc_norm_stderr\": 0.03502942029152831,\n \"mc1\": 0.31701346389228885,\n\
\ \"mc1_stderr\": 0.016289203374403392,\n \"mc2\": 0.47006281499614255,\n\
\ \"mc2_stderr\": 0.015102334330899319\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.47696245733788395,\n \"acc_stderr\": 0.014595873205358264,\n\
\ \"acc_norm\": 0.5042662116040956,\n \"acc_norm_stderr\": 0.014610858923956955\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5795658235411273,\n\
\ \"acc_stderr\": 0.004926198483948701,\n \"acc_norm\": 0.7691694881497709,\n\
\ \"acc_norm_stderr\": 0.004205030476886542\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.24,\n \"acc_stderr\": 0.042923469599092816,\n \
\ \"acc_norm\": 0.24,\n \"acc_norm_stderr\": 0.042923469599092816\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.42962962962962964,\n\
\ \"acc_stderr\": 0.04276349494376599,\n \"acc_norm\": 0.42962962962962964,\n\
\ \"acc_norm_stderr\": 0.04276349494376599\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.4868421052631579,\n \"acc_stderr\": 0.04067533136309173,\n\
\ \"acc_norm\": 0.4868421052631579,\n \"acc_norm_stderr\": 0.04067533136309173\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.49,\n\
\ \"acc_stderr\": 0.05024183937956912,\n \"acc_norm\": 0.49,\n \
\ \"acc_norm_stderr\": 0.05024183937956912\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.539622641509434,\n \"acc_stderr\": 0.030676096599389184,\n\
\ \"acc_norm\": 0.539622641509434,\n \"acc_norm_stderr\": 0.030676096599389184\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.5069444444444444,\n\
\ \"acc_stderr\": 0.04180806750294938,\n \"acc_norm\": 0.5069444444444444,\n\
\ \"acc_norm_stderr\": 0.04180806750294938\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.44,\n \"acc_stderr\": 0.049888765156985884,\n \"acc_norm\": 0.44,\n\
\ \"acc_norm_stderr\": 0.049888765156985884\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.36,\n \"acc_stderr\": 0.048241815132442176,\n \
\ \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.048241815132442176\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.3930635838150289,\n\
\ \"acc_stderr\": 0.03724249595817728,\n \"acc_norm\": 0.3930635838150289,\n\
\ \"acc_norm_stderr\": 0.03724249595817728\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.2549019607843137,\n \"acc_stderr\": 0.043364327079931785,\n\
\ \"acc_norm\": 0.2549019607843137,\n \"acc_norm_stderr\": 0.043364327079931785\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.6,\n \"acc_stderr\": 0.049236596391733084,\n \"acc_norm\": 0.6,\n\
\ \"acc_norm_stderr\": 0.049236596391733084\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.3617021276595745,\n \"acc_stderr\": 0.03141082197596239,\n\
\ \"acc_norm\": 0.3617021276595745,\n \"acc_norm_stderr\": 0.03141082197596239\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.21929824561403508,\n\
\ \"acc_stderr\": 0.03892431106518753,\n \"acc_norm\": 0.21929824561403508,\n\
\ \"acc_norm_stderr\": 0.03892431106518753\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.45517241379310347,\n \"acc_stderr\": 0.04149886942192117,\n\
\ \"acc_norm\": 0.45517241379310347,\n \"acc_norm_stderr\": 0.04149886942192117\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.3253968253968254,\n \"acc_stderr\": 0.02413015829976261,\n \"\
acc_norm\": 0.3253968253968254,\n \"acc_norm_stderr\": 0.02413015829976261\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.3412698412698413,\n\
\ \"acc_stderr\": 0.042407993275749255,\n \"acc_norm\": 0.3412698412698413,\n\
\ \"acc_norm_stderr\": 0.042407993275749255\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.5161290322580645,\n\
\ \"acc_stderr\": 0.028429203176724555,\n \"acc_norm\": 0.5161290322580645,\n\
\ \"acc_norm_stderr\": 0.028429203176724555\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.37438423645320196,\n \"acc_stderr\": 0.03405155380561952,\n\
\ \"acc_norm\": 0.37438423645320196,\n \"acc_norm_stderr\": 0.03405155380561952\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.41,\n \"acc_stderr\": 0.04943110704237101,\n \"acc_norm\"\
: 0.41,\n \"acc_norm_stderr\": 0.04943110704237101\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.5878787878787879,\n \"acc_stderr\": 0.03843566993588717,\n\
\ \"acc_norm\": 0.5878787878787879,\n \"acc_norm_stderr\": 0.03843566993588717\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.6262626262626263,\n \"acc_stderr\": 0.034468977386593325,\n \"\
acc_norm\": 0.6262626262626263,\n \"acc_norm_stderr\": 0.034468977386593325\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.7202072538860104,\n \"acc_stderr\": 0.03239637046735704,\n\
\ \"acc_norm\": 0.7202072538860104,\n \"acc_norm_stderr\": 0.03239637046735704\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.47692307692307695,\n \"acc_stderr\": 0.025323990861736118,\n\
\ \"acc_norm\": 0.47692307692307695,\n \"acc_norm_stderr\": 0.025323990861736118\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.25925925925925924,\n \"acc_stderr\": 0.026719240783712163,\n \
\ \"acc_norm\": 0.25925925925925924,\n \"acc_norm_stderr\": 0.026719240783712163\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.4369747899159664,\n \"acc_stderr\": 0.03221943636566196,\n \
\ \"acc_norm\": 0.4369747899159664,\n \"acc_norm_stderr\": 0.03221943636566196\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.2980132450331126,\n \"acc_stderr\": 0.037345356767871984,\n \"\
acc_norm\": 0.2980132450331126,\n \"acc_norm_stderr\": 0.037345356767871984\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.6293577981651376,\n \"acc_stderr\": 0.02070745816435298,\n \"\
acc_norm\": 0.6293577981651376,\n \"acc_norm_stderr\": 0.02070745816435298\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.4583333333333333,\n \"acc_stderr\": 0.03398110890294636,\n \"\
acc_norm\": 0.4583333333333333,\n \"acc_norm_stderr\": 0.03398110890294636\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.6225490196078431,\n \"acc_stderr\": 0.034022720443407026,\n \"\
acc_norm\": 0.6225490196078431,\n \"acc_norm_stderr\": 0.034022720443407026\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.6371308016877637,\n \"acc_stderr\": 0.031299208255302136,\n \
\ \"acc_norm\": 0.6371308016877637,\n \"acc_norm_stderr\": 0.031299208255302136\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.57847533632287,\n\
\ \"acc_stderr\": 0.03314190222110658,\n \"acc_norm\": 0.57847533632287,\n\
\ \"acc_norm_stderr\": 0.03314190222110658\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.5725190839694656,\n \"acc_stderr\": 0.04338920305792401,\n\
\ \"acc_norm\": 0.5725190839694656,\n \"acc_norm_stderr\": 0.04338920305792401\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.6776859504132231,\n \"acc_stderr\": 0.04266416363352168,\n \"\
acc_norm\": 0.6776859504132231,\n \"acc_norm_stderr\": 0.04266416363352168\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.6666666666666666,\n\
\ \"acc_stderr\": 0.04557239513497751,\n \"acc_norm\": 0.6666666666666666,\n\
\ \"acc_norm_stderr\": 0.04557239513497751\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.5521472392638037,\n \"acc_stderr\": 0.03906947479456606,\n\
\ \"acc_norm\": 0.5521472392638037,\n \"acc_norm_stderr\": 0.03906947479456606\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.29464285714285715,\n\
\ \"acc_stderr\": 0.043270409325787275,\n \"acc_norm\": 0.29464285714285715,\n\
\ \"acc_norm_stderr\": 0.043270409325787275\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.6310679611650486,\n \"acc_stderr\": 0.0477761518115674,\n\
\ \"acc_norm\": 0.6310679611650486,\n \"acc_norm_stderr\": 0.0477761518115674\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.717948717948718,\n\
\ \"acc_stderr\": 0.029480360549541194,\n \"acc_norm\": 0.717948717948718,\n\
\ \"acc_norm_stderr\": 0.029480360549541194\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.56,\n \"acc_stderr\": 0.04988876515698589,\n \
\ \"acc_norm\": 0.56,\n \"acc_norm_stderr\": 0.04988876515698589\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.6551724137931034,\n\
\ \"acc_stderr\": 0.01699712334611344,\n \"acc_norm\": 0.6551724137931034,\n\
\ \"acc_norm_stderr\": 0.01699712334611344\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.5202312138728323,\n \"acc_stderr\": 0.026897049996382868,\n\
\ \"acc_norm\": 0.5202312138728323,\n \"acc_norm_stderr\": 0.026897049996382868\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2424581005586592,\n\
\ \"acc_stderr\": 0.014333522059217889,\n \"acc_norm\": 0.2424581005586592,\n\
\ \"acc_norm_stderr\": 0.014333522059217889\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.5326797385620915,\n \"acc_stderr\": 0.028568699752225868,\n\
\ \"acc_norm\": 0.5326797385620915,\n \"acc_norm_stderr\": 0.028568699752225868\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.5337620578778135,\n\
\ \"acc_stderr\": 0.02833327710956279,\n \"acc_norm\": 0.5337620578778135,\n\
\ \"acc_norm_stderr\": 0.02833327710956279\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.558641975308642,\n \"acc_stderr\": 0.02762873715566877,\n\
\ \"acc_norm\": 0.558641975308642,\n \"acc_norm_stderr\": 0.02762873715566877\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.34397163120567376,\n \"acc_stderr\": 0.028338017428611317,\n \
\ \"acc_norm\": 0.34397163120567376,\n \"acc_norm_stderr\": 0.028338017428611317\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.363754889178618,\n\
\ \"acc_stderr\": 0.012286991879902896,\n \"acc_norm\": 0.363754889178618,\n\
\ \"acc_norm_stderr\": 0.012286991879902896\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.46691176470588236,\n \"acc_stderr\": 0.030306257722468317,\n\
\ \"acc_norm\": 0.46691176470588236,\n \"acc_norm_stderr\": 0.030306257722468317\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.4362745098039216,\n \"acc_stderr\": 0.020062874243539128,\n \
\ \"acc_norm\": 0.4362745098039216,\n \"acc_norm_stderr\": 0.020062874243539128\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.5,\n\
\ \"acc_stderr\": 0.04789131426105757,\n \"acc_norm\": 0.5,\n \
\ \"acc_norm_stderr\": 0.04789131426105757\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.5510204081632653,\n \"acc_stderr\": 0.03184213866687579,\n\
\ \"acc_norm\": 0.5510204081632653,\n \"acc_norm_stderr\": 0.03184213866687579\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.6766169154228856,\n\
\ \"acc_stderr\": 0.03307615947979034,\n \"acc_norm\": 0.6766169154228856,\n\
\ \"acc_norm_stderr\": 0.03307615947979034\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.68,\n \"acc_stderr\": 0.04688261722621504,\n \
\ \"acc_norm\": 0.68,\n \"acc_norm_stderr\": 0.04688261722621504\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.3674698795180723,\n\
\ \"acc_stderr\": 0.03753267402120575,\n \"acc_norm\": 0.3674698795180723,\n\
\ \"acc_norm_stderr\": 0.03753267402120575\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.6608187134502924,\n \"acc_stderr\": 0.03631053496488905,\n\
\ \"acc_norm\": 0.6608187134502924,\n \"acc_norm_stderr\": 0.03631053496488905\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.31701346389228885,\n\
\ \"mc1_stderr\": 0.016289203374403392,\n \"mc2\": 0.47006281499614255,\n\
\ \"mc2_stderr\": 0.015102334330899319\n }\n}\n```"
repo_url: https://huggingface.co/meta-llama/Llama-2-70b-hf
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|arc:challenge|25_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hellaswag|10_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-19T16:22:02.219224.parquet'
- config_name: results
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- results_2023-07-19T16:22:02.219224.parquet
- split: latest
path:
- results_2023-07-19T16:22:02.219224.parquet
---
# Dataset Card for Evaluation run of lmsys/vicuna-7b-v1.3
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/lmsys/vicuna-7b-v1.3
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [lmsys/vicuna-7b-v1.3](https://huggingface.co/lmsys/vicuna-7b-v1.3) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_lmsys__vicuna-7b-v1.3",
"harness_truthfulqa_mc_0",
split="train")
```
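The per-run split names that appear in this card's YAML (e.g. `2023_07_19T16_22_02.219224`) look like the run timestamp with `-` and `:` replaced by `_` — an assumption inferred from the split names above, not documented leaderboard behavior. A minimal sketch of that mapping:

```python
def timestamp_to_split(timestamp: str) -> str:
    """Map an ISO-like run timestamp to the split name used in this dataset.

    Assumption (inferred from the split names in this card's YAML):
    '-' and ':' become '_'; the fractional-second dot is kept.
    """
    return timestamp.replace("-", "_").replace(":", "_")

# The run shown in this card:
split = timestamp_to_split("2023-07-19T16:22:02.219224")
print(split)  # 2023_07_19T16_22_02.219224
```

This can be handy when selecting a specific historical run instead of the `latest` split.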
## Latest results
These are the [latest results from run 2023-07-19T16:22:02.219224](https://huggingface.co/datasets/open-llm-leaderboard/details_lmsys__vicuna-7b-v1.3/blob/main/results_2023-07-19T16%3A22%3A02.219224.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each task in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.4829612438141863,
"acc_stderr": 0.035041389482858204,
"acc_norm": 0.48663764074426863,
"acc_norm_stderr": 0.03502942029152831,
"mc1": 0.31701346389228885,
"mc1_stderr": 0.016289203374403392,
"mc2": 0.47006281499614255,
"mc2_stderr": 0.015102334330899319
},
"harness|arc:challenge|25": {
"acc": 0.47696245733788395,
"acc_stderr": 0.014595873205358264,
"acc_norm": 0.5042662116040956,
"acc_norm_stderr": 0.014610858923956955
},
"harness|hellaswag|10": {
"acc": 0.5795658235411273,
"acc_stderr": 0.004926198483948701,
"acc_norm": 0.7691694881497709,
"acc_norm_stderr": 0.004205030476886542
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.24,
"acc_stderr": 0.042923469599092816,
"acc_norm": 0.24,
"acc_norm_stderr": 0.042923469599092816
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.42962962962962964,
"acc_stderr": 0.04276349494376599,
"acc_norm": 0.42962962962962964,
"acc_norm_stderr": 0.04276349494376599
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.4868421052631579,
"acc_stderr": 0.04067533136309173,
"acc_norm": 0.4868421052631579,
"acc_norm_stderr": 0.04067533136309173
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.49,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.49,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.539622641509434,
"acc_stderr": 0.030676096599389184,
"acc_norm": 0.539622641509434,
"acc_norm_stderr": 0.030676096599389184
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.5069444444444444,
"acc_stderr": 0.04180806750294938,
"acc_norm": 0.5069444444444444,
"acc_norm_stderr": 0.04180806750294938
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.44,
"acc_stderr": 0.049888765156985884,
"acc_norm": 0.44,
"acc_norm_stderr": 0.049888765156985884
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.36,
"acc_stderr": 0.048241815132442176,
"acc_norm": 0.36,
"acc_norm_stderr": 0.048241815132442176
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.3930635838150289,
"acc_stderr": 0.03724249595817728,
"acc_norm": 0.3930635838150289,
"acc_norm_stderr": 0.03724249595817728
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.2549019607843137,
"acc_stderr": 0.043364327079931785,
"acc_norm": 0.2549019607843137,
"acc_norm_stderr": 0.043364327079931785
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.6,
"acc_stderr": 0.049236596391733084,
"acc_norm": 0.6,
"acc_norm_stderr": 0.049236596391733084
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.3617021276595745,
"acc_stderr": 0.03141082197596239,
"acc_norm": 0.3617021276595745,
"acc_norm_stderr": 0.03141082197596239
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.21929824561403508,
"acc_stderr": 0.03892431106518753,
"acc_norm": 0.21929824561403508,
"acc_norm_stderr": 0.03892431106518753
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.45517241379310347,
"acc_stderr": 0.04149886942192117,
"acc_norm": 0.45517241379310347,
"acc_norm_stderr": 0.04149886942192117
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3253968253968254,
"acc_stderr": 0.02413015829976261,
"acc_norm": 0.3253968253968254,
"acc_norm_stderr": 0.02413015829976261
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.3412698412698413,
"acc_stderr": 0.042407993275749255,
"acc_norm": 0.3412698412698413,
"acc_norm_stderr": 0.042407993275749255
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.5161290322580645,
"acc_stderr": 0.028429203176724555,
"acc_norm": 0.5161290322580645,
"acc_norm_stderr": 0.028429203176724555
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.37438423645320196,
"acc_stderr": 0.03405155380561952,
"acc_norm": 0.37438423645320196,
"acc_norm_stderr": 0.03405155380561952
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.41,
"acc_stderr": 0.04943110704237101,
"acc_norm": 0.41,
"acc_norm_stderr": 0.04943110704237101
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.5878787878787879,
"acc_stderr": 0.03843566993588717,
"acc_norm": 0.5878787878787879,
"acc_norm_stderr": 0.03843566993588717
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.6262626262626263,
"acc_stderr": 0.034468977386593325,
"acc_norm": 0.6262626262626263,
"acc_norm_stderr": 0.034468977386593325
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.7202072538860104,
"acc_stderr": 0.03239637046735704,
"acc_norm": 0.7202072538860104,
"acc_norm_stderr": 0.03239637046735704
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.47692307692307695,
"acc_stderr": 0.025323990861736118,
"acc_norm": 0.47692307692307695,
"acc_norm_stderr": 0.025323990861736118
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.25925925925925924,
"acc_stderr": 0.026719240783712163,
"acc_norm": 0.25925925925925924,
"acc_norm_stderr": 0.026719240783712163
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.4369747899159664,
"acc_stderr": 0.03221943636566196,
"acc_norm": 0.4369747899159664,
"acc_norm_stderr": 0.03221943636566196
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.2980132450331126,
"acc_stderr": 0.037345356767871984,
"acc_norm": 0.2980132450331126,
"acc_norm_stderr": 0.037345356767871984
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.6293577981651376,
"acc_stderr": 0.02070745816435298,
"acc_norm": 0.6293577981651376,
"acc_norm_stderr": 0.02070745816435298
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4583333333333333,
"acc_stderr": 0.03398110890294636,
"acc_norm": 0.4583333333333333,
"acc_norm_stderr": 0.03398110890294636
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.6225490196078431,
"acc_stderr": 0.034022720443407026,
"acc_norm": 0.6225490196078431,
"acc_norm_stderr": 0.034022720443407026
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.6371308016877637,
"acc_stderr": 0.031299208255302136,
"acc_norm": 0.6371308016877637,
"acc_norm_stderr": 0.031299208255302136
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.57847533632287,
"acc_stderr": 0.03314190222110658,
"acc_norm": 0.57847533632287,
"acc_norm_stderr": 0.03314190222110658
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.5725190839694656,
"acc_stderr": 0.04338920305792401,
"acc_norm": 0.5725190839694656,
"acc_norm_stderr": 0.04338920305792401
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.6776859504132231,
"acc_stderr": 0.04266416363352168,
"acc_norm": 0.6776859504132231,
"acc_norm_stderr": 0.04266416363352168
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.6666666666666666,
"acc_stderr": 0.04557239513497751,
"acc_norm": 0.6666666666666666,
"acc_norm_stderr": 0.04557239513497751
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.5521472392638037,
"acc_stderr": 0.03906947479456606,
"acc_norm": 0.5521472392638037,
"acc_norm_stderr": 0.03906947479456606
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.29464285714285715,
"acc_stderr": 0.043270409325787275,
"acc_norm": 0.29464285714285715,
"acc_norm_stderr": 0.043270409325787275
},
"harness|hendrycksTest-management|5": {
"acc": 0.6310679611650486,
"acc_stderr": 0.0477761518115674,
"acc_norm": 0.6310679611650486,
"acc_norm_stderr": 0.0477761518115674
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.717948717948718,
"acc_stderr": 0.029480360549541194,
"acc_norm": 0.717948717948718,
"acc_norm_stderr": 0.029480360549541194
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.56,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.56,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.6551724137931034,
"acc_stderr": 0.01699712334611344,
"acc_norm": 0.6551724137931034,
"acc_norm_stderr": 0.01699712334611344
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.5202312138728323,
"acc_stderr": 0.026897049996382868,
"acc_norm": 0.5202312138728323,
"acc_norm_stderr": 0.026897049996382868
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.2424581005586592,
"acc_stderr": 0.014333522059217889,
"acc_norm": 0.2424581005586592,
"acc_norm_stderr": 0.014333522059217889
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.5326797385620915,
"acc_stderr": 0.028568699752225868,
"acc_norm": 0.5326797385620915,
"acc_norm_stderr": 0.028568699752225868
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.5337620578778135,
"acc_stderr": 0.02833327710956279,
"acc_norm": 0.5337620578778135,
"acc_norm_stderr": 0.02833327710956279
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.558641975308642,
"acc_stderr": 0.02762873715566877,
"acc_norm": 0.558641975308642,
"acc_norm_stderr": 0.02762873715566877
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.34397163120567376,
"acc_stderr": 0.028338017428611317,
"acc_norm": 0.34397163120567376,
"acc_norm_stderr": 0.028338017428611317
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.363754889178618,
"acc_stderr": 0.012286991879902896,
"acc_norm": 0.363754889178618,
"acc_norm_stderr": 0.012286991879902896
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.46691176470588236,
"acc_stderr": 0.030306257722468317,
"acc_norm": 0.46691176470588236,
"acc_norm_stderr": 0.030306257722468317
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.4362745098039216,
"acc_stderr": 0.020062874243539128,
"acc_norm": 0.4362745098039216,
"acc_norm_stderr": 0.020062874243539128
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.5,
"acc_stderr": 0.04789131426105757,
"acc_norm": 0.5,
"acc_norm_stderr": 0.04789131426105757
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.5510204081632653,
"acc_stderr": 0.03184213866687579,
"acc_norm": 0.5510204081632653,
"acc_norm_stderr": 0.03184213866687579
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.6766169154228856,
"acc_stderr": 0.03307615947979034,
"acc_norm": 0.6766169154228856,
"acc_norm_stderr": 0.03307615947979034
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.68,
"acc_stderr": 0.04688261722621504,
"acc_norm": 0.68,
"acc_norm_stderr": 0.04688261722621504
},
"harness|hendrycksTest-virology|5": {
"acc": 0.3674698795180723,
"acc_stderr": 0.03753267402120575,
"acc_norm": 0.3674698795180723,
"acc_norm_stderr": 0.03753267402120575
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.6608187134502924,
"acc_stderr": 0.03631053496488905,
"acc_norm": 0.6608187134502924,
"acc_norm_stderr": 0.03631053496488905
},
"harness|truthfulqa:mc|0": {
"mc1": 0.31701346389228885,
"mc1_stderr": 0.016289203374403392,
"mc2": 0.47006281499614255,
"mc2_stderr": 0.015102334330899319
}
}
```
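The `"all"` entry above is an aggregate over the per-task scores. To reproduce that kind of aggregation locally, one can macro-average `acc` over the per-task entries; a small sketch using a hand-copied subset of the JSON above (not a leaderboard API):

```python
# Subset of the results JSON above, copied by hand for illustration.
results = {
    "harness|arc:challenge|25": {"acc": 0.47696245733788395},
    "harness|hellaswag|10": {"acc": 0.5795658235411273},
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.24},
}

# Macro-average: mean of per-task accuracies, each task weighted equally.
accs = [scores["acc"] for scores in results.values()]
avg_acc = sum(accs) / len(accs)
print(f"macro-averaged acc over {len(accs)} tasks: {avg_acc:.4f}")
```

With all 59 tasks included (instead of the three shown here), this yields the kind of aggregate reported in the `"all"` block.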
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
NumbersStation/NSText2SQL | lastModified: 2023-07-11T05:26:13.000Z | author: NumbersStation | likes: 24 | downloads: 712 | tags: task_categories:text2text-generation, language_creators:crowdsourced, language_creators:expert-generated, multilinguality:multilingual, size_categories:100K<n<1M, language:en, license:other, text-to-sql, region:us
---
language:
- en
task_categories:
- text2text-generation
license:
- other
language_creators:
- crowdsourced
- expert-generated
multilinguality:
- multilingual
tags:
- text-to-sql
size_categories:
- 100K<n<1M
pretty_name: NSText2SQL
---
# Dataset Summary
NSText2SQL is the dataset used to train [NSQL](https://huggingface.co/NumbersStation/nsql-6B) models. The data is curated from more than 20 different public sources across the web with permissible licenses (listed below). All of these datasets come with existing text-to-SQL pairs. We apply various data cleaning and pre-processing techniques, including table schema augmentation, SQL cleaning, and instruction generation using existing LLMs. The resulting dataset contains around 290,000 samples of text-to-SQL pairs.
For more information and code, please see [this repository](https://github.com/NumbersStationAI/NSQL).
# How to use it
```python
from datasets import load_dataset
dataset = load_dataset("NumbersStation/NSText2SQL")
```
# Dataset Structure
## Data Instances
Each data instance in this dataset represents a text-to-SQL entry where the instruction has been formatted with the table schema and question. The output is the SQL in the SQLite dialect.
## Data Fields
- `instruction` (string): the instruction to generate SQL.
- `output` (string): the ground truth SQL.
- `source` (string): the source dataset of the sample.
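Put together, a record can be turned into a supervised fine-tuning pair by concatenating the documented fields. A minimal sketch — the sample record below is invented for illustration (real instructions embed the full table schema as described above):

```python
# Invented sample shaped like the documented fields (instruction/output/source).
sample = {
    "instruction": "CREATE TABLE users (id INT, name TEXT)\n\n-- How many users are there?",
    "output": "SELECT COUNT(*) FROM users",
    "source": "wikisql",
}

def to_training_text(record: dict) -> str:
    """Join the instruction (prompt) and ground-truth SQL (target) into one string."""
    return record["instruction"] + "\n" + record["output"]

text = to_training_text(sample)
print(text)
```

The `source` field is useful for filtering down to subsets whose licenses fit a given use case.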
# Languages
The language of the data is primarily English.
# Source Data and Licensing Information
NSText2SQL is sourced from repositories with various licenses. Any use of all or part of the data gathered in NSText2SQL must abide by the terms of the original licenses, including attribution clauses when relevant. We thank all authors who provided these datasets. We provide provenance information for each dataset below.
| Datasets | License | Link |
| ---------------------- | ------------ | -------------------------------------------------------------------------------------------------------------------- |
| academic | Not Found | [https://github.com/jkkummerfeld/text2sql-data](https://github.com/jkkummerfeld/text2sql-data) |
| advising | CC-BY-4.0 | [https://github.com/jkkummerfeld/text2sql-data](https://github.com/jkkummerfeld/text2sql-data) |
| atis | Not Found | [https://github.com/jkkummerfeld/text2sql-data](https://github.com/jkkummerfeld/text2sql-data) |
| restaurants | Not Found | [https://github.com/jkkummerfeld/text2sql-data](https://github.com/jkkummerfeld/text2sql-data) |
| scholar | Not Found | [https://github.com/jkkummerfeld/text2sql-data](https://github.com/jkkummerfeld/text2sql-data) |
| imdb | Not Found | [https://github.com/jkkummerfeld/text2sql-data](https://github.com/jkkummerfeld/text2sql-data) |
| yelp | Not Found | [https://github.com/jkkummerfeld/text2sql-data](https://github.com/jkkummerfeld/text2sql-data) |
| criteria2sql | Apache-2.0 | [https://github.com/xiaojingyu92/Criteria2SQL](https://github.com/xiaojingyu92/Criteria2SQL) |
| css | CC-BY-4.0 | [https://huggingface.co/datasets/zhanghanchong/css](https://huggingface.co/datasets/zhanghanchong/css) |
| eICU | CC-BY-4.0 | [https://github.com/glee4810/EHRSQL](https://github.com/glee4810/EHRSQL) |
| mimic_iii | CC-BY-4.0 | [https://github.com/glee4810/EHRSQL](https://github.com/glee4810/EHRSQL) |
| geonucleardata | CC-BY-SA-4.0 | [https://github.com/chiahsuan156/KaggleDBQA](https://github.com/chiahsuan156/KaggleDBQA) |
| greatermanchestercrime | CC-BY-SA-4.0 | [https://github.com/chiahsuan156/KaggleDBQA](https://github.com/chiahsuan156/KaggleDBQA) |
| studentmathscore | CC-BY-SA-4.0 | [https://github.com/chiahsuan156/KaggleDBQA](https://github.com/chiahsuan156/KaggleDBQA) |
| thehistoryofbaseball | CC-BY-SA-4.0 | [https://github.com/chiahsuan156/KaggleDBQA](https://github.com/chiahsuan156/KaggleDBQA) |
| uswildfires | CC-BY-SA-4.0 | [https://github.com/chiahsuan156/KaggleDBQA](https://github.com/chiahsuan156/KaggleDBQA) |
| whatcdhiphop | CC-BY-SA-4.0 | [https://github.com/chiahsuan156/KaggleDBQA](https://github.com/chiahsuan156/KaggleDBQA) |
| worldsoccerdatabase | CC-BY-SA-4.0 | [https://github.com/chiahsuan156/KaggleDBQA](https://github.com/chiahsuan156/KaggleDBQA) |
| pesticide | CC-BY-SA-4.0 | [https://github.com/chiahsuan156/KaggleDBQA](https://github.com/chiahsuan156/KaggleDBQA) |
| mimicsql_data | MIT | [https://github.com/wangpinggl/TREQS](https://github.com/wangpinggl/TREQS) |
| nvbench | MIT | [https://github.com/TsinghuaDatabaseGroup/nvBench](https://github.com/TsinghuaDatabaseGroup/nvBench) |
| sede | Apache-2.0 | [https://github.com/hirupert/sede](https://github.com/hirupert/sede) |
| spider | CC-BY-SA-4.0 | [https://huggingface.co/datasets/spider](https://huggingface.co/datasets/spider) |
| sql_create_context | CC-BY-4.0 | [https://huggingface.co/datasets/b-mc2/sql-create-context](https://huggingface.co/datasets/b-mc2/sql-create-context) |
| squall | CC-BY-SA-4.0 | [https://github.com/tzshi/squall](https://github.com/tzshi/squall) |
| wikisql | BSD 3-Clause | [https://github.com/salesforce/WikiSQL](https://github.com/salesforce/WikiSQL) |
# Citing this work
If you use this data in your work, please cite our work _and_ the appropriate original sources:
To cite NSText2SQL, please use:
```TeX
@software{numbersstation2023NSText2SQL,
author = {Numbers Station Labs},
title = {NSText2SQL: An Open Source Text-to-SQL Dataset for Foundation Model Training},
month = {July},
year = {2023},
url = {https://github.com/NumbersStationAI/NSQL},
}
```
To cite the datasets used in this work, please use:
| Datasets | Cite |
| ---------------------- | ---------------------------------------------------------------------------------------- |
| academic | `\cite{data-advising,data-academic}` |
| advising | `\cite{data-advising}` |
| atis | `\cite{data-advising,data-atis-original,data-atis-geography-scholar}` |
| restaurants | `\cite{data-advising,data-restaurants-logic,data-restaurants-original,data-restaurants}` |
| scholar | `\cite{data-advising,data-atis-geography-scholar}` |
| imdb | `\cite{data-advising,data-imdb-yelp}` |
| yelp | `\cite{data-advising,data-imdb-yelp}` |
| criteria2sql | `\cite{Criteria-to-SQL}` |
| css | `\cite{zhang2023css}` |
| eICU | `\cite{lee2022ehrsql}` |
| mimic_iii | `\cite{lee2022ehrsql}` |
| geonucleardata | `\cite{lee-2021-kaggle-dbqa}` |
| greatermanchestercrime | `\cite{lee-2021-kaggle-dbqa}` |
| studentmathscore | `\cite{lee-2021-kaggle-dbqa}` |
| thehistoryofbaseball | `\cite{lee-2021-kaggle-dbqa}` |
| uswildfires | `\cite{lee-2021-kaggle-dbqa}` |
| whatcdhiphop | `\cite{lee-2021-kaggle-dbqa}` |
| worldsoccerdatabase | `\cite{lee-2021-kaggle-dbqa}` |
| pesticide | `\cite{lee-2021-kaggle-dbqa}` |
| mimicsql_data | `\cite{wang2020text}` |
| nvbench | `\cite{nvBench_SIGMOD21}` |
| sede | `\cite{hazoom2021text}` |
| spider | `\cite{data-spider}` |
| sql_create_context | Not Found |
| squall | `\cite{squall}` |
| wikisql | `\cite{data-wikisql}` |
```TeX
@InProceedings{data-advising,
dataset = {Advising},
author = {Catherine Finegan-Dollak, Jonathan K. Kummerfeld, Li Zhang, Karthik Ramanathan, Sesh Sadasivam, Rui Zhang, and Dragomir Radev},
title = {Improving Text-to-SQL Evaluation Methodology},
booktitle = {Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
month = {July},
year = {2018},
location = {Melbourne, Victoria, Australia},
pages = {351--360},
url = {http://aclweb.org/anthology/P18-1033},
}
@InProceedings{data-imdb-yelp,
dataset = {IMDB and Yelp},
author = {Navid Yaghmazadeh, Yuepeng Wang, Isil Dillig, and Thomas Dillig},
title = {SQLizer: Query Synthesis from Natural Language},
booktitle = {International Conference on Object-Oriented Programming, Systems, Languages, and Applications, ACM},
month = {October},
year = {2017},
pages = {63:1--63:26},
url = {http://doi.org/10.1145/3133887},
}
@article{data-academic,
dataset = {Academic},
author = {Fei Li and H. V. Jagadish},
title = {Constructing an Interactive Natural Language Interface for Relational Databases},
journal = {Proceedings of the VLDB Endowment},
volume = {8},
number = {1},
month = {September},
year = {2014},
pages = {73--84},
url = {http://dx.doi.org/10.14778/2735461.2735468},
}
@InProceedings{data-atis-geography-scholar,
dataset = {Scholar, and Updated ATIS and Geography},
author = {Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, Jayant Krishnamurthy, and Luke Zettlemoyer},
title = {Learning a Neural Semantic Parser from User Feedback},
booktitle = {Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
year = {2017},
pages = {963--973},
location = {Vancouver, Canada},
url = {http://www.aclweb.org/anthology/P17-1089},
}
@article{data-atis-original,
dataset = {ATIS, original},
author = {Deborah A. Dahl, Madeleine Bates, Michael Brown, William Fisher, Kate Hunicke-Smith, David Pallett, Christine Pao, Alexander Rudnicky, and Elizabeth Shriber},
title = {{Expanding the scope of the ATIS task: The ATIS-3 corpus}},
journal = {Proceedings of the workshop on Human Language Technology},
year = {1994},
pages = {43--48},
url = {http://dl.acm.org/citation.cfm?id=1075823},
}
@inproceedings{data-restaurants-logic,
author = {Lappoon R. Tang and Raymond J. Mooney},
  title = {Automated Construction of Database Interfaces: Integrating Statistical and Relational Learning for Semantic Parsing},
booktitle = {2000 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora},
year = {2000},
pages = {133--141},
location = {Hong Kong, China},
url = {http://www.aclweb.org/anthology/W00-1317},
}
@inproceedings{data-restaurants-original,
author = {Ana-Maria Popescu, Oren Etzioni, and Henry Kautz},
title = {Towards a Theory of Natural Language Interfaces to Databases},
booktitle = {Proceedings of the 8th International Conference on Intelligent User Interfaces},
year = {2003},
location = {Miami, Florida, USA},
pages = {149--157},
url = {http://doi.acm.org/10.1145/604045.604070},
}
@inproceedings{data-restaurants,
author = {Alessandra Giordani and Alessandro Moschitti},
title = {Automatic Generation and Reranking of SQL-derived Answers to NL Questions},
booktitle = {Proceedings of the Second International Conference on Trustworthy Eternal Systems via Evolving Software, Data and Knowledge},
year = {2012},
location = {Montpellier, France},
pages = {59--76},
url = {https://doi.org/10.1007/978-3-642-45260-4_5},
}
@InProceedings{data-spider,
author = {Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, Zilin Zhang, and Dragomir Radev},
title = {Spider: A Large-Scale Human-Labeled Dataset for Complex and Cross-Domain Semantic Parsing and Text-to-SQL Task},
booktitle = {Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing},
year = {2018},
location = {Brussels, Belgium},
pages = {3911--3921},
url = {http://aclweb.org/anthology/D18-1425},
}
@article{data-wikisql,
author = {Victor Zhong, Caiming Xiong, and Richard Socher},
title = {Seq2SQL: Generating Structured Queries from Natural Language using Reinforcement Learning},
year = {2017},
journal = {CoRR},
volume = {abs/1709.00103},
}
@InProceedings{Criteria-to-SQL,
author = {Yu, Xiaojing and Chen, Tianlong and Yu, Zhengjie and Li, Huiyu and Yang, Yang and Jiang, Xiaoqian and Jiang, Anxiao},
title = {Dataset and Enhanced Model for Eligibility Criteria-to-SQL Semantic Parsing},
booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference},
month = {May},
year = {2020},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {5831--5839},
}
@misc{zhang2023css,
title = {CSS: A Large-scale Cross-schema Chinese Text-to-SQL Medical Dataset},
author = {Hanchong Zhang and Jieyu Li and Lu Chen and Ruisheng Cao and Yunyan Zhang and Yu Huang and Yefeng Zheng and Kai Yu},
year = {2023},
}
@article{lee2022ehrsql,
title = {EHRSQL: A Practical Text-to-SQL Benchmark for Electronic Health Records},
author = {Lee, Gyubok and Hwang, Hyeonji and Bae, Seongsu and Kwon, Yeonsu and Shin, Woncheol and Yang, Seongjun and Seo, Minjoon and Kim, Jong-Yeup and Choi, Edward},
journal = {Advances in Neural Information Processing Systems},
volume = {35},
pages = {15589--15601},
year = {2022},
}
@inproceedings{lee-2021-kaggle-dbqa,
title = {KaggleDBQA: Realistic Evaluation of Text-to-SQL Parsers},
author = {Lee, Chia-Hsuan and Polozov, Oleksandr and Richardson, Matthew},
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)},
pages = {2261--2273},
year = {2021},
}
@inproceedings{squall,
title = {On the Potential of Lexico-logical Alignments for Semantic Parsing to {SQL} Queries},
author = {Tianze Shi and Chen Zhao and Jordan Boyd-Graber and Hal {Daum\'{e} III} and Lillian Lee},
booktitle = {Findings of EMNLP},
year = {2020},
}
@article{hazoom2021text,
title = {Text-to-SQL in the wild: a naturally-occurring dataset based on Stack exchange data},
author = {Hazoom, Moshe and Malik, Vibhor and Bogin, Ben},
journal = {arXiv preprint arXiv:2106.05006},
year = {2021},
}
@inproceedings{wang2020text,
title = {Text-to-SQL Generation for Question Answering on Electronic Medical Records},
author = {Wang, Ping and Shi, Tian and Reddy, Chandan K},
booktitle = {Proceedings of The Web Conference 2020},
pages = {350--361},
year = {2020},
}
@inproceedings{nvBench_SIGMOD21,
title = {Synthesizing Natural Language to Visualization (NL2VIS) Benchmarks from NL2SQL Benchmarks},
author = {Yuyu Luo and Nan Tang and Guoliang Li and Chengliang Chai and Wenbo Li and Xuedi Qin},
booktitle = {Proceedings of the 2021 International Conference on Management of Data, {SIGMOD} Conference 2021, June 20–25, 2021, Virtual Event, China},
publisher = {ACM},
year = {2021},
}
``` |
mozilla-foundation/common_voice_6_1 | lastModified: 2023-07-29T16:00:07.000Z | author: mozilla-foundation | likes: 4 | downloads: 710 | tags: task_categories:automatic-speech-recognition, annotations_creators:crowdsourced, language_creators:crowdsourced, multilinguality:multilingual, source_datasets:extended|common_voice, license:cc0-1.0, arxiv:1912.06670, region:us
citation:
@inproceedings{commonvoice:2020,
  author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
  title = {Common Voice: A Massively-Multilingual Speech Corpus},
  booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
  pages = {4211--4215},
  year = 2020
}
---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
license:
- cc0-1.0
multilinguality:
- multilingual
size_categories:
ab:
- n<1K
ar:
- 10K<n<100K
as:
- n<1K
br:
- 10K<n<100K
ca:
- 100K<n<1M
cnh:
- 1K<n<10K
cs:
- 10K<n<100K
cv:
- 10K<n<100K
cy:
- 10K<n<100K
de:
- 100K<n<1M
dv:
- 10K<n<100K
el:
- 10K<n<100K
en:
- 1M<n<10M
eo:
- 10K<n<100K
es:
- 100K<n<1M
et:
- 10K<n<100K
eu:
- 10K<n<100K
fa:
- 100K<n<1M
fi:
- 1K<n<10K
fr:
- 100K<n<1M
fy-NL:
- 10K<n<100K
ga-IE:
- 1K<n<10K
hi:
- n<1K
hsb:
- 1K<n<10K
hu:
- 1K<n<10K
ia:
- 1K<n<10K
id:
- 10K<n<100K
it:
- 100K<n<1M
ja:
- 1K<n<10K
ka:
- 1K<n<10K
kab:
- 100K<n<1M
ky:
- 10K<n<100K
lg:
- 1K<n<10K
lt:
- 1K<n<10K
lv:
- 1K<n<10K
mn:
- 10K<n<100K
mt:
- 10K<n<100K
nl:
- 10K<n<100K
or:
- 1K<n<10K
pa-IN:
- 1K<n<10K
pl:
- 100K<n<1M
pt:
- 10K<n<100K
rm-sursilv:
- 1K<n<10K
rm-vallader:
- 1K<n<10K
ro:
- 1K<n<10K
ru:
- 10K<n<100K
rw:
- 1M<n<10M
sah:
- 1K<n<10K
sl:
- 1K<n<10K
sv-SE:
- 10K<n<100K
ta:
- 10K<n<100K
th:
- 10K<n<100K
tr:
- 10K<n<100K
tt:
- 10K<n<100K
uk:
- 10K<n<100K
vi:
- 1K<n<10K
vot:
- n<1K
zh-CN:
- 10K<n<100K
zh-HK:
- 10K<n<100K
zh-TW:
- 10K<n<100K
source_datasets:
- extended|common_voice
paperswithcode_id: common-voice
pretty_name: Common Voice Corpus 6.1
language_bcp47:
- ab
- ar
- as
- br
- ca
- cnh
- cs
- cv
- cy
- de
- dv
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy-NL
- ga-IE
- hi
- hsb
- hu
- ia
- id
- it
- ja
- ka
- kab
- ky
- lg
- lt
- lv
- mn
- mt
- nl
- or
- pa-IN
- pl
- pt
- rm-sursilv
- rm-vallader
- ro
- ru
- rw
- sah
- sl
- sv-SE
- ta
- th
- tr
- tt
- uk
- vi
- vot
- zh-CN
- zh-HK
- zh-TW
extra_gated_prompt: By clicking on “Access repository” below, you also agree to not
attempt to determine the identity of speakers in the Common Voice dataset.
task_categories:
- automatic-speech-recognition
---
# Dataset Card for Common Voice Corpus 6.1
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://commonvoice.mozilla.org/en/datasets
- **Repository:** https://github.com/common-voice/common-voice
- **Paper:** https://arxiv.org/abs/1912.06670
- **Leaderboard:** https://paperswithcode.com/dataset/common-voice
- **Point of Contact:** [Anton Lozhkov](mailto:anton@huggingface.co)
### Dataset Summary
The Common Voice dataset consists of a unique MP3 and corresponding text file.
Many of the 9283 recorded hours in the dataset also include demographic metadata like age, sex, and accent
that can help improve the accuracy of speech recognition engines.
The dataset currently consists of 7,335 validated hours in 60 languages, but more voices and languages are always being added.
Take a look at the [Languages](https://commonvoice.mozilla.org/en/languages) page to request a language or start contributing.
### Supported Tasks and Leaderboards
The results for models trained on the Common Voice datasets are available via the
[🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench)
### Languages
```
Abkhaz, Arabic, Assamese, Basque, Breton, Catalan, Chinese (China), Chinese (Hong Kong), Chinese (Taiwan), Chuvash, Czech, Dhivehi, Dutch, English, Esperanto, Estonian, Finnish, French, Frisian, Georgian, German, Greek, Hakha Chin, Hindi, Hungarian, Indonesian, Interlingua, Irish, Italian, Japanese, Kabyle, Kinyarwanda, Kyrgyz, Latvian, Lithuanian, Luganda, Maltese, Mongolian, Odia, Persian, Polish, Portuguese, Punjabi, Romanian, Romansh Sursilvan, Romansh Vallader, Russian, Sakha, Slovenian, Sorbian (Upper), Spanish, Swedish, Tamil, Tatar, Thai, Turkish, Ukrainian, Vietnamese, Votic, Welsh
```
## Dataset Structure
### Data Instances
A typical data point comprises the `path` to the audio file and its `sentence`.
Additional fields include `accent`, `age`, `client_id`, `up_votes`, `down_votes`, `gender`, `locale` and `segment`.
```python
{
'client_id': 'd59478fbc1ee646a28a3c652a119379939123784d99131b865a89f8b21c81f69276c48bd574b81267d9d1a77b83b43e6d475a6cfc79c232ddbca946ae9c7afc5',
'path': 'et/clips/common_voice_et_18318995.mp3',
'audio': {
'path': 'et/clips/common_voice_et_18318995.mp3',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 48000
},
'sentence': 'Tasub kokku saada inimestega, keda tunned juba ammust ajast saati.',
'up_votes': 2,
'down_votes': 0,
'age': 'twenties',
'gender': 'male',
'accent': '',
'locale': 'et',
'segment': ''
}
```
### Data Fields
`client_id` (`string`): An id for which client (voice) made the recording
`path` (`string`): The path to the audio file
`audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
`sentence` (`string`): The sentence the user was prompted to speak
`up_votes` (`int64`): How many upvotes the audio file has received from reviewers
`down_votes` (`int64`): How many downvotes the audio file has received from reviewers
`age` (`string`): The age of the speaker (e.g. `teens`, `twenties`, `fifties`)
`gender` (`string`): The gender of the speaker
`accent` (`string`): Accent of the speaker
`locale` (`string`): The locale of the speaker
`segment` (`string`): Usually an empty field
### Data Splits
The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.
The validated data has been reviewed and has received upvotes indicating that it is of high quality.
The invalidated data has been reviewed and has received downvotes indicating that it is of low quality.
The reported data has been reported by users, for various reasons.
The other data has not yet been reviewed.
The dev, test and train portions all consist of data that has been reviewed, deemed of high quality, and divided into those three splits.
## Data Preprocessing Recommended by Hugging Face
The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them into practice.
Many examples in this dataset have trailing quotation marks, e.g. _“the cat sat on the mat.“_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer from the audio data alone whether a sentence is a quotation. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.
In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, **almost all** sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.
```python
from datasets import load_dataset
ds = load_dataset("mozilla-foundation/common_voice_6_1", "en", use_auth_token=True)
def prepare_dataset(batch):
"""Function to preprocess the dataset with the .map method"""
transcription = batch["sentence"]
if transcription.startswith('"') and transcription.endswith('"'):
# we can remove trailing quotation marks as they do not affect the transcription
transcription = transcription[1:-1]
if transcription[-1] not in [".", "?", "!"]:
# append a full-stop to sentences that do not end in punctuation
transcription = transcription + "."
batch["sentence"] = transcription
return batch
ds = ds.map(prepare_dataset, desc="preprocess dataset")
```
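The normalization rules above can be sanity-checked locally on plain strings, without downloading the corpus. This is an illustrative sketch; `normalize_sentence` is a hypothetical helper that mirrors the logic of `prepare_dataset`:

```python
def normalize_sentence(sentence: str) -> str:
    """Apply the card's recommended text normalization to one transcription."""
    # strip paired quotation marks that wrap the whole sentence
    if sentence.startswith('"') and sentence.endswith('"'):
        sentence = sentence[1:-1]
    # append a full stop when the sentence lacks final punctuation
    if sentence and sentence[-1] not in [".", "?", "!"]:
        sentence = sentence + "."
    return sentence

print(normalize_sentence('"the cat sat on the mat"'))  # the cat sat on the mat.
print(normalize_sentence("Is it raining?"))            # Is it raining?
```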
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)
### Citation Information
```
@inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
}
```
|
nahyeon00/SQUAD | 2023-07-19T08:51:16.000Z | [
"region:us"
] | nahyeon00 | null | null | null | 0 | 710 | Entry not found |
Jean-Baptiste/wikiner_fr | 2023-06-26T15:33:17.000Z | [
"task_categories:token-classification",
"language:fr",
"region:us"
] | Jean-Baptiste | null | null | null | 3 | 709 | ---
language:
- fr
dataset_info:
features:
- name: id
dtype: int64
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': LOC
'2': PER
'3': MISC
'4': ORG
splits:
- name: test
num_bytes: 5954708
num_examples: 13410
- name: train
num_bytes: 54305659
num_examples: 120682
download_size: 12147768
dataset_size: 60260367
train-eval-index:
- config: Jean-Baptiste--wikiner_fr
task: token-classification
task_id: entity_extraction
splits:
eval_split: test
col_mapping:
tokens: tokens
ner_tags: tags
task_categories:
- token-classification
---
# Dataset Card for "wikiner_fr"
Dataset Description:
- **Homepage:** https://metatext.io/datasets/wikiner
- **Repository:**
- **Paper:** https://www.sciencedirect.com/science/article/pii/S0004370212000276?via%3Dihub
- **Leaderboard:**
- **Point of Contact:** |
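The `ner_tags` class labels declared in the `dataset_info` metadata above map integer ids to entity types. A small decoding helper (an illustrative sketch, not part of the dataset loader) makes model outputs readable without loading the dataset itself:

```python
# Label ids copied from the dataset_info metadata above.
NER_LABELS = {0: "O", 1: "LOC", 2: "PER", 3: "MISC", 4: "ORG"}

def decode_tags(tag_ids):
    """Map a sequence of integer ner_tags to their string labels."""
    return [NER_LABELS[i] for i in tag_ids]

print(decode_tags([0, 2, 0, 1]))  # ['O', 'PER', 'O', 'LOC']
```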
result-kand2-sdxl-wuerst-karlo/b73eb60b | 2023-09-17T14:04:09.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 709 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 206
num_examples: 10
download_size: 1380
dataset_size: 206
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "b73eb60b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
wiki40b | 2023-04-05T13:43:07.000Z | [
"language:en",
"region:us"
] | null | Cleaned-up text for 40+ Wikipedia language editions of pages that
correspond to entities. The dataset has train/dev/test splits per language.
The dataset is cleaned up by page filtering to remove disambiguation pages,
redirect pages, deleted pages, and non-entity pages. Each example contains the
Wikidata id of the entity, and the full Wikipedia article after page processing
that removes non-content sections and structured objects. | null | 8 | 708 | ---
language:
- en
paperswithcode_id: wiki-40b
pretty_name: Wiki-40B
dataset_info:
features:
- name: wikidata_id
dtype: string
- name: text
dtype: string
- name: version_id
dtype: string
config_name: en
splits:
- name: train
num_bytes: 9423623904
num_examples: 2926536
- name: validation
num_bytes: 527383016
num_examples: 163597
- name: test
num_bytes: 522219464
num_examples: 162274
download_size: 0
dataset_size: 10473226384
---
# Dataset Card for "wiki40b"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://research.google/pubs/pub49029/](https://research.google/pubs/pub49029/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 10.47 GB
- **Total amount of disk used:** 10.47 GB
### Dataset Summary
Cleaned-up text for 40+ Wikipedia language editions of pages that
correspond to entities. The dataset has train/dev/test splits per language.
The dataset is cleaned up by page filtering to remove disambiguation pages,
redirect pages, deleted pages, and non-entity pages. Each example contains the
Wikidata id of the entity, and the full Wikipedia article after page processing
that removes non-content sections and structured objects.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### en
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 10.47 GB
- **Total amount of disk used:** 10.47 GB
An example of 'train' looks as follows.
```
```
### Data Fields
The data fields are the same among all splits.
#### en
- `wikidata_id`: a `string` feature.
- `text`: a `string` feature.
- `version_id`: a `string` feature.
### Data Splits
|name| train |validation| test |
|----|------:|---------:|-----:|
|en |2926536| 163597|162274|
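The example counts in the table above imply roughly a 90/5/5 division between train, validation, and test, which can be verified with a quick calculation (counts copied from the table; an illustrative check, not part of the dataset tooling):

```python
# English split sizes from the table above.
splits = {"train": 2926536, "validation": 163597, "test": 162274}
total = sum(splits.values())
proportions = {name: n / total for name, n in splits.items()}
for name, frac in proportions.items():
    print(f"{name}: {frac:.1%}")
# → train: 90.0%, validation: 5.0%, test: 5.0%
```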
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
```
### Contributions
Thanks to [@jplu](https://github.com/jplu), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf), [@albertvillanova](https://github.com/albertvillanova), [@lhoestq](https://github.com/lhoestq) for adding this dataset. | |
Yijia-Xiao/pii-medical_flashcards | 2023-09-12T22:24:20.000Z | [
"region:us"
] | Yijia-Xiao | null | null | null | 1 | 706 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
- name: cleaned_output
dtype: string
splits:
- name: train
num_bytes: 28620193
num_examples: 33955
download_size: 12411702
dataset_size: 28620193
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "pii-medical_flashcards"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
result-kand2-sdxl-wuerst-karlo/00dbfb2c | 2023-09-17T14:41:51.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 706 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 240
num_examples: 10
download_size: 1450
dataset_size: 240
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "00dbfb2c"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
wiki_lingua | 2023-06-16T14:39:41.000Z | [
"task_categories:summarization",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:ar",
"language:cs",
"language:de",
"language:en",
"language:es",
"language:fr",
"language:hi",
"language:id",
"language:it",
"language:ja",
"language:ko",
"language:nl",
"language:pt",
"language:ru",
"language:th",
"language:tr",
"language:vi",
"language:zh",
"license:cc-by-3.0",
"arxiv:2010.03093",
"region:us"
] | null | WikiLingua is a large-scale multilingual dataset for the evaluation of
cross-lingual abstractive summarization systems. The dataset includes ~770k
article and summary pairs in 18 languages from WikiHow. The gold-standard
article-summary alignments across languages were created by aligning the images
that are used to describe each how-to step in an article. | @inproceedings{ladhak-etal-2020-wikilingua,
title = "{W}iki{L}ingua: A New Benchmark Dataset for Cross-Lingual Abstractive Summarization",
author = "Ladhak, Faisal and
Durmus, Esin and
Cardie, Claire and
McKeown, Kathleen",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.findings-emnlp.360",
doi = "10.18653/v1/2020.findings-emnlp.360",
pages = "4034--4048",
} | null | 23 | 704 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- ar
- cs
- de
- en
- es
- fr
- hi
- id
- it
- ja
- ko
- nl
- pt
- ru
- th
- tr
- vi
- zh
license:
- cc-by-3.0
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
- 1K<n<10K
source_datasets:
- original
task_categories:
- summarization
task_ids: []
paperswithcode_id: wikilingua
pretty_name: WikiLingua
dataset_info:
- config_name: arabic
features:
- name: url
dtype: string
- name: article
sequence:
- name: section_name
dtype: string
- name: document
dtype: string
- name: summary
dtype: string
- name: english_url
dtype: string
- name: english_section_name
dtype: string
splits:
- name: train
num_bytes: 119116119
num_examples: 9995
download_size: 119358890
dataset_size: 119116119
- config_name: chinese
features:
- name: url
dtype: string
- name: article
sequence:
- name: section_name
dtype: string
- name: document
dtype: string
- name: summary
dtype: string
- name: english_url
dtype: string
- name: english_section_name
dtype: string
splits:
- name: train
num_bytes: 41170689
num_examples: 6541
download_size: 41345464
dataset_size: 41170689
- config_name: czech
features:
- name: url
dtype: string
- name: article
sequence:
- name: section_name
dtype: string
- name: document
dtype: string
- name: summary
dtype: string
- name: english_url
dtype: string
- name: english_section_name
dtype: string
splits:
- name: train
num_bytes: 20816390
num_examples: 2520
download_size: 20894511
dataset_size: 20816390
- config_name: dutch
features:
- name: url
dtype: string
- name: article
sequence:
- name: section_name
dtype: string
- name: document
dtype: string
- name: summary
dtype: string
- name: english_url
dtype: string
- name: english_section_name
dtype: string
splits:
- name: train
num_bytes: 87258040
num_examples: 10862
download_size: 87533442
dataset_size: 87258040
- config_name: english
features:
- name: url
dtype: string
- name: article
sequence:
- name: section_name
dtype: string
- name: document
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 333700114
num_examples: 57945
download_size: 338036185
dataset_size: 333700114
- config_name: french
features:
- name: url
dtype: string
- name: article
sequence:
- name: section_name
dtype: string
- name: document
dtype: string
- name: summary
dtype: string
- name: english_url
dtype: string
- name: english_section_name
dtype: string
splits:
- name: train
num_bytes: 197550376
num_examples: 21690
download_size: 198114157
dataset_size: 197550376
- config_name: german
features:
- name: url
dtype: string
- name: article
sequence:
- name: section_name
dtype: string
- name: document
dtype: string
- name: summary
dtype: string
- name: english_url
dtype: string
- name: english_section_name
dtype: string
splits:
- name: train
num_bytes: 168674340
num_examples: 20103
download_size: 169195050
dataset_size: 168674340
- config_name: hindi
features:
- name: url
dtype: string
- name: article
sequence:
- name: section_name
dtype: string
- name: document
dtype: string
- name: summary
dtype: string
- name: english_url
dtype: string
- name: english_section_name
dtype: string
splits:
- name: train
num_bytes: 63785051
num_examples: 3402
download_size: 63874759
dataset_size: 63785051
- config_name: indonesian
features:
- name: url
dtype: string
- name: article
sequence:
- name: section_name
dtype: string
- name: document
dtype: string
- name: summary
dtype: string
- name: english_url
dtype: string
- name: english_section_name
dtype: string
splits:
- name: train
num_bytes: 136408861
num_examples: 16308
download_size: 136833587
dataset_size: 136408861
- config_name: italian
features:
- name: url
dtype: string
- name: article
sequence:
- name: section_name
dtype: string
- name: document
dtype: string
- name: summary
dtype: string
- name: english_url
dtype: string
- name: english_section_name
dtype: string
splits:
- name: train
num_bytes: 138119527
num_examples: 17673
download_size: 138578956
dataset_size: 138119527
- config_name: japanese
features:
- name: url
dtype: string
- name: article
sequence:
- name: section_name
dtype: string
- name: document
dtype: string
- name: summary
dtype: string
- name: english_url
dtype: string
- name: english_section_name
dtype: string
splits:
- name: train
num_bytes: 40145031
num_examples: 4372
download_size: 40259570
dataset_size: 40145031
- config_name: korean
features:
- name: url
dtype: string
- name: article
sequence:
- name: section_name
dtype: string
- name: document
dtype: string
- name: summary
dtype: string
- name: english_url
dtype: string
- name: english_section_name
dtype: string
splits:
- name: train
num_bytes: 38647614
num_examples: 4111
download_size: 38748961
dataset_size: 38647614
- config_name: portuguese
features:
- name: url
dtype: string
- name: article
sequence:
- name: section_name
dtype: string
- name: document
dtype: string
- name: summary
dtype: string
- name: english_url
dtype: string
- name: english_section_name
dtype: string
splits:
- name: train
num_bytes: 204270845
num_examples: 28143
download_size: 204997686
dataset_size: 204270845
- config_name: russian
features:
- name: url
dtype: string
- name: article
sequence:
- name: section_name
dtype: string
- name: document
dtype: string
- name: summary
dtype: string
- name: english_url
dtype: string
- name: english_section_name
dtype: string
splits:
- name: train
num_bytes: 241924032
num_examples: 18143
download_size: 242377242
dataset_size: 241924032
- config_name: spanish
features:
- name: url
dtype: string
- name: article
sequence:
- name: section_name
dtype: string
- name: document
dtype: string
- name: summary
dtype: string
- name: english_url
dtype: string
- name: english_section_name
dtype: string
splits:
- name: train
num_bytes: 314618618
num_examples: 38795
download_size: 315609530
dataset_size: 314618618
- config_name: thai
features:
- name: url
dtype: string
- name: article
sequence:
- name: section_name
dtype: string
- name: document
dtype: string
- name: summary
dtype: string
- name: english_url
dtype: string
- name: english_section_name
dtype: string
splits:
- name: train
num_bytes: 86982851
num_examples: 5093
download_size: 87104200
dataset_size: 86982851
- config_name: turkish
features:
- name: url
dtype: string
- name: article
sequence:
- name: section_name
dtype: string
- name: document
dtype: string
- name: summary
dtype: string
- name: english_url
dtype: string
- name: english_section_name
dtype: string
splits:
- name: train
num_bytes: 11371821
num_examples: 1512
download_size: 11405793
dataset_size: 11371821
- config_name: vietnamese
features:
- name: url
dtype: string
- name: article
sequence:
- name: section_name
dtype: string
- name: document
dtype: string
- name: summary
dtype: string
- name: english_url
dtype: string
- name: english_section_name
dtype: string
splits:
- name: train
num_bytes: 69868788
num_examples: 6616
download_size: 70024093
dataset_size: 69868788
config_names:
- arabic
- chinese
- czech
- dutch
- english
- french
- german
- hindi
- indonesian
- italian
- japanese
- korean
- portuguese
- russian
- spanish
- thai
- turkish
- vietnamese
---
# Dataset Card for "wiki_lingua"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [URL](https://github.com/esdurmus/Wikilingua)
- **Paper:** [WikiLingua: A Multilingual Abstractive Summarization Dataset](https://arxiv.org/abs/2010.03093)
### Dataset Summary
We introduce WikiLingua, a large-scale, multilingual dataset for the evaluation of cross-lingual abstractive summarization systems. We extract article and summary pairs in 18 languages from WikiHow, a high quality, collaborative resource of how-to guides on a diverse set of topics written by human authors. We create gold-standard article-summary alignments across languages by aligning the images that are used to describe each how-to step in an article.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The table below shows number of article-summary pairs with a parallel article-summary pair in English.
______________________________
| Language | Num. parallel |
| ----------- | --------------|
| English | 141,457 |
| Spanish | 113,215 |
| Portuguese | 81,695 |
| French | 63,692 |
| German | 58,375 |
| Russian | 52,928 |
| Italian | 50,968 |
| Indonesian | 47,511 |
| Dutch | 31,270 |
| Arabic | 29,229 |
| Vietnamese | 19,600 |
| Chinese | 18,887 |
| Thai | 14,770 |
| Japanese | 12,669 |
| Korean | 12,189 |
| Hindi | 9,929 |
| Czech | 7,200 |
| Turkish | 4,503 |
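The per-language counts in the table above can be totalled to confirm the ~770k article/summary pairs quoted in the dataset summary (counts copied from the table; an illustrative check):

```python
# Number of article-summary pairs parallel with English, per language,
# copied from the table above.
parallel_pairs = {
    "English": 141457, "Spanish": 113215, "Portuguese": 81695, "French": 63692,
    "German": 58375, "Russian": 52928, "Italian": 50968, "Indonesian": 47511,
    "Dutch": 31270, "Arabic": 29229, "Vietnamese": 19600, "Chinese": 18887,
    "Thai": 14770, "Japanese": 12669, "Korean": 12189, "Hindi": 9929,
    "Czech": 7200, "Turkish": 4503,
}
print(sum(parallel_pairs.values()))  # 770087 — i.e. roughly 770k pairs
```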
## Dataset Structure
### Data Instances
```
{
'article': {
'document': ['make sure that the area is a safe place, especially if you plan on walking home at night. It’s always a good idea to practice the buddy system. Have a friend meet up and walk with you. Research the bus, train, or streetcar routes available in your area to find safe and affordable travel to your destination. Make sure you check the schedule for your outgoing and return travel. Some public transportation will cease to run late at night. Be sure if you take public transportation to the venue that you will also be able to get home late at night. Check the routes. Even if some public transit is still running late at night, the routing may change. Some may run express past many of the stops, or not travel all the way to the ends. Be sure that your stop will still be available when you need it for your return trip. If you are taking public transit in a vulnerable state after drinking, it is always a good idea to travel in groups. Having friends available is a good way to stay safe and make sure that you reach your destination. This is more expensive option than a taxi or ride share service, but could be a fun and fancy way to stay safe and ensure that you will have a ride home. Plan this service in advance with a scheduled time to pick you up from your home and the venue. You want to be sure that the service will still be available when you need to get home. This may be easy in a large city, but taxis may be less frequent in smaller towns. This is especially true late at night, so this is a less reliable option than scheduling a ride in advance. Have a friend accompany you and help you flag a cab to make sure you are able to get one. Set up a plan to call a friend when you get home to make sure that you made it safely to your destination. If there are no taxis readily available call a local service to send a car to pick you up. You can share a ride with your friends, or other people using the app at the same moment. 
If you are in a vulnerable state it is best to share the ride with your friends to make sure you get home safe. You can request the car to yourself rather than sharing rides with strangers. If you travel home on your own or are the last of your group to be dropped off, make plans to call a friend when you get home so they know you made it safely to your destination. There may be a designated driver service in your area which can chauffeur your group. Make reservations with them in advance and keep their contact information handy while you are drinking.',
"Designating a driver is a very popular tactic to avoid drinking and driving. It is important to plan in advance, because your brain function will slow down and your decision making skills will be impaired once you start drinking. Decide before you begin drinking that you will not drive. Figure out who will be getting you home before you leave. Make sure this person is responsible and keep them in your sight while you are drinking. Have their contact information handy in case you can’t find them when you are ready to leave. Choose a friend who doesn’t drink alcohol. You likely have someone in your friend group who doesn’t drink. This person is the most likely to remain sober. Decide on one person who will remain sober. You can take turns within your friend group, alternating who will be the designated driver on each occasion. Be sure that the designated driver actually remains sober. The person who has drank the least is still not sober. If you don’t have your car with you, you can guarantee that you won’t make the choice to drive it home. If you are drinking at your home. Give your keys to a responsible friend to ensure that you don't choose to drive somewhere after you have been drinking. It may be tempting to stay longer or leave with someone else. Stick to the plan you made in advance and only leave with your sober, designated driver. Keep the phone number of your driver handy in case you can't find them when you are ready to leave. If your designated driver drinks alcohol, find alternate transportation to get home.",
'If you have been drinking at all you are at least on the spectrum of drunkenness. You could be showing signs of impairment and slower brain function including lack of motor skills and slower reaction time, leading to the inability to operate a motor vehicle. Some of these signs could be: Poor balance or stumbling. Difficulty speaking clearly and slurred words. Abnormal behavior leading to you doing things you wouldn’t normally do if you were sober. As soon as you notice that you are showing signs of impairment, give your keys to a friend, the host or the bartender to ensure that you won’t drive until you are sober. Make sure to only give them your car key. Hold onto your house keys. If your friend, the host or the bartender are advising you not to drive, you are likely too drunk. Listen to their advice and acknowledge that they are trying to help you. Bystander intervention is common when it comes to drinking and driving. Many people will be willing to step in, take your keys and help you get home safely. If no one if offering to help, you may need to ask. Take a ride from a sober friend. It is best to get in a car with someone you trust when you are in this vulnerable state. Allow the host or bartender to call a cab or car service to take you home. If you are having a difficult time finding a safe way to get home, find a place to stay which does not involve you driving. Ask the host of the party if there is a place you can sleep. Give them your keys and ask that they keep them in a safe place until the morning. Stay with a friend if they live nearby and are on their way home. Find a hotel within walking distance. Call them to book a room, or have a friend help you secure one. Ask the friend if they will walk you to the hotel and make sure you get checked in safely. There are people in your life who care about you and want to be sure that you are safe. 
It may seem scary or embarrassing to call your parents or your siblings if you are too drunk to drive, but they will be glad you did. Your safety is the most important. You may need your phone to call someone for a ride or get help from a friend. Be sure to charge your phone before you leave the house. It is also a good idea to bring a charger with you in case your battery dies before the end of the night or you end up staying where you are and need to get home the next morning. You may also want to invest in a portable battery charger for your phone should there not be a power outlet available. Make sure it is fully charged before you leave your house. Keep it handy in your pocket or your bag throughout the night.'
],
'section_name': ['Finding Other Transportation',
'Designating a Driver',
'Staying Safe'
],
'summary': ['Walk to the venue where you will be drinking if it is close enough. Take public transit. Show up in style by hiring a limo or black car service. Flag a taxi cab for a convenient option to get where you’re going. Request a rideshare service like Uber or Lyft using an app on your phone. Reserve a designated driver service.',
'Plan in advance. Assign a designated driver. Leave your car at home. Leave the venue with your designated driver.',
'Pay attention to your body. Give up your keys. Listen to other people. Accept help. Stay where you are. Have an emergency back-up plan. Make sure that your phone is charged.'
]
},
'url': 'https://www.wikihow.com/Avoid-Drinking-and-Driving'
}
```
### Data Fields
- `url`: WikiHow URL of the article
- `article`: A dictionary containing `section_name`, `document` and `summary`
- `section_name`: List of section headings in an article
- `document`: List of documents, one for each section in the `section_name` list
- `summary`: List of summaries, one for each document in the `document` list
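Since `section_name`, `document` and `summary` are parallel lists, an article can be flattened into one record per section. A minimal sketch (the texts are shortened placeholders, not real dataset content):

```python
# Illustrative article dict mirroring the instance structure above;
# the strings are abbreviated stand-ins for the full texts.
article = {
    "section_name": ["Finding Other Transportation",
                     "Designating a Driver",
                     "Staying Safe"],
    "document": ["Research the bus, train, or streetcar routes...",
                 "Decide before you begin drinking that you will not drive...",
                 "Pay attention to your body..."],
    "summary": ["Walk to the venue... Take public transit...",
                "Plan in advance. Assign a designated driver...",
                "Give up your keys. Accept help..."],
}

# Zip the three parallel lists into one record per section.
sections = [
    {"section": name, "document": doc, "summary": summ}
    for name, doc, summ in zip(
        article["section_name"], article["document"], article["summary"]
    )
]
```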
### Data Splits
| | train |
|:-----------|--------:|
| arabic | 9995 |
| chinese | 6541 |
| czech | 2520 |
| dutch | 10862 |
| english | 57945 |
| french | 21690 |
| german | 20103 |
| hindi | 3402 |
| indonesian | 16308 |
| italian | 17673 |
| japanese | 4372 |
| korean | 4111 |
| portuguese | 28143 |
| russian | 18143 |
| spanish | 6616 |
| thai | 5093 |
| turkish | 1512 |
| vietnamese | 6616 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
- Article provided by wikiHow https://www.wikihow.com/Main-Page, a wiki building the world's largest, highest quality how-to manual. Please edit this article and find author credits at wikiHow.com. Content on wikiHow can be shared under a [Creative Commons license](http://creativecommons.org/licenses/by-nc-sa/3.0/).
- Refer to [this webpage](https://www.wikihow.com/wikiHow:Attribution) for the specific attribution guidelines.
- Also see https://gem-benchmark.com/data_cards/WikiLingua
### Citation Information
```bibtex
@inproceedings{ladhak-etal-2020-wikilingua,
title = "{W}iki{L}ingua: A New Benchmark Dataset for Cross-Lingual Abstractive Summarization",
author = "Ladhak, Faisal and
Durmus, Esin and
Cardie, Claire and
McKeown, Kathleen",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.findings-emnlp.360",
doi = "10.18653/v1/2020.findings-emnlp.360",
pages = "4034--4048",
}
```
### Contributions
Thanks to [@katnoria](https://github.com/katnoria) for adding this dataset. |
nomic-ai/gpt4all-j-prompt-generations | 2023-04-24T15:20:43.000Z | [
"size_categories:100K<n<1M",
"language:en",
"license:apache-2.0",
"region:us"
] | nomic-ai | null | null | null | 159 | 699 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 1774285641
num_examples: 808812
download_size: 990673616
dataset_size: 1774285641
license: apache-2.0
language:
- en
size_categories:
- 100K<n<1M
---
# Dataset Card for GPT4All-J Prompt Generations
## Dataset Description
Dataset used to train [GPT4All-J](https://huggingface.co/nomic-ai/gpt4all-j) and [GPT4All-J-LoRA](https://huggingface.co/nomic-ai/gpt4all-j-lora).
We release several versions of the dataset:
- **v1.0:** The original dataset we used to finetune GPT-J on
- **v1.1-breezy**: A filtered dataset where we removed all instances of `AI language model`
- **v1.2-jazzy**: A filtered dataset where we also removed instances like `I'm sorry, I can't answer...` and `AI language model`
- **v1.3-groovy**: The v1.2 dataset with ShareGPT and Dolly added with ~8% of semantic duplicates removed from the dataset using [Atlas](https://atlas.nomic.ai/)
The dataset defaults to `main` which is `v1.0`. To download a specific version, you can pass an argument to the keyword `revision` in `load_dataset`:
```python
from datasets import load_dataset
jazzy = load_dataset("nomic-ai/gpt4all-j-prompt-generations", revision='v1.2-jazzy')
```
- **Homepage:** [gpt4all.io](https://gpt4all.io/)
- **Repository:** [gpt4all](https://github.com/nomic-ai/gpt4all)
- **Paper:** [Technical Report](https://static.nomic.ai/gpt4all/2023_GPT4All-J_Technical_Report_2.pdf)
- **Atlas Map:** [Map of Prompts](https://atlas.nomic.ai/map/gpt4all-j-prompts-curated) and [Responses](https://atlas.nomic.ai/map/gpt4all-j-response-curated) |
result-kand2-sdxl-wuerst-karlo/8edb1fe9 | 2023-09-18T04:48:26.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 697 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 258
num_examples: 10
download_size: 1429
dataset_size: 258
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "8edb1fe9"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tilyupo/trivia_qa | 2023-08-03T17:00:54.000Z | [
"region:us"
] | tilyupo | null | null | null | 0 | 696 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: question
dtype: string
- name: question_id
dtype: string
- name: question_source
dtype: string
- name: answer
struct:
- name: aliases
sequence: string
- name: normalized_aliases
sequence: string
- name: matched_wiki_entity_name
dtype: string
- name: normalized_matched_wiki_entity_name
dtype: string
- name: normalized_value
dtype: string
- name: type
dtype: string
- name: value
dtype: string
- name: passages
list:
- name: answer
dtype: string
- name: passage
dtype: string
- name: precise_score
dtype: float64
- name: rough_score
dtype: float64
- name: source
dtype: string
- name: title
dtype: string
splits:
- name: train
num_bytes: 3065861634
num_examples: 137282
- name: validation
num_bytes: 402091161
num_examples: 17817
download_size: 1805238996
dataset_size: 3467952795
---
# Dataset Card for "trivia_qa_passages"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
e2e_nlg_cleaned | 2022-11-18T19:59:46.000Z | [
"task_categories:text2text-generation",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-sa-4.0",
"meaning-representation-to-text",
"arxiv:1706.09254",
"arxiv:1901.11528",
"region:us"
] | null | An update release of E2E NLG Challenge data with cleaned MRs and scripts, accompanying the following paper:
Ondřej Dušek, David M. Howcroft, and Verena Rieser (2019): Semantic Noise Matters for Neural Natural Language Generation. In INLG, Tokyo, Japan. | @inproceedings{dusek-etal-2019-semantic,
title = "Semantic Noise Matters for Neural Natural Language Generation",
author = "Du{\v{s}}ek, Ond{\v{r}}ej and
Howcroft, David M. and
Rieser, Verena",
booktitle = "Proceedings of the 12th International Conference on Natural Language Generation",
month = oct # "{--}" # nov,
year = "2019",
address = "Tokyo, Japan",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/W19-8652",
doi = "10.18653/v1/W19-8652",
pages = "421--426"
} | null | 3 | 695 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
paperswithcode_id: null
pretty_name: the Cleaned Version of the E2E Dataset
tags:
- meaning-representation-to-text
dataset_info:
features:
- name: meaning_representation
dtype: string
- name: human_reference
dtype: string
splits:
- name: train
num_bytes: 7474936
num_examples: 33525
- name: validation
num_bytes: 1056527
num_examples: 4299
- name: test
num_bytes: 1262597
num_examples: 4693
download_size: 14597407
dataset_size: 9794060
---
# Dataset Card for the Cleaned Version of the E2E Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [homepage](http://www.macs.hw.ac.uk/InteractionLab/E2E/)
- **Repository:** [repository](https://github.com/tuetschek/e2e-dataset/)
- **Paper:** [paper](https://arxiv.org/abs/1706.09254)
- **Leaderboard:** [leaderboard](http://www.macs.hw.ac.uk/InteractionLab/E2E/)
### Dataset Summary
An update release of E2E NLG Challenge data with cleaned MRs and scripts, accompanying the following paper: Ondřej Dušek, David M. Howcroft, and Verena Rieser (2019): Semantic Noise Matters for Neural Natural Language Generation. In INLG, Tokyo, Japan.
The E2E dataset is used for training end-to-end, data-driven natural language generation systems in the restaurant domain, which is ten times bigger than existing, frequently used datasets in this area.
The E2E dataset poses new challenges:
(1) its human reference texts show more lexical richness and syntactic variation, including discourse phenomena;
(2) generating from this set requires content selection. As such, learning from this dataset promises more natural, varied and less template-like system utterances.
E2E is released in the following paper where you can find more details and baseline results:
https://arxiv.org/abs/1706.09254
### Supported Tasks and Leaderboards
- `text2text-generation-other-meaning-representation-to-text`: The dataset can be used to train a model to generate descriptions in the restaurant domain from meaning representations: the model takes as input structured data about a restaurant and generates a natural-language sentence presenting the different aspects of that data. Success on this task is typically measured by achieving a *high* [BLEU](https://huggingface.co/metrics/bleu), [NIST](https://huggingface.co/metrics/nist), [METEOR](https://huggingface.co/metrics/meteor), [Rouge-L](https://huggingface.co/metrics/rouge) or [CIDEr](https://huggingface.co/metrics/cider) score.
This task has an inactive leaderboard which can be found [here](http://www.macs.hw.ac.uk/InteractionLab/E2E/) and ranks models based on the metrics above.
### Languages
The dataset is in English (`en`).
## Dataset Structure
### Data Instances
Example of one instance:
```
{'human_reference': 'The Vaults pub near Café Adriatic has a 5 star rating. Prices start at £30.',
'meaning_representation': 'name[The Vaults], eatType[pub], priceRange[more than £30], customer rating[5 out of 5], near[Café Adriatic]'}
```
### Data Fields
- `human_reference`: string, a natural-language text describing the different characteristics given in the meaning representation
- `meaning_representation`: string, a comma-separated list of slots and their values to generate a description from
Each MR consists of 3–8 attributes (slots), such as name, food or area, and their values.
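The slot-value syntax shown in the example instance is regular enough to split with a small helper; a sketch (the `parse_mr` helper is illustrative, not part of the dataset):

```python
import re

def parse_mr(mr: str) -> dict:
    """Parse an E2E meaning representation such as
    'name[The Vaults], eatType[pub]' into a slot -> value dict."""
    return {
        slot.strip(): value
        for slot, value in re.findall(r"([^,\[\]]+)\[([^\]]*)\]", mr)
    }

slots = parse_mr(
    "name[The Vaults], eatType[pub], priceRange[more than £30], "
    "customer rating[5 out of 5], near[Café Adriatic]"
)
```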
### Data Splits
The dataset is split into training, validation and testing sets (in a 76.5-8.5-15 ratio), keeping a similar distribution of MR and reference text lengths and ensuring that MRs in different sets are distinct.
| | train | validation | test |
|--------------|------:|-----------:|-----:|
| N. Instances | 33525 | 4299 | 4693 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
The data was collected using the CrowdFlower platform and quality-controlled following Novikova et al. (2016).
#### Who are the source language producers?
[More Information Needed]
### Annotations
Following Novikova et al. (2016), the E2E data was collected using pictures as stimuli, which was shown to elicit significantly more natural, more informative, and better phrased human references than textual MRs.
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@article{dusek.etal2020:csl,
title = {Evaluating the {{State}}-of-the-{{Art}} of {{End}}-to-{{End Natural Language Generation}}: {{The E2E NLG Challenge}}},
author = {Du{\v{s}}ek, Ond\v{r}ej and Novikova, Jekaterina and Rieser, Verena},
year = {2020},
month = jan,
volume = {59},
pages = {123--156},
doi = {10.1016/j.csl.2019.06.009},
archivePrefix = {arXiv},
eprint = {1901.11528},
eprinttype = {arxiv},
journal = {Computer Speech \& Language}
}
```
### Contributions
Thanks to [@yjernite](https://github.com/yjernite) for adding this dataset. |
Multimodal-Fatima/SNLI-VE_test | 2023-02-07T22:33:34.000Z | [
"region:us"
] | Multimodal-Fatima | null | null | null | 0 | 694 | ---
dataset_info:
features:
- name: image
dtype: image
- name: filename
dtype: string
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
- name: id
dtype: int64
- name: id_image
dtype: int64
- name: clip_tags_ViT_L_14
sequence: string
- name: blip_caption
dtype: string
- name: LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14
sequence: string
splits:
- name: test
num_bytes: 2483209080.488
num_examples: 17901
download_size: 911606574
dataset_size: 2483209080.488
---
# Dataset Card for "SNLI-VE_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ChristophSchuhmann/improved_aesthetics_6.5plus | 2022-08-10T11:34:17.000Z | [
"region:us"
] | ChristophSchuhmann | null | null | null | 32 | 688 | Entry not found |
mteb/imdb | 2022-09-27T19:14:44.000Z | [
"language:en",
"region:us"
] | mteb | null | null | null | 1 | 687 | ---
language:
- en
--- |
BeIR/trec-covid | 2022-10-23T06:00:45.000Z | [
"task_categories:text-retrieval",
"task_ids:entity-linking-retrieval",
"task_ids:fact-checking-retrieval",
"multilinguality:monolingual",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | BeIR | null | null | null | 0 | 684 | ---
annotations_creators: []
language_creators: []
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
paperswithcode_id: beir
pretty_name: BEIR Benchmark
size_categories:
msmarco:
- 1M<n<10M
trec-covid:
- 100k<n<1M
nfcorpus:
- 1K<n<10K
nq:
- 1M<n<10M
hotpotqa:
- 1M<n<10M
fiqa:
- 10K<n<100K
arguana:
- 1K<n<10K
touche-2020:
- 100K<n<1M
cqadupstack:
- 100K<n<1M
quora:
- 100K<n<1M
dbpedia:
- 1M<n<10M
scidocs:
- 10K<n<100K
fever:
- 1M<n<10M
climate-fever:
- 1M<n<10M
scifact:
- 1K<n<10K
source_datasets: []
task_categories:
- text-retrieval
- zero-shot-retrieval
- information-retrieval
- zero-shot-information-retrieval
task_ids:
- passage-retrieval
- entity-linking-retrieval
- fact-checking-retrieval
- tweet-retrieval
- citation-prediction-retrieval
- duplication-question-retrieval
- argument-retrieval
- news-retrieval
- biomedical-information-retrieval
- question-answering-retrieval
---
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** nandan.thakur@uwaterloo.ca
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used for your experiments.
```python
# Minimal loading example; the repository and config names below are
# assumed from the BeIR organization on the Hugging Face Hub.
from datasets import load_dataset

corpus = load_dataset("BeIR/trec-covid", "corpus", split="corpus")
queries = load_dataset("BeIR/trec-covid", "queries", split="queries")
```
### Supported Tasks and Leaderboards
The dataset supports a leaderboard that evaluates retrieval models against ranking metrics such as nDCG@10, MAP and Recall@K across the component datasets.
The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/).
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:
- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score`, in this order. Keep the first row as a header. For example: `q1 doc1 1`
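For illustration, the qrels layout described above can be read into a nested dict with only the standard library (the file contents are inlined here rather than read from disk):

```python
import csv
import io

# Inlined stand-in for a qrels .tsv file: a header row, then
# query-id / corpus-id / score columns separated by tabs.
qrels_tsv = "query-id\tcorpus-id\tscore\nq1\tdoc1\t1\nq2\tdoc2\t1\n"

qrels = {}
reader = csv.reader(io.StringIO(qrels_tsv), delimiter="\t")
next(reader)  # skip the header row
for query_id, corpus_id, score in reader:
    qrels.setdefault(query_id, {})[corpus_id] = int(score)
```

This yields the same nested `qrels` mapping shown in the high-level example below.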
### Data Instances
A high level example of any beir dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
"text": "Albert Einstein was a German-born theoretical physicist. who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
### Data Fields
Examples from all configurations have the following features:
### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
- `_id`: a `string` feature representing the unique document id
- `title`: a `string` feature, denoting the title of the document.
- `text`: a `string` feature, denoting the text of the document.
### Queries
- `queries`: a `dict` feature representing the query, made up of:
- `_id`: a `string` feature representing the unique query id
- `text`: a `string` feature, denoting the text of the query.
### Qrels
- `qrels`: a `dict` feature representing the query document relevance judgements, made up of:
- `_id`: a `string` feature representing the query id
- `_id`: a `string` feature, denoting the document id.
- `score`: a `int32` feature, denoting the relevance judgement between query and document.
### Data Splits
| Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset. |
Amod/mental_health_counseling_conversations | 2023-07-20T19:00:46.000Z | [
"task_categories:conversational",
"task_categories:text-generation",
"task_categories:question-answering",
"task_ids:sentiment-classification",
"task_ids:language-modeling",
"task_ids:open-domain-qa",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:openrail",
"region:us"
] | Amod | null | null | null | 26 | 683 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license: openrail
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- conversational
- text-generation
- question-answering
task_ids:
- sentiment-classification
- language-modeling
- open-domain-qa
---
# Amod/mental_health_counseling_conversations
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:** Bertagnolli, Nicolas (2020). Counsel chat: Bootstrapping high-quality therapy data. Towards Data Science. https://towardsdatascience.com/counsel-chat
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset is a collection of questions and answers sourced from two online counseling and therapy platforms. The questions cover a wide range of mental health topics, and the answers are provided by qualified psychologists. The dataset is intended to be used for fine-tuning language models to improve their ability to provide mental health advice.
### Supported Tasks and Leaderboards
The dataset supports the task of text generation, particularly for generating advice or suggestions in response to a mental health-related question.
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
A data instance includes a 'Context' and a 'Response'. 'Context' contains the question asked by a user, and 'Response' contains the corresponding answer provided by a psychologist.
### Data Fields
- 'Context': a string containing the question asked by a user
- 'Response': a string containing the corresponding answer provided by a psychologist
### Data Splits
The dataset has no predefined splits. Users can create their own splits as needed.
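Since the records share one flat structure, a split can be carved out deterministically. The sketch below uses only the Python standard library (with the Hugging Face `datasets` library you could instead call `Dataset.train_test_split`); the toy records and the 10% test fraction are illustrative assumptions:

```python
import random

def make_splits(records, test_fraction=0.1, seed=42):
    """Deterministically shuffle the records and carve off a held-out test set."""
    shuffled = list(records)
    random.Random(seed).shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    return {"train": shuffled[n_test:], "test": shuffled[:n_test]}

# Toy records standing in for real (Context, Response) pairs:
data = [{"Context": f"question {i}", "Response": f"answer {i}"} for i in range(100)]
splits = make_splits(data)
print(len(splits["train"]), len(splits["test"]))  # 90 10
```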
## Dataset Creation
### Curation Rationale
This dataset was created to aid in the development of AI models that can provide mental health advice or guidance. The raw data was meticulously cleaned to only include the conversations.
### Source Data
The data was sourced from two online counseling and therapy platforms. The raw data can be found [here](https://github.com/nbertagnolli/counsel-chat/tree/master/data).
### Annotations
The dataset does not contain any additional annotations.
### Personal and Sensitive Information
The dataset may contain sensitive information related to mental health. All data was anonymized and no personally identifiable information is included. |
result-kand2-sdxl-wuerst-karlo/b2489367 | 2023-09-18T15:20:18.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 683 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 254
num_examples: 10
download_size: 1431
dataset_size: 254
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "b2489367"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
EleutherAI/hendrycks_ethics | 2023-07-05T21:23:28.000Z | [
"region:us"
] | EleutherAI | The ETHICS dataset is a benchmark that spans concepts in justice, well-being,
duties, virtues, and commonsense morality. Models predict widespread moral
judgments about diverse text scenarios. This requires connecting physical and
social world knowledge to value judgements, a capability that may enable us
to steer chatbot outputs or eventually regularize open-ended reinforcement
learning agents. | @article{hendrycks2021ethics,
title={Aligning AI With Shared Human Values},
author={Dan Hendrycks and Collin Burns and Steven Basart and Andrew Critch and Jerry Li and Dawn Song and Jacob Steinhardt},
journal={Proceedings of the International Conference on Learning Representations (ICLR)},
year={2021}
} | null | 0 | 682 | Entry not found |
result-kand2-sdxl-wuerst-karlo/4e6d4d01 | 2023-09-18T17:02:52.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 682 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 176
num_examples: 10
download_size: 1328
dataset_size: 176
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "4e6d4d01"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
result-kand2-sdxl-wuerst-karlo/9bc865b4 | 2023-09-18T17:00:59.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 681 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 188
num_examples: 10
download_size: 1354
dataset_size: 188
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "9bc865b4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
result-kand2-sdxl-wuerst-karlo/b6ea8c05 | 2023-09-18T17:02:55.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 681 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 176
num_examples: 10
download_size: 1328
dataset_size: 176
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "b6ea8c05"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
movie_rationales | 2023-04-05T10:09:59.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | null | The movie rationale dataset contains human annotated rationales for movie
reviews. | @unpublished{eraser2019,
title = {ERASER: A Benchmark to Evaluate Rationalized NLP Models},
author = {Jay DeYoung and Sarthak Jain and Nazneen Fatema Rajani and Eric Lehman and Caiming Xiong and Richard Socher and Byron C. Wallace}
}
@InProceedings{zaidan-eisner-piatko-2008:nips,
author = {Omar F. Zaidan and Jason Eisner and Christine Piatko},
title = {Machine Learning with Annotator Rationales to Reduce Annotation Cost},
booktitle = {Proceedings of the NIPS*2008 Workshop on Cost Sensitive Learning},
month = {December},
year = {2008}
} | null | 2 | 680 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
pretty_name: MovieRationales
dataset_info:
features:
- name: review
dtype: string
- name: label
dtype:
class_label:
names:
'0': NEG
'1': POS
- name: evidences
sequence: string
splits:
- name: test
num_bytes: 1046377
num_examples: 199
- name: train
num_bytes: 6853624
num_examples: 1600
- name: validation
num_bytes: 830417
num_examples: 200
download_size: 3899487
dataset_size: 8730418
---
# Dataset Card for "movie_rationales"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/jayded/eraserbenchmark
- **Paper:** [ERASER: A Benchmark to Evaluate Rationalized NLP Models](https://aclanthology.org/2020.acl-main.408/)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 3.90 MB
- **Size of the generated dataset:** 8.73 MB
- **Total amount of disk used:** 12.62 MB
### Dataset Summary
The movie rationale dataset contains human annotated rationales for movie
reviews.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 3.90 MB
- **Size of the generated dataset:** 8.73 MB
- **Total amount of disk used:** 12.62 MB
An example of 'validation' looks as follows.
```
{
"evidences": ["Fun movie"],
"label": 1,
"review": "Fun movie\n"
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `review`: a `string` feature.
- `label`: a classification label, with possible values including `NEG` (0), `POS` (1).
- `evidences`: a `list` of `string` features.
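For illustration, the integer label can be decoded back to its class name with a simple lookup (a minimal sketch; when the dataset is loaded with the `datasets` library, the equivalent mapping is provided by the `ClassLabel` feature's `int2str`):

```python
LABEL_NAMES = ["NEG", "POS"]  # index matches the class_label encoding above

def decode_label(label: int) -> str:
    """Map the integer class label back to its human-readable name."""
    return LABEL_NAMES[label]

example = {"review": "Fun movie\n", "label": 1, "evidences": ["Fun movie"]}
print(decode_label(example["label"]))  # POS
```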
### Data Splits
| name |train|validation|test|
|-------|----:|---------:|---:|
|default| 1600| 200| 199|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{deyoung-etal-2020-eraser,
title = "{ERASER}: {A} Benchmark to Evaluate Rationalized {NLP} Models",
author = "DeYoung, Jay and
Jain, Sarthak and
Rajani, Nazneen Fatema and
Lehman, Eric and
Xiong, Caiming and
Socher, Richard and
Wallace, Byron C.",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.acl-main.408",
doi = "10.18653/v1/2020.acl-main.408",
pages = "4443--4458",
}
@InProceedings{zaidan-eisner-piatko-2008:nips,
author = {Omar F. Zaidan and Jason Eisner and Christine Piatko},
title = {Machine Learning with Annotator Rationales to Reduce Annotation Cost},
booktitle = {Proceedings of the NIPS*2008 Workshop on Cost Sensitive Learning},
month = {December},
year = {2008}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun) for adding this dataset. |
nickrosh/Evol-Instruct-Code-80k-v1 | 2023-07-11T02:05:26.000Z | [
"license:cc-by-nc-sa-4.0",
"arxiv:2306.08568",
"region:us"
] | nickrosh | null | null | null | 83 | 678 | ---
license: cc-by-nc-sa-4.0
---
Open Source Implementation of Evol-Instruct-Code as described in the [WizardCoder Paper](https://arxiv.org/pdf/2306.08568.pdf).
Code for the instruction generation can be found on GitHub as [Evol-Teacher](https://github.com/nickrosh/evol-teacher).
|
lmqg/qg_squad | 2022-12-02T18:51:10.000Z | [
"task_categories:text-generation",
"task_ids:language-modeling",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:squad",
"language:en",
"license:cc-by-4.0",
"question-generation",
"arxiv:2210.03992",
"arxiv:1705.00106",
"region:us"
] | lmqg | [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) evaluation set for the question generation (QG) models. The split
of test and development set follows the ["Neural Question Generation"](https://arxiv.org/abs/1705.00106) work and is
compatible with the [leader board](https://paperswithcode.com/sota/question-generation-on-squad11). | @inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
} | null | 4 | 675 | ---
license: cc-by-4.0
pretty_name: SQuAD for question generation
language: en
multilinguality: monolingual
size_categories: 10K<n<100K
source_datasets: squad
task_categories:
- text-generation
task_ids:
- language-modeling
tags:
- question-generation
---
# Dataset Card for "lmqg/qg_squad"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is a subset of [QG-Bench](https://github.com/asahi417/lm-question-generation/blob/master/QG_BENCH.md#datasets), a unified question generation benchmark proposed in
["Generative Language Models for Paragraph-Level Question Generation: A Unified Benchmark and Evaluation, EMNLP 2022 main conference"](https://arxiv.org/abs/2210.03992).
This is [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) dataset for question generation (QG) task. The split
of train/development/test set follows the ["Neural Question Generation"](https://arxiv.org/abs/1705.00106) work and is
compatible with the [leader board](https://paperswithcode.com/sota/question-generation-on-squad11).
### Supported Tasks and Leaderboards
* `question-generation`: The dataset is assumed to be used to train a model for question generation.
Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more detail).
This task has an active leaderboard which can be found at [here](https://paperswithcode.com/sota/question-generation-on-squad11).
### Languages
English (en)
## Dataset Structure
An example of 'train' looks as follows.
```
{
"question": "What is heresy mainly at odds with?",
"paragraph": "Heresy is any provocative belief or theory that is strongly at variance with established beliefs or customs. A heretic is a proponent of such claims or beliefs. Heresy is distinct from both apostasy, which is the explicit renunciation of one's religion, principles or cause, and blasphemy, which is an impious utterance or action concerning God or sacred things.",
"answer": "established beliefs or customs",
"sentence": "Heresy is any provocative belief or theory that is strongly at variance with established beliefs or customs .",
"paragraph_sentence": "<hl> Heresy is any provocative belief or theory that is strongly at variance with established beliefs or customs . <hl> A heretic is a proponent of such claims or beliefs. Heresy is distinct from both apostasy, which is the explicit renunciation of one's religion, principles or cause, and blasphemy, which is an impious utterance or action concerning God or sacred things.",
"paragraph_answer": "Heresy is any provocative belief or theory that is strongly at variance with <hl> established beliefs or customs <hl>. A heretic is a proponent of such claims or beliefs. Heresy is distinct from both apostasy, which is the explicit renunciation of one's religion, principles or cause, and blasphemy, which is an impious utterance or action concerning God or sacred things.",
"sentence_answer": "Heresy is any provocative belief or theory that is strongly at variance with <hl> established beliefs or customs <hl> ."
}
```
The data fields are the same among all splits.
- `question`: a `string` feature.
- `paragraph`: a `string` feature.
- `answer`: a `string` feature.
- `sentence`: a `string` feature.
- `paragraph_answer`: a `string` feature, which is same as the paragraph but the answer is highlighted by a special token `<hl>`.
- `paragraph_sentence`: a `string` feature, which is same as the paragraph but a sentence containing the answer is highlighted by a special token `<hl>`.
- `sentence_answer`: a `string` feature, which is same as the sentence but the answer is highlighted by a special token `<hl>`.
Each of `paragraph_answer`, `paragraph_sentence`, and `sentence_answer` feature is assumed to be used to train a question generation model,
but with different information. The `paragraph_answer` and `sentence_answer` features are for answer-aware question generation and
`paragraph_sentence` feature is for sentence-aware question generation.
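To illustrate how the highlighted features relate to the raw fields, an answer-aware input like `paragraph_answer` can be reconstructed by wrapping the answer span in `<hl>` tokens. This is a minimal sketch of the idea; the exact preprocessing used by QG-Bench may differ:

```python
def highlight_answer(text: str, answer: str, hl_token: str = "<hl>") -> str:
    """Wrap the first occurrence of the answer span in highlight tokens."""
    start = text.find(answer)
    if start == -1:
        raise ValueError("answer span not found in text")
    end = start + len(answer)
    return f"{text[:start]}{hl_token} {answer} {hl_token}{text[end:]}"

print(highlight_answer("The sky is blue today.", "blue"))
# The sky is <hl> blue <hl> today.
```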
## Data Splits
|train|validation|test |
|----:|---------:|----:|
|75722| 10570|11877|
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` |
mteb/twentynewsgroups-clustering | 2022-09-27T19:13:51.000Z | [
"language:en",
"region:us"
] | mteb | null | null | null | 0 | 671 | ---
language:
- en
--- |
kmfoda/booksum | 2022-11-30T12:03:43.000Z | [
"license:bsd-3-clause",
"arxiv:2105.08209",
"region:us"
] | kmfoda | null | null | null | 25 | 670 | ---
license:
- bsd-3-clause
train-eval-index:
- config: kmfoda--booksum
task: summarization
task_id: summarization
splits:
eval_split: test
col_mapping:
chapter: text
summary_text: target
---
# BOOKSUM: A Collection of Datasets for Long-form Narrative Summarization
Authors: [Wojciech Kryściński](https://twitter.com/iam_wkr), [Nazneen Rajani](https://twitter.com/nazneenrajani), [Divyansh Agarwal](https://twitter.com/jigsaw2212), [Caiming Xiong](https://twitter.com/caimingxiong), [Dragomir Radev](http://www.cs.yale.edu/homes/radev/)
## Introduction
The majority of available text summarization datasets include short-form source documents that lack long-range causal and temporal dependencies, and often contain strong layout and stylistic biases.
While relevant, such datasets will offer limited challenges for future generations of text summarization systems.
We address these issues by introducing BookSum, a collection of datasets for long-form narrative summarization.
Our dataset covers source documents from the literature domain, such as novels, plays and stories, and includes highly abstractive, human written summaries on three levels of granularity of increasing difficulty: paragraph-, chapter-, and book-level.
The domain and structure of our dataset poses a unique set of challenges for summarization systems, which include: processing very long documents, non-trivial causal and temporal dependencies, and rich discourse structures.
To facilitate future work, we trained and evaluated multiple extractive and abstractive summarization models as baselines for our dataset.
## Links
- [paper](https://arxiv.org/abs/2105.08209) by SalesForce Research
- [GitHub repo](https://github.com/salesforce/booksum)
<p align="center"><img src="misc/book_sumv4.png"></p>
## Table of Contents
1. [Citation](#citation)
2. [Legal Note](#legal-note)
3. [License](#license)
## Citation
```
@article{kryscinski2021booksum,
title={BookSum: A Collection of Datasets for Long-form Narrative Summarization},
author={Wojciech Kry{\'s}ci{\'n}ski and Nazneen Rajani and Divyansh Agarwal and Caiming Xiong and Dragomir Radev},
year={2021},
eprint={2105.08209},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Legal Note
By downloading or using the resources, including any code or scripts, shared in this code
repository, you hereby agree to the following terms, and your use of the resources is conditioned
on and subject to these terms.
1. You may only use the scripts shared in this code repository for research purposes. You
may not use or allow others to use the scripts for any other purposes and other uses are
expressly prohibited.
2. You will comply with all terms and conditions, and are responsible for obtaining all
rights, related to the services you access and the data you collect.
3. We do not make any representations or warranties whatsoever regarding the sources from
which data is collected. Furthermore, we are not liable for any damage, loss or expense of
any kind arising from or relating to your use of the resources shared in this code
repository or the data collected, regardless of whether such liability is based in tort,
contract or otherwise.
## License
The code is released under the **BSD-3 License** (see `LICENSE.txt` for details). |
ceyda/smithsonian_butterflies | 2022-07-13T09:32:27.000Z | [
"task_categories:image-classification",
"task_ids:multi-label-image-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"license:cc0-1.0",
"region:us"
] | ceyda | null | null | null | 6 | 670 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- expert-generated
license:
- cc0-1.0
multilinguality:
- monolingual
pretty_name: Smithsonian Butterflies
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- image-classification
task_ids:
- multi-label-image-classification
---
# Dataset Card for [Smithsonian Butterflies]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** Smithsonian "Education and Outreach" & "NMNH - Entomology Dept." collections [here](https://collections.si.edu/search/results.htm?q=butterfly&view=list&fq=online_media_type%3A%22Images%22&fq=topic%3A%22Insects%22&fq=data_source%3A%22NMNH+-+Entomology+Dept.%22&media.CC0=true&dsort=title&start=0)
### Dataset Summary
High-resolution images crawled from the Smithsonian "Education and Outreach" & "NMNH - Entomology Dept." collections.
### Supported Tasks and Leaderboards
Includes metadata about the scientific names of the butterflies, though there may be missing values. The dataset may be suitable for classification tasks.
### Languages
English
## Dataset Structure
### Data Instances
An example instance looks as follows:
```
{'image_url': 'https://ids.si.edu/ids/deliveryService?id=ark:/65665/m3b3132f6666904de396880d9dc811c5cd',
'image_alt': 'view Aholibah Underwing digital asset number 1',
'id': 'ark:/65665/m3b3132f6666904de396880d9dc811c5cd',
'name': 'Aholibah Underwing',
'scientific_name': 'Catocala aholibah',
'gender': None,
'taxonomy': 'Animalia, Arthropoda, Hexapoda, Insecta, Lepidoptera, Noctuidae, Catocalinae',
'region': None,
'locality': None,
'date': None,
'usnm_no': 'EO400317-DSP',
'guid': 'http://n2t.net/ark:/65665/39b506292-715f-45a7-8511-b49bb087c7de',
'edan_url': 'edanmdm:nmnheducation_10866595',
'source': 'Smithsonian Education and Outreach collections',
'stage': None,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=2000x1328 at 0x7F57D0504DC0>,
'image_hash': '27a5fe92f72f8b116d3b7d65bac84958',
'sim_score': 0.8440760970115662}
```
### Data Fields
`sim_score` indicates the CLIP similarity score for the prompt "pretty butterfly". It was used to eliminate non-butterfly images (e.g., images of ID cards).
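As a sketch of how the similarity-score field (`sim_score` in the example instance above) could be used to drop residual non-butterfly images (the threshold here is an illustrative assumption, not part of the dataset):

```python
def filter_by_sim_score(records, threshold=0.8):
    """Keep only records whose CLIP similarity score clears the threshold.

    Records without a score are dropped, since they cannot be verified.
    """
    return [
        r for r in records
        if r.get("sim_score") is not None and r["sim_score"] >= threshold
    ]

records = [
    {"name": "Aholibah Underwing", "sim_score": 0.84},
    {"name": "specimen ID card", "sim_score": 0.31},
    {"name": "unscored image", "sim_score": None},
]
print([r["name"] for r in filter_by_sim_score(records)])  # ['Aholibah Underwing']
```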
### Data Splits
No specific split exists.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
Crawled from "Education and Outreach" & "NMNH - Entomology Dept." collections found online [here](https://collections.si.edu/search/results.htm?q=butterfly&view=list&fq=online_media_type%3A%22Images%22&fq=topic%3A%22Insects%22&fq=data_source%3A%22NMNH+-+Entomology+Dept.%22&media.CC0=true&dsort=title&start=0)
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
Doesn't include all butterfly species.
## Additional Information
### Dataset Curators
Smithsonian "Education and Outreach" & "NMNH - Entomology Dept." collections
### Licensing Information
Only results marked: CC0
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@cceyda](https://github.com/cceyda) for adding this dataset. |
juletxara/xcopa_mt | 2023-07-21T10:19:22.000Z | [
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:extended|copa",
"language:en",
"license:cc-by-4.0",
"region:us"
] | juletxara | XCOPA: A Multilingual Dataset for Causal Commonsense Reasoning
The Cross-lingual Choice of Plausible Alternatives dataset is a benchmark to evaluate the ability of machine learning models to transfer commonsense reasoning across
languages. The dataset is the translation and reannotation of the English COPA (Roemmele et al. 2011) and covers 11 languages from 11 families and several areas around
the globe. The dataset is challenging as it requires both the command of world knowledge and the ability to generalise to new languages. All the details about the
creation of XCOPA and the implementation of the baselines are available in the paper. | @article{ponti2020xcopa,
title={{XCOPA: A} Multilingual Dataset for Causal Commonsense Reasoning},
author={Edoardo M. Ponti, Goran Glava\v{s}, Olga Majewska, Qianchu Liu, Ivan Vuli\'{c} and Anna Korhonen},
journal={arXiv preprint},
year={2020},
url={https://ducdauge.github.io/files/xcopa.pdf}
}
@inproceedings{roemmele2011choice,
title={Choice of plausible alternatives: An evaluation of commonsense causal reasoning},
author={Roemmele, Melissa and Bejan, Cosmin Adrian and Gordon, Andrew S},
booktitle={2011 AAAI Spring Symposium Series},
year={2011},
url={https://people.ict.usc.edu/~gordon/publications/AAAI-SPRING11A.PDF},
}
---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: XCOPA MT
size_categories:
- unknown
source_datasets:
- extended|copa
task_categories:
- question-answering
task_ids:
- multiple-choice-qa
paperswithcode_id: xcopa
dataset_info:
- config_name: nllb-200-distilled-600M
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: label
dtype: int32
- name: idx
dtype: int32
- name: changed
dtype: bool
splits:
- name: et
num_bytes: 58092
num_examples: 500
- name: ht
num_bytes: 58200
num_examples: 500
- name: it
num_bytes: 59156
num_examples: 500
- name: id
num_bytes: 59038
num_examples: 500
- name: qu
num_bytes: 60464
num_examples: 500
- name: sw
num_bytes: 58401
num_examples: 500
- name: zh
num_bytes: 58016
num_examples: 500
- name: ta
num_bytes: 60994
num_examples: 500
- name: th
num_bytes: 56797
num_examples: 500
- name: tr
num_bytes: 57256
num_examples: 500
- name: vi
num_bytes: 56733
num_examples: 500
download_size: 1009631
dataset_size: 643147
- config_name: nllb-200-distilled-1.3B
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: label
dtype: int32
- name: idx
dtype: int32
- name: changed
dtype: bool
splits:
- name: et
num_bytes: 57531
num_examples: 500
- name: ht
num_bytes: 57998
num_examples: 500
- name: it
num_bytes: 58660
num_examples: 500
- name: id
num_bytes: 58835
num_examples: 500
- name: qu
num_bytes: 61138
num_examples: 500
- name: sw
num_bytes: 58634
num_examples: 500
- name: zh
num_bytes: 59319
num_examples: 500
- name: ta
num_bytes: 60468
num_examples: 500
- name: th
num_bytes: 56331
num_examples: 500
- name: tr
num_bytes: 56979
num_examples: 500
- name: vi
num_bytes: 56268
num_examples: 500
download_size: 1008646
dataset_size: 642161
- config_name: nllb-200-1.3B
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: label
dtype: int32
- name: idx
dtype: int32
- name: changed
dtype: bool
splits:
- name: et
num_bytes: 57282
num_examples: 500
- name: ht
num_bytes: 57858
num_examples: 500
- name: it
num_bytes: 58515
num_examples: 500
- name: id
num_bytes: 58803
num_examples: 500
- name: qu
num_bytes: 60172
num_examples: 500
- name: sw
num_bytes: 58486
num_examples: 500
- name: zh
num_bytes: 57671
num_examples: 500
- name: ta
num_bytes: 60439
num_examples: 500
- name: th
num_bytes: 55874
num_examples: 500
- name: tr
num_bytes: 56806
num_examples: 500
- name: vi
num_bytes: 56200
num_examples: 500
download_size: 1004579
dataset_size: 638106
- config_name: nllb-200-3.3B
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: label
dtype: int32
- name: idx
dtype: int32
- name: changed
dtype: bool
splits:
- name: et
num_bytes: 57660
num_examples: 500
- name: ht
num_bytes: 58114
num_examples: 500
- name: it
num_bytes: 58630
num_examples: 500
- name: id
num_bytes: 58976
num_examples: 500
- name: qu
num_bytes: 61276
num_examples: 500
- name: sw
num_bytes: 58854
num_examples: 500
- name: zh
num_bytes: 57851
num_examples: 500
- name: ta
num_bytes: 60905
num_examples: 500
- name: th
num_bytes: 56619
num_examples: 500
- name: tr
num_bytes: 57071
num_examples: 500
- name: vi
num_bytes: 56617
num_examples: 500
download_size: 1009049
dataset_size: 642573
- config_name: xglm-564M
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: label
dtype: int32
- name: idx
dtype: int32
- name: changed
dtype: bool
splits:
- name: et
num_bytes: 63358
num_examples: 500
- name: ht
num_bytes: 64273
num_examples: 500
- name: it
num_bytes: 70578
num_examples: 500
- name: id
num_bytes: 63095
num_examples: 500
- name: qu
num_bytes: 76634
num_examples: 500
- name: sw
num_bytes: 68475
num_examples: 500
- name: zh
num_bytes: 127703
num_examples: 500
- name: ta
num_bytes: 109174
num_examples: 500
- name: th
num_bytes: 71764
num_examples: 500
- name: tr
num_bytes: 67498
num_examples: 500
- name: vi
num_bytes: 69529
num_examples: 500
download_size: 1362468
dataset_size: 852081
- config_name: xglm-1.7B
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: label
dtype: int32
- name: idx
dtype: int32
- name: changed
dtype: bool
splits:
- name: et
num_bytes: 58674
num_examples: 500
- name: ht
num_bytes: 57964
num_examples: 500
- name: it
num_bytes: 59743
num_examples: 500
- name: id
num_bytes: 58521
num_examples: 500
- name: qu
num_bytes: 67219
num_examples: 500
- name: sw
num_bytes: 60062
num_examples: 500
- name: zh
num_bytes: 57233
num_examples: 500
- name: ta
num_bytes: 64706
num_examples: 500
- name: th
num_bytes: 59472
num_examples: 500
- name: tr
num_bytes: 58155
num_examples: 500
- name: vi
num_bytes: 57282
num_examples: 500
download_size: 1031393
dataset_size: 659031
- config_name: xglm-2.9B
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: label
dtype: int32
- name: idx
dtype: int32
- name: changed
dtype: bool
splits:
- name: et
num_bytes: 56815
num_examples: 500
- name: ht
num_bytes: 59120
num_examples: 500
- name: it
num_bytes: 60146
num_examples: 500
- name: id
num_bytes: 60641
num_examples: 500
- name: qu
num_bytes: 82619
num_examples: 500
- name: sw
num_bytes: 60125
num_examples: 500
- name: zh
num_bytes: 57593
num_examples: 500
- name: ta
num_bytes: 67155
num_examples: 500
- name: th
num_bytes: 60159
num_examples: 500
- name: tr
num_bytes: 58299
num_examples: 500
- name: vi
num_bytes: 57881
num_examples: 500
download_size: 1047842
dataset_size: 680553
- config_name: xglm-4.5B
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: label
dtype: int32
- name: idx
dtype: int32
- name: changed
dtype: bool
splits:
- name: et
num_bytes: 57355
num_examples: 500
- name: ht
num_bytes: 62183
num_examples: 500
- name: it
num_bytes: 59396
num_examples: 500
- name: id
num_bytes: 57704
num_examples: 500
- name: qu
num_bytes: 116554
num_examples: 500
- name: sw
num_bytes: 59244
num_examples: 500
- name: zh
num_bytes: 57123
num_examples: 500
- name: ta
num_bytes: 70289
num_examples: 500
- name: th
num_bytes: 58409
num_examples: 500
- name: tr
num_bytes: 58127
num_examples: 500
- name: vi
num_bytes: 57919
num_examples: 500
download_size: 1082379
dataset_size: 714303
- config_name: xglm-7.5B
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: label
dtype: int32
- name: idx
dtype: int32
- name: changed
dtype: bool
splits:
- name: et
num_bytes: 56766
num_examples: 500
- name: ht
num_bytes: 57817
num_examples: 500
- name: it
num_bytes: 58333
num_examples: 500
- name: id
num_bytes: 57773
num_examples: 500
- name: qu
num_bytes: 67010
num_examples: 500
- name: sw
num_bytes: 58817
num_examples: 500
- name: zh
num_bytes: 57227
num_examples: 500
- name: ta
num_bytes: 62324
num_examples: 500
- name: th
num_bytes: 55932
num_examples: 500
- name: tr
num_bytes: 57305
num_examples: 500
- name: vi
num_bytes: 56529
num_examples: 500
download_size: 1012936
dataset_size: 645833
- config_name: bloom-560m
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: label
dtype: int32
- name: idx
dtype: int32
- name: changed
dtype: bool
splits:
- name: et
num_bytes: 130778
num_examples: 500
- name: ht
num_bytes: 118299
num_examples: 500
- name: it
num_bytes: 95290
num_examples: 500
- name: id
num_bytes: 60064
num_examples: 500
- name: qu
num_bytes: 102968
num_examples: 500
- name: sw
num_bytes: 146899
num_examples: 500
- name: zh
num_bytes: 70813
num_examples: 500
- name: ta
num_bytes: 86233
num_examples: 500
- name: th
num_bytes: 155361
num_examples: 500
- name: tr
num_bytes: 136837
num_examples: 500
- name: vi
num_bytes: 61095
num_examples: 500
download_size: 1548970
dataset_size: 1164637
- config_name: bloom-1b1
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: label
dtype: int32
- name: idx
dtype: int32
- name: changed
dtype: bool
splits:
- name: et
num_bytes: 101964
num_examples: 500
- name: ht
num_bytes: 91757
num_examples: 500
- name: it
num_bytes: 74057
num_examples: 500
- name: id
num_bytes: 56488
num_examples: 500
- name: qu
num_bytes: 98982
num_examples: 500
- name: sw
num_bytes: 87520
num_examples: 500
- name: zh
num_bytes: 59371
num_examples: 500
- name: ta
num_bytes: 74918
num_examples: 500
- name: th
num_bytes: 128581
num_examples: 500
- name: tr
num_bytes: 143310
num_examples: 500
- name: vi
num_bytes: 55236
num_examples: 500
download_size: 1344990
dataset_size: 972184
- config_name: bloom-1b7
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: label
dtype: int32
- name: idx
dtype: int32
- name: changed
dtype: bool
splits:
- name: et
num_bytes: 85029
num_examples: 500
- name: ht
num_bytes: 75448
num_examples: 500
- name: it
num_bytes: 61350
num_examples: 500
- name: id
num_bytes: 58084
num_examples: 500
- name: qu
num_bytes: 77332
num_examples: 500
- name: sw
num_bytes: 67131
num_examples: 500
- name: zh
num_bytes: 57200
num_examples: 500
- name: ta
num_bytes: 70436
num_examples: 500
- name: th
num_bytes: 139759
num_examples: 500
- name: tr
num_bytes: 100472
num_examples: 500
- name: vi
num_bytes: 55737
num_examples: 500
download_size: 1219112
dataset_size: 847978
- config_name: bloom-3b
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: label
dtype: int32
- name: idx
dtype: int32
- name: changed
dtype: bool
splits:
- name: et
num_bytes: 73262
num_examples: 500
- name: ht
num_bytes: 63961
num_examples: 500
- name: it
num_bytes: 60275
num_examples: 500
- name: id
num_bytes: 58006
num_examples: 500
- name: qu
num_bytes: 89802
num_examples: 500
- name: sw
num_bytes: 61519
num_examples: 500
- name: zh
num_bytes: 56864
num_examples: 500
- name: ta
num_bytes: 69482
num_examples: 500
- name: th
num_bytes: 109418
num_examples: 500
- name: tr
num_bytes: 120094
num_examples: 500
- name: vi
num_bytes: 55980
num_examples: 500
download_size: 1187376
dataset_size: 818663
- config_name: bloom-7b1
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: label
dtype: int32
- name: idx
dtype: int32
- name: changed
dtype: bool
splits:
- name: et
num_bytes: 50296
num_examples: 500
- name: ht
num_bytes: 53141
num_examples: 500
- name: it
num_bytes: 59193
num_examples: 500
- name: id
num_bytes: 56651
num_examples: 500
- name: qu
num_bytes: 73218
num_examples: 500
- name: sw
num_bytes: 58770
num_examples: 500
- name: zh
num_bytes: 56282
num_examples: 500
- name: ta
num_bytes: 61975
num_examples: 500
- name: th
num_bytes: 82201
num_examples: 500
- name: tr
num_bytes: 55094
num_examples: 500
- name: vi
num_bytes: 55580
num_examples: 500
download_size: 1029650
dataset_size: 662401
- config_name: llama-7B
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: label
dtype: int32
- name: idx
dtype: int32
- name: changed
dtype: bool
splits:
- name: et
num_bytes: 57640
num_examples: 500
- name: ht
num_bytes: 62634
num_examples: 500
- name: it
num_bytes: 59497
num_examples: 500
- name: id
num_bytes: 59138
num_examples: 500
- name: qu
num_bytes: 71702
num_examples: 500
- name: sw
num_bytes: 63238
num_examples: 500
- name: zh
num_bytes: 59803
num_examples: 500
- name: ta
num_bytes: 107865
num_examples: 500
- name: th
num_bytes: 71665
num_examples: 500
- name: tr
num_bytes: 58729
num_examples: 500
- name: vi
num_bytes: 67266
num_examples: 500
download_size: 1106401
dataset_size: 739177
- config_name: llama-13B
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: label
dtype: int32
- name: idx
dtype: int32
- name: changed
dtype: bool
splits:
- name: et
num_bytes: 58524
num_examples: 500
- name: ht
num_bytes: 58576
num_examples: 500
- name: it
num_bytes: 59633
num_examples: 500
- name: id
num_bytes: 57663
num_examples: 500
- name: qu
num_bytes: 69152
num_examples: 500
- name: sw
num_bytes: 63891
num_examples: 500
- name: zh
num_bytes: 57540
num_examples: 500
- name: ta
num_bytes: 85821
num_examples: 500
- name: th
num_bytes: 55881
num_examples: 500
- name: tr
num_bytes: 56783
num_examples: 500
- name: vi
num_bytes: 55295
num_examples: 500
download_size: 1045868
dataset_size: 678759
- config_name: llama-30B
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: label
dtype: int32
- name: idx
dtype: int32
- name: changed
dtype: bool
splits:
- name: et
num_bytes: 55792
num_examples: 500
- name: ht
num_bytes: 55836
num_examples: 500
- name: it
num_bytes: 59578
num_examples: 500
- name: id
num_bytes: 58384
num_examples: 500
- name: qu
num_bytes: 60479
num_examples: 500
- name: sw
num_bytes: 60740
num_examples: 500
- name: zh
num_bytes: 57099
num_examples: 500
- name: ta
num_bytes: 74192
num_examples: 500
- name: th
num_bytes: 54577
num_examples: 500
- name: tr
num_bytes: 55743
num_examples: 500
- name: vi
num_bytes: 56371
num_examples: 500
download_size: 1015352
dataset_size: 648791
- config_name: RedPajama-INCITE-Base-3B-v1
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: label
dtype: int32
- name: idx
dtype: int32
- name: changed
dtype: bool
splits:
- name: et
num_bytes: 66862
num_examples: 500
- name: ht
num_bytes: 67548
num_examples: 500
- name: it
num_bytes: 60220
num_examples: 500
- name: id
num_bytes: 58585
num_examples: 500
- name: qu
num_bytes: 84898
num_examples: 500
- name: sw
num_bytes: 78422
num_examples: 500
- name: zh
num_bytes: 60708
num_examples: 500
- name: ta
num_bytes: 99438
num_examples: 500
- name: th
num_bytes: 83022
num_examples: 500
- name: tr
num_bytes: 64835
num_examples: 500
- name: vi
num_bytes: 68696
num_examples: 500
download_size: 1161592
dataset_size: 793234
- config_name: RedPajama-INCITE-7B-Base
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: label
dtype: int32
- name: idx
dtype: int32
- name: changed
dtype: bool
splits:
- name: et
num_bytes: 59722
num_examples: 500
- name: ht
num_bytes: 54824
num_examples: 500
- name: it
num_bytes: 59511
num_examples: 500
- name: id
num_bytes: 59526
num_examples: 500
- name: qu
num_bytes: 102986
num_examples: 500
- name: sw
num_bytes: 69382
num_examples: 500
- name: zh
num_bytes: 59507
num_examples: 500
- name: ta
num_bytes: 88701
num_examples: 500
- name: th
num_bytes: 65715
num_examples: 500
- name: tr
num_bytes: 61684
num_examples: 500
- name: vi
num_bytes: 65257
num_examples: 500
download_size: 1114614
dataset_size: 746815
- config_name: open_llama_3b
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: label
dtype: int32
- name: idx
dtype: int32
- name: changed
dtype: bool
splits:
- name: et
num_bytes: 66399
num_examples: 500
- name: ht
num_bytes: 60389
num_examples: 500
- name: it
num_bytes: 60711
num_examples: 500
- name: id
num_bytes: 60704
num_examples: 500
- name: qu
num_bytes: 91950
num_examples: 500
- name: sw
num_bytes: 72466
num_examples: 500
- name: zh
num_bytes: 62617
num_examples: 500
- name: ta
num_bytes: 106600
num_examples: 500
- name: th
num_bytes: 203185
num_examples: 500
- name: tr
num_bytes: 66524
num_examples: 500
- name: vi
num_bytes: 77933
num_examples: 500
download_size: 1439470
dataset_size: 929478
- config_name: open_llama_7b
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: label
dtype: int32
- name: idx
dtype: int32
- name: changed
dtype: bool
splits:
- name: et
num_bytes: 57157
num_examples: 500
- name: ht
num_bytes: 54184
num_examples: 500
- name: it
num_bytes: 59425
num_examples: 500
- name: id
num_bytes: 57354
num_examples: 500
- name: qu
num_bytes: 73290
num_examples: 500
- name: sw
num_bytes: 65718
num_examples: 500
- name: zh
num_bytes: 59168
num_examples: 500
- name: ta
num_bytes: 94160
num_examples: 500
- name: th
num_bytes: 181602
num_examples: 500
- name: tr
num_bytes: 58138
num_examples: 500
- name: vi
num_bytes: 62771
num_examples: 500
download_size: 1315174
dataset_size: 822967
- config_name: open_llama_13b
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: label
dtype: int32
- name: idx
dtype: int32
- name: changed
dtype: bool
splits:
- name: et
num_bytes: 56288
num_examples: 500
- name: ht
num_bytes: 54954
num_examples: 500
- name: it
num_bytes: 59628
num_examples: 500
- name: id
num_bytes: 58167
num_examples: 500
- name: qu
num_bytes: 89296
num_examples: 500
- name: sw
num_bytes: 59578
num_examples: 500
- name: zh
num_bytes: 58133
num_examples: 500
- name: ta
num_bytes: 94160
num_examples: 500
- name: th
num_bytes: 186125
num_examples: 500
- name: tr
num_bytes: 56290
num_examples: 500
- name: vi
num_bytes: 58354
num_examples: 500
download_size: 1340180
dataset_size: 830973
- config_name: open_llama_7b_v2
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: label
dtype: int32
- name: idx
dtype: int32
- name: changed
dtype: bool
splits:
- name: et
num_bytes: 53471
num_examples: 500
- name: ht
num_bytes: 55430
num_examples: 500
- name: it
num_bytes: 59523
num_examples: 500
- name: id
num_bytes: 57590
num_examples: 500
- name: qu
num_bytes: 87887
num_examples: 500
- name: sw
num_bytes: 62658
num_examples: 500
- name: zh
num_bytes: 57696
num_examples: 500
- name: ta
num_bytes: 94160
num_examples: 500
- name: th
num_bytes: 58255
num_examples: 500
- name: tr
num_bytes: 54985
num_examples: 500
- name: vi
num_bytes: 57207
num_examples: 500
download_size: 1066611
dataset_size: 698862
- config_name: falcon-7b
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: label
dtype: int32
- name: idx
dtype: int32
- name: changed
dtype: bool
splits:
- name: et
num_bytes: 80694
num_examples: 500
- name: ht
num_bytes: 64949
num_examples: 500
- name: it
num_bytes: 60169
num_examples: 500
- name: id
num_bytes: 57919
num_examples: 500
- name: qu
num_bytes: 82389
num_examples: 500
- name: sw
num_bytes: 68738
num_examples: 500
- name: zh
num_bytes: 62816
num_examples: 500
- name: ta
num_bytes: 16427
num_examples: 500
- name: th
num_bytes: 155861
num_examples: 500
- name: tr
num_bytes: 64322
num_examples: 500
- name: vi
num_bytes: 94137
num_examples: 500
download_size: 1302140
dataset_size: 808421
- config_name: xgen-7b-4k-base
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: label
dtype: int32
- name: idx
dtype: int32
- name: changed
dtype: bool
splits:
- name: et
num_bytes: 58498
num_examples: 500
- name: ht
num_bytes: 55498
num_examples: 500
- name: it
num_bytes: 59696
num_examples: 500
- name: id
num_bytes: 55936
num_examples: 500
- name: qu
num_bytes: 80560
num_examples: 500
- name: sw
num_bytes: 65035
num_examples: 500
- name: zh
num_bytes: 58163
num_examples: 500
- name: ta
num_bytes: 14813
num_examples: 500
- name: th
num_bytes: 64876
num_examples: 500
- name: tr
num_bytes: 57701
num_examples: 500
- name: vi
num_bytes: 58791
num_examples: 500
download_size: 997295
dataset_size: 629567
- config_name: xgen-7b-8k-base
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: label
dtype: int32
- name: idx
dtype: int32
- name: changed
dtype: bool
splits:
- name: et
num_bytes: 57918
num_examples: 500
- name: ht
num_bytes: 55553
num_examples: 500
- name: it
num_bytes: 59322
num_examples: 500
- name: id
num_bytes: 56829
num_examples: 500
- name: qu
num_bytes: 93371
num_examples: 500
- name: sw
num_bytes: 65770
num_examples: 500
- name: zh
num_bytes: 57378
num_examples: 500
- name: ta
num_bytes: 14813
num_examples: 500
- name: th
num_bytes: 60694
num_examples: 500
- name: tr
num_bytes: 56341
num_examples: 500
- name: vi
num_bytes: 58305
num_examples: 500
download_size: 1003224
dataset_size: 636294
- config_name: xgen-7b-8k-inst
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: label
dtype: int32
- name: idx
dtype: int32
- name: changed
dtype: bool
splits:
- name: et
num_bytes: 57938
num_examples: 500
- name: ht
num_bytes: 59577
num_examples: 500
- name: it
num_bytes: 58999
num_examples: 500
- name: id
num_bytes: 57198
num_examples: 500
- name: qu
num_bytes: 74792
num_examples: 500
- name: sw
num_bytes: 63739
num_examples: 500
- name: zh
num_bytes: 58638
num_examples: 500
- name: ta
num_bytes: 14813
num_examples: 500
- name: th
num_bytes: 64762
num_examples: 500
- name: tr
num_bytes: 58008
num_examples: 500
- name: vi
num_bytes: 56758
num_examples: 500
download_size: 992574
dataset_size: 625222
- config_name: polylm-1.7b
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: label
dtype: int32
- name: idx
dtype: int32
- name: changed
dtype: bool
splits:
- name: et
num_bytes: 127291
num_examples: 500
- name: ht
num_bytes: 100114
num_examples: 500
- name: it
num_bytes: 70393
num_examples: 500
- name: id
num_bytes: 58829
num_examples: 500
- name: qu
num_bytes: 92265
num_examples: 500
- name: sw
num_bytes: 88160
num_examples: 500
- name: zh
num_bytes: 56896
num_examples: 500
- name: ta
num_bytes: 123071
num_examples: 500
- name: th
num_bytes: 67106
num_examples: 500
- name: tr
num_bytes: 107151
num_examples: 500
- name: vi
num_bytes: 56025
num_examples: 500
download_size: 1326335
dataset_size: 947301
- config_name: polylm-13b
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: label
dtype: int32
- name: idx
dtype: int32
- name: changed
dtype: bool
splits:
- name: et
num_bytes: 52813
num_examples: 500
- name: ht
num_bytes: 57552
num_examples: 500
- name: it
num_bytes: 58876
num_examples: 500
- name: id
num_bytes: 58351
num_examples: 500
- name: qu
num_bytes: 67767
num_examples: 500
- name: sw
num_bytes: 52179
num_examples: 500
- name: zh
num_bytes: 56913
num_examples: 500
- name: ta
num_bytes: 151911
num_examples: 500
- name: th
num_bytes: 56069
num_examples: 500
- name: tr
num_bytes: 56251
num_examples: 500
- name: vi
num_bytes: 56378
num_examples: 500
download_size: 1093006
dataset_size: 725060
- config_name: polylm-multialpaca-13b
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: label
dtype: int32
- name: idx
dtype: int32
- name: changed
dtype: bool
splits:
- name: et
num_bytes: 50900
num_examples: 500
- name: ht
num_bytes: 55054
num_examples: 500
- name: it
num_bytes: 58941
num_examples: 500
- name: id
num_bytes: 58062
num_examples: 500
- name: qu
num_bytes: 66646
num_examples: 500
- name: sw
num_bytes: 55903
num_examples: 500
- name: zh
num_bytes: 57690
num_examples: 500
- name: ta
num_bytes: 159507
num_examples: 500
- name: th
num_bytes: 54790
num_examples: 500
- name: tr
num_bytes: 56229
num_examples: 500
- name: vi
num_bytes: 56748
num_examples: 500
download_size: 1097212
dataset_size: 730470
- config_name: open_llama_3b_v2
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: label
dtype: int32
- name: idx
dtype: int32
- name: changed
dtype: bool
splits:
- name: et
num_bytes: 55145
num_examples: 500
- name: ht
num_bytes: 55602
num_examples: 500
- name: it
num_bytes: 59546
num_examples: 500
- name: id
num_bytes: 57579
num_examples: 500
- name: qu
num_bytes: 72123
num_examples: 500
- name: sw
num_bytes: 62381
num_examples: 500
- name: zh
num_bytes: 58425
num_examples: 500
- name: ta
num_bytes: 106600
num_examples: 500
- name: th
num_bytes: 64880
num_examples: 500
- name: tr
num_bytes: 57858
num_examples: 500
- name: vi
num_bytes: 61197
num_examples: 500
download_size: 1078124
dataset_size: 711336
- config_name: Llama-2-7b-hf
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: label
dtype: int32
- name: idx
dtype: int32
- name: changed
dtype: bool
splits:
- name: et
num_bytes: 55987
num_examples: 500
- name: ht
num_bytes: 55689
num_examples: 500
- name: it
num_bytes: 59478
num_examples: 500
- name: id
num_bytes: 58155
num_examples: 500
- name: qu
num_bytes: 64673
num_examples: 500
- name: sw
num_bytes: 59586
num_examples: 500
- name: zh
num_bytes: 57100
num_examples: 500
- name: ta
num_bytes: 84633
num_examples: 500
- name: th
num_bytes: 55732
num_examples: 500
- name: tr
num_bytes: 55864
num_examples: 500
- name: vi
num_bytes: 55716
num_examples: 500
download_size: 1029561
dataset_size: 662613
- config_name: Llama-2-13b-hf
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: label
dtype: int32
- name: idx
dtype: int32
- name: changed
dtype: bool
splits:
- name: et
num_bytes: 57638
num_examples: 500
- name: ht
num_bytes: 58376
num_examples: 500
- name: it
num_bytes: 59731
num_examples: 500
- name: id
num_bytes: 57842
num_examples: 500
- name: qu
num_bytes: 67524
num_examples: 500
- name: sw
num_bytes: 63141
num_examples: 500
- name: zh
num_bytes: 57165
num_examples: 500
- name: ta
num_bytes: 68926
num_examples: 500
- name: th
num_bytes: 56742
num_examples: 500
- name: tr
num_bytes: 56300
num_examples: 500
- name: vi
num_bytes: 56077
num_examples: 500
download_size: 1026046
dataset_size: 659462
- config_name: Llama-2-7b-chat-hf
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: label
dtype: int32
- name: idx
dtype: int32
- name: changed
dtype: bool
splits:
- name: et
num_bytes: 50593
num_examples: 500
- name: ht
num_bytes: 64307
num_examples: 500
- name: it
num_bytes: 25365
num_examples: 500
- name: id
num_bytes: 51404
num_examples: 500
- name: qu
num_bytes: 77738
num_examples: 500
- name: sw
num_bytes: 64286
num_examples: 500
- name: zh
num_bytes: 21421
num_examples: 500
- name: ta
num_bytes: 80610
num_examples: 500
- name: th
num_bytes: 66935
num_examples: 500
- name: tr
num_bytes: 54474
num_examples: 500
- name: vi
num_bytes: 28370
num_examples: 500
download_size: 952208
dataset_size: 585503
- config_name: Llama-2-13b-chat-hf
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: label
dtype: int32
- name: idx
dtype: int32
- name: changed
dtype: bool
splits:
- name: et
num_bytes: 60368
num_examples: 500
- name: ht
num_bytes: 65837
num_examples: 500
- name: it
num_bytes: 59658
num_examples: 500
- name: id
num_bytes: 59141
num_examples: 500
- name: qu
num_bytes: 80708
num_examples: 500
- name: sw
num_bytes: 66850
num_examples: 500
- name: zh
num_bytes: 59536
num_examples: 500
- name: ta
num_bytes: 91955
num_examples: 500
- name: th
num_bytes: 65147
num_examples: 500
- name: tr
num_bytes: 56932
num_examples: 500
- name: vi
num_bytes: 57445
num_examples: 500
download_size: 1090195
dataset_size: 723577
---
# Dataset Card for XCOPA MT
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/cambridgeltl/xcopa](https://github.com/cambridgeltl/xcopa)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 4.08 MB
- **Size of the generated dataset:** 1.02 MB
- **Total amount of disk used:** 5.10 MB
### Dataset Summary
XCOPA: A Multilingual Dataset for Causal Commonsense Reasoning
The Cross-lingual Choice of Plausible Alternatives dataset is a benchmark to evaluate the ability of machine learning models to transfer commonsense reasoning across
languages. The dataset is the translation and reannotation of the English COPA (Roemmele et al. 2011) and covers 11 languages from 11 families and several areas around
the globe. The dataset is challenging as it requires both the command of world knowledge and the ability to generalise to new languages. All the details about the
creation of XCOPA and the implementation of the baselines are available in the paper.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
- et
- ht
- id
- it
- qu
- sw
- ta
- th
- tr
- vi
- zh
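Each configuration of this dataset corresponds to one MT system, and each of the language codes above is a split within that configuration (500 examples per language). A minimal loading sketch, assuming the Hugging Face `datasets` library is installed; the config and split names are taken from the metadata above, and the helper name `load_xcopa_mt` is illustrative:

```python
# Sketch: loading one machine-translated XCOPA split.
# Config names correspond to MT systems (see the YAML metadata above);
# splits are the XCOPA language codes.
LANGS = ["et", "ht", "id", "it", "qu", "sw", "ta", "th", "tr", "vi", "zh"]

def load_xcopa_mt(config: str = "nllb-200-distilled-600M", lang: str = "et"):
    """Return the 500-example split for one language of one config."""
    if lang not in LANGS:
        raise ValueError(f"unknown XCOPA language: {lang}")
    # Imported lazily so the validation above runs without the library.
    from datasets import load_dataset
    return load_dataset("juletxara/xcopa_mt", config, split=lang)
```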
## Dataset Structure
### Data Instances
#### et
- **Size of downloaded dataset files:** 0.37 MB
- **Size of the generated dataset:** 0.07 MB
- **Total amount of disk used:** 0.44 MB
An example of 'et' looks as follows.
```
{
"changed": false,
"choice1": "Ta kallas piima kaussi.",
"choice2": "Ta kaotas oma isu.",
"idx": 1,
"label": 1,
"premise": "Tüdruk leidis oma helveste seest putuka.",
"question": "effect"
}
```
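In each instance, `question` states whether the correct alternative is a cause or an effect of the premise, and `label` indexes the correct choice following the standard (X)COPA convention: 0 for `choice1`, 1 for `choice2`. A small sketch of recovering the gold answer from the example above (the helper name `gold_choice` is illustrative):

```python
# The Estonian example shown above: "The girl found a bug in her cereal."
# question "effect", label 1 -> choice2 ("She lost her appetite.").
example = {
    "changed": False,
    "choice1": "Ta kallas piima kaussi.",
    "choice2": "Ta kaotas oma isu.",
    "idx": 1,
    "label": 1,
    "premise": "Tüdruk leidis oma helveste seest putuka.",
    "question": "effect",
}

def gold_choice(ex: dict) -> str:
    """Map the integer label back to the text of the correct alternative."""
    return ex["choice2"] if ex["label"] == 1 else ex["choice1"]
```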
#### ht
- **Size of downloaded dataset files:** 0.37 MB
- **Size of the generated dataset:** 0.07 MB
- **Total amount of disk used:** 0.44 MB
An example of 'ht' looks as follows.
```
{
"changed": false,
"choice1": "Ta kallas piima kaussi.",
"choice2": "Ta kaotas oma isu.",
"idx": 1,
"label": 1,
"premise": "Tüdruk leidis oma helveste seest putuka.",
"question": "effect"
}
```
#### id
- **Size of downloaded dataset files:** 0.37 MB
- **Size of the generated dataset:** 0.07 MB
- **Total amount of disk used:** 0.45 MB
An example of 'validation' looks as follows.
```
{
"changed": false,
"choice1": "Ta kallas piima kaussi.",
"choice2": "Ta kaotas oma isu.",
"idx": 1,
"label": 1,
"premise": "Tüdruk leidis oma helveste seest putuka.",
"question": "effect"
}
```
#### it
- **Size of downloaded dataset files:** 0.37 MB
- **Size of the generated dataset:** 0.08 MB
- **Total amount of disk used:** 0.45 MB
An example of 'validation' looks as follows.
```
{
"changed": false,
"choice1": "Ta kallas piima kaussi.",
"choice2": "Ta kaotas oma isu.",
"idx": 1,
"label": 1,
"premise": "Tüdruk leidis oma helveste seest putuka.",
"question": "effect"
}
```
#### qu
- **Size of downloaded dataset files:** 0.37 MB
- **Size of the generated dataset:** 0.08 MB
- **Total amount of disk used:** 0.45 MB
An example of 'validation' looks as follows.
```
{
"changed": false,
"choice1": "Ta kallas piima kaussi.",
"choice2": "Ta kaotas oma isu.",
"idx": 1,
"label": 1,
"premise": "Tüdruk leidis oma helveste seest putuka.",
"question": "effect"
}
```
### Data Fields
The data fields are the same among all splits.
#### et
- `premise`: a `string` feature.
- `choice1`: a `string` feature.
- `choice2`: a `string` feature.
- `question`: a `string` feature.
- `label`: a `int32` feature.
- `idx`: a `int32` feature.
- `changed`: a `bool` feature.
#### ht
- `premise`: a `string` feature.
- `choice1`: a `string` feature.
- `choice2`: a `string` feature.
- `question`: a `string` feature.
- `label`: a `int32` feature.
- `idx`: a `int32` feature.
- `changed`: a `bool` feature.
#### id
- `premise`: a `string` feature.
- `choice1`: a `string` feature.
- `choice2`: a `string` feature.
- `question`: a `string` feature.
- `label`: a `int32` feature.
- `idx`: a `int32` feature.
- `changed`: a `bool` feature.
#### it
- `premise`: a `string` feature.
- `choice1`: a `string` feature.
- `choice2`: a `string` feature.
- `question`: a `string` feature.
- `label`: a `int32` feature.
- `idx`: a `int32` feature.
- `changed`: a `bool` feature.
#### qu
- `premise`: a `string` feature.
- `choice1`: a `string` feature.
- `choice2`: a `string` feature.
- `question`: a `string` feature.
- `label`: a `int32` feature.
- `idx`: a `int32` feature.
- `changed`: a `bool` feature.
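The fields above are all a model needs to score a COPA-style instance. As an illustration only, an instance can be turned into two candidate sentences by joining the premise and each choice with a connective; the "because"/"so" mapping is a common convention for COPA-style evaluation, not something prescribed by the dataset:

```python
def copa_candidates(example):
    """Build the two candidate sentences for a COPA-style example.

    `question` is either "cause" or "effect"; the connective used to
    join premise and choice is a common convention, not part of XCOPA.
    """
    premise = example["premise"].rstrip(".")
    connective = "because" if example["question"] == "cause" else "so"
    return [
        f"{premise} {connective} {example['choice1'].lower()}",
        f"{premise} {connective} {example['choice2'].lower()}",
    ]

# English rendering of the Estonian validation example shown above
example = {
    "premise": "The girl found a bug in her cereal.",
    "question": "effect",
    "choice1": "She poured milk in the bowl.",
    "choice2": "She lost her appetite.",
    "label": 1,
}
print(copa_candidates(example)[example["label"]])
```

A model is then scored on whether it assigns higher likelihood to the candidate indexed by `label`.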
### Data Splits
|name|validation|test|
|----|---------:|---:|
|et | 100| 500|
|ht | 100| 500|
|id | 100| 500|
|it | 100| 500|
|qu | 100| 500|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/).
### Citation Information
```
@article{ponti2020xcopa,
title={{XCOPA: A} Multilingual Dataset for Causal Commonsense Reasoning},
author={Edoardo M. Ponti, Goran Glava{\v{s}}, Olga Majewska, Qianchu Liu, Ivan Vuli{\'c} and Anna Korhonen},
journal={arXiv preprint},
year={2020},
url={https://ducdauge.github.io/files/xcopa.pdf}
}
@inproceedings{roemmele2011choice,
title={Choice of plausible alternatives: An evaluation of commonsense causal reasoning},
author={Roemmele, Melissa and Bejan, Cosmin Adrian and Gordon, Andrew S},
booktitle={2011 AAAI Spring Symposium Series},
year={2011},
url={https://people.ict.usc.edu/~gordon/publications/AAAI-SPRING11A.PDF},
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
result-kand2-sdxl-wuerst-karlo/908725e5 | 2023-09-19T00:19:24.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 669 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 161
num_examples: 10
download_size: 1318
dataset_size: 161
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "908725e5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
xiyuez/red-dot-design-award-product-description | 2023-07-07T18:32:48.000Z | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:odc-by",
"region:us"
] | xiyuez | null | null | null | 4 | 668 | ---
license: odc-by
task_categories:
- text-generation
language:
- en
pretty_name: Red Dot Design Award Dataset
size_categories:
- 10K<n<100K
---
# Red Dot Design Award Dataset
This dataset contains information about the products that have won the Red Dot Design Award, a prestigious international design competition. The data was extracted from the official website of the award: <https://www.red-dot.org/>.
## Task
The task for this dataset is text generation, specifically product description generation. Given a product name and category, the goal is to generate a concise and informative description that highlights the features and benefits of the product.
## Limitations
The dataset may have some limitations, such as:
- The data may contain false or outdated information, as it reflects the information available on the website at the time of extraction.
- The data only covers the products that have won the award, which may introduce some selection bias or limit the diversity of the data.
- The data is only in English, although the website also has a German version that could be crawled in the future.
- The data does not include any images of the products, which could be useful for multimodal language models. Images are planned to be scraped in the future.
## License
This public extract is licensed under the Open Data Commons Attribution License: <http://opendatacommons.org/licenses/by/1.0/>.
## Data Format
The dataset consists of 21183 unique rows, each containing the following columns:
- `product`: The name of the product that won the award.
- `category`: The category of the product, such as "Video Camera", "Bathroom Shelf", or "Mobile Home".
- `description`: A short paragraph describing the product, its features, and its benefits.
There is no predefined train/test split for this dataset.
Near-duplicates have been removed.
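For the product-description generation task described above, each row can be converted into a prompt/target training pair. A minimal sketch follows; the prompt template and the example row are our own illustration, not prescribed by the dataset:

```python
def to_prompt_pair(row):
    """Turn one dataset row into a (prompt, target) training pair.

    The instruction template is illustrative; any format works.
    """
    prompt = (
        f"Write a product description for '{row['product']}' "
        f"in the category '{row['category']}'."
    )
    return prompt, row["description"]

row = {
    "product": "AeroChair",  # hypothetical example row
    "category": "Office Chair",
    "description": "An ergonomic chair with a breathable mesh back.",
}
prompt, target = to_prompt_pair(row)
print(prompt)
```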
## Data Quality
The data quality may vary depending on the source and accuracy of the information on the website. We have not verified, filtered, or modified the data in any way. The data may contain content that is toxic, biased, copyrighted, or false. Use of this dataset is at your own risk. We do not provide any warranties or liability.
## Acknowledgements
We would like to acknowledge the Red Dot Design Award for hosting and maintaining the website that provided the data for this dataset. We do not claim any ownership or affiliation with the award or the website. |
yair-elboher/text-toy | 2023-10-06T09:35:55.000Z | [
"region:us"
] | yair-elboher | null | null | null | 0 | 668 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 10849
num_examples: 9
- name: validation
num_bytes: 8180
num_examples: 4
download_size: 30926
dataset_size: 19029
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
# Dataset Card for "text-toy"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
GEM/e2e_nlg | 2022-10-24T15:30:18.000Z | [
"task_categories:table-to-text",
"annotations_creators:none",
"language_creators:unknown",
"multilinguality:unknown",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"license:cc-by-sa-4.0",
"data-to-text",
"region:us"
] | GEM | The E2E dataset is designed for a limited-domain data-to-text task --
generation of restaurant descriptions/recommendations based on up to 8 different
attributes (name, area, price range etc.). | @inproceedings{e2e_cleaned,
address = {Tokyo, Japan},
title = {Semantic {Noise} {Matters} for {Neural} {Natural} {Language} {Generation}},
url = {https://www.aclweb.org/anthology/W19-8652/},
booktitle = {Proceedings of the 12th {International} {Conference} on {Natural} {Language} {Generation} ({INLG} 2019)},
author = {Dušek, Ondřej and Howcroft, David M and Rieser, Verena},
year = {2019},
pages = {421--426},
} | null | 2 | 667 | ---
annotations_creators:
- none
language_creators:
- unknown
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- unknown
size_categories:
- unknown
source_datasets:
- original
task_categories:
- table-to-text
task_ids: []
pretty_name: e2e_nlg
tags:
- data-to-text
---
# Dataset Card for GEM/e2e_nlg
## Dataset Description
- **Homepage:** http://www.macs.hw.ac.uk/InteractionLab/E2E/
- **Repository:** https://github.com/tuetschek/e2e-cleaning
- **Paper:** [First data release](https://www.aclweb.org/anthology/W17-5525/), [Detailed E2E Challenge writeup](https://doi.org/10.1016/j.csl.2019.06.009), [Cleaned E2E version](https://www.aclweb.org/anthology/W19-8652/)
- **Leaderboard:** N/A
- **Point of Contact:** Ondrej Dusek
### Link to Main Data Card
You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/e2e_nlg).
### Dataset Summary
The E2E NLG dataset is an English benchmark dataset for data-to-text models that verbalize a set of 2-9 key-value attribute pairs in the restaurant domain. The version used for GEM is the cleaned E2E NLG dataset, which filters examples with hallucinations and outputs that don't fully cover all input attributes.
You can load the dataset via:
```
import datasets
data = datasets.load_dataset('GEM/e2e_nlg')
```
The data loader can be found [here](https://huggingface.co/datasets/GEM/e2e_nlg).
#### website
[Website](http://www.macs.hw.ac.uk/InteractionLab/E2E/)
#### paper
[First data release](https://www.aclweb.org/anthology/W17-5525/), [Detailed E2E Challenge writeup](https://doi.org/10.1016/j.csl.2019.06.009), [Cleaned E2E version](https://www.aclweb.org/anthology/W19-8652/)
#### authors
Jekaterina Novikova, Ondrej Dusek and Verena Rieser
## Dataset Overview
### Where to find the Data and its Documentation
#### Webpage
<!-- info: What is the webpage for the dataset (if it exists)? -->
<!-- scope: telescope -->
[Website](http://www.macs.hw.ac.uk/InteractionLab/E2E/)
#### Download
<!-- info: What is the link to where the original dataset is hosted? -->
<!-- scope: telescope -->
[Github](https://github.com/tuetschek/e2e-cleaning)
#### Paper
<!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
<!-- scope: telescope -->
[First data release](https://www.aclweb.org/anthology/W17-5525/), [Detailed E2E Challenge writeup](https://doi.org/10.1016/j.csl.2019.06.009), [Cleaned E2E version](https://www.aclweb.org/anthology/W19-8652/)
#### BibTex
<!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
<!-- scope: microscope -->
```
@inproceedings{e2e_cleaned,
address = {Tokyo, Japan},
title = {Semantic {Noise} {Matters} for {Neural} {Natural} {Language} {Generation}},
url = {https://www.aclweb.org/anthology/W19-8652/},
booktitle = {Proceedings of the 12th {International} {Conference} on {Natural} {Language} {Generation} ({INLG} 2019)},
author = {Dušek, Ondřej and Howcroft, David M and Rieser, Verena},
year = {2019},
pages = {421--426},
}
```
#### Contact Name
<!-- quick -->
<!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
Ondrej Dusek
#### Contact Email
<!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
odusek@ufal.mff.cuni.cz
#### Has a Leaderboard?
<!-- info: Does the dataset have an active leaderboard? -->
<!-- scope: telescope -->
no
### Languages and Intended Use
#### Multilingual?
<!-- quick -->
<!-- info: Is the dataset multilingual? -->
<!-- scope: telescope -->
no
#### Covered Dialects
<!-- info: What dialects are covered? Are there multiple dialects per language? -->
<!-- scope: periscope -->
Dialect-specific data was not collected and the language is general British English.
#### Covered Languages
<!-- quick -->
<!-- info: What languages/dialects are covered in the dataset? -->
<!-- scope: telescope -->
`English`
#### Whose Language?
<!-- info: Whose language is in the dataset? -->
<!-- scope: periscope -->
The original dataset was collected using the CrowdFlower (now Appen) platform using native English speakers (self-reported). No demographic information was provided, but the collection was geographically limited to English-speaking countries.
#### License
<!-- quick -->
<!-- info: What is the license of the dataset? -->
<!-- scope: telescope -->
cc-by-sa-4.0: Creative Commons Attribution Share Alike 4.0 International
#### Intended Use
<!-- info: What is the intended use of the dataset? -->
<!-- scope: microscope -->
The dataset was collected to test neural models on a well-specified surface realization task.
#### Primary Task
<!-- info: What primary task does the dataset support? -->
<!-- scope: telescope -->
Data-to-Text
#### Communicative Goal
<!-- quick -->
<!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. -->
<!-- scope: periscope -->
Producing a text informing/recommending a restaurant, given all and only the attributes specified on the input.
### Credit
#### Curation Organization Type(s)
<!-- info: In what kind of organization did the dataset curation happen? -->
<!-- scope: telescope -->
`academic`
#### Curation Organization(s)
<!-- info: Name the organization(s). -->
<!-- scope: periscope -->
Heriot-Watt University
#### Dataset Creators
<!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). -->
<!-- scope: microscope -->
Jekaterina Novikova, Ondrej Dusek and Verena Rieser
#### Funding
<!-- info: Who funded the data creation? -->
<!-- scope: microscope -->
This research received funding from the EPSRC projects DILiGENt (EP/M005429/1) and MaDrIgAL (EP/N017536/1).
#### Who added the Dataset to GEM?
<!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. -->
<!-- scope: microscope -->
Simon Mille wrote the initial data card and Yacine Jernite the data loader. Sebastian Gehrmann migrated the data card to the v2 format and moved the data loader to the hub.
### Dataset Structure
#### Data Fields
<!-- info: List and describe the fields present in the dataset. -->
<!-- scope: telescope -->
The data is in a CSV format, with the following fields:
* `mr` -- the meaning representation (MR, input)
* `ref` -- reference, i.e. the corresponding natural-language description (output)
There are additional fields (`fixed`, `orig_mr`) indicating whether the data was modified in the
cleaning process and what the original MR was before cleaning, but these aren't used for NLG.
The MR has a flat structure -- attribute-value pairs are comma separated, with values
enclosed in brackets (see the example instance below). There are 8 attributes:
* `name` -- restaurant name
* `near` -- a landmark close to the restaurant
* `area` -- location (riverside, city centre)
* `food` -- food type / cuisine (e.g. Japanese, Indian, English etc.)
* `eatType` -- restaurant type (restaurant, coffee shop, pub)
* `priceRange` -- price range (low, medium, high, <£20, £20-30, >£30)
* `rating` -- customer rating (low, medium, high, 1/5, 3/5, 5/5)
* `familyFriendly` -- is the restaurant family-friendly (yes/no)
The same MR is often repeated multiple times with different synonymous references.
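The flat MR format described above is straightforward to parse into an attribute dict; a minimal sketch (field layout as documented here, splitting logic is our own and assumes values contain no `], ` sequence):

```python
def parse_mr(mr):
    """Parse a flat E2E meaning representation such as
    "name[Alimentum], area[riverside]" into an attribute dict."""
    attrs = {}
    # Attribute-value pairs are comma-separated; values sit in brackets.
    for part in mr.split("], "):
        key, _, value = part.partition("[")
        attrs[key.strip()] = value.rstrip("]")
    return attrs

mr = "name[Alimentum], area[riverside], familyFriendly[yes], near[Burger King]"
print(parse_mr(mr))
```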
#### How were labels chosen?
<!-- info: How were the labels chosen? -->
<!-- scope: microscope -->
The source MRs were generated automatically at random from a set of valid attribute values. The labels were crowdsourced and are natural language
#### Example Instance
<!-- info: Provide a JSON formatted example of a typical instance in the dataset. -->
<!-- scope: periscope -->
```
{
"input": "name[Alimentum], area[riverside], familyFriendly[yes], near[Burger King]",
"target": "Alimentum is a kids friendly place in the riverside area near Burger King."
}
```
#### Data Splits
<!-- info: Describe and name the splits in the dataset if there are more than one. -->
<!-- scope: periscope -->
| | MRs | Distinct MRs | References |
|-------------|------|--------------|------------|
| Training |12,568| 8,362 | 33,525 |
| Development | 1,484| 1,132 | 4,299 |
| Test | 1,847| 1,358 | 4,693 |
| Total |15,899| 10,852 | 42,517 |
“Distinct MRs” are MRs that remain distinct even if restaurant/place names (attributes `name`, `near`)
are delexicalized, i.e., replaced with a placeholder.
#### Splitting Criteria
<!-- info: Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. -->
<!-- scope: microscope -->
The data are divided so that MRs in different splits do not overlap.
## Dataset in GEM
### Rationale for Inclusion in GEM
#### Why is the Dataset in GEM?
<!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? -->
<!-- scope: microscope -->
The E2E dataset is one of the largest limited-domain NLG datasets and is frequently used as a data-to-text generation benchmark. The E2E Challenge included 20 systems of very different architectures, with system outputs available for download.
#### Similar Datasets
<!-- info: Do other datasets for the high level task exist? -->
<!-- scope: telescope -->
yes
#### Unique Language Coverage
<!-- info: Does this dataset cover other languages than other datasets for the same task? -->
<!-- scope: periscope -->
no
#### Difference from other GEM datasets
<!-- info: What else sets this dataset apart from other similar datasets in GEM? -->
<!-- scope: microscope -->
The dataset is much cleaner than comparable datasets, and it is also a relatively easy task, making for a straightforward evaluation.
#### Ability that the Dataset measures
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: periscope -->
Surface realization.
### GEM-Specific Curation
#### Modified for GEM?
<!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
<!-- scope: telescope -->
yes
#### Additional Splits?
<!-- info: Does GEM provide additional splits to the dataset? -->
<!-- scope: telescope -->
yes
#### Split Information
<!-- info: Describe how the new splits were created -->
<!-- scope: periscope -->
4 special test sets for E2E were added to the GEM evaluation suite.
1. We created subsets of the training and development sets of ~500 randomly selected inputs each.
2. We applied input scrambling on a subset of 500 randomly selected test instances; the order of the input properties was randomly reassigned.
3. For the input size, we created subpopulations based on the number of restaurant properties in the input.
| Input length | Frequency English |
|---------------|-------------------|
| 2 | 5 |
| 3 | 120 |
| 4 | 389 |
| 5 | 737 |
| 6 | 1187 |
| 7 | 1406 |
| 8 | 774 |
| 9 | 73 |
| 10 | 2 |
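The input scrambling in step 2 amounts to randomly reordering the attribute-value pairs of each MR. A minimal sketch of the idea (not the exact GEM script):

```python
import random

def scramble_mr(mr, seed=0):
    """Randomly reorder the comma-separated attribute[value] pairs of an MR.

    Mirrors the idea of the GEM scrambling split, not its exact code.
    """
    # Re-attach the closing bracket that split() strips from all but the last part.
    parts = [p if p.endswith("]") else p + "]" for p in mr.split("], ")]
    rng = random.Random(seed)
    rng.shuffle(parts)
    return ", ".join(parts)

mr = "name[Alimentum], area[riverside], near[Burger King]"
print(scramble_mr(mr))
```

Since only the order changes, a scrambled MR denotes exactly the same set of attributes as the original.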
#### Split Motivation
<!-- info: What aspects of the model's generation capacities were the splits created to test? -->
<!-- scope: periscope -->
Generalization and robustness
### Getting Started with the Task
## Previous Results
### Previous Results
#### Measured Model Abilities
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: telescope -->
Surface realization.
#### Metrics
<!-- info: What metrics are typically used for this task? -->
<!-- scope: periscope -->
`BLEU`, `METEOR`, `ROUGE`
#### Proposed Evaluation
<!-- info: List and describe the purpose of the metrics and evaluation methodology (including human evaluation) that the dataset creators used when introducing this task. -->
<!-- scope: microscope -->
The official evaluation script combines the MT-Eval and COCO Captioning libraries with the following metrics.
- BLEU
- CIDEr
- NIST
- METEOR
- ROUGE-L
#### Previous results available?
<!-- info: Are previous results available? -->
<!-- scope: telescope -->
yes
#### Other Evaluation Approaches
<!-- info: What evaluation approaches have others used? -->
<!-- scope: periscope -->
Most previous results, including the shared task results, used the library provided by the dataset creators. The shared task also conducted a human evaluation using the following two criteria:
- `Quality`: When collecting quality ratings, system outputs were presented to crowd workers together with the corresponding meaning representation, which implies that correctness of the NL utterance relative to the MR should also influence this ranking. The crowd workers were asked: “How do you judge the overall quality of the utterance in terms of its grammatical correctness, fluency, adequacy and other important factors?”
- `Naturalness`: When collecting naturalness ratings, system outputs were presented to crowd workers without the corresponding meaning representation. The crowd workers were asked: “Could the utterance have been produced by a native speaker?”
#### Relevant Previous Results
<!-- info: What are the most relevant previous results for this task/dataset? -->
<!-- scope: microscope -->
The shared task writeup has in-depth evaluations of systems (https://www.sciencedirect.com/science/article/pii/S0885230819300919)
## Dataset Curation
### Original Curation
#### Original Curation Rationale
<!-- info: Original curation rationale -->
<!-- scope: telescope -->
The dataset was collected to showcase/test neural NLG models. It is larger and contains more lexical richness and syntactic variation than previous closed-domain NLG datasets.
#### Communicative Goal
<!-- info: What was the communicative goal? -->
<!-- scope: periscope -->
Producing a text informing/recommending a restaurant, given all and only the attributes specified on the input.
#### Sourced from Different Sources
<!-- info: Is the dataset aggregated from different data sources? -->
<!-- scope: telescope -->
no
### Language Data
#### How was Language Data Obtained?
<!-- info: How was the language data obtained? -->
<!-- scope: telescope -->
`Crowdsourced`
#### Where was it crowdsourced?
<!-- info: If crowdsourced, where from? -->
<!-- scope: periscope -->
`Other crowdworker platform`
#### Language Producers
<!-- info: What further information do we have on the language producers? -->
<!-- scope: microscope -->
Human references describing the MRs were collected by crowdsourcing on the CrowdFlower (now Appen) platform,
with either textual or pictorial MRs as a baseline.
The pictorial MRs were used in 20% of cases -- these yield higher lexical variation but introduce noise.
#### Topics Covered
<!-- info: Does the language in the dataset focus on specific topics? How would you describe them? -->
<!-- scope: periscope -->
The dataset is focused on descriptions of restaurants.
#### Data Validation
<!-- info: Was the text validated by a different worker or a data curator? -->
<!-- scope: telescope -->
validated by data curator
#### Data Preprocessing
<!-- info: How was the text data pre-processed? (Enter N/A if the text was not pre-processed) -->
<!-- scope: microscope -->
There were basic checks (length, valid characters, repetition).
#### Was Data Filtered?
<!-- info: Were text instances selected or filtered? -->
<!-- scope: telescope -->
algorithmically
#### Filter Criteria
<!-- info: What were the selection criteria? -->
<!-- scope: microscope -->
The cleaned version of the dataset used in GEM was algorithmically filtered. The authors used regular expressions to match each human-generated reference with a more accurate input when attributes were hallucinated or dropped. Additionally, train-test overlap stemming from the transformation was removed. As a result, this data is much cleaner than the original dataset but not perfect (about 20% of instances may have misaligned slots, compared to 40% in the original data).
### Structured Annotations
#### Additional Annotations?
<!-- quick -->
<!-- info: Does the dataset have additional annotations for each instance? -->
<!-- scope: telescope -->
none
#### Annotation Service?
<!-- info: Was an annotation service used? -->
<!-- scope: telescope -->
no
### Consent
#### Any Consent Policy?
<!-- info: Was there a consent policy involved when gathering the data? -->
<!-- scope: telescope -->
yes
#### Consent Policy Details
<!-- info: What was the consent policy? -->
<!-- scope: microscope -->
Since a crowdsourcing platform was used, the involved raters waived their rights to the data and are aware that the produced annotations can be publicly released.
### Private Identifying Information (PII)
#### Contains PII?
<!-- quick -->
<!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? -->
<!-- scope: telescope -->
no PII
#### Justification for no PII
<!-- info: Provide a justification for selecting `no PII` above. -->
<!-- scope: periscope -->
The dataset is artificial and does not contain any description of people.
### Maintenance
#### Any Maintenance Plan?
<!-- info: Does the original dataset have a maintenance plan? -->
<!-- scope: telescope -->
no
## Broader Social Context
### Previous Work on the Social Impact of the Dataset
#### Usage of Models based on the Data
<!-- info: Are you aware of cases where models trained on the task featured in this dataset or related tasks have been used in automated systems? -->
<!-- scope: telescope -->
no
### Impact on Under-Served Communities
#### Addresses needs of underserved Communities?
<!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for example because their language, language variety, or social or geographical context is underrepresented in NLP and NLG resources (datasets and models). -->
<!-- scope: telescope -->
no
### Discussion of Biases
#### Any Documented Social Biases?
<!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
<!-- scope: telescope -->
no
#### Are the Language Producers Representative of the Language?
<!-- info: Does the distribution of language producers in the dataset accurately represent the full distribution of speakers of the language world-wide? If not, how does it differ? -->
<!-- scope: periscope -->
The source data is generated randomly, so it should not contain biases. The human references may be biased by the workers' demographic, but that was not investigated upon data collection.
## Considerations for Using the Data
### PII Risks and Liability
### Licenses
#### Copyright Restrictions on the Dataset
<!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? -->
<!-- scope: periscope -->
`open license - commercial use allowed`
#### Copyright Restrictions on the Language Data
<!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? -->
<!-- scope: periscope -->
`open license - commercial use allowed`
### Known Technical Limitations
#### Technical Limitations
<!-- info: Describe any known technical limitations, such as spurious correlations, train/test overlap, annotation biases, or mis-annotations, and cite the works that first identified these limitations when possible. -->
<!-- scope: microscope -->
The cleaned version still has data points with hallucinated or omitted attributes.
#### Unsuited Applications
<!-- info: When using a model trained on this dataset in a setting where users or the public may interact with its predictions, what are some pitfalls to look out for? In particular, describe some applications of the general task featured in this dataset that its curation or properties make it less suitable for. -->
<!-- scope: microscope -->
The data only pertains to the restaurant domain and the included attributes. A model cannot be expected to handle other domains or attributes.
|
ywchoi/pubmed_abstract_0 | 2022-09-13T00:53:42.000Z | [
"region:us"
] | ywchoi | null | null | null | 1 | 667 | Entry not found |
result-kand2-sdxl-wuerst-karlo/9e7f6f37 | 2023-09-19T00:24:03.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 667 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 152
num_examples: 10
download_size: 1303
dataset_size: 152
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "9e7f6f37"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
EleutherAI/the_pile_deduplicated | 2022-12-02T23:49:09.000Z | [
"region:us"
] | EleutherAI | null | null | null | 39 | 666 | Entry not found |
lmsys/mt_bench_human_judgments | 2023-07-20T18:28:15.000Z | [
"task_categories:conversational",
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-4.0",
"arxiv:2306.05685",
"region:us"
] | lmsys | null | null | null | 32 | 666 | ---
dataset_info:
features:
- name: question_id
dtype: int64
- name: model_a
dtype: string
- name: model_b
dtype: string
- name: winner
dtype: string
- name: judge
dtype: string
- name: conversation_a
list:
- name: content
dtype: string
- name: role
dtype: string
- name: conversation_b
list:
- name: content
dtype: string
- name: role
dtype: string
- name: turn
dtype: int64
splits:
- name: human
num_bytes: 15003469
num_examples: 3355
- name: gpt4_pair
num_bytes: 10679650
num_examples: 2400
download_size: 1388888
dataset_size: 25683119
license: cc-by-4.0
task_categories:
- conversational
- question-answering
language:
- en
size_categories:
- 1K<n<10K
---
## Content
This dataset contains 3.3K expert-level pairwise human preferences for model responses generated by six models in response to 80 MT-bench questions.
The six models are GPT-4, GPT-3.5, Claude-v1, Vicuna-13B, Alpaca-13B, and LLaMA-13B. The annotators are mostly graduate students with expertise in the topic areas of the questions. The details of data collection can be found in our [paper](https://arxiv.org/abs/2306.05685).
## Agreement Calculation
This Colab [notebook](https://colab.research.google.com/drive/1ctgygDRJhVGUJTQy8-bRZCl1WNcT8De6?usp=sharing) shows how to compute the agreement between human annotators and the GPT-4 judge with this dataset. Our results show that humans and the GPT-4 judge reach over 80% agreement, the same level of agreement as between humans.
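The matching step can be sketched in a few lines. The toy verdicts below are invented for illustration, not records from the dataset; the real notebook aligns human and GPT-4 judgements on the same (question_id, model_a, model_b) key in this fashion:

```python
# Hypothetical verdicts keyed by (question_id, model_a, model_b);
# "model_a"/"model_b" mirror the `winner` field in the dataset schema.
human_votes = {
    (81, "gpt-4", "vicuna-13b"): "model_a",
    (82, "gpt-4", "alpaca-13b"): "model_a",
    (83, "gpt-3.5-turbo", "llama-13b"): "model_b",
}
gpt4_votes = {
    (81, "gpt-4", "vicuna-13b"): "model_a",
    (82, "gpt-4", "alpaca-13b"): "model_b",
    (83, "gpt-3.5-turbo", "llama-13b"): "model_b",
}

def agreement_rate(a, b):
    """Fraction of shared comparisons where both judges pick the same winner."""
    shared = a.keys() & b.keys()
    return sum(a[k] == b[k] for k in shared) / len(shared)

print(agreement_rate(human_votes, gpt4_votes))  # 2 of 3 verdicts match
```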
## Citation
```
@misc{zheng2023judging,
title={Judging LLM-as-a-judge with MT-Bench and Chatbot Arena},
author={Lianmin Zheng and Wei-Lin Chiang and Ying Sheng and Siyuan Zhuang and Zhanghao Wu and Yonghao Zhuang and Zi Lin and Zhuohan Li and Dacheng Li and Eric. P Xing and Hao Zhang and Joseph E. Gonzalez and Ion Stoica},
year={2023},
eprint={2306.05685},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
result-kand2-sdxl-wuerst-karlo/e87ec3b2 | 2023-09-19T00:21:50.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 666 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 153
num_examples: 10
download_size: 1306
dataset_size: 153
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "e87ec3b2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
result-kand2-sdxl-wuerst-karlo/ad45b2bb | 2023-09-19T01:34:46.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 664 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 188
num_examples: 10
download_size: 1388
dataset_size: 188
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ad45b2bb"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
keremberke/chest-xray-classification | 2023-01-18T09:25:27.000Z | [
"task_categories:image-classification",
"roboflow",
"roboflow2huggingface",
"Biology",
"region:us"
] | keremberke | null | \ | null | 9 | 663 | ---
task_categories:
- image-classification
tags:
- roboflow
- roboflow2huggingface
- Biology
---
<div align="center">
<img width="640" alt="keremberke/chest-xray-classification" src="https://huggingface.co/datasets/keremberke/chest-xray-classification/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['NORMAL', 'PNEUMONIA']
```
### Number of Images
```json
{'train': 4077, 'test': 582, 'valid': 1165}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/chest-xray-classification", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/mohamed-traore-2ekkp/chest-x-rays-qjmia/dataset/2](https://universe.roboflow.com/mohamed-traore-2ekkp/chest-x-rays-qjmia/dataset/2?ref=roboflow2huggingface)
### Citation
```
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.ai on March 31, 2022 at 3:11 PM GMT
It includes 5824 images.
Pneumonia cases are annotated in folder format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 640x640 (Stretch)
No image augmentation techniques were applied.
|
Biddls/Onion_News | 2023-03-25T12:57:47.000Z | [
"task_categories:summarization",
"task_categories:text2text-generation",
"task_categories:text-generation",
"task_categories:text-classification",
"language:en",
"license:mit",
"region:us"
] | Biddls | null | null | null | 1 | 661 | ---
license: mit
task_categories:
- summarization
- text2text-generation
- text-generation
- text-classification
language:
- en
pretty_name: OnionNewsScrape
---
## This is a dataset of Onion news articles:
Notes:
- The header and body of each news article are separated by a ' #~# ' token
- Lines containing only the token had no body or no header and can be skipped
- Feel free to use the provided script to scrape the latest version; it takes about 30 minutes on an i7-6850K |
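A minimal parser for the format described in the Onion notes might look like this; the sample lines are invented for illustration, and ' #~# ' is the separator token named above:

```python
TOKEN = " #~# "  # separator between header and body, per the dataset notes

def parse_onion_lines(lines):
    """Yield (header, body) pairs, skipping lines missing either part."""
    for line in lines:
        if TOKEN not in line:
            continue
        header, _, body = line.partition(TOKEN)
        header, body = header.strip(), body.strip()
        if header and body:  # lines with just the token lack one side
            yield header, body

sample = [
    "Area Man Does Thing #~# In a stunning turn of events...",
    " #~# stray body with no header",
]
print(list(parse_onion_lines(sample)))
```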
result-kand2-sdxl-wuerst-karlo/4b9958b5 | 2023-09-19T02:29:31.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 661 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 167
num_examples: 10
download_size: 1331
dataset_size: 167
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "4b9958b5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
unitxt/data | 2023-10-03T13:07:44.000Z | [
"license:apache-2.0",
"region:us"
] | unitxt | null | null | null | 0 | 658 | ---
license: apache-2.0
---
|
yuchenlin/just-eval-instruct | 2023-10-07T06:44:23.000Z | [
"region:us"
] | yuchenlin | null | null | null | 2 | 656 | ---
configs:
- config_name: default
data_files:
- split: test
path: "test.jsonl"
- config_name: responses
data_files:
- split: gpt_4
path: "responses/gpt-4.json"
- split: gpt_3.5_turbo
path: "responses/gpt-3.5-turbo.json"
- split: vicuna_7b_v1.5
path: "responses/vicuna-7b-v1.5.json"
- split: Llama_2_7b_chat
path: "responses/Llama-2-7b-chat-hf.json"
- split: Llama_2_13b_chat
path: "responses/Llama-2-13b-chat-hf.json"
- split: Llama_2_70b_chat_gptq
path: "responses/Llama-2-70B-chat-GPTQ.json"
- config_name: judgements
data_files:
- split: gpt_4
path: "judgements/score_multi_gpt4/gpt-4-0314.score_multi.gpt4.jsonl"
- split: gpt_3.5_turbo
path: "judgements/score_multi_gpt4/gpt-3.5-turbo-0613.score_multi.gpt4.jsonl"
- split: vicuna_7b_v1.5
path: "judgements/score_multi_gpt4/vicuna-7b-v1.5.score_multi.gpt4.jsonl"
- split: Llama_2_7b_chat
path: "judgements/score_multi_gpt4/Llama-2-7b-chat-hf.score_multi.gpt4.jsonl"
- split: Llama_2_13b_chat
path: "judgements/score_multi_gpt4/Llama-2-13b-chat-hf.score_multi.gpt4.jsonl"
- split: Llama_2_70b_chat_gptq
path: "judgements/score_multi_gpt4/Llama-2-70B-chat-GPTQ.score_multi.gpt4.jsonl"
---
## Just Eval Instruct! |
openslr | 2023-06-01T14:59:55.000Z | [
"task_categories:automatic-speech-recognition",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:af",
"language:bn",
"language:ca",
"language:en",
"language:es",
"language:eu",
"language:gl",
"language:gu",
"language:jv",
"language:km",
"language:kn",
"language:ml",
"language:mr",
"language:my",
"language:ne",
"language:si",
"language:st",
"language:su",
"language:ta",
"language:te",
"language:tn",
"language:ve",
"language:xh",
"language:yo",
"license:cc-by-sa-4.0",
"region:us"
] | null | OpenSLR is a site devoted to hosting speech and language resources, such as training corpora for speech recognition,
and software related to speech recognition. We intend to be a convenient place for anyone to put resources that
they have created, so that they can be downloaded publicly. | SLR32:
@inproceedings{van-niekerk-etal-2017,
title = {{Rapid development of TTS corpora for four South African languages}},
author = {Daniel van Niekerk and Charl van Heerden and Marelie Davel and Neil Kleynhans and Oddur Kjartansson
and Martin Jansche and Linne Ha},
booktitle = {Proc. Interspeech 2017},
pages = {2178--2182},
address = {Stockholm, Sweden},
month = aug,
year = {2017},
URL = {http://dx.doi.org/10.21437/Interspeech.2017-1139}
}
SLR35, SLR36, SLR52, SLR53, SLR54:
@inproceedings{kjartansson-etal-sltu2018,
title = {{Crowd-Sourced Speech Corpora for Javanese, Sundanese, Sinhala, Nepali, and Bangladeshi Bengali}},
author = {Oddur Kjartansson and Supheakmungkol Sarin and Knot Pipatsrisawat and Martin Jansche and Linne Ha},
booktitle = {Proc. The 6th Intl. Workshop on Spoken Language Technologies for Under-Resourced Languages (SLTU)},
year = {2018},
address = {Gurugram, India},
month = aug,
pages = {52--55},
URL = {https://dx.doi.org/10.21437/SLTU.2018-11},
}
SLR41, SLR42, SLR43, SLR44:
@inproceedings{kjartansson-etal-tts-sltu2018,
title = {{A Step-by-Step Process for Building TTS Voices Using Open Source Data and Framework for Bangla, Javanese,
Khmer, Nepali, Sinhala, and Sundanese}},
author = {Keshan Sodimana and Knot Pipatsrisawat and Linne Ha and Martin Jansche and Oddur Kjartansson and Pasindu
De Silva and Supheakmungkol Sarin},
booktitle = {Proc. The 6th Intl. Workshop on Spoken Language Technologies for Under-Resourced Languages (SLTU)},
year = {2018},
address = {Gurugram, India},
month = aug,
pages = {66--70},
URL = {https://dx.doi.org/10.21437/SLTU.2018-14}
}
SLR63, SLR64, SLR65, SLR66, SLR78, SLR79:
@inproceedings{he-etal-2020-open,
title = {{Open-source Multi-speaker Speech Corpora for Building Gujarati, Kannada, Malayalam, Marathi, Tamil and
Telugu Speech Synthesis Systems}},
author = {He, Fei and Chu, Shan-Hui Cathy and Kjartansson, Oddur and Rivera, Clara and Katanova, Anna and Gutkin,
Alexander and Demirsahin, Isin and Johny, Cibu and Jansche, Martin and Sarin, Supheakmungkol and Pipatsrisawat, Knot},
booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference (LREC)},
month = may,
year = {2020},
address = {Marseille, France},
publisher = {European Language Resources Association (ELRA)},
pages = {6494--6503},
url = {https://www.aclweb.org/anthology/2020.lrec-1.800},
ISBN = {979-10-95546-34-4},
}
SLR69, SLR76, SLR77:
@inproceedings{kjartansson-etal-2020-open,
title = {{Open-Source High Quality Speech Datasets for Basque, Catalan and Galician}},
author = {Kjartansson, Oddur and Gutkin, Alexander and Butryna, Alena and Demirsahin, Isin and Rivera, Clara},
booktitle = {Proceedings of the 1st Joint Workshop on Spoken Language Technologies for Under-resourced languages
(SLTU) and Collaboration and Computing for Under-Resourced Languages (CCURL)},
year = {2020},
pages = {21--27},
month = may,
address = {Marseille, France},
publisher = {European Language Resources association (ELRA)},
url = {https://www.aclweb.org/anthology/2020.sltu-1.3},
ISBN = {979-10-95546-35-1},
}
SLR71, SLR72, SLR73, SLR74, SLR75:
@inproceedings{guevara-rukoz-etal-2020-crowdsourcing,
title = {{Crowdsourcing Latin American Spanish for Low-Resource Text-to-Speech}},
author = {Guevara-Rukoz, Adriana and Demirsahin, Isin and He, Fei and Chu, Shan-Hui Cathy and Sarin,
Supheakmungkol and Pipatsrisawat, Knot and Gutkin, Alexander and Butryna, Alena and Kjartansson, Oddur},
booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference (LREC)},
year = {2020},
month = may,
address = {Marseille, France},
publisher = {European Language Resources Association (ELRA)},
url = {https://www.aclweb.org/anthology/2020.lrec-1.801},
pages = {6504--6513},
ISBN = {979-10-95546-34-4},
}
SLR80
@inproceedings{oo-etal-2020-burmese,
title = {{Burmese Speech Corpus, Finite-State Text Normalization and Pronunciation Grammars with an Application
to Text-to-Speech}},
author = {Oo, Yin May and Wattanavekin, Theeraphol and Li, Chenfang and De Silva, Pasindu and Sarin,
Supheakmungkol and Pipatsrisawat, Knot and Jansche, Martin and Kjartansson, Oddur and Gutkin, Alexander},
booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference (LREC)},
month = may,
year = {2020},
pages = {6328--6339},
address = {Marseille, France},
publisher = {European Language Resources Association (ELRA)},
url = {https://www.aclweb.org/anthology/2020.lrec-1.777},
ISBN = {979-10-95546-34-4},
}
SLR86
@inproceedings{gutkin-et-al-yoruba2020,
title = {{Developing an Open-Source Corpus of Yoruba Speech}},
author = {Alexander Gutkin and Işın Demirşahin and Oddur Kjartansson and Clara Rivera and Kọ́lá Túbọ̀sún},
booktitle = {Proceedings of Interspeech 2020},
pages = {404--408},
month = {October},
year = {2020},
address = {Shanghai, China},
publisher = {International Speech and Communication Association (ISCA)},
doi = {10.21437/Interspeech.2020-1096},
url = {https://dx.doi.org/10.21437/Interspeech.2020-1096},
} | null | 11 | 655 | ---
pretty_name: OpenSLR
annotations_creators:
- found
language_creators:
- found
language:
- af
- bn
- ca
- en
- es
- eu
- gl
- gu
- jv
- km
- kn
- ml
- mr
- my
- ne
- si
- st
- su
- ta
- te
- tn
- ve
- xh
- yo
language_bcp47:
- en-GB
- en-IE
- en-NG
- es-CL
- es-CO
- es-PE
- es-PR
license:
- cc-by-sa-4.0
multilinguality:
- multilingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- automatic-speech-recognition
task_ids: []
paperswithcode_id: null
dataset_info:
- config_name: SLR41
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 2423902
num_examples: 5822
download_size: 1890792360
dataset_size: 2423902
- config_name: SLR42
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 1427984
num_examples: 2906
download_size: 866086951
dataset_size: 1427984
- config_name: SLR43
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 1074005
num_examples: 2064
download_size: 800375645
dataset_size: 1074005
- config_name: SLR44
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 1776827
num_examples: 4213
download_size: 1472252752
dataset_size: 1776827
- config_name: SLR63
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 2016587
num_examples: 4126
download_size: 1345876299
dataset_size: 2016587
- config_name: SLR64
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 810375
num_examples: 1569
download_size: 712155683
dataset_size: 810375
- config_name: SLR65
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 2136447
num_examples: 4284
download_size: 1373304655
dataset_size: 2136447
- config_name: SLR66
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 1898335
num_examples: 4448
download_size: 1035127870
dataset_size: 1898335
- config_name: SLR69
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 1647263
num_examples: 4240
download_size: 1848659543
dataset_size: 1647263
- config_name: SLR35
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 73565374
num_examples: 185076
download_size: 18900105726
dataset_size: 73565374
- config_name: SLR36
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 88942337
num_examples: 219156
download_size: 22996553929
dataset_size: 88942337
- config_name: SLR70
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 1339608
num_examples: 3359
download_size: 1213955196
dataset_size: 1339608
- config_name: SLR71
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 1676273
num_examples: 4374
download_size: 1445365903
dataset_size: 1676273
- config_name: SLR72
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 1876301
num_examples: 4903
download_size: 1612030532
dataset_size: 1876301
- config_name: SLR73
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 2084052
num_examples: 5447
download_size: 1940306814
dataset_size: 2084052
- config_name: SLR74
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 237395
num_examples: 617
download_size: 214181314
dataset_size: 237395
- config_name: SLR75
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 1286937
num_examples: 3357
download_size: 1043317004
dataset_size: 1286937
- config_name: SLR76
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 2756507
num_examples: 7136
download_size: 3041125513
dataset_size: 2756507
- config_name: SLR77
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 2217652
num_examples: 5587
download_size: 2207991775
dataset_size: 2217652
- config_name: SLR78
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 2121986
num_examples: 4272
download_size: 1743222102
dataset_size: 2121986
- config_name: SLR79
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 2176539
num_examples: 4400
download_size: 1820919115
dataset_size: 2176539
- config_name: SLR80
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 1308651
num_examples: 2530
download_size: 948181015
dataset_size: 1308651
- config_name: SLR86
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 1378801
num_examples: 3583
download_size: 907065562
dataset_size: 1378801
- config_name: SLR32
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 4544052380
num_examples: 9821
download_size: 3312884763
dataset_size: 4544052380
- config_name: SLR52
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 77369899
num_examples: 185293
download_size: 14676484074
dataset_size: 77369899
- config_name: SLR53
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 88073248
num_examples: 218703
download_size: 14630810921
dataset_size: 88073248
- config_name: SLR54
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 62735822
num_examples: 157905
download_size: 9328247362
dataset_size: 62735822
- config_name: SLR83
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 7098985
num_examples: 17877
download_size: 7229890819
dataset_size: 7098985
config_names:
- SLR32
- SLR35
- SLR36
- SLR41
- SLR42
- SLR43
- SLR44
- SLR52
- SLR53
- SLR54
- SLR63
- SLR64
- SLR65
- SLR66
- SLR69
- SLR70
- SLR71
- SLR72
- SLR73
- SLR74
- SLR75
- SLR76
- SLR77
- SLR78
- SLR79
- SLR80
- SLR83
- SLR86
---
# Dataset Card for openslr
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.openslr.org/
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
OpenSLR is a site devoted to hosting speech and language resources, such as training corpora for speech recognition,
and software related to speech recognition. Currently, the following resources are available:
#### SLR32: High quality TTS data for four South African languages (af, st, tn, xh).
This data set contains multi-speaker high quality transcribed audio data for four languages of South Africa.
The data set consists of wave files, and a TSV file transcribing the audio. In each folder, the file line_index.tsv
contains a FileID, which in turn contains the UserID and the Transcription of audio in the file.
The data set has had some quality checks, but there might still be errors.
This data set was collected as a collaboration between North West University and Google.
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See https://github.com/google/language-resources#license for license information.
Copyright 2017 Google, Inc.
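As a sketch of the layout just described, each line_index.tsv row can be split into its FileID and transcription, with the UserID pulled out of the FileID. The sample rows and the `<lang>_<userid>_<hash>` FileID layout below are assumptions for illustration, not the verified on-disk format:

```python
import csv
import io

def parse_line_index(text):
    """Return (user_id, transcription) pairs from a line_index.tsv payload.

    Assumes the UserID is the second underscore-separated field of the
    FileID; check the actual files before relying on this layout.
    """
    rows = []
    for file_id, transcription in csv.reader(io.StringIO(text), delimiter="\t"):
        rows.append((file_id.split("_")[1], transcription))
    return rows

# Made-up Afrikaans sample rows in the assumed format.
sample_tsv = "af_0184_0157937645\tDit is 'n voorbeeldsin.\naf_1919_0837914183\tNog 'n sin.\n"
print(parse_line_index(sample_tsv))
```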
#### SLR35: Large Javanese ASR training data set.
This data set contains transcribed audio data for Javanese (~185K utterances). The data set consists of wave files,
and a TSV file. The file utt_spk_text.tsv contains a FileID, UserID and the transcription of audio in the file.
The data set has been manually quality checked, but there might still be errors.
This dataset was collected by Google in collaboration with Reykjavik University and Universitas Gadjah Mada
in Indonesia.
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/35/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2016, 2017 Google, Inc.
#### SLR36: Large Sundanese ASR training data set.
This data set contains transcribed audio data for Sundanese (~220K utterances). The data set consists of wave files,
and a TSV file. The file utt_spk_text.tsv contains a FileID, UserID and the transcription of audio in the file.
The data set has been manually quality checked, but there might still be errors.
This dataset was collected by Google in Indonesia.
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/36/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2016, 2017 Google, Inc.
#### SLR41: High quality TTS data for Javanese.
This data set contains high-quality transcribed audio data for Javanese. The data set consists of wave files,
and a TSV file. The file line_index.tsv contains a filename and the transcription of audio in the file. Each
filename is prepended with a speaker identification number.
The data set has been manually quality checked, but there might still be errors.
This dataset was collected by Google in collaboration with Gadjah Mada University in Indonesia.
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/41/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2016, 2017, 2018 Google LLC
#### SLR42: High quality TTS data for Khmer.
This data set contains high-quality transcribed audio data for Khmer. The data set consists of wave files,
and a TSV file. The file line_index.tsv contains a filename and the transcription of audio in the file.
Each filename is prepended with a speaker identification number.
The data set has been manually quality checked, but there might still be errors.
This dataset was collected by Google.
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/42/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2016, 2017, 2018 Google LLC
#### SLR43: High quality TTS data for Nepali.
This data set contains high-quality transcribed audio data for Nepali. The data set consists of wave files,
and a TSV file. The file line_index.tsv contains a filename and the transcription of audio in the file.
Each filename is prepended with a speaker identification number.
The data set has been manually quality checked, but there might still be errors.
This dataset was collected by Google in Nepal.
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/43/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2016, 2017, 2018 Google LLC
#### SLR44: High quality TTS data for Sundanese.
This data set contains high-quality transcribed audio data for Sundanese. The data set consists of wave files,
and a TSV file. The file line_index.tsv contains a filename and the transcription of audio in the file.
Each filename is prepended with a speaker identification number.
The data set has been manually quality checked, but there might still be errors.
This dataset was collected by Google in collaboration with Universitas Pendidikan Indonesia.
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/44/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2016, 2017, 2018 Google LLC
#### SLR52: Large Sinhala ASR training data set.
This data set contains transcribed audio data for Sinhala (~185K utterances). The data set consists of wave files,
and a TSV file. The file utt_spk_text.tsv contains a FileID, UserID and the transcription of audio in the file.
The data set has been manually quality checked, but there might still be errors.
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/52/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2016, 2017, 2018 Google, Inc.
#### SLR53: Large Bengali ASR training data set.
This data set contains transcribed audio data for Bengali (~196K utterances). The data set consists of wave files,
and a TSV file. The file utt_spk_text.tsv contains a FileID, UserID and the transcription of audio in the file.
The data set has been manually quality checked, but there might still be errors.
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/53/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2016, 2017, 2018 Google, Inc.
#### SLR54: Large Nepali ASR training data set.
This data set contains transcribed audio data for Nepali (~157K utterances). The data set consists of wave files,
and a TSV file. The file utt_spk_text.tsv contains a FileID, UserID and the transcription of audio in the file.
The data set has been manually quality checked, but there might still be errors.
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/54/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2016, 2017, 2018 Google, Inc.
#### SLR63: Crowdsourced high-quality Malayalam multi-speaker speech data set
This data set contains transcribed high-quality audio of Malayalam sentences recorded by volunteers. The data set
consists of wave files, and a TSV file (line_index.tsv). The file line_index.tsv contains an anonymized FileID and
the transcription of audio in the file.
The data set has been manually quality checked, but there might still be errors.
Please report any issues in the following issue tracker on GitHub. https://github.com/googlei18n/language-resources/issues
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/63/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2018, 2019 Google, Inc.
#### SLR64: Crowdsourced high-quality Marathi multi-speaker speech data set
This data set contains transcribed high-quality audio of Marathi sentences recorded by volunteers. The data set
consists of wave files, and a TSV file (line_index.tsv). The file line_index.tsv contains an anonymized FileID and
the transcription of audio in the file.
The data set has been manually quality checked, but there might still be errors.
Please report any issues in the following issue tracker on GitHub. https://github.com/googlei18n/language-resources/issues
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/64/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2018, 2019 Google, Inc.
#### SLR65: Crowdsourced high-quality Tamil multi-speaker speech data set
This data set contains transcribed high-quality audio of Tamil sentences recorded by volunteers. The data set
consists of wave files, and a TSV file (line_index.tsv). The file line_index.tsv contains an anonymized FileID and
the transcription of audio in the file.
The data set has been manually quality checked, but there might still be errors.
Please report any issues in the following issue tracker on GitHub. https://github.com/googlei18n/language-resources/issues
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/65/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2018, 2019 Google, Inc.
#### SLR66: Crowdsourced high-quality Telugu multi-speaker speech data set
This data set contains transcribed high-quality audio of Telugu sentences recorded by volunteers. The data set
consists of wave files, and a TSV file (line_index.tsv). The file line_index.tsv contains an anonymized FileID and
the transcription of the audio in the file.
The data set has been manually quality checked, but there might still be errors.
Please report any issues in the following issue tracker on GitHub. https://github.com/googlei18n/language-resources/issues
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/66/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2018, 2019 Google, Inc.
#### SLR69: Crowdsourced high-quality Catalan multi-speaker speech data set
This data set contains transcribed high-quality audio of Catalan sentences recorded by volunteers. The data set
consists of wave files, and a TSV file (line_index.tsv). The file line_index.tsv contains an anonymized FileID and
the transcription of the audio in the file.
The data set has been manually quality checked, but there might still be errors.
Please report any issues in the following issue tracker on GitHub. https://github.com/googlei18n/language-resources/issues
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/69/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2018, 2019 Google, Inc.
#### SLR70: Crowdsourced high-quality Nigerian English speech data set
This data set contains transcribed high-quality audio of Nigerian English sentences recorded by volunteers. The data set
consists of wave files, and a TSV file (line_index.tsv). The file line_index.tsv contains an anonymized FileID and
the transcription of the audio in the file.
The data set has been manually quality checked, but there might still be errors.
Please report any issues in the following issue tracker on GitHub. https://github.com/googlei18n/language-resources/issues
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/70/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2018, 2019 Google, Inc.
#### SLR71: Crowdsourced high-quality Chilean Spanish speech data set
This data set contains transcribed high-quality audio of Chilean Spanish sentences recorded by volunteers. The data set
consists of wave files, and a TSV file (line_index.tsv). The file line_index.tsv contains an anonymized FileID and
the transcription of the audio in the file.
The data set has been manually quality checked, but there might still be errors.
Please report any issues in the following issue tracker on GitHub. https://github.com/googlei18n/language-resources/issues
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/71/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2018, 2019 Google, Inc.
#### SLR72: Crowdsourced high-quality Colombian Spanish speech data set
This data set contains transcribed high-quality audio of Colombian Spanish sentences recorded by volunteers. The data set
consists of wave files, and a TSV file (line_index.tsv). The file line_index.tsv contains an anonymized FileID and
the transcription of the audio in the file.
The data set has been manually quality checked, but there might still be errors.
Please report any issues in the following issue tracker on GitHub. https://github.com/googlei18n/language-resources/issues
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/72/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2018, 2019 Google, Inc.
#### SLR73: Crowdsourced high-quality Peruvian Spanish speech data set
This data set contains transcribed high-quality audio of Peruvian Spanish sentences recorded by volunteers. The data set
consists of wave files, and a TSV file (line_index.tsv). The file line_index.tsv contains an anonymized FileID and
the transcription of the audio in the file.
The data set has been manually quality checked, but there might still be errors.
Please report any issues in the following issue tracker on GitHub. https://github.com/googlei18n/language-resources/issues
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/73/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2018, 2019 Google, Inc.
#### SLR74: Crowdsourced high-quality Puerto Rico Spanish speech data set
This data set contains transcribed high-quality audio of Puerto Rico Spanish sentences recorded by volunteers. The data set
consists of wave files, and a TSV file (line_index.tsv). The file line_index.tsv contains an anonymized FileID and
the transcription of the audio in the file.
The data set has been manually quality checked, but there might still be errors.
Please report any issues in the following issue tracker on GitHub. https://github.com/googlei18n/language-resources/issues
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/74/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2018, 2019 Google, Inc.
#### SLR75: Crowdsourced high-quality Venezuelan Spanish speech data set
This data set contains transcribed high-quality audio of Venezuelan Spanish sentences recorded by volunteers. The data set
consists of wave files, and a TSV file (line_index.tsv). The file line_index.tsv contains an anonymized FileID and
the transcription of the audio in the file.
The data set has been manually quality checked, but there might still be errors.
Please report any issues in the following issue tracker on GitHub. https://github.com/googlei18n/language-resources/issues
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/75/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2018, 2019 Google, Inc.
#### SLR76: Crowdsourced high-quality Basque speech data set
This data set contains transcribed high-quality audio of Basque sentences recorded by volunteers. The data set
consists of wave files, and a TSV file (line_index.tsv). The file line_index.tsv contains an anonymized FileID and
the transcription of the audio in the file.
The data set has been manually quality checked, but there might still be errors.
Please report any issues in the following issue tracker on GitHub. https://github.com/googlei18n/language-resources/issues
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/76/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2018, 2019 Google, Inc.
#### SLR77: Crowdsourced high-quality Galician speech data set
This data set contains transcribed high-quality audio of Galician sentences recorded by volunteers. The data set
consists of wave files, and a TSV file (line_index.tsv). The file line_index.tsv contains an anonymized FileID and
the transcription of the audio in the file.
The data set has been manually quality checked, but there might still be errors.
Please report any issues in the following issue tracker on GitHub. https://github.com/googlei18n/language-resources/issues
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/77/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2018, 2019 Google, Inc.
#### SLR78: Crowdsourced high-quality Gujarati multi-speaker speech data set
This data set contains transcribed high-quality audio of Gujarati sentences recorded by volunteers. The data set
consists of wave files, and a TSV file (line_index.tsv). The file line_index.tsv contains an anonymized FileID and
the transcription of the audio in the file.
The data set has been manually quality checked, but there might still be errors.
Please report any issues in the following issue tracker on GitHub. https://github.com/googlei18n/language-resources/issues
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/78/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2018, 2019 Google, Inc.
#### SLR79: Crowdsourced high-quality Kannada multi-speaker speech data set
This data set contains transcribed high-quality audio of Kannada sentences recorded by volunteers. The data set
consists of wave files, and a TSV file (line_index.tsv). The file line_index.tsv contains an anonymized FileID and
the transcription of the audio in the file.
The data set has been manually quality checked, but there might still be errors.
Please report any issues in the following issue tracker on GitHub. https://github.com/googlei18n/language-resources/issues
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/79/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2018, 2019 Google, Inc.
#### SLR80: Crowdsourced high-quality Burmese speech data set
This data set contains transcribed high-quality audio of Burmese sentences recorded by volunteers. The data set
consists of wave files, and a TSV file (line_index.tsv). The file line_index.tsv contains an anonymized FileID and
the transcription of the audio in the file.
The data set has been manually quality checked, but there might still be errors.
Please report any issues in the following issue tracker on GitHub. https://github.com/googlei18n/language-resources/issues
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/80/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2018, 2019 Google, Inc.
#### SLR83: Crowdsourced high-quality UK and Ireland English Dialect speech data set
This data set contains transcribed high-quality audio of English sentences recorded by volunteers speaking different dialects of the language.
The data set consists of wave files, and a TSV file (line_index.tsv). The file line_index.tsv contains a line id, an anonymized FileID and the transcription of the audio in the file.
The data set has been manually quality checked, but there might still be errors.
The recordings from the Welsh English speakers were collected in collaboration with Cardiff University.
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/83/LICENSE) file and https://github.com/google/language-resources#license for license information.
Copyright 2018, 2019 Google, Inc.
#### SLR86: Crowdsourced high-quality Yoruba multi-speaker speech data set
This data set contains transcribed high-quality audio of Yoruba sentences recorded by volunteers. The data set
consists of wave files, and a TSV file (line_index.tsv). The file line_index.tsv contains an anonymized FileID and
the transcription of the audio in the file.
The data set has been manually quality checked, but there might still be errors.
Please report any issues in the following issue tracker on GitHub. https://github.com/googlei18n/language-resources/issues
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/86/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2018, 2019, 2020 Google, Inc.
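Each `line_index.tsv` file described above maps an anonymized FileID to its transcription, one pair per tab-separated row. A minimal sketch of parsing such a file (the sample rows below are invented for illustration):

```python
import csv
import io

def parse_line_index(tsv_text):
    """Parse a line_index.tsv file into a FileID -> transcription mapping."""
    reader = csv.reader(io.StringIO(tsv_text), delimiter="\t")
    return {file_id: transcription for file_id, transcription in reader}

# Invented sample rows following the two-column FileID / transcription layout.
sample = (
    "suf_00297_00037352660\tPanonton ting haruleng\n"
    "suf_00297_00037352661\tKelly Clarkson keur nyanyi\n"
)
index = parse_line_index(sample)
print(index["suf_00297_00037352660"])  # Panonton ting haruleng
```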
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
Javanese, Khmer, Nepali, Sundanese, Malayalam, Marathi, Tamil, Telugu, Catalan, Nigerian English, Chilean Spanish,
Colombian Spanish, Peruvian Spanish, Puerto Rico Spanish, Venezuelan Spanish, Basque, Galician, Gujarati, Kannada,
Burmese, Yoruba, Afrikaans, Sesotho, Setswana and isiXhosa.
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file (`path`), its decoded audio data (`audio`), and its transcription (`sentence`).
#### SLR32, SLR35, SLR36, SLR41, SLR42, SLR43, SLR44, SLR52, SLR53, SLR54, SLR63, SLR64, SLR65, SLR66, SLR69, SLR70, SLR71, SLR72, SLR73, SLR74, SLR75, SLR76, SLR77, SLR78, SLR79, SLR80, SLR86
```
{
'path': '/home/cahya/.cache/huggingface/datasets/downloads/extracted/4d9cf915efc21110199074da4d492566dee6097068b07a680f670fcec9176e62/su_id_female/wavs/suf_00297_00037352660.wav'
'audio': {'path': '/home/cahya/.cache/huggingface/datasets/downloads/extracted/4d9cf915efc21110199074da4d492566dee6097068b07a680f670fcec9176e62/su_id_female/wavs/suf_00297_00037352660.wav',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346,
0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 16000},
'sentence': 'Panonton ting haruleng ningali Kelly Clarkson keur nyanyi di tipi',
}
```
### Data Fields
- `path`: The path to the audio file.
- `audio`: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling
rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and
resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might
take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column,
*i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- `sentence`: The sentence the user was prompted to speak.
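The cost difference between the two access orders can be illustrated with a toy lazy-decoding dataset (a mock for illustration only, not the actual `datasets` implementation):

```python
class LazyAudioDataset:
    """Mock of a dataset whose audio column is decoded only on access."""

    def __init__(self, paths):
        self.paths = paths
        self.decodes = 0  # counts how many files have been decoded

    def _decode(self, path):
        self.decodes += 1
        return {"path": path, "array": [0.0], "sampling_rate": 16000}

    def __getitem__(self, key):
        if isinstance(key, int):  # dataset[0] -> decode a single row
            return {"audio": self._decode(self.paths[key])}
        # dataset["audio"] -> materialize the whole column, decoding every file
        return [self._decode(p) for p in self.paths]

ds = LazyAudioDataset(["a.wav", "b.wav", "c.wav"])
_ = ds[0]["audio"]      # decodes only one file
one = ds.decodes
_ = ds["audio"][0]      # decodes all three files before indexing
print(one, ds.decodes)  # 1 4
```

This is why `dataset[0]["audio"]` is preferred over `dataset["audio"][0]`: the former decodes a single example, while the latter first decodes the entire column.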
### Data Splits
There is only one "train" split for all configurations, and the numbers of examples are:
| | Number of examples |
|:------|---------------------:|
| SLR41 | 5822 |
| SLR42 | 2906 |
| SLR43 | 2064 |
| SLR44 | 4213 |
| SLR63 | 4126 |
| SLR64 | 1569 |
| SLR65 | 4284 |
| SLR66 | 4448 |
| SLR69 | 4240 |
| SLR35 | 185076 |
| SLR36 | 219156 |
| SLR70 | 3359 |
| SLR71 | 4374 |
| SLR72 | 4903 |
| SLR73 | 5447 |
| SLR74 | 617 |
| SLR75 | 3357 |
| SLR76 | 7136 |
| SLR77 | 5587 |
| SLR78 | 4272 |
| SLR79 | 4400 |
| SLR80 | 2530 |
| SLR86 | 3583 |
| SLR32 | 9821 |
| SLR52 | 185293 |
| SLR53 | 218703 |
| SLR54 | 157905 |
| SLR83 | 17877 |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Each dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License ([CC-BY-SA-4.0](https://creativecommons.org/licenses/by-sa/4.0/legalcode)).
See https://github.com/google/language-resources#license or the resource page on [OpenSLR](https://openslr.org/resources.php) for more information.
### Citation Information
#### SLR32
```
@inproceedings{van-niekerk-etal-2017,
title = {{Rapid development of TTS corpora for four South African languages}},
author = {Daniel van Niekerk and Charl van Heerden and Marelie Davel and Neil Kleynhans and Oddur Kjartansson and Martin Jansche and Linne Ha},
booktitle = {Proc. Interspeech 2017},
pages = {2178--2182},
address = {Stockholm, Sweden},
month = aug,
year = {2017},
URL = {https://dx.doi.org/10.21437/Interspeech.2017-1139}
}
```
#### SLR35, SLR36, SLR52, SLR53, SLR54
```
@inproceedings{kjartansson-etal-sltu2018,
title = {{Crowd-Sourced Speech Corpora for Javanese, Sundanese, Sinhala, Nepali, and Bangladeshi Bengali}},
author = {Oddur Kjartansson and Supheakmungkol Sarin and Knot Pipatsrisawat and Martin Jansche and Linne Ha},
booktitle = {Proc. The 6th Intl. Workshop on Spoken Language Technologies for Under-Resourced Languages (SLTU)},
year = {2018},
address = {Gurugram, India},
month = aug,
pages = {52--55},
URL = {https://dx.doi.org/10.21437/SLTU.2018-11},
}
```
#### SLR41, SLR42, SLR43, SLR44
```
@inproceedings{kjartansson-etal-tts-sltu2018,
title = {{A Step-by-Step Process for Building TTS Voices Using Open Source Data and Framework for Bangla, Javanese, Khmer, Nepali, Sinhala, and Sundanese}},
author = {Keshan Sodimana and Knot Pipatsrisawat and Linne Ha and Martin Jansche and Oddur Kjartansson and Pasindu De Silva and Supheakmungkol Sarin},
booktitle = {Proc. The 6th Intl. Workshop on Spoken Language Technologies for Under-Resourced Languages (SLTU)},
year = {2018},
address = {Gurugram, India},
month = aug,
pages = {66--70},
URL = {https://dx.doi.org/10.21437/SLTU.2018-14}
}
```
#### SLR63, SLR64, SLR65, SLR66, SLR78, SLR79
```
@inproceedings{he-etal-2020-open,
title = {{Open-source Multi-speaker Speech Corpora for Building Gujarati, Kannada, Malayalam, Marathi, Tamil and Telugu Speech Synthesis Systems}},
author = {He, Fei and Chu, Shan-Hui Cathy and Kjartansson, Oddur and Rivera, Clara and Katanova, Anna and Gutkin, Alexander and Demirsahin, Isin and Johny, Cibu and Jansche, Martin and Sarin, Supheakmungkol and Pipatsrisawat, Knot},
booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference (LREC)},
month = may,
year = {2020},
address = {Marseille, France},
publisher = {European Language Resources Association (ELRA)},
pages = {6494--6503},
url = {https://www.aclweb.org/anthology/2020.lrec-1.800},
  ISBN = {979-10-95546-34-4},
}
```
#### SLR69, SLR76, SLR77
```
@inproceedings{kjartansson-etal-2020-open,
title = {{Open-Source High Quality Speech Datasets for Basque, Catalan and Galician}},
author = {Kjartansson, Oddur and Gutkin, Alexander and Butryna, Alena and Demirsahin, Isin and Rivera, Clara},
booktitle = {Proceedings of the 1st Joint Workshop on Spoken Language Technologies for Under-resourced languages (SLTU) and Collaboration and Computing for Under-Resourced Languages (CCURL)},
year = {2020},
pages = {21--27},
month = may,
address = {Marseille, France},
publisher = {European Language Resources association (ELRA)},
url = {https://www.aclweb.org/anthology/2020.sltu-1.3},
ISBN = {979-10-95546-35-1},
}
```
#### SLR70, SLR71, SLR72, SLR73, SLR74, SLR75
```
@inproceedings{guevara-rukoz-etal-2020-crowdsourcing,
title = {{Crowdsourcing Latin American Spanish for Low-Resource Text-to-Speech}},
author = {Guevara-Rukoz, Adriana and Demirsahin, Isin and He, Fei and Chu, Shan-Hui Cathy and Sarin, Supheakmungkol and Pipatsrisawat, Knot and Gutkin, Alexander and Butryna, Alena and Kjartansson, Oddur},
booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference (LREC)},
year = {2020},
month = may,
address = {Marseille, France},
publisher = {European Language Resources Association (ELRA)},
url = {https://www.aclweb.org/anthology/2020.lrec-1.801},
pages = {6504--6513},
ISBN = {979-10-95546-34-4},
}
```
#### SLR80
```
@inproceedings{oo-etal-2020-burmese,
title = {{Burmese Speech Corpus, Finite-State Text Normalization and Pronunciation Grammars with an Application to Text-to-Speech}},
author = {Oo, Yin May and Wattanavekin, Theeraphol and Li, Chenfang and De Silva, Pasindu and Sarin, Supheakmungkol and Pipatsrisawat, Knot and Jansche, Martin and Kjartansson, Oddur and Gutkin, Alexander},
booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference (LREC)},
month = may,
year = {2020},
  pages = {6328--6339},
address = {Marseille, France},
publisher = {European Language Resources Association (ELRA)},
url = {https://www.aclweb.org/anthology/2020.lrec-1.777},
ISBN = {979-10-95546-34-4},
}
```
#### SLR86
```
@inproceedings{gutkin-et-al-yoruba2020,
title = {{Developing an Open-Source Corpus of Yoruba Speech}},
author = {Alexander Gutkin and I{\c{s}}{\i}n Demir{\c{s}}ahin and Oddur Kjartansson and Clara Rivera and K\d{\'o}lá Túb\d{\`o}sún},
booktitle = {Proceedings of Interspeech 2020},
pages = {404--408},
month = {October},
year = {2020},
address = {Shanghai, China},
publisher = {International Speech and Communication Association (ISCA)},
doi = {10.21437/Interspeech.2020-1096},
url = {https://dx.doi.org/10.21437/Interspeech.2020-1096},
}
```
### Contributions
Thanks to [@cahya-wirawan](https://github.com/cahya-wirawan) for adding this dataset. |
conceptofmind/t0_submix_original | 2023-05-24T18:32:56.000Z | [
"region:us"
] | conceptofmind | null | null | null | 19 | 655 | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: task_source
dtype: string
- name: task_name
dtype: string
- name: template_type
dtype: string
splits:
- name: train
num_bytes: 4602180562
num_examples: 1650308
download_size: 2738296803
dataset_size: 4602180562
---
# Dataset Card for "t0_submix_original"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
BeIR/scifact | 2022-10-23T06:01:22.000Z | [
"task_categories:text-retrieval",
"task_ids:entity-linking-retrieval",
"task_ids:fact-checking-retrieval",
"multilinguality:monolingual",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | BeIR | null | null | null | 1 | 654 | ---
annotations_creators: []
language_creators: []
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
paperswithcode_id: beir
pretty_name: BEIR Benchmark
size_categories:
msmarco:
- 1M<n<10M
trec-covid:
- 100k<n<1M
nfcorpus:
- 1K<n<10K
nq:
- 1M<n<10M
hotpotqa:
- 1M<n<10M
fiqa:
- 10K<n<100K
arguana:
- 1K<n<10K
touche-2020:
- 100K<n<1M
cqadupstack:
- 100K<n<1M
quora:
- 100K<n<1M
dbpedia:
- 1M<n<10M
scidocs:
- 10K<n<100K
fever:
- 1M<n<10M
climate-fever:
- 1M<n<10M
scifact:
- 1K<n<10K
source_datasets: []
task_categories:
- text-retrieval
- zero-shot-retrieval
- information-retrieval
- zero-shot-information-retrieval
task_ids:
- passage-retrieval
- entity-linking-retrieval
- fact-checking-retrieval
- tweet-retrieval
- citation-prediction-retrieval
- duplication-question-retrieval
- argument-retrieval
- news-retrieval
- biomedical-information-retrieval
- question-answering-retrieval
---
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** nandan.thakur@uwaterloo.ca
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used for your experiments. For example, using the `beir` package, a dataset can be downloaded and loaded as follows:
```python
# Sketch using the beir package (pip install beir); the URL pattern matches
# the download links in the Data Splits table below.
from beir import util
from beir.datasets.data_loader import GenericDataLoader

dataset = "scifact"
url = f"https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/{dataset}.zip"
data_path = util.download_and_unzip(url, "datasets")
corpus, queries, qrels = GenericDataLoader(data_folder=data_path).load(split="test")
```
### Supported Tasks and Leaderboards
The benchmark supports a leaderboard that evaluates retrieval models using metrics such as nDCG@10.
The current results can be found in the [leaderboard spreadsheet](https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns).
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:
- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score`, in this order. The first row is a header. For example: `q1 doc1 1`
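Under the format described above, the corpus and qrels files can be read with the standard library alone. A sketch (the sample records are illustrative):

```python
import csv
import io
import json

def load_corpus(jsonl_text):
    """Read a BEIR corpus.jsonl into an _id -> {title, text} mapping."""
    docs = {}
    for line in jsonl_text.splitlines():
        record = json.loads(line)
        docs[record["_id"]] = {"title": record.get("title", ""), "text": record["text"]}
    return docs

def load_qrels(tsv_text):
    """Read a BEIR qrels .tsv (header row, then query-id / corpus-id / score)."""
    reader = csv.reader(io.StringIO(tsv_text), delimiter="\t")
    next(reader)  # skip the header row
    qrels = {}
    for query_id, corpus_id, score in reader:
        qrels.setdefault(query_id, {})[corpus_id] = int(score)
    return qrels

corpus = load_corpus('{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was..."}')
qrels = load_qrels("query-id\tcorpus-id\tscore\nq1\tdoc1\t1\n")
print(qrels)  # {'q1': {'doc1': 1}}
```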
### Data Instances
A high-level example of any BEIR dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
        "text": "Albert Einstein was a German-born theoretical physicist who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
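Given `qrels` in the shape above, a retrieval run can be scored with a simple recall@k. This is a sketch for illustration; real evaluations typically use `pytrec_eval` through the `beir` package:

```python
def recall_at_k(qrels, results, k):
    """Fraction of relevant documents retrieved in the top-k, averaged over queries."""
    scores = []
    for query_id, relevant in qrels.items():
        top_k = results.get(query_id, [])[:k]
        hits = sum(1 for doc_id in top_k if doc_id in relevant)
        scores.append(hits / len(relevant))
    return sum(scores) / len(scores)

qrels = {"q1": {"doc1": 1}, "q2": {"doc2": 1}}
results = {"q1": ["doc3", "doc1"], "q2": ["doc2"]}  # ranked doc ids per query
print(recall_at_k(qrels, results, 1))  # 0.5
print(recall_at_k(qrels, results, 2))  # 1.0
```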
### Data Fields
Examples from all configurations have the following features:
### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
- `_id`: a `string` feature representing the unique document id
- `title`: a `string` feature, denoting the title of the document.
- `text`: a `string` feature, denoting the text of the document.
### Queries
- `queries`: a `dict` feature representing the query, made up of:
- `_id`: a `string` feature representing the unique query id
- `text`: a `string` feature, denoting the text of the query.
### Qrels
- `qrels`: a `dict` feature representing the query document relevance judgements, made up of:
- `_id`: a `string` feature representing the query id
- `_id`: a `string` feature, denoting the document id.
- `score`: a `int32` feature, denoting the relevance judgement between query and document.
### Data Splits
| Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
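Each download link in the table above is paired with an md5 checksum, so a fetched corpus zip can be verified before unpacking. A minimal stdlib-only sketch (the local filename in the usage comment is an assumption; the hash is the one listed for SciFact in the table):

```python
import hashlib

def md5sum(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through MD5 in 1 MiB chunks so large corpus zips
    are hashed in constant memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage (hypothetical local path; expected hash from the SciFact row):
# assert md5sum("scifact.zip") == "5f7d1de60b170fc8027bb7898e2efca1"
```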
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset.

---
dataset_info:
features:
- name: id
dtype: string
- name: embedding
sequence: float64
- name: metadata
struct:
- name: source
dtype: string
- name: document
dtype: string
splits:
- name: data
num_bytes: 556545
num_examples: 42
download_size: 519613
dataset_size: 556545
---
# Dataset Card for "state_of_the_union"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
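The `dataset_info` front matter above declares each record's schema: a string `id`, an `embedding` sequence of float64, a `metadata` struct holding a `source` string, and a `document` string. A stdlib-only sketch that checks a record against that schema (the example values are placeholders, not drawn from the dataset):

```python
def validate_record(rec: dict) -> bool:
    """Check a record against the schema declared in dataset_info:
    id (string), embedding (sequence of float64),
    metadata.source (string), document (string)."""
    return (
        isinstance(rec.get("id"), str)
        and isinstance(rec.get("embedding"), list)
        and all(isinstance(x, float) for x in rec["embedding"])
        and isinstance(rec.get("metadata"), dict)
        and isinstance(rec["metadata"].get("source"), str)
        and isinstance(rec.get("document"), str)
    )

# A hypothetical record matching the declared features:
example = {
    "id": "0",
    "embedding": [0.12, -0.34, 0.56],
    "metadata": {"source": "placeholder-source"},
    "document": "placeholder document text",
}
```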