---
pretty_name: Evaluation run of ericpolewski/TacoBeLLM
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [ericpolewski/TacoBeLLM](https://huggingface.co/ericpolewski/TacoBeLLM) on the\
\ [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one\
\ of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_ericpolewski__TacoBeLLM\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-01-26T16:55:24.910211](https://huggingface.co/datasets/open-llm-leaderboard/details_ericpolewski__TacoBeLLM/blob/main/results_2024-01-26T16-55-24.910211.json)\
\ (note that there might be results for other tasks in the repo if successive evals\
\ didn't cover the same tasks. You can find each one in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5638377937424233,\n\
\ \"acc_stderr\": 0.0333481450094512,\n \"acc_norm\": 0.5741662321190941,\n\
\ \"acc_norm_stderr\": 0.03420397056423356,\n \"mc1\": 0.31334149326805383,\n\
\ \"mc1_stderr\": 0.016238065069059605,\n \"mc2\": 0.4605506661658282,\n\
\ \"mc2_stderr\": 0.014802420782627305\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.5273037542662116,\n \"acc_stderr\": 0.014589589101985996,\n\
\ \"acc_norm\": 0.5853242320819113,\n \"acc_norm_stderr\": 0.014397070564409172\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6160127464648476,\n\
\ \"acc_stderr\": 0.004853608805843881,\n \"acc_norm\": 0.8189603664608643,\n\
\ \"acc_norm_stderr\": 0.003842640800361503\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.28,\n \"acc_stderr\": 0.045126085985421296,\n \
\ \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.045126085985421296\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.4740740740740741,\n\
\ \"acc_stderr\": 0.04313531696750574,\n \"acc_norm\": 0.4740740740740741,\n\
\ \"acc_norm_stderr\": 0.04313531696750574\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.5394736842105263,\n \"acc_stderr\": 0.04056242252249034,\n\
\ \"acc_norm\": 0.5394736842105263,\n \"acc_norm_stderr\": 0.04056242252249034\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.56,\n\
\ \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\": 0.56,\n \
\ \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6490566037735849,\n \"acc_stderr\": 0.029373646253234686,\n\
\ \"acc_norm\": 0.6490566037735849,\n \"acc_norm_stderr\": 0.029373646253234686\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.5902777777777778,\n\
\ \"acc_stderr\": 0.04112490974670787,\n \"acc_norm\": 0.5902777777777778,\n\
\ \"acc_norm_stderr\": 0.04112490974670787\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.41,\n \"acc_stderr\": 0.04943110704237102,\n \
\ \"acc_norm\": 0.41,\n \"acc_norm_stderr\": 0.04943110704237102\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.42,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.42,\n\
\ \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.047258156262526045,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.047258156262526045\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5144508670520231,\n\
\ \"acc_stderr\": 0.03810871630454764,\n \"acc_norm\": 0.5144508670520231,\n\
\ \"acc_norm_stderr\": 0.03810871630454764\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.3333333333333333,\n \"acc_stderr\": 0.04690650298201942,\n\
\ \"acc_norm\": 0.3333333333333333,\n \"acc_norm_stderr\": 0.04690650298201942\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.7,\n\
\ \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.46382978723404256,\n \"acc_stderr\": 0.03260038511835771,\n\
\ \"acc_norm\": 0.46382978723404256,\n \"acc_norm_stderr\": 0.03260038511835771\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.2894736842105263,\n\
\ \"acc_stderr\": 0.04266339443159394,\n \"acc_norm\": 0.2894736842105263,\n\
\ \"acc_norm_stderr\": 0.04266339443159394\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.503448275862069,\n \"acc_stderr\": 0.04166567577101579,\n\
\ \"acc_norm\": 0.503448275862069,\n \"acc_norm_stderr\": 0.04166567577101579\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.35185185185185186,\n \"acc_stderr\": 0.024594975128920938,\n \"\
acc_norm\": 0.35185185185185186,\n \"acc_norm_stderr\": 0.024594975128920938\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.35714285714285715,\n\
\ \"acc_stderr\": 0.04285714285714281,\n \"acc_norm\": 0.35714285714285715,\n\
\ \"acc_norm_stderr\": 0.04285714285714281\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939099,\n \
\ \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939099\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.6774193548387096,\n\
\ \"acc_stderr\": 0.026593084516572274,\n \"acc_norm\": 0.6774193548387096,\n\
\ \"acc_norm_stderr\": 0.026593084516572274\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.45320197044334976,\n \"acc_stderr\": 0.03502544650845872,\n\
\ \"acc_norm\": 0.45320197044334976,\n \"acc_norm_stderr\": 0.03502544650845872\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.58,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\"\
: 0.58,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7515151515151515,\n \"acc_stderr\": 0.03374402644139404,\n\
\ \"acc_norm\": 0.7515151515151515,\n \"acc_norm_stderr\": 0.03374402644139404\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.702020202020202,\n \"acc_stderr\": 0.03258630383836556,\n \"acc_norm\"\
: 0.702020202020202,\n \"acc_norm_stderr\": 0.03258630383836556\n },\n\
\ \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \
\ \"acc\": 0.8031088082901554,\n \"acc_stderr\": 0.028697873971860677,\n\
\ \"acc_norm\": 0.8031088082901554,\n \"acc_norm_stderr\": 0.028697873971860677\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.5717948717948718,\n \"acc_stderr\": 0.025088301454694834,\n\
\ \"acc_norm\": 0.5717948717948718,\n \"acc_norm_stderr\": 0.025088301454694834\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.34444444444444444,\n \"acc_stderr\": 0.02897264888484427,\n \
\ \"acc_norm\": 0.34444444444444444,\n \"acc_norm_stderr\": 0.02897264888484427\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6092436974789915,\n \"acc_stderr\": 0.031693802357129965,\n\
\ \"acc_norm\": 0.6092436974789915,\n \"acc_norm_stderr\": 0.031693802357129965\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.2847682119205298,\n \"acc_stderr\": 0.03684881521389023,\n \"\
acc_norm\": 0.2847682119205298,\n \"acc_norm_stderr\": 0.03684881521389023\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.7761467889908257,\n \"acc_stderr\": 0.01787121776779022,\n \"\
acc_norm\": 0.7761467889908257,\n \"acc_norm_stderr\": 0.01787121776779022\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.44907407407407407,\n \"acc_stderr\": 0.03392238405321616,\n \"\
acc_norm\": 0.44907407407407407,\n \"acc_norm_stderr\": 0.03392238405321616\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.7941176470588235,\n \"acc_stderr\": 0.028379449451588667,\n \"\
acc_norm\": 0.7941176470588235,\n \"acc_norm_stderr\": 0.028379449451588667\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7848101265822784,\n \"acc_stderr\": 0.026750826994676166,\n \
\ \"acc_norm\": 0.7848101265822784,\n \"acc_norm_stderr\": 0.026750826994676166\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6995515695067265,\n\
\ \"acc_stderr\": 0.030769352008229146,\n \"acc_norm\": 0.6995515695067265,\n\
\ \"acc_norm_stderr\": 0.030769352008229146\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.6412213740458015,\n \"acc_stderr\": 0.04206739313864908,\n\
\ \"acc_norm\": 0.6412213740458015,\n \"acc_norm_stderr\": 0.04206739313864908\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.6694214876033058,\n \"acc_stderr\": 0.04294340845212093,\n \"\
acc_norm\": 0.6694214876033058,\n \"acc_norm_stderr\": 0.04294340845212093\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7407407407407407,\n\
\ \"acc_stderr\": 0.042365112580946315,\n \"acc_norm\": 0.7407407407407407,\n\
\ \"acc_norm_stderr\": 0.042365112580946315\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.6625766871165644,\n \"acc_stderr\": 0.03714908409935573,\n\
\ \"acc_norm\": 0.6625766871165644,\n \"acc_norm_stderr\": 0.03714908409935573\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.33035714285714285,\n\
\ \"acc_stderr\": 0.04464285714285712,\n \"acc_norm\": 0.33035714285714285,\n\
\ \"acc_norm_stderr\": 0.04464285714285712\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7572815533980582,\n \"acc_stderr\": 0.04245022486384495,\n\
\ \"acc_norm\": 0.7572815533980582,\n \"acc_norm_stderr\": 0.04245022486384495\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.7991452991452992,\n\
\ \"acc_stderr\": 0.026246772946890477,\n \"acc_norm\": 0.7991452991452992,\n\
\ \"acc_norm_stderr\": 0.026246772946890477\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.63,\n \"acc_stderr\": 0.04852365870939099,\n \
\ \"acc_norm\": 0.63,\n \"acc_norm_stderr\": 0.04852365870939099\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7535121328224776,\n\
\ \"acc_stderr\": 0.015411308769686934,\n \"acc_norm\": 0.7535121328224776,\n\
\ \"acc_norm_stderr\": 0.015411308769686934\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.6445086705202312,\n \"acc_stderr\": 0.025770292082977254,\n\
\ \"acc_norm\": 0.6445086705202312,\n \"acc_norm_stderr\": 0.025770292082977254\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.42681564245810055,\n\
\ \"acc_stderr\": 0.016542401954631917,\n \"acc_norm\": 0.42681564245810055,\n\
\ \"acc_norm_stderr\": 0.016542401954631917\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.5915032679738562,\n \"acc_stderr\": 0.028146405993096358,\n\
\ \"acc_norm\": 0.5915032679738562,\n \"acc_norm_stderr\": 0.028146405993096358\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6784565916398714,\n\
\ \"acc_stderr\": 0.026527724079528872,\n \"acc_norm\": 0.6784565916398714,\n\
\ \"acc_norm_stderr\": 0.026527724079528872\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.654320987654321,\n \"acc_stderr\": 0.02646248777700187,\n\
\ \"acc_norm\": 0.654320987654321,\n \"acc_norm_stderr\": 0.02646248777700187\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.44680851063829785,\n \"acc_stderr\": 0.029658235097666907,\n \
\ \"acc_norm\": 0.44680851063829785,\n \"acc_norm_stderr\": 0.029658235097666907\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4445893089960887,\n\
\ \"acc_stderr\": 0.012691575792657114,\n \"acc_norm\": 0.4445893089960887,\n\
\ \"acc_norm_stderr\": 0.012691575792657114\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.5441176470588235,\n \"acc_stderr\": 0.030254372573976715,\n\
\ \"acc_norm\": 0.5441176470588235,\n \"acc_norm_stderr\": 0.030254372573976715\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.5898692810457516,\n \"acc_stderr\": 0.019898412717635906,\n \
\ \"acc_norm\": 0.5898692810457516,\n \"acc_norm_stderr\": 0.019898412717635906\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.5909090909090909,\n\
\ \"acc_stderr\": 0.047093069786618966,\n \"acc_norm\": 0.5909090909090909,\n\
\ \"acc_norm_stderr\": 0.047093069786618966\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.6408163265306123,\n \"acc_stderr\": 0.030713560455108493,\n\
\ \"acc_norm\": 0.6408163265306123,\n \"acc_norm_stderr\": 0.030713560455108493\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7661691542288557,\n\
\ \"acc_stderr\": 0.02992941540834839,\n \"acc_norm\": 0.7661691542288557,\n\
\ \"acc_norm_stderr\": 0.02992941540834839\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.81,\n \"acc_stderr\": 0.039427724440366255,\n \
\ \"acc_norm\": 0.81,\n \"acc_norm_stderr\": 0.039427724440366255\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.43373493975903615,\n\
\ \"acc_stderr\": 0.038581589406855174,\n \"acc_norm\": 0.43373493975903615,\n\
\ \"acc_norm_stderr\": 0.038581589406855174\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8070175438596491,\n \"acc_stderr\": 0.030267457554898458,\n\
\ \"acc_norm\": 0.8070175438596491,\n \"acc_norm_stderr\": 0.030267457554898458\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.31334149326805383,\n\
\ \"mc1_stderr\": 0.016238065069059605,\n \"mc2\": 0.4605506661658282,\n\
\ \"mc2_stderr\": 0.014802420782627305\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7663772691397001,\n \"acc_stderr\": 0.011892194477183525\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.01288855193328279,\n \
\ \"acc_stderr\": 0.003106901266499642\n }\n}\n```"
repo_url: https://huggingface.co/ericpolewski/TacoBeLLM
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|arc:challenge|25_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|gsm8k|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hellaswag|10_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-26T16-55-24.910211.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-26T16-55-24.910211.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- '**/details_harness|winogrande|5_2024-01-26T16-55-24.910211.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-01-26T16-55-24.910211.parquet'
- config_name: results
data_files:
- split: 2024_01_26T16_55_24.910211
path:
- results_2024-01-26T16-55-24.910211.parquet
- split: latest
path:
- results_2024-01-26T16-55-24.910211.parquet
---
# Dataset Card for Evaluation run of ericpolewski/TacoBeLLM
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [ericpolewski/TacoBeLLM](https://huggingface.co/ericpolewski/TacoBeLLM) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_ericpolewski__TacoBeLLM",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2024-01-26T16:55:24.910211](https://huggingface.co/datasets/open-llm-leaderboard/details_ericpolewski__TacoBeLLM/blob/main/results_2024-01-26T16-55-24.910211.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split of each eval):
```python
{
"all": {
"acc": 0.5638377937424233,
"acc_stderr": 0.0333481450094512,
"acc_norm": 0.5741662321190941,
"acc_norm_stderr": 0.03420397056423356,
"mc1": 0.31334149326805383,
"mc1_stderr": 0.016238065069059605,
"mc2": 0.4605506661658282,
"mc2_stderr": 0.014802420782627305
},
"harness|arc:challenge|25": {
"acc": 0.5273037542662116,
"acc_stderr": 0.014589589101985996,
"acc_norm": 0.5853242320819113,
"acc_norm_stderr": 0.014397070564409172
},
"harness|hellaswag|10": {
"acc": 0.6160127464648476,
"acc_stderr": 0.004853608805843881,
"acc_norm": 0.8189603664608643,
"acc_norm_stderr": 0.003842640800361503
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.28,
"acc_stderr": 0.045126085985421296,
"acc_norm": 0.28,
"acc_norm_stderr": 0.045126085985421296
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.4740740740740741,
"acc_stderr": 0.04313531696750574,
"acc_norm": 0.4740740740740741,
"acc_norm_stderr": 0.04313531696750574
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.5394736842105263,
"acc_stderr": 0.04056242252249034,
"acc_norm": 0.5394736842105263,
"acc_norm_stderr": 0.04056242252249034
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.56,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.56,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6490566037735849,
"acc_stderr": 0.029373646253234686,
"acc_norm": 0.6490566037735849,
"acc_norm_stderr": 0.029373646253234686
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.5902777777777778,
"acc_stderr": 0.04112490974670787,
"acc_norm": 0.5902777777777778,
"acc_norm_stderr": 0.04112490974670787
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.41,
"acc_stderr": 0.04943110704237102,
"acc_norm": 0.41,
"acc_norm_stderr": 0.04943110704237102
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.42,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.42,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.33,
"acc_stderr": 0.047258156262526045,
"acc_norm": 0.33,
"acc_norm_stderr": 0.047258156262526045
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.5144508670520231,
"acc_stderr": 0.03810871630454764,
"acc_norm": 0.5144508670520231,
"acc_norm_stderr": 0.03810871630454764
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.3333333333333333,
"acc_stderr": 0.04690650298201942,
"acc_norm": 0.3333333333333333,
"acc_norm_stderr": 0.04690650298201942
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.46382978723404256,
"acc_stderr": 0.03260038511835771,
"acc_norm": 0.46382978723404256,
"acc_norm_stderr": 0.03260038511835771
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.2894736842105263,
"acc_stderr": 0.04266339443159394,
"acc_norm": 0.2894736842105263,
"acc_norm_stderr": 0.04266339443159394
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.503448275862069,
"acc_stderr": 0.04166567577101579,
"acc_norm": 0.503448275862069,
"acc_norm_stderr": 0.04166567577101579
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.35185185185185186,
"acc_stderr": 0.024594975128920938,
"acc_norm": 0.35185185185185186,
"acc_norm_stderr": 0.024594975128920938
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.35714285714285715,
"acc_stderr": 0.04285714285714281,
"acc_norm": 0.35714285714285715,
"acc_norm_stderr": 0.04285714285714281
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.37,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.37,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.6774193548387096,
"acc_stderr": 0.026593084516572274,
"acc_norm": 0.6774193548387096,
"acc_norm_stderr": 0.026593084516572274
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.45320197044334976,
"acc_stderr": 0.03502544650845872,
"acc_norm": 0.45320197044334976,
"acc_norm_stderr": 0.03502544650845872
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.58,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.58,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7515151515151515,
"acc_stderr": 0.03374402644139404,
"acc_norm": 0.7515151515151515,
"acc_norm_stderr": 0.03374402644139404
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.702020202020202,
"acc_stderr": 0.03258630383836556,
"acc_norm": 0.702020202020202,
"acc_norm_stderr": 0.03258630383836556
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8031088082901554,
"acc_stderr": 0.028697873971860677,
"acc_norm": 0.8031088082901554,
"acc_norm_stderr": 0.028697873971860677
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.5717948717948718,
"acc_stderr": 0.025088301454694834,
"acc_norm": 0.5717948717948718,
"acc_norm_stderr": 0.025088301454694834
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.34444444444444444,
"acc_stderr": 0.02897264888484427,
"acc_norm": 0.34444444444444444,
"acc_norm_stderr": 0.02897264888484427
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6092436974789915,
"acc_stderr": 0.031693802357129965,
"acc_norm": 0.6092436974789915,
"acc_norm_stderr": 0.031693802357129965
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.2847682119205298,
"acc_stderr": 0.03684881521389023,
"acc_norm": 0.2847682119205298,
"acc_norm_stderr": 0.03684881521389023
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7761467889908257,
"acc_stderr": 0.01787121776779022,
"acc_norm": 0.7761467889908257,
"acc_norm_stderr": 0.01787121776779022
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.44907407407407407,
"acc_stderr": 0.03392238405321616,
"acc_norm": 0.44907407407407407,
"acc_norm_stderr": 0.03392238405321616
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7941176470588235,
"acc_stderr": 0.028379449451588667,
"acc_norm": 0.7941176470588235,
"acc_norm_stderr": 0.028379449451588667
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7848101265822784,
"acc_stderr": 0.026750826994676166,
"acc_norm": 0.7848101265822784,
"acc_norm_stderr": 0.026750826994676166
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6995515695067265,
"acc_stderr": 0.030769352008229146,
"acc_norm": 0.6995515695067265,
"acc_norm_stderr": 0.030769352008229146
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.6412213740458015,
"acc_stderr": 0.04206739313864908,
"acc_norm": 0.6412213740458015,
"acc_norm_stderr": 0.04206739313864908
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.6694214876033058,
"acc_stderr": 0.04294340845212093,
"acc_norm": 0.6694214876033058,
"acc_norm_stderr": 0.04294340845212093
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7407407407407407,
"acc_stderr": 0.042365112580946315,
"acc_norm": 0.7407407407407407,
"acc_norm_stderr": 0.042365112580946315
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.6625766871165644,
"acc_stderr": 0.03714908409935573,
"acc_norm": 0.6625766871165644,
"acc_norm_stderr": 0.03714908409935573
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.33035714285714285,
"acc_stderr": 0.04464285714285712,
"acc_norm": 0.33035714285714285,
"acc_norm_stderr": 0.04464285714285712
},
"harness|hendrycksTest-management|5": {
"acc": 0.7572815533980582,
"acc_stderr": 0.04245022486384495,
"acc_norm": 0.7572815533980582,
"acc_norm_stderr": 0.04245022486384495
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.7991452991452992,
"acc_stderr": 0.026246772946890477,
"acc_norm": 0.7991452991452992,
"acc_norm_stderr": 0.026246772946890477
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.63,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.63,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7535121328224776,
"acc_stderr": 0.015411308769686934,
"acc_norm": 0.7535121328224776,
"acc_norm_stderr": 0.015411308769686934
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6445086705202312,
"acc_stderr": 0.025770292082977254,
"acc_norm": 0.6445086705202312,
"acc_norm_stderr": 0.025770292082977254
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.42681564245810055,
"acc_stderr": 0.016542401954631917,
"acc_norm": 0.42681564245810055,
"acc_norm_stderr": 0.016542401954631917
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.5915032679738562,
"acc_stderr": 0.028146405993096358,
"acc_norm": 0.5915032679738562,
"acc_norm_stderr": 0.028146405993096358
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6784565916398714,
"acc_stderr": 0.026527724079528872,
"acc_norm": 0.6784565916398714,
"acc_norm_stderr": 0.026527724079528872
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.654320987654321,
"acc_stderr": 0.02646248777700187,
"acc_norm": 0.654320987654321,
"acc_norm_stderr": 0.02646248777700187
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.44680851063829785,
"acc_stderr": 0.029658235097666907,
"acc_norm": 0.44680851063829785,
"acc_norm_stderr": 0.029658235097666907
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4445893089960887,
"acc_stderr": 0.012691575792657114,
"acc_norm": 0.4445893089960887,
"acc_norm_stderr": 0.012691575792657114
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.5441176470588235,
"acc_stderr": 0.030254372573976715,
"acc_norm": 0.5441176470588235,
"acc_norm_stderr": 0.030254372573976715
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.5898692810457516,
"acc_stderr": 0.019898412717635906,
"acc_norm": 0.5898692810457516,
"acc_norm_stderr": 0.019898412717635906
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.5909090909090909,
"acc_stderr": 0.047093069786618966,
"acc_norm": 0.5909090909090909,
"acc_norm_stderr": 0.047093069786618966
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.6408163265306123,
"acc_stderr": 0.030713560455108493,
"acc_norm": 0.6408163265306123,
"acc_norm_stderr": 0.030713560455108493
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.7661691542288557,
"acc_stderr": 0.02992941540834839,
"acc_norm": 0.7661691542288557,
"acc_norm_stderr": 0.02992941540834839
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.81,
"acc_stderr": 0.039427724440366255,
"acc_norm": 0.81,
"acc_norm_stderr": 0.039427724440366255
},
"harness|hendrycksTest-virology|5": {
"acc": 0.43373493975903615,
"acc_stderr": 0.038581589406855174,
"acc_norm": 0.43373493975903615,
"acc_norm_stderr": 0.038581589406855174
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8070175438596491,
"acc_stderr": 0.030267457554898458,
"acc_norm": 0.8070175438596491,
"acc_norm_stderr": 0.030267457554898458
},
"harness|truthfulqa:mc|0": {
"mc1": 0.31334149326805383,
"mc1_stderr": 0.016238065069059605,
"mc2": 0.4605506661658282,
"mc2_stderr": 0.014802420782627305
},
"harness|winogrande|5": {
"acc": 0.7663772691397001,
"acc_stderr": 0.011892194477183525
},
"harness|gsm8k|5": {
"acc": 0.01288855193328279,
"acc_stderr": 0.003106901266499642
}
}
```
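As a sketch of how these numbers can be post-processed locally (assuming only the per-task JSON structure shown above; in the real file the task entries sit inside the downloaded `results_*.json`), the following extracts each task's plain accuracy and its mean:

```python
import json

# Toy stand-in for the per-task metrics shown above; normally this dict
# would be read from the downloaded results_*.json file with json.load().
results = {
    "harness|arc:challenge|25": {"acc": 0.5273, "acc_norm": 0.5853},
    "harness|hellaswag|10": {"acc": 0.6160, "acc_norm": 0.8190},
    "harness|winogrande|5": {"acc": 0.7664},
}

def per_task_accuracy(results: dict) -> dict:
    """Map each task name to its plain accuracy, skipping tasks without one."""
    return {task: m["acc"] for task, m in results.items() if "acc" in m}

accs = per_task_accuracy(results)
mean_acc = sum(accs.values()) / len(accs)
print(f"{len(accs)} tasks, mean acc = {mean_acc:.4f}")
```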
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
jayrambalaji/telugu | ---
license: cc
---
|
tatsu23/spanishtwitch | ---
license: openrail
---
|
shrutisingh/dataset_recommendation_mcq_sc | ---
license: apache-2.0
---
Task: MCQ with single correct answer.
Dataset: Recommendation of datasets to validate a research question.
This dataset is derived from the [DataFinder](https://aclanthology.org/2023.acl-long.573/) dataset. We curate the abstracts of each dataset from [PapersWithCode](https://paperswithcode.com/datasets).
Each instance provides a short `query` discussing a research question, along with keyphrases relevant to the query.
The original training set of the DataFinder dataset has positive and negative candidates for each query, used to train a contrastive model.
Our objective is to convert the dataset into an MCQ question-answering task with a single correct answer. We also add the abstracts from the research papers introducing the datasets so that context can be provided to the models.
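As a rough illustration of that conversion (the field and dataset names below are hypothetical, not this dataset's actual schema), one contrastive instance can be turned into a single-correct-answer MCQ like so:

```python
import random

def to_mcq(query, positive, negatives, n_options=4, seed=0):
    """Turn one contrastive instance (query, one positive dataset,
    several negative datasets) into a single-correct-answer MCQ."""
    rng = random.Random(seed)
    # One correct option plus distractors sampled from the negatives.
    options = [positive] + rng.sample(negatives, n_options - 1)
    rng.shuffle(options)
    return {
        "question": f"Which dataset best validates: {query}?",
        "options": options,
        "answer": options.index(positive),  # index of the correct choice
    }

mcq = to_mcq(
    "cross-lingual question answering",
    positive="XQuAD",
    negatives=["MNIST", "ImageNet", "LibriSpeech", "CIFAR-10"],
)
```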
To reproduce the construction of this dataset, please visit [https://github.com/shruti-singh/scidata_recommendation](https://github.com/shruti-singh/scidata_recommendation).
Please note that the query instances in this dataset have no intersection with the [`dataset_recommendation_mcq_mc`](https://huggingface.co/datasets/shrutisingh/dataset_recommendation_mcq_mc) dataset. |
saberai/Zro_GSM | ---
license: apache-2.0
---
|
Falah/micro_photography_subjects | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 859747
num_examples: 10000
download_size: 72310
dataset_size: 859747
---
# Dataset Card for "micro_photography_subjects"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Den4ikAI/gibberish_dataset | ---
license: apache-2.0
task_categories:
- text-classification
language:
- ru
size_categories:
- 10K<n<100K
---
Types of textual garbage in the dataset:
1. Face rolled on the keyboard (ойшойвщф фващощфащшгй0ш шйждыфл) — garbage that looks like randomly typed words. Collecting such garbage is fairly easy: randomly generate "words" of varying length and, with some probability, insert punctuation marks between words and at the end of a sentence.
2. A set of unrelated words (замок двойка иван кванты чат). Most often this is a set of keywords from some website or pieces of its interface. Generating this kind of garbage is also simple: take sentences from corpora (in my case, librusec and web_public from there), tokenize them, shuffle the tokens, and that's it.
3. Texts with grammatical errors, errors in word meaning, or any syntactic deviations that make a sentence lose its coherent meaning (ученик учится в школа). This type of text is generated by randomly inflecting a given word.
4. Neural-network gibberish. This class of garbage is similar to the previous one, but is not always a matter of incorrect inflections. (колонок настроен для лиц через 18 лет, в бильярдном кадре перекатывать)
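The generation recipes for the first two garbage types can be sketched in a few lines of Python (the alphabet, word-length range, and punctuation probability are illustrative assumptions):

```python
import random

RU = "абвгдежзийклмнопрстуфхцчшщъыьэюя"

def keyboard_mash(n_words, rng):
    # Type 1: "face on the keyboard" -- random "words" of varying length,
    # with punctuation appended with some probability.
    words = []
    for _ in range(n_words):
        word = "".join(rng.choice(RU) for _ in range(rng.randint(3, 12)))
        if rng.random() < 0.2:
            word += rng.choice(",.!?")
        words.append(word)
    return " ".join(words)

def word_salad(sentence, rng):
    # Type 2: a set of unrelated words -- tokenize a real sentence
    # from a corpus and shuffle the tokens.
    tokens = sentence.split()
    rng.shuffle(tokens)
    return " ".join(tokens)

rng = random.Random(42)
mash = keyboard_mash(5, rng)
salad = word_salad("замок стоит на высоком берегу реки", rng)
```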
Blogpost: [link](https://t.me/den4ikresearch/9) |
OneFly7/squad_combined_bert_512 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: train
num_bytes: 79256435.00496581
num_examples: 87500
- name: validation
num_bytes: 10420470.456764428
num_examples: 10517
download_size: 16218785
dataset_size: 89676905.46173024
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
HANTIFARAH/JAWDA-dataset | ---
dataset_info:
features:
- name: text
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
splits:
- name: train
num_bytes: 7439606845
num_examples: 3473674
download_size: 3018742690
dataset_size: 7439606845
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jaimebw/nyt_dataset | ---
license: cc0-1.0
---
# NYT News Dataset, 2008-2021 from [Kaggle](https://www.kaggle.com/datasets/brendanmiles/nyt-news-dataset-20082021)
Includes Title, Date, Topics in the News, Abstract, and Keywords
[Github repo with source code](https://github.com/jaimebw/nyt_hugginface) |
laugustyniak/abusive-clauses-pl | ---
annotations_creators:
- hired_annotators
language_creators:
- found
language:
- pl
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10<n<10K
task_categories:
- text-classification
task_ids:
- text-classification
pretty_name: Polish-Abusive-Clauses
---
# PAC - Polish Abusive Clauses Dataset
''I have read and agree to the terms and conditions'' is one of the biggest lies on the Internet. Consumers rarely read the contracts they are required to accept. We conclude agreements over the Internet daily. But do we know the content of these agreements? Do we check potential unfair statements? On the Internet, we probably skip most of the Terms and Conditions. However, we must remember that we have concluded many more contracts. Imagine that we want to buy a house or a car, send our kids to the nursery, open a bank account, and many more. In all these situations, you will need to conclude a contract, but there is a high probability that you will not read the entire agreement with proper understanding. European consumer law aims to prevent businesses from using so-called ''unfair contractual terms'' in their unilaterally drafted contracts, which consumers are required to accept.
Our dataset treats ''unfair contractual term'' as the equivalent of an abusive clause. It could be defined as a clause that is unilaterally imposed by one of the contract's parties, unequally affecting the other, or creating a situation of imbalance between the duties and rights of the parties.
At both the EU level and the national level (in Poland, for example), agencies cannot check all possible agreements by hand. Hence, we took the first step toward evaluating the possibility of accelerating this process. We created a dataset and machine learning models to partially automate the detection of potentially abusive clauses. Consumer protection organizations and agencies can use these resources to make their work more effective and efficient. Moreover, consumers can automatically analyze contracts and understand what they agree upon.
## Tasks (input, output and metrics)
Abusive Clauses Detection
**Input** (`text` column): text of the agreement
**Output** (`label` column): binary label (`BEZPIECZNE_POSTANOWIENIE_UMOWNE`: correct agreement statement, `KLAUZULA_ABUZYWNA`: abusive clause)
**Domain**: legal agreement
**Measurements**: Accuracy, F1 Macro
**Example**:
Input: *`Wszelka korespondencja wysyłana przez Pożyczkodawcę na adres zamieszkania podany w umowie oraz na e-mail zostaje uznana za skutecznie doręczoną. Zmiana adresu e-mail oraz adresu zamieszkania musi być dostarczona do Pożyczkodawcy osobiście`*
Input (translated by DeepL): *`All correspondence sent by the Lender to the residential address provided in the agreement and to the e-mail address shall be deemed effectively delivered. Change of e-mail address and residential address must be delivered to the Lender in person`*
Output: `KLAUZULA_ABUZYWNA` (abusive clause)
## Data splits
| Subset | Cardinality (sentences) |
| ----------- | ----------------------: |
| train | 4284 |
| dev | 1519 |
| test | 3453 |
## Class distribution
`BEZPIECZNE_POSTANOWIENIE_UMOWNE` denotes a correct agreement statement.
`KLAUZULA_ABUZYWNA` denotes an abusive clause.
| Class | train | dev | test |
|:--------------------------------|--------:|-------------:|-------:|
| BEZPIECZNE_POSTANOWIENIE_UMOWNE | 0.5458 | 0.3002 | 0.6756 |
| KLAUZULA_ABUZYWNA | 0.4542 | 0.6998 | 0.3244 |
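The card reports Accuracy and F1 Macro as measurements but includes no evaluation code; a minimal pure-Python sketch of the macro-F1 computation over the two label strings might look like this (the toy predictions are invented for illustration, not drawn from the dataset):

```python
def macro_f1(y_true, y_pred, labels):
    # Macro F1: unweighted mean of per-class F1 scores.
    scores = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * precision * recall / (precision + recall)
                      if precision + recall else 0.0)
    return sum(scores) / len(scores)

SAFE, ABUSIVE = "BEZPIECZNE_POSTANOWIENIE_UMOWNE", "KLAUZULA_ABUZYWNA"
y_true = [SAFE, SAFE, ABUSIVE, ABUSIVE]
y_pred = [SAFE, ABUSIVE, ABUSIVE, ABUSIVE]
score = macro_f1(y_true, y_pred, [SAFE, ABUSIVE])
```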
## License
[Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
## Citation
```bibtex
@inproceedings{NEURIPS2022_890b206e,
author = {Augustyniak, Lukasz and Tagowski, Kamil and Sawczyn, Albert and Janiak, Denis and Bartusiak, Roman and Szymczak, Adrian and Janz, Arkadiusz and Szyma\'{n}ski, Piotr and W\k{a}troba, Marcin and Morzy, Miko\l aj and Kajdanowicz, Tomasz and Piasecki, Maciej},
booktitle = {Advances in Neural Information Processing Systems},
editor = {S. Koyejo and S. Mohamed and A. Agarwal and D. Belgrave and K. Cho and A. Oh},
pages = {21805--21818},
publisher = {Curran Associates, Inc.},
title = {This is the way: designing and compiling LEPISZCZE, a comprehensive NLP benchmark for Polish},
url = {https://proceedings.neurips.cc/paper_files/paper/2022/file/890b206ebb79e550f3988cb8db936f42-Paper-Datasets_and_Benchmarks.pdf},
volume = {35},
year = {2022}
}
``` |
ppoliver/deat | ---
license: mit
---
|
isek-ai/ak-fandom-20230821-raw | ---
language:
- en
license: cc-by-sa-4.0
size_categories:
- 10K<n<100K
pretty_name: Arknights Fandom Wiki (Raw) 20230821
dataset_info:
features:
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 41839104
num_examples: 10937
download_size: 20610229
dataset_size: 41839104
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# ak-fandom-20230821-raw
A dataset generated from [the dump](https://arknights.fandom.com/wiki/Special:Statistics) of [Arknights Fandom wiki](https://arknights.fandom.com/wiki/Arknights_Wiki). |
awacke1/LOINC-Code-Value-Semantic-Set.csv | ---
license: mit
---
|
autoevaluate/autoeval-staging-eval-project-d60b4e7e-7574882 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- xtreme
eval_info:
task: entity_extraction
model: Gerard/xlm-roberta-base-finetuned-panx-de
metrics: []
dataset_name: xtreme
dataset_config: PAN-X.de
dataset_split: test
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: Gerard/xlm-roberta-base-finetuned-panx-de
* Dataset: xtreme
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
AdapterOcean/augmentatio-standardized_cluster_8_std | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: cluster
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 9176034
num_examples: 6540
download_size: 4302367
dataset_size: 9176034
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "augmentatio-standardized_cluster_8_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
16lemoing/cvo | ---
license: mit
---
|
fsuarez/autotrain-data-logo_identifier_v5_short | ---
task_categories:
- image-classification
---
# AutoTrain Dataset for project: logo_identifier_v5_short
## Dataset Description
This dataset has been automatically processed by AutoTrain for project logo_identifier_v5_short.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<86x100 RGB PIL image>",
"target": 48
},
{
"image": "<128x128 RGB PIL image>",
"target": 36
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['20thTelevision', '3M', '7Eleven', 'Acer', 'AmericanExpress', 'Amul', 'Anthem', 'ApolloHospitals', 'Apple', 'Armani', 'Asahi', 'Asus', 'Atari', 'Audi', 'Avon', 'Booking', 'Bosch', 'Bridgestone', 'British Airways', 'Budweiser', 'Burberry', 'BurgerKing', 'BuzzFeed', 'Canon', 'CocaColaZero', 'Coleman', 'Coles', 'Converse', 'CornFlakes', 'Corona', 'CostcoWholesale', 'Crayola', 'Credit Agricole', 'Crocs', 'Crunchyroll', 'Ctrip', 'Dropbox', 'Ducati', 'DunkinDonuts', 'Duracell', 'Dyson', 'Ethereum', 'ExxonMobil', 'FoxNews', 'FreddieMac', 'Fujitsu', 'Goodyear', 'Grubhub', 'Gucci', 'Huawei', 'Hudson Bay Company', 'HugoBoss', 'Hulu', 'Hyundai', 'Instagram', 'Intel', 'John Lewis & Partners', 'Johnson&Johnson', 'Kingston', 'LouisVuitton', 'Lowes', 'Lufthansa', 'Lululemon', 'Luxottica', 'MorganStanley', 'Motorola', 'MountainDew', 'Moutai', 'Movistar', 'Msci', 'Muji', 'Nike', 'Nissan', 'Nokia', 'Nvidia', 'Orange', 'Oreo', 'Porsche', 'Power China', 'Prada', 'Pringles', 'Publix', 'Puma', 'Purina', 'PwC', 'Qualcomm', 'Rolex', 'Rolls-Royce', 'RoyalCaribbean', 'Spotify', 'Sprite', 'Starbucks', 'StateBankofIndia', 'StateGrid', 'Subaru', 'Subway', 'Suning', 'Supreme', 'Suzuki', 'Total SA', 'TotalEnergies', 'Toyota', 'TripAdvisor', 'Twitch', 'Twitter', 'UnitedHealthCare', 'Universal', 'Volkswagen', 'Volvo', 'Wikipedia', 'Wipro', 'Wuliangye', 'Xiaomi', 'Youtube', 'Zoom', 'hennessy', 'iHeartRadio', 'koolAid'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 6814 |
| valid | 1768 |
|
Hrukanina/llama-2ch-little | ---
license: openrail
---
|
mzainmehar/urddataset | ---
license: llama2
---
|
BigTMiami/amazon_split_25M_reviews_condensed_part_1_of_20 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 1436987340
num_examples: 215505
- name: validation
num_bytes: 55997864
num_examples: 8398
download_size: 475893412
dataset_size: 1492985204
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
autoevaluate/autoeval-staging-eval-project-f89b1257-9045192 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- lewtun/dog_food
eval_info:
task: image_multi_class_classification
model: abhishek/convnext-tiny-finetuned-dogfood
metrics: []
dataset_name: lewtun/dog_food
dataset_config: lewtun--dog_food
dataset_split: train
col_mapping:
image: image
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Image Classification
* Model: abhishek/convnext-tiny-finetuned-dogfood
* Dataset: lewtun/dog_food
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@aciborowska](https://huggingface.co/aciborowska) for evaluating this model. |
textminr/simplebooks | ---
license: cc
language:
- en
pretty_name: SimpleBooks
--- |
yuan-sf63/word_label_0.5_64_Nf | ---
dataset_info:
features:
- name: text
dtype: string
- name: '0'
dtype: int64
- name: '1'
dtype: int64
- name: '2'
dtype: int64
- name: '3'
dtype: int64
- name: '4'
dtype: int64
- name: '5'
dtype: int64
- name: '6'
dtype: int64
- name: '7'
dtype: int64
- name: '8'
dtype: int64
- name: '9'
dtype: int64
- name: '10'
dtype: int64
- name: '11'
dtype: int64
- name: '12'
dtype: int64
- name: '13'
dtype: int64
- name: '14'
dtype: int64
- name: '15'
dtype: int64
- name: '16'
dtype: int64
- name: '17'
dtype: int64
- name: '18'
dtype: int64
- name: '19'
dtype: int64
- name: '20'
dtype: int64
- name: '21'
dtype: int64
- name: '22'
dtype: int64
- name: '23'
dtype: int64
- name: '24'
dtype: int64
- name: '25'
dtype: int64
- name: '26'
dtype: int64
- name: '27'
dtype: int64
- name: '28'
dtype: int64
- name: '29'
dtype: int64
- name: '30'
dtype: int64
- name: '31'
dtype: int64
- name: '32'
dtype: int64
- name: '33'
dtype: int64
- name: '34'
dtype: int64
- name: '35'
dtype: int64
- name: '36'
dtype: int64
- name: '37'
dtype: int64
- name: '38'
dtype: int64
- name: '39'
dtype: int64
- name: '40'
dtype: int64
- name: '41'
dtype: int64
- name: '42'
dtype: int64
- name: '43'
dtype: int64
- name: '44'
dtype: int64
- name: '45'
dtype: int64
- name: '46'
dtype: int64
- name: '47'
dtype: int64
- name: '48'
dtype: int64
- name: '49'
dtype: int64
- name: '50'
dtype: int64
- name: '51'
dtype: int64
- name: '52'
dtype: int64
- name: '53'
dtype: int64
- name: '54'
dtype: int64
- name: '55'
dtype: int64
- name: '56'
dtype: int64
- name: '57'
dtype: int64
- name: '58'
dtype: int64
- name: '59'
dtype: int64
- name: '60'
dtype: int64
- name: '61'
dtype: int64
- name: '62'
dtype: int64
- name: '63'
dtype: int64
splits:
- name: train
num_bytes: 44587747.16477193
num_examples: 70695
- name: validation
num_bytes: 4954824.835228069
num_examples: 7856
download_size: 9038324
dataset_size: 49542572.0
---
# Dataset Card for "word_label_0.5_64_Nf"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
vvsotnikov/mm-astronomy | ---
license: mit
language:
- en
tags:
- logical reasoning
- reading comprehension
- common sense
- astrophysics
metrics:
- multiple_choice_grade
task_categories:
- question-answering
task_ids:
- open-domain-qa
- multiple-choice-qa
dataset_info:
features:
- name: id
dtype: string
- name: message
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: answerKey
dtype: string
splits:
- name: train
num_examples: 190
- name: validation
num_examples: 568
size_categories:
- n<1K
---
A set of NER-related questions about multimessenger astronomy. |
open-llm-leaderboard/details_Weyaxi__zephyr-beta-Nebula-v2-7B | ---
pretty_name: Evaluation run of Weyaxi/zephyr-beta-Nebula-v2-7B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Weyaxi/zephyr-beta-Nebula-v2-7B](https://huggingface.co/Weyaxi/zephyr-beta-Nebula-v2-7B)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 1 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Weyaxi__zephyr-beta-Nebula-v2-7B\"\
,\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese\
\ are the [latest results from run 2023-12-02T13:42:20.652326](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__zephyr-beta-Nebula-v2-7B/blob/main/results_2023-12-02T13-42-20.652326.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.17513267626990145,\n\
\ \"acc_stderr\": 0.010469307043157914\n },\n \"harness|gsm8k|5\":\
\ {\n \"acc\": 0.17513267626990145,\n \"acc_stderr\": 0.010469307043157914\n\
\ }\n}\n```"
repo_url: https://huggingface.co/Weyaxi/zephyr-beta-Nebula-v2-7B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_gsm8k_5
data_files:
- split: 2023_12_02T13_42_20.652326
path:
- '**/details_harness|gsm8k|5_2023-12-02T13-42-20.652326.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-12-02T13-42-20.652326.parquet'
- config_name: results
data_files:
- split: 2023_12_02T13_42_20.652326
path:
- results_2023-12-02T13-42-20.652326.parquet
- split: latest
path:
- results_2023-12-02T13-42-20.652326.parquet
---
# Dataset Card for Evaluation run of Weyaxi/zephyr-beta-Nebula-v2-7B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Weyaxi/zephyr-beta-Nebula-v2-7B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [Weyaxi/zephyr-beta-Nebula-v2-7B](https://huggingface.co/Weyaxi/zephyr-beta-Nebula-v2-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Weyaxi__zephyr-beta-Nebula-v2-7B",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-02T13:42:20.652326](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__zephyr-beta-Nebula-v2-7B/blob/main/results_2023-12-02T13-42-20.652326.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.17513267626990145,
"acc_stderr": 0.010469307043157914
},
"harness|gsm8k|5": {
"acc": 0.17513267626990145,
"acc_stderr": 0.010469307043157914
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
MarlaR/dalle | ---
license: artistic-2.0
---
|
Sk4372/Gpt | ---
license: openrail
---
|
DynamicSuperb/SpeakerCounting_LibriTTS-TestClean | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype: audio
- name: instruction
dtype: string
- name: label
dtype: string
- name: utterance 1
dtype: string
- name: utterance 2
dtype: string
- name: utterance 3
dtype: string
- name: utterance 4
dtype: string
- name: utterance 5
dtype: string
splits:
- name: test
num_bytes: 44983164.0
num_examples: 200
download_size: 38657129
dataset_size: 44983164.0
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
# Dataset Card for "SpeakerCounting_LibriTTSTestClean"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
linhqyy/result_with_finetuned_taggenv2_40epoch_unfreeze | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: id
dtype: string
- name: w2v2_baseline_transcription
dtype: string
- name: w2v2_baseline_norm
dtype: string
splits:
- name: train
num_bytes: 174371470.027
num_examples: 1299
download_size: 164200894
dataset_size: 174371470.027
---
# Dataset Card for "result_with_finetuned_taggenv2_40epoch_unfreeze"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/echidna_rezero | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of echidna (Re:Zero Kara Hajimeru Isekai Seikatsu)
This is the dataset of echidna (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
|
AdapterOcean/augmentatio-standardized_cluster_0_std | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: cluster
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 6065771
num_examples: 4720
download_size: 2954709
dataset_size: 6065771
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "augmentatio-standardized_cluster_0_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ericwky/dataset_basel_framework | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 5608706
num_examples: 17622
download_size: 2226078
dataset_size: 5608706
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
King-Harry/NinjaMasker-PII-Redaction-Dataset | ---
license: apache-2.0
---
|
eturok/harry_potter_tokenized | ---
dataset_info:
features:
- name: input_ids
sequence: int32
splits:
- name: train
num_bytes: 2194380.0
num_examples: 1095
- name: test
num_bytes: 549096.0
num_examples: 274
download_size: 1291776
dataset_size: 2743476.0
---
# Dataset Card for "harry_potter_tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
distilled-one-sec-cv12-each-chunk-uniq/chunk_243 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1032486552.0
num_examples: 201186
download_size: 1056461376
dataset_size: 1032486552.0
---
# Dataset Card for "chunk_243"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
galaxychen/da_resample_part1 | ---
license: apache-2.0
---
|
ruanchaves/assin2_por_Latn_to_glg_Latn | ---
dataset_info:
features:
- name: sentence_pair_id
dtype: int64
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: relatedness_score
dtype: float32
- name: entailment_judgment
dtype:
class_label:
names:
'0': NONE
'1': ENTAILMENT
- name: __language__
dtype: string
splits:
- name: train
num_bytes: 873989
num_examples: 6500
- name: test
num_bytes: 340838
num_examples: 2448
- name: validation
num_bytes: 67669
num_examples: 500
download_size: 0
dataset_size: 1282496
---
# Dataset Card for "assin2_por_Latn_to_glg_Latn"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
flaviolima/coringa1 | ---
license: openrail
---
|
Doganaws/dam1r | ---
license: openrail
---
|
wtcherr/unsplash_20k | ---
dataset_info:
features:
- name: image
dtype: image
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 2560499324.351
num_examples: 19999
download_size: 440556200
dataset_size: 2560499324.351
---
# Dataset Card for "unsplash_20k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Circularmachines/batch_indexing_machine_230529_015 | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 159162453.0
num_examples: 720
download_size: 159173563
dataset_size: 159162453.0
---
# Dataset Card for "batch_indexing_machine_230529_015"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Codec-SUPERB/m4singer_unit | ---
dataset_info:
features:
- name: id
dtype: string
- name: unit
sequence:
sequence: int64
splits:
- name: academicodec_hifi_16k_320d
num_bytes: 2014533
num_examples: 217
- name: academicodec_hifi_16k_320d_large_uni
num_bytes: 2014533
num_examples: 217
- name: academicodec_hifi_24k_320d
num_bytes: 3017445
num_examples: 217
- name: audiodec_24k_320d
num_bytes: 6441589
num_examples: 217
- name: dac_16k
num_bytes: 7496069
num_examples: 217
- name: dac_24k
num_bytes: 30083861
num_examples: 217
- name: dac_44k
num_bytes: 9784697
num_examples: 217
- name: encodec_24k_12bps
num_bytes: 12073941
num_examples: 217
- name: encodec_24k_1_5bps
num_bytes: 1515421
num_examples: 217
- name: encodec_24k_24bps
num_bytes: 24140821
num_examples: 217
- name: encodec_24k_3bps
num_bytes: 3023781
num_examples: 217
- name: encodec_24k_6bps
num_bytes: 6040501
num_examples: 217
- name: funcodec_en_libritts_16k_gr1nq32ds320
num_bytes: 16122389
num_examples: 217
- name: funcodec_en_libritts_16k_gr8nq32ds320
num_bytes: 16122389
num_examples: 217
- name: funcodec_en_libritts_16k_nq32ds320
num_bytes: 16110613
num_examples: 217
- name: funcodec_en_libritts_16k_nq32ds640
num_bytes: 8087573
num_examples: 217
- name: funcodec_zh_en_16k_nq32ds320
num_bytes: 16110613
num_examples: 217
- name: funcodec_zh_en_16k_nq32ds640
num_bytes: 8087573
num_examples: 217
- name: speech_tokenizer_16k
num_bytes: 4032949
num_examples: 217
download_size: 29084937
dataset_size: 192321291
configs:
- config_name: default
data_files:
- split: academicodec_hifi_16k_320d
path: data/academicodec_hifi_16k_320d-*
- split: academicodec_hifi_16k_320d_large_uni
path: data/academicodec_hifi_16k_320d_large_uni-*
- split: academicodec_hifi_24k_320d
path: data/academicodec_hifi_24k_320d-*
- split: audiodec_24k_320d
path: data/audiodec_24k_320d-*
- split: dac_16k
path: data/dac_16k-*
- split: dac_24k
path: data/dac_24k-*
- split: dac_44k
path: data/dac_44k-*
- split: encodec_24k_12bps
path: data/encodec_24k_12bps-*
- split: encodec_24k_1_5bps
path: data/encodec_24k_1_5bps-*
- split: encodec_24k_24bps
path: data/encodec_24k_24bps-*
- split: encodec_24k_3bps
path: data/encodec_24k_3bps-*
- split: encodec_24k_6bps
path: data/encodec_24k_6bps-*
- split: funcodec_en_libritts_16k_gr1nq32ds320
path: data/funcodec_en_libritts_16k_gr1nq32ds320-*
- split: funcodec_en_libritts_16k_gr8nq32ds320
path: data/funcodec_en_libritts_16k_gr8nq32ds320-*
- split: funcodec_en_libritts_16k_nq32ds320
path: data/funcodec_en_libritts_16k_nq32ds320-*
- split: funcodec_en_libritts_16k_nq32ds640
path: data/funcodec_en_libritts_16k_nq32ds640-*
- split: funcodec_zh_en_16k_nq32ds320
path: data/funcodec_zh_en_16k_nq32ds320-*
- split: funcodec_zh_en_16k_nq32ds640
path: data/funcodec_zh_en_16k_nq32ds640-*
- split: speech_tokenizer_16k
path: data/speech_tokenizer_16k-*
---
|
Azure99/blossom-orca-v3 | ---
license: apache-2.0
task_categories:
- text-generation
- text2text-generation
language:
- zh
- en
size_categories:
- 100K<n<1M
---
# BLOSSOM ORCA V3
### Introduction
Blossom Orca V3 is a bilingual Chinese-English instruction dataset derived from OpenOrca, suitable for instruction fine-tuning.
Compared to blossom-wizard-v2, this version is distilled entirely with GPT-4.
This dataset extracts system prompts and instructions from OpenOrca. The instructions were first translated into Chinese and the translations verified; the gpt-4-0125-preview model was then called with these instructions to generate responses, and responses containing self-identity statements or refusals were filtered out to facilitate later alignment. In addition, to ensure consistency of response style and the Chinese-English data ratio, the same calls were also made on the untranslated original instructions, yielding a 1:1 bilingual Chinese-English instruction dataset.
Compared to Chinese datasets obtained by directly translating the original OpenOrca, Blossom Orca is more consistent and of higher quality.
This release contains 50% of the full data: 20K records each in Chinese and English, 40K records in total.
### Languages
Primarily Chinese and English.
### Dataset Structure
Each record represents a complete conversation and contains two fields: id and conversations.
- id: increments from 1.
- conversations: an array of objects, each containing role and content fields. role is either user or assistant, representing user input and assistant output respectively; content is the corresponding text.
### Dataset Limitations
All responses in this dataset were generated by gpt-4-0125-preview and have not undergone strict validation; they may contain inaccurate or even seriously wrong answers. In addition, because refusal responses were filtered out, a model trained only on this dataset may not refuse illegal requests. |
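A hypothetical record in this structure, with a minimal sketch of flattening it into (prompt, response) training pairs (the content strings are invented for illustration):

```python
# A hypothetical record following the structure above (content invented).
record = {
    "id": 1,
    "conversations": [
        {"role": "user", "content": "What is instruction tuning?"},
        {"role": "assistant",
         "content": "Instruction tuning fine-tunes a model on instruction-response pairs."},
    ],
}

def to_pairs(conversations):
    # Pair each user turn with the assistant turn that follows it.
    pairs = []
    for prev, cur in zip(conversations, conversations[1:]):
        if prev["role"] == "user" and cur["role"] == "assistant":
            pairs.append((prev["content"], cur["content"]))
    return pairs

pairs = to_pairs(record["conversations"])
```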
slone/nllb-200-10M-sample | ---
dataset_info:
features:
- name: laser_score
dtype: float64
- name: lang1
dtype: string
- name: text1
dtype: string
- name: lang2
dtype: string
- name: text2
dtype: string
- name: blaser_sim
dtype: float64
splits:
- name: train
num_bytes: 2279333006.0
num_examples: 9983398
download_size: 1825697094
dataset_size: 2279333006.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: odc-by
task_categories:
- translation
pretty_name: nllb-200-10M-sample
size_categories:
- 1M<n<10M
language:
- ak # aka_Latn Akan
- am # amh_Ethi Amharic
- ar # arb_Arab Modern Standard Arabic
- awa # awa_Deva Awadhi
- azj # azj_Latn North Azerbaijani
- bm # bam_Latn Bambara
- ban # ban_Latn Balinese
- be # bel_Cyrl Belarusian
- bem # bem_Latn Bemba
- bn # ben_Beng Bengali
- bho # bho_Deva Bhojpuri
- bjn # bjn_Latn Banjar (Latin script)
- bug # bug_Latn Buginese
- bg # bul_Cyrl Bulgarian
- ca # cat_Latn Catalan
- ceb # ceb_Latn Cebuano
- cs # ces_Latn Czech
- cjk # cjk_Latn Chokwe
- ckb # ckb_Arab Central Kurdish
- crh # crh_Latn Crimean Tatar
- da # dan_Latn Danish
- de # deu_Latn German
- dik # dik_Latn Southwestern Dinka
- dyu # dyu_Latn Dyula
- el # ell_Grek Greek
- en # eng_Latn English
- eo # epo_Latn Esperanto
- et # est_Latn Estonian
- ee # ewe_Latn Ewe
- fo # fao_Latn Faroese
- fj # fij_Latn Fijian
- fi # fin_Latn Finnish
- fon # fon_Latn Fon
- fr # fra_Latn French
- fur # fur_Latn Friulian
- ff # fuv_Latn Nigerian Fulfulde
- gaz # gaz_Latn West Central Oromo
- gd # gla_Latn Scottish Gaelic
- ga # gle_Latn Irish
- gl # glg_Latn Galician
- gn # grn_Latn Guarani
- gu # guj_Gujr Gujarati
- ht # hat_Latn Haitian Creole
- ha # hau_Latn Hausa
- he # heb_Hebr Hebrew
- hi # hin_Deva Hindi
- hne # hne_Deva Chhattisgarhi
- hr # hrv_Latn Croatian
- hu # hun_Latn Hungarian
- hy # hye_Armn Armenian
- ig # ibo_Latn Igbo
- ilo # ilo_Latn Ilocano
- id # ind_Latn Indonesian
- is # isl_Latn Icelandic
- it # ita_Latn Italian
- jv # jav_Latn Javanese
- ja # jpn_Jpan Japanese
- kab # kab_Latn Kabyle
- kac # kac_Latn Jingpho
- kam # kam_Latn Kamba
- kn # kan_Knda Kannada
- ks # kas_Arab Kashmiri (Arabic script)
- ks # kas_Deva Kashmiri (Devanagari script)
- ka # kat_Geor Georgian
- kk # kaz_Cyrl Kazakh
- kbp # kbp_Latn Kabiyè
- kea # kea_Latn Kabuverdianu
- mn # khk_Cyrl Halh Mongolian
- km # khm_Khmr Khmer
- ki # kik_Latn Kikuyu
- rw # kin_Latn Kinyarwanda
- ky # kir_Cyrl Kyrgyz
- kmb # kmb_Latn Kimbundu
- kmr # kmr_Latn Northern Kurdish
- kr # knc_Arab Central Kanuri (Arabic script)
- kr # knc_Latn Central Kanuri (Latin script)
- kg # kon_Latn Kikongo
- ko # kor_Hang Korean
- lo # lao_Laoo Lao
- lij # lij_Latn Ligurian
- li # lim_Latn Limburgish
- ln # lin_Latn Lingala
- lt # lit_Latn Lithuanian
- lmo # lmo_Latn Lombard
- ltg # ltg_Latn Latgalian
- lb # ltz_Latn Luxembourgish
- lua # lua_Latn Luba-Kasai
- lg # lug_Latn Ganda
- luo # luo_Latn Luo
- lus # lus_Latn Mizo
- lv # lvs_Latn Standard Latvian
- mag # mag_Deva Magahi
- mai # mai_Deva Maithili
- ml # mal_Mlym Malayalam
- mr # mar_Deva Marathi
- min # min_Latn Minangkabau (Latin script)
- mk # mkd_Cyrl Macedonian
- mt # mlt_Latn Maltese
- mni # mni_Beng Meitei (Bengali script)
- mos # mos_Latn Mossi
- mi # mri_Latn Maori
- my # mya_Mymr Burmese
- nl # nld_Latn Dutch
- nb # nob_Latn Norwegian Bokmål
- ne # npi_Deva Nepali
- nso # nso_Latn Northern Sotho
- nus # nus_Latn Nuer
- ny # nya_Latn Nyanja
- oc # oci_Latn Occitan
- ory # ory_Orya Odia
- pag # pag_Latn Pangasinan
- pa # pan_Guru Eastern Panjabi
- pap # pap_Latn Papiamento
- pbt # pbt_Arab Southern Pashto
- fa # pes_Arab Western Persian
- plt # plt_Latn Plateau Malagasy
- pl # pol_Latn Polish
- pt # por_Latn Portuguese
- prs # prs_Arab Dari
- qu # quy_Latn Ayacucho Quechua
- ro # ron_Latn Romanian
- rn # run_Latn Rundi
- ru # rus_Cyrl Russian
- sg # sag_Latn Sango
- sa # san_Deva Sanskrit
- sat # sat_Beng Santali (Bengali script)
- scn # scn_Latn Sicilian
- shn # shn_Mymr Shan
- si # sin_Sinh Sinhala
- sk # slk_Latn Slovak
- sl # slv_Latn Slovenian
- sm # smo_Latn Samoan
- sn # sna_Latn Shona
- sd # snd_Arab Sindhi
- so # som_Latn Somali
- st # sot_Latn Southern Sotho
- es # spa_Latn Spanish
- sc # srd_Latn Sardinian
- sr # srp_Cyrl Serbian
- ss # ssw_Latn Swati
- su # sun_Latn Sundanese
- sv # swe_Latn Swedish
- sw # swh_Latn Swahili
- szl # szl_Latn Silesian
- ta # tam_Taml Tamil
- taq # taq_Latn Tamasheq (Latin script)
- tt # tat_Cyrl Tatar
- te # tel_Telu Telugu
- tg # tgk_Cyrl Tajik
- tl # tgl_Latn Tagalog
- ti # tir_Ethi Tigrinya
- tpi # tpi_Latn Tok Pisin
- tn # tsn_Latn Tswana
- ts # tso_Latn Tsonga
- tk # tuk_Latn Turkmen
- tum # tum_Latn Tumbuka
- tr # tur_Latn Turkish
- tw # twi_Latn Twi
- tzm # tzm_Tfng Central Atlas Tamazight
- ug # uig_Arab Uyghur
- uk # ukr_Cyrl Ukrainian
- umb # umb_Latn Umbundu
- ur # urd_Arab Urdu
- uz # uzn_Latn Northern Uzbek
- vec # vec_Latn Venetian
- vi # vie_Latn Vietnamese
- war # war_Latn Waray
- wo # wol_Latn Wolof
- xh # xho_Latn Xhosa
- yi # ydd_Hebr Eastern Yiddish
- yo # yor_Latn Yoruba
- zh # zho_Hans Chinese (Simplified)
- zh # zho_Hant Chinese (Traditional)
- ms # zsm_Latn Standard Malay
- zu # zul_Latn Zulu
---
# Dataset Card for "nllb-200-10M-sample"
This is a sample of nearly 10M sentence pairs from the [NLLB-200](https://arxiv.org/abs/2207.04672)
mined dataset [allenai/nllb](https://huggingface.co/datasets/allenai/nllb),
scored with the model [facebook/blaser-2.0-qe](https://huggingface.co/facebook/blaser-2.0-qe)
described in the [SeamlessM4T](https://arxiv.org/abs/2308.11596) paper.
The sample is not random; instead, we just took the top `n` sentence pairs from each translation direction.
The number `n` was computed with the goal of upsampling the directions that contain underrepresented languages.
Nevertheless, the 187 languoids (language and script combinations) are not represented equally,
with most languoids totaling 36K to 200K sentences.
Over 60% of the sentence pairs have a BLASER-QE score above 3.5.
This dataset can be used for fine-tuning massively multilingual translation models.
We suggest the following scenario:
- Filter the dataset by the value of `blaser_sim` (the recommended threshold is 3.0 or 3.5);
- Randomly swap the source/target roles in the sentence pairs during data loading;
- Use that data to augment the dataset while fine-tuning an NLLB-like model for a new translation direction,
in order to mitigate forgetting of all the other translation directions.
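A minimal sketch of the first two steps (the column names follow the schema above; the 3.5 threshold and the 50/50 swap probability are just example choices, and the function name is ours, not part of this dataset):

```python
import random

def filter_and_swap(rows, threshold=3.5, seed=0):
    """Keep pairs whose blaser_sim is at least `threshold` and randomly
    swap the source/target roles, as suggested above."""
    rng = random.Random(seed)
    out = []
    for row in rows:
        if row["blaser_sim"] < threshold:
            continue  # drop low-quality pairs
        src_lang, src, tgt_lang, tgt = (
            row["lang1"], row["text1"], row["lang2"], row["text2"]
        )
        if rng.random() < 0.5:  # randomly swap the translation direction
            src_lang, src, tgt_lang, tgt = tgt_lang, tgt, src_lang, src
        out.append({"src_lang": src_lang, "src": src,
                    "tgt_lang": tgt_lang, "tgt": tgt})
    return out
```

In practice you would stream the split with `load_dataset("slone/nllb-200-10M-sample", split="train", streaming=True)` and apply the same logic per example.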
The dataset is released under the terms of [ODC-BY](https://opendatacommons.org/licenses/by/1-0/).
By using this dataset, you are also bound by the respective Terms of Use and License of the original source.
Citation:
- NLLB Team et al., *No Language Left Behind: Scaling Human-Centered Machine Translation*, arXiv, https://arxiv.org/abs/2207.04672, 2022.
- Seamless Communication et al., *SeamlessM4T — Massively Multilingual & Multimodal Machine Translation*, arXiv, https://arxiv.org/abs/2308.11596, 2023.
The following language codes are supported. The mapping between languages and codes can be found in the [NLLB-200 paper](https://arxiv.org/abs/2207.04672)
or in the [FLORES-200 repository](https://github.com/facebookresearch/flores/blob/main/flores200/README.md#languages-in-flores-200).
```
aka_Latn amh_Ethi arb_Arab awa_Deva azj_Latn bam_Latn ban_Latn bel_Cyrl bem_Latn ben_Beng bho_Deva bjn_Latn
bug_Latn bul_Cyrl cat_Latn ceb_Latn ces_Latn cjk_Latn ckb_Arab crh_Latn dan_Latn deu_Latn dik_Latn dyu_Latn
ell_Grek eng_Latn epo_Latn est_Latn ewe_Latn fao_Latn fij_Latn fin_Latn fon_Latn fra_Latn fur_Latn fuv_Latn
gaz_Latn gla_Latn gle_Latn glg_Latn grn_Latn guj_Gujr hat_Latn hau_Latn heb_Hebr hin_Deva hne_Deva hrv_Latn
hun_Latn hye_Armn ibo_Latn ilo_Latn ind_Latn isl_Latn ita_Latn jav_Latn jpn_Jpan kab_Latn kac_Latn kam_Latn
kan_Knda kas_Arab kas_Deva kat_Geor kaz_Cyrl kbp_Latn kea_Latn khk_Cyrl khm_Khmr kik_Latn kin_Latn kir_Cyrl
kmb_Latn kmr_Latn knc_Arab knc_Latn kon_Latn kor_Hang lao_Laoo lij_Latn lim_Latn lin_Latn lit_Latn lmo_Latn
ltg_Latn ltz_Latn lua_Latn lug_Latn luo_Latn lus_Latn lvs_Latn mag_Deva mai_Deva mal_Mlym mar_Deva min_Latn
mkd_Cyrl mlt_Latn mni_Beng mos_Latn mri_Latn mya_Mymr nld_Latn nob_Latn npi_Deva nso_Latn nus_Latn nya_Latn
oci_Latn ory_Orya pag_Latn pan_Guru pap_Latn pbt_Arab pes_Arab plt_Latn pol_Latn por_Latn prs_Arab quy_Latn
ron_Latn run_Latn rus_Cyrl sag_Latn san_Deva sat_Beng scn_Latn shn_Mymr sin_Sinh slk_Latn slv_Latn smo_Latn
sna_Latn snd_Arab som_Latn sot_Latn spa_Latn srd_Latn srp_Cyrl ssw_Latn sun_Latn swe_Latn swh_Latn szl_Latn
tam_Taml taq_Latn tat_Cyrl tel_Telu tgk_Cyrl tgl_Latn tir_Ethi tpi_Latn tsn_Latn tso_Latn tuk_Latn tum_Latn
tur_Latn twi_Latn tzm_Tfng uig_Arab ukr_Cyrl umb_Latn urd_Arab uzn_Latn vec_Latn vie_Latn war_Latn wol_Latn
xho_Latn ydd_Hebr yor_Latn zho_Hans zho_Hant zsm_Latn zul_Latn
```
|
CyberHarem/u_110_azurlane | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of u_110/U-110 (Azur Lane)
This is the dataset of u_110/U-110 (Azur Lane), containing 51 images and their tags.
The core tags of this character are `bangs, hair_between_eyes, white_hair, short_hair, grey_eyes, ahoge, blue_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 51 | 56.46 MiB | [Download](https://huggingface.co/datasets/CyberHarem/u_110_azurlane/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 51 | 33.00 MiB | [Download](https://huggingface.co/datasets/CyberHarem/u_110_azurlane/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 121 | 71.04 MiB | [Download](https://huggingface.co/datasets/CyberHarem/u_110_azurlane/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 51 | 51.08 MiB | [Download](https://huggingface.co/datasets/CyberHarem/u_110_azurlane/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 121 | 98.72 MiB | [Download](https://huggingface.co/datasets/CyberHarem/u_110_azurlane/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/u_110_azurlane',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined from here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 26 |  |  |  |  |  | 1girl, looking_at_viewer, solo, black_one-piece_swimsuit, hooded_cloak, blush, hood_down, black_gloves, open_mouth, covered_navel, iron_cross, white_background, animal_hood, white_thighhighs, simple_background, holding_lollipop, breasts, brown_cloak, fang, highleg, swirl_lollipop, water |
| 1 | 10 |  |  |  |  |  | 1girl, hair_flower, long_sleeves, solo, looking_at_viewer, black_dress, closed_mouth, :3, purple_eyes, smile, white_flower, white_pantyhose, white_shirt, two_side_up, blush, full_body, armband, belt, stuffed_animal, wings |
| 2 | 5 |  |  |  |  |  | 1girl, black_pantyhose, long_sleeves, school_uniform, solo, backpack, black_jacket, looking_at_viewer, pleated_skirt, shoes, white_background, white_headwear, black_footwear, closed_mouth, full_body, hood_down, open_jacket, simple_background, beret, black_bow, black_skirt, blazer, blush, collared_shirt, hoodie, sitting, sleeves_past_wrists, sweater_vest, thick_eyebrows, white_shirt |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | looking_at_viewer | solo | black_one-piece_swimsuit | hooded_cloak | blush | hood_down | black_gloves | open_mouth | covered_navel | iron_cross | white_background | animal_hood | white_thighhighs | simple_background | holding_lollipop | breasts | brown_cloak | fang | highleg | swirl_lollipop | water | hair_flower | long_sleeves | black_dress | closed_mouth | :3 | purple_eyes | smile | white_flower | white_pantyhose | white_shirt | two_side_up | full_body | armband | belt | stuffed_animal | wings | black_pantyhose | school_uniform | backpack | black_jacket | pleated_skirt | shoes | white_headwear | black_footwear | open_jacket | beret | black_bow | black_skirt | blazer | collared_shirt | hoodie | sitting | sleeves_past_wrists | sweater_vest | thick_eyebrows |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------------|:-------|:---------------------------|:---------------|:--------|:------------|:---------------|:-------------|:----------------|:-------------|:-------------------|:--------------|:-------------------|:--------------------|:-------------------|:----------|:--------------|:-------|:----------|:-----------------|:--------|:--------------|:---------------|:--------------|:---------------|:-----|:--------------|:--------|:---------------|:------------------|:--------------|:--------------|:------------|:----------|:-------|:-----------------|:--------|:------------------|:-----------------|:-----------|:---------------|:----------------|:--------|:-----------------|:-----------------|:--------------|:--------|:------------|:--------------|:---------|:-----------------|:---------|:----------|:----------------------|:---------------|:-----------------|
| 0 | 26 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 10 |  |  |  |  |  | X | X | X | | | X | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | |
| 2 | 5 |  |  |  |  |  | X | X | X | | | X | X | | | | | X | | | X | | | | | | | | | X | | X | | | | | | X | | X | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
FredZhang7/toxi-text-3M | ---
license: apache-2.0
task_categories:
- text-classification
- token-classification
- zero-shot-classification
size_categories:
- 1M<n<10M
language:
- ar
- es
- pa
- th
- et
- fr
- fi
- hu
- lt
- ur
- so
- pl
- el
- mr
- sk
- gu
- he
- af
- te
- ro
- lv
- sv
- ne
- kn
- it
- mk
- cs
- en
- de
- da
- ta
- bn
- pt
- sq
- tl
- uk
- bg
- ca
- sw
- hi
- zh
- ja
- hr
- ru
- vi
- id
- sl
- cy
- ko
- nl
- ml
- tr
- fa
- 'no'
- multilingual
tags:
- nlp
- moderation
---
This is a large multilingual toxicity dataset with 3M rows of text data from 55 natural languages, all of which are written/sent by humans, not machine translation models.
The preprocessed training data alone consists of 2,880,667 rows of comments, tweets, and messages. Among these rows, 416,529 are classified as toxic, while the remaining 2,464,138 are considered neutral. Below is a table to illustrate the data composition:
| | Toxic | Neutral | Total |
|-------|----------|----------|----------|
| [multilingual-train-deduplicated.csv](./train/multilingual-train-deduplicated.csv) | 416,529 | 2,464,138 | 2,880,667 |
| [multilingual-validation(new).csv](./validation/multilingual-validation(new).csv) | 10,613 | 19,028 | 29,641 |
| [multilingual-test.csv](./test/multilingual-test.csv) | 14,410 | 49,402 | 63,812 |
Each CSV file has three columns: `text`, `is_toxic`, and `lang`.
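As a hedged sketch of working with these columns (pandas usage is our illustration, not part of this card; the helper name is ours, and the commented file path follows the table above):

```python
import pandas as pd

def toxic_rows(df: pd.DataFrame, lang: str) -> pd.DataFrame:
    """Return the toxic rows for one language.

    Columns follow the card: `text`, `is_toxic` (0/1), `lang`.
    """
    return df[(df["lang"] == lang) & (df["is_toxic"] == 1)]

# Hypothetical usage, assuming a local clone of this repository:
# df = pd.read_csv("train/multilingual-train-deduplicated.csv")
# print(len(toxic_rows(df, "en")), "toxic English rows")
```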
Supported types of toxicity:
- Identity Hate/Homophobia
- Misogyny
- Violent Extremism
- Hate Speech
- Offensive Insults
- Sexting
- Obscene
- Threats
- Harassment
- Racism
- Trolling
- Doxing
- Others
Supported languages:
- Afrikaans
- Albanian
- Arabic
- Bengali
- Bulgarian
- Catalan
- Chinese (Simplified)
- Chinese (Traditional)
- Croatian
- Czech
- Danish
- Dutch
- English
- Estonian
- Finnish
- French
- German
- Greek
- Gujarati
- Hebrew
- Hindi
- Hungarian
- Indonesian
- Italian
- Japanese
- Kannada
- Korean
- Latvian
- Lithuanian
- Macedonian
- Malayalam
- Marathi
- Nepali
- Norwegian
- Persian
- Polish
- Portuguese
- Punjabi
- Romanian
- Russian
- Slovak
- Slovenian
- Somali
- Spanish
- Swahili
- Swedish
- Tagalog
- Tamil
- Telugu
- Thai
- Turkish
- Ukrainian
- Urdu
- Vietnamese
- Welsh
<br>
### Original Source?
Around 11 months ago, I downloaded and preprocessed 2.7M rows of text data, but I have completely forgotten the original sources of these datasets...
All I remember is that I downloaded datasets from everywhere I could: HuggingFace, research papers, GitHub, Kaggle, SurgeAI, and Google search. I even fetched 20K+ tweets using the Twitter API.
Recently, I came across 6 of those datasets again, so I credit them below.
Known datasets:
- tomekkorbak/pile-toxicity-balanced2 (HuggingFace)
- datasets/thai_toxicity_tweet (HuggingFace)
- datasets/ethos (HuggingFace)
- inspection-ai/japanese-toxic-dataset (GitHub)
- mathigatti/sexting-dataset (GitHub)
- omar-sharif03/BAD-Bangla-Aggressive-Text-Dataset (GitHub)
I manually collected and wrote 100 rows of data.
<br>
### Limitations
Limitations include:
- All labels were rounded to the nearest integer. If a text was classified as 46%-54% toxic, the text itself might not be noticeably toxic or neutral.
- There were disagreements among moderators on some labels, due to ambiguity and lack of context.
- When there are only URLs, emojis, or anything else unrecognizable as natural language in the "text" column, the corresponding "lang" is "unknown".
Have fun modelling! |
tyzhu/eval_tag_squad_v0 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 87035544
num_examples: 87599
- name: validation
num_bytes: 11397371
num_examples: 10570
download_size: 21419187
dataset_size: 98432915
---
# Dataset Card for "eval_tag_squad_v0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Weyaxi/huggingface-spaces-codes | ---
configs:
- config_name: default
data_files:
spaces.csv
license: other
language:
- code
size_categories:
- 100K<n<1M
---

# 📊 Dataset Description
This dataset comprises code files of Huggingface Spaces that had more than 0 likes as of November 10, 2023. It covers various programming languages, totaling 672 MB of compressed and 2.05 GB of uncompressed data.
# 📝 Data Fields
| Field | Type | Description |
|------------|--------|------------------------------------------|
| repository | string | Huggingface Spaces repository names. |
| sdk | string | Software Development Kit of the space. |
| license | string | License type of the space. |
## 🧩 Data Structure
The directory structure of the data:
```
spaces/
├─ author1/
│ ├─ space1
│ ├─ space2
├─ author2/
│ ├─ space1
│ ├─ space2
│ ├─ space3
```
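As an illustrative sketch (assuming a local copy laid out as above; the function name is ours), per-extension file counts like those in the statistics section below could be derived with a directory walk:

```python
import os
from collections import Counter

def extension_counts(root: str) -> Counter:
    """Walk the spaces/ tree shown above and count files per extension."""
    counts = Counter()
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            _, ext = os.path.splitext(name)
            # Extensionless files (e.g. Dockerfile) are counted by full name.
            counts[ext or name] += 1
    return counts
```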
# 🏛️ Licenses
Huggingface Spaces carry a variety of licenses. Here is the list of licenses that this dataset contains:
```python
[
'None',
'mit',
'apache-2.0',
'openrail',
'gpl-3.0',
'other',
'afl-3.0',
'unknown',
'creativeml-openrail-m',
'cc-by-nc-4.0',
'cc-by-4.0',
'cc',
'cc-by-nc-sa-4.0',
'bigscience-openrail-m',
'bsd-3-clause',
'agpl-3.0',
'wtfpl',
'gpl',
'artistic-2.0',
'lgpl-3.0',
'cc-by-sa-4.0',
'Configuration error',
'bsd',
'cc-by-nc-nd-4.0',
'cc0-1.0',
'unlicense',
'llama2',
'bigscience-bloom-rail-1.0',
'gpl-2.0',
'bsd-2-clause',
'osl-3.0',
'cc-by-2.0',
'cc-by-3.0',
'cc-by-nc-3.0',
'cc-by-nc-2.0',
'cc-by-nd-4.0',
'openrail++',
'bigcode-openrail-m',
'bsd-3-clause-clear',
'eupl-1.1',
'cc-by-sa-3.0',
'mpl-2.0',
'c-uda',
'gfdl',
'cc-by-nc-sa-2.0',
'cc-by-2.5',
'bsl-1.0',
'odc-by',
'deepfloyd-if-license',
'ms-pl',
'ecl-2.0',
'pddl',
'ofl-1.1',
'lgpl-2.1',
'postgresql',
'lppl-1.3c',
'ncsa',
'cc-by-nc-sa-3.0'
]
```
# 📊 Dataset Statistics
| Language | File Extension | File Counts | File Size (MB) | Line Counts |
|------------|-----------------|-------------|----------------|-------------|
| Python | .py | 141,560 | 1079.0 | 28,653,744 |
| SQL | .sql | 21 | 523.6 | 645 |
| JavaScript | .js | 6,790 | 369.8 | 2,137,054 |
| Markdown | .md | 63,237 | 273.4 | 3,110,443 |
| HTML | .html | 1,953 | 265.8 | 516,020 |
| C | .c | 1,320 | 132.2 | 3,558,826 |
| Go | .go | 429 | 46.3 | 6,331 |
| CSS | .css | 3,097 | 25.6 | 386,334 |
| C Header | .h | 2,824 | 20.4 | 570,948 |
| C++ | .cpp | 1,117 | 15.3 | 494,939 |
| TypeScript | .ts | 4,158 | 14.8 | 439,551 |
| TSX | .tsx | 4,273 | 9.4 | 306,416 |
| Shell | .sh | 3,294 | 5.5 | 171,943 |
| Perl | .pm | 92 | 4.2 | 128,594 |
| C# | .cs | 22 | 3.9 | 41,265 |
## 🖥️ Language

## 📁 Size

## 📝 Line Count

# 🤗 Huggingface Spaces Statistics
## 🛠️ Software Development Kit (SDK)
Software Development Kit pie chart.

## 🏛️ License
License chart.

# 📅 Dataset Creation
This dataset was created in these steps:
1. Scraped all spaces using the Huggingface Hub API.
```python
from huggingface_hub import HfApi
api = HfApi()
spaces = api.list_spaces(sort="likes", full=1, direction=-1)
```
2. Filtered spaces with more than 0 likes.
```python
from tqdm import tqdm
import pandas as pd

a = {}
for i in tqdm(spaces):
    i = i.__dict__
    if i['likes'] > 0:
        try:
            try:
                a[i['id']] = {'sdk': i['sdk'], 'license': i['cardData']['license'], 'likes': i['likes']}
            except KeyError:
                a[i['id']] = {'sdk': i['sdk'], 'license': None, 'likes': i['likes']}
        except Exception:
            a[i['id']] = {'sdk': "Configuration error", 'license': "Configuration error", 'likes': i['likes']}

data_list = [{'repository': key, 'sdk': value['sdk'], 'license': value['license'], 'likes': value['likes']} for key, value in a.items()]
df = pd.DataFrame(data_list)
```
3. Cloned spaces locally.
```python
from huggingface_hub import snapshot_download
programming = ['.asm', '.bat', '.cmd', '.c', '.h', '.cs', '.cpp', '.hpp', '.c++', '.h++', '.cc', '.hh', '.C', '.H', '.cmake', '.css', '.dockerfile', 'Dockerfile', '.f90', '.f', '.f03', '.f08', '.f77', '.f95', '.for', '.fpp', '.go', '.hs', '.html', '.java', '.js', '.jl', '.lua', 'Makefile', '.md', '.markdown', '.php', '.php3', '.php4', '.php5', '.phps', '.phpt', '.pl', '.pm', '.pod', '.perl', '.ps1', '.psd1', '.psm1', '.py', '.rb', '.rs', '.sql', '.scala', '.sh', '.bash', '.command', '.zsh', '.ts', '.tsx', '.tex', '.vb']
pattern = [f"*{i}" for i in programming]
repos = df["repository"].tolist()  # repositories collected in step 2
for i in repos:
    snapshot_download(i, repo_type="space", local_dir=f"spaces/{i}", allow_patterns=pattern)
```
4. Processed the data to derive statistics. |
ostapeno/oasst1_seed3200 | ---
license: apache-2.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: instruction
dtype: string
- name: instruction_quality
dtype: float64
- name: response
dtype: string
- name: response_quality
dtype: float64
splits:
- name: train
num_bytes: 4345339
num_examples: 3200
download_size: 2534729
dataset_size: 4345339
---
|
Vinnyyw/Maitecovers | ---
license: openrail
---
|
HuggingFaceM4/MMMU-modif | |
KaytTech/uk-pens-data-01 | ---
license: cc
language:
- en
size_categories:
- 10K<n<100K
--- |
smwoo529/ko_art_50line | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 34664
num_examples: 50
download_size: 21550
dataset_size: 34664
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ainzOulgun/fshdfaeraer | ---
license: openrail
---
|
mirshad7/NERDS360 | ---
license: cc-by-nc-4.0
---
# NEO 360: Neural Fields for Sparse View Synthesis of Outdoor Scenes
[](https://opensource.org/licenses/MIT)<img src="demo/Pytorch_logo.png" width="10%">
This repository is the pytorch implementation of our paper:
<a href="https://www.tri.global/" target="_blank">
<img align="right" src="demo/tri-logo.png" width="25%"/>
</a>
**NEO 360: Neural Fields for Sparse View Synthesis of Outdoor Scenes**<br>
[__***Muhammad Zubair Irshad***__](https://zubairirshad.com), [Sergey Zakharov](https://zakharos.github.io/), [Katherine Liu](https://www.thekatherineliu.com/), [Vitor Guizilini](https://www.linkedin.com/in/vitorguizilini), [Thomas Kollar](http://www.tkollar.com/site/), [Adrien Gaidon](https://adriengaidon.com/), [Zsolt Kira](https://faculty.cc.gatech.edu/~zk15/), [Rares Ambrus](https://www.tri.global/about-us/dr-rares-ambrus) <br>
International Conference on Computer Vision (ICCV), 2023<br>
[[Project Page](https://zubair-irshad.github.io/projects/neo360.html)] [[arXiv](https://arxiv.org/abs/2308.12967)] [[PDF](https://arxiv.org/pdf/2308.12967.pdf)] [[Video](https://youtu.be/avmylyL_V8c?si=eeTPhl0xJxM3fSF7)]
<p align="center">
<img src="demo/NEO_Website_1.jpg" width="100%">
</p>
<p align="center">
<img src="demo/NEO_Architecture.JPG" width="100%">
</p>
### Code Coming Soon!
## 📊 Dataset
### NERDS 360 Multi-View dataset for Outdoor Scenes
NeRDS 360: "NeRF for Reconstruction, Decomposition and Scene Synthesis of 360° outdoor scenes” dataset comprising 75 unbounded scenes with full multi-view annotations and diverse scenes for generalizable NeRF training and evaluation.
<p align="center">
<img src="demo/github_dataset.gif" width="100%">
</p>
#### Download the dataset:
* [NERDS360 Training Set](https://tri-ml-public.s3.amazonaws.com/github/neo360/datasets/PDMultiObjv6.tar.gz) - 75 Scenes (19.5 GB)
* [NERDS360 Test Set](https://tri-ml-public.s3.amazonaws.com/github/neo360/datasets/PD_v6_test.tar.gz) - 5 Scenes (2.1 GB)
#### Visualizing the dataset (Coming Soon):
We will release our visualization scripts to generate visualizations like the one below, e.g. plotting accumulated pointclouds, multi-view camera annotations, etc.
<p align="center">
<img src="demo/cameras.gif" width="100%">
</p>
## Citation
If you find this repository or our NERDS 360 dataset useful, please consider citing:
```
@inproceedings{irshad2023neo360,
title={NeO 360: Neural Fields for Sparse View Synthesis of Outdoor Scenes},
author={Muhammad Zubair Irshad and Sergey Zakharov and Katherine Liu and Vitor Guizilini and Thomas Kollar and Adrien Gaidon and Zsolt Kira and Rares Ambrus},
journal={International Conference on Computer Vision (ICCV)},
year={2023},
url={https://arxiv.org/abs/2308.12967},
}
```
|
anirudhlakhotia/mini-en-samantar | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: src
dtype: string
- name: tgt
dtype: string
splits:
- name: train
num_bytes: 71069230.15540843
num_examples: 200000
download_size: 38315862
dataset_size: 71069230.15540843
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Atipico1/NQ-colbert-10k | ---
dataset_info:
features:
- name: question
dtype: string
- name: answers
sequence: string
- name: ctxs
list:
- name: hasanswer
dtype: bool
- name: score
dtype: float64
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: train
num_bytes: 33024468.581177138
num_examples: 10000
- name: test
num_bytes: 12000594
num_examples: 3610
download_size: 26375087
dataset_size: 45025062.58117714
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
CyberHarem/rem_rezero | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of rem (Re:Zero Kara Hajimeru Isekai Seikatsu)
This is the dataset of rem (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
|
Asek2882/Ajinkya | ---
license: mit
---
|
aniket-jain-9/imdb_sentiment_finetune_dataset | ---
dataset_info:
features:
- name: review
dtype: string
- name: sentiment
dtype: int64
splits:
- name: train
num_bytes: 2588807
num_examples: 2000
- name: validation
num_bytes: 2667965
num_examples: 2000
- name: test
num_bytes: 21177655
num_examples: 16000
download_size: 17194624
dataset_size: 26434427
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
namespace-Pt/qrecc-corpus | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 84244312900
num_examples: 54573064
download_size: 21571487893
dataset_size: 84244312900
---
# Dataset Card for "qrecc"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
smangrul/hinglish_self_instruct_v0 | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: category
dtype: string
splits:
- name: train
num_bytes: 251497
num_examples: 1018
download_size: 124371
dataset_size: 251497
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
task_categories:
- text-generation
language:
- hi
- en
size_categories:
- 1K<n<10K
---
# Hinglish Instruct Dataset using Self Instruct method
The prompt used for generating the samples:
```
You are asked to come up with a set of 50 diverse task instructions in Hinglish or Hindi.
These task instructions will be given to a GPT model and we will evaluate the GPT model for completing the instructions.
Here are the requirements:
1. Try not to repeat the verb for each instruction to maximize diversity.
2. The language used for the instruction also should be diverse. For example, you should combine questions with imperative instructions.
3. The type of instructions should be diverse. The list should include diverse types of tasks like open-ended generation, classification, editing, etc.
2. A GPT language model should be able to complete the instruction. For example, do not ask the assistant to create any visual or audio output. For another example, do not ask the assistant to wake you up at 5pm or set a reminder because it cannot perform any action.
3. The instructions should be in Hinglish or Hindi.
4. The instructions should be 1 to 2 sentences long. Either an imperative sentence or a question is permitted.
5. You should generate an appropriate input to the instruction. The input field should contain a specific example provided for the instruction. It should involve realistic data and should not contain simple placeholders. The input should provide substantial content to make the instruction challenging but should ideally not exceed 100 words.
6. Not all instructions require input. For example, when an instruction asks about some general information, "what is the highest peak in the world", it is not necessary to provide a specific context. In this case, we simply put "<noinput>" in the input field.
7. The output should be an appropriate response to the instruction and the input. Make sure the output is less than 500 words.
8. Provide complete instruction, input and output without placeholders.
9. Example format of the output task:
Instruction: किसी भी एक भारतीय त्योहार के बारे में बताएं।
Input: <noinput>
Output: दीपावली, जिसे दिवाली भी कहा जाता है, प्रकाश का त्योहार है। इस दिन, लोग अपने घरों में दीये जलाते हैं, पूजा करते हैं, और मिठाईयाँ बांटते हैं।
List of 50 tasks:
```
Note:
1. The instruction "Provide complete instruction, input and output without placeholders." was important; otherwise GPT-4 in particular was **very lazy** and just gave placeholders for the outputs.
2. Most of the dataset is generated using GPT-3.5 Turbo while some part of it is generated using GPT-4. Most of the dataset is in Hinglish while some part of it is in Hindi.
3. The prompt template is adapted from the Alpaca GitHub repo https://github.com/tatsu-lab/stanford_alpaca/blob/main/prompt.txt |
steinhaug/onceUponAtimeInPornVille | ---
license: other
---
|
openclimatefix/dwd-icon-eu | ---
license: mit
tags:
- climate
pretty_name: DWD ICON-EU Forecasts
size_categories:
- 1K<n<10K
---
# Dataset Card for DWD ICON-EU Forecast
This dataset comprises forecasts from the German Weather Service's (DWD) ICON-EU model. From 2020 to March 2023, the forecasts contain variables relevant to solar and wind
forecasting; from March 2023 to the present, all variables are included. Each forecast runs up to 5 days into the future, and the model is run 4 times per day. This data is an archive of
the publicly available data at https://opendata.dwd.de/weather/nwp/, converted to Zarr format with Xarray. No other processing of the data is performed.
## Dataset Details
- **Curated by:** Jacob Bieker, Open Climate Fix
- **License:** German Government Open Data License
### Dataset Sources
- **Raw files:** https://opendata.dwd.de/weather/nwp/
Note: The raw files are deleted after 24 hours, and there is no long-term archive available publicly.
## Uses
This data is intended for use in renewable energy forecasting, weather forecasting, and anything that can use high-quality weather forecasts over Europe.
## Dataset Structure
The dataset comprises one Zarr file per forecast initialization time, and each forecast goes out between 48 and 120 hours. The files are located at data/year/month/day/YYYYMMDDHH.zarr.zip.
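Given a forecast initialization time, the in-repo path follows directly from that layout. A minimal sketch (assuming zero-padded month, day, and hour components; the `hf_hub_download` usage in the comment is only one possible way to fetch a file):

```python
from datetime import datetime

def zarr_path(init_time: datetime) -> str:
    """Build the in-repo path for a forecast initialization time,
    following the data/year/month/day/YYYYMMDDHH.zarr.zip layout."""
    return (
        f"data/{init_time:%Y}/{init_time:%m}/{init_time:%d}/"
        f"{init_time:%Y%m%d%H}.zarr.zip"
    )

path = zarr_path(datetime(2023, 3, 1, 6))
# `path` could then be passed as `filename` to huggingface_hub.hf_hub_download
# with repo_id="openclimatefix/dwd-icon-eu" and repo_type="dataset".
```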
## Dataset Creation
### Curation Rationale
The DWD ICON-EU model provides high-quality, high-resolution forecasts of European weather that are also publicly available and free of charge. The model should generally outperform
NOAA's GFS forecast model, and has a higher temporal and spatial resolution. The main downside of this model is that its files are only publicly available for a short period, so this dataset
was set up to provide a public archive of the forecasts for use by researchers in many fields, especially renewable energy forecasting and weather forecasting.
### Source Data
The source data is the grib2 files from the DWD Open Data Server.
#### Data Collection and Processing
The data is collected every day, around 6-8 hours after forecast initialization time to ensure the forecast is finished running before the data is pulled. The grib2 files are opened
with Xarray and collated into a single Xarray Dataset, with one data variable per ICON variable. Surface variables have "_s" appended to their names to differentiate them from multi-level variables.
The Dataset is then written to Zarr using "ocf_blosc2" to encode and compress the variables. No scaling or changing of the variable values is performed.
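The "_s" suffix convention can be sketched as a simple renaming step. The variable names below are made up for illustration; only the suffix logic reflects the description above:

```python
def with_surface_suffix(var_names, surface_vars):
    """Append "_s" to surface variable names so they do not collide
    with multi-level variables of the same name."""
    return [
        name + "_s" if name in surface_vars else name
        for name in var_names
    ]

# Hypothetical variable lists, purely to show the convention.
names = with_surface_suffix(["t", "u", "v", "pres"], surface_vars={"pres"})
```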
#### Who are the source data producers?
German Weather Service (DWD)
### Recommendations
These files can be opened directly from HuggingFace and streamed in with Xarray. HuggingFace is fairly slow, though, so the recommended approach is to download the files you want
and open them locally. In either case, you can access the data as follows:
```python
import ocf_blosc2
import xarray as xr
data = xr.open_zarr("path/to/zarr/file")
print(data)
```
Alternatively, for using the data in forecasting, there is the `ocf_datapipes` package for loading and training renewable energy forecasting models with multi-modal inputs, including
ICON, but also satellite data, PV readings, etc.
## Dataset Card Contact
Jacob Bieker: jacob@openclimatefix.org |
rxm210132/tokenized_dataset | ---
dataset_info:
features:
- name: texts
dtype: string
- name: labels
sequence: float64
splits:
- name: valid
num_bytes: 298285
num_examples: 1545
- name: train
num_bytes: 624448
num_examples: 3259
download_size: 880158
dataset_size: 922733
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
---
|
Multimodal-Fatima/OxfordPets_test_text_davinci_002_Visclues_ns_10 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: image
dtype: image
- name: prompt
dtype: string
- name: true_label
dtype: string
- name: raw_prediction
dtype: string
- name: prediction
dtype: string
splits:
- name: fewshot_1
num_bytes: 128960.0
num_examples: 10
download_size: 127751
dataset_size: 128960.0
---
# Dataset Card for "OxfordPets_test_text_davinci_002_Visclues_ns_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_Weyaxi__neural-chat-7b-v3-1-OpenHermes-2.5-7B | ---
pretty_name: Evaluation run of Weyaxi/neural-chat-7b-v3-1-OpenHermes-2.5-7B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Weyaxi/neural-chat-7b-v3-1-OpenHermes-2.5-7B](https://huggingface.co/Weyaxi/neural-chat-7b-v3-1-OpenHermes-2.5-7B)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Weyaxi__neural-chat-7b-v3-1-OpenHermes-2.5-7B\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-12-04T18:24:21.614365](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__neural-chat-7b-v3-1-OpenHermes-2.5-7B/blob/main/results_2023-12-04T18-24-21.614365.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6331720228661488,\n\
\ \"acc_stderr\": 0.03245949446776424,\n \"acc_norm\": 0.636248593036376,\n\
\ \"acc_norm_stderr\": 0.033106781744445986,\n \"mc1\": 0.4565483476132191,\n\
\ \"mc1_stderr\": 0.01743728095318369,\n \"mc2\": 0.612310037106096,\n\
\ \"mc2_stderr\": 0.015369020754133529\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6450511945392492,\n \"acc_stderr\": 0.01398303690409409,\n\
\ \"acc_norm\": 0.6612627986348123,\n \"acc_norm_stderr\": 0.013830568927974332\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6573391754630552,\n\
\ \"acc_stderr\": 0.004736292355716402,\n \"acc_norm\": 0.8408683529177454,\n\
\ \"acc_norm_stderr\": 0.003650512158306273\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.26,\n \"acc_stderr\": 0.044084400227680794,\n \
\ \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.044084400227680794\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6444444444444445,\n\
\ \"acc_stderr\": 0.04135176749720385,\n \"acc_norm\": 0.6444444444444445,\n\
\ \"acc_norm_stderr\": 0.04135176749720385\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.7039473684210527,\n \"acc_stderr\": 0.03715062154998905,\n\
\ \"acc_norm\": 0.7039473684210527,\n \"acc_norm_stderr\": 0.03715062154998905\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.59,\n\
\ \"acc_stderr\": 0.049431107042371025,\n \"acc_norm\": 0.59,\n \
\ \"acc_norm_stderr\": 0.049431107042371025\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6641509433962264,\n \"acc_stderr\": 0.02906722014664483,\n\
\ \"acc_norm\": 0.6641509433962264,\n \"acc_norm_stderr\": 0.02906722014664483\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7291666666666666,\n\
\ \"acc_stderr\": 0.037161774375660164,\n \"acc_norm\": 0.7291666666666666,\n\
\ \"acc_norm_stderr\": 0.037161774375660164\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620333,\n \
\ \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620333\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.52,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.52,\n\
\ \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.36,\n \"acc_stderr\": 0.048241815132442176,\n \
\ \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.048241815132442176\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6242774566473989,\n\
\ \"acc_stderr\": 0.036928207672648664,\n \"acc_norm\": 0.6242774566473989,\n\
\ \"acc_norm_stderr\": 0.036928207672648664\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.4019607843137255,\n \"acc_stderr\": 0.04878608714466996,\n\
\ \"acc_norm\": 0.4019607843137255,\n \"acc_norm_stderr\": 0.04878608714466996\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.74,\n \"acc_stderr\": 0.04408440022768079,\n \"acc_norm\": 0.74,\n\
\ \"acc_norm_stderr\": 0.04408440022768079\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5446808510638298,\n \"acc_stderr\": 0.032555253593403555,\n\
\ \"acc_norm\": 0.5446808510638298,\n \"acc_norm_stderr\": 0.032555253593403555\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5175438596491229,\n\
\ \"acc_stderr\": 0.04700708033551038,\n \"acc_norm\": 0.5175438596491229,\n\
\ \"acc_norm_stderr\": 0.04700708033551038\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5586206896551724,\n \"acc_stderr\": 0.04137931034482757,\n\
\ \"acc_norm\": 0.5586206896551724,\n \"acc_norm_stderr\": 0.04137931034482757\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.3888888888888889,\n \"acc_stderr\": 0.025107425481137285,\n \"\
acc_norm\": 0.3888888888888889,\n \"acc_norm_stderr\": 0.025107425481137285\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5,\n\
\ \"acc_stderr\": 0.04472135954999579,\n \"acc_norm\": 0.5,\n \
\ \"acc_norm_stderr\": 0.04472135954999579\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \
\ \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.7838709677419354,\n \"acc_stderr\": 0.02341529343356853,\n \"\
acc_norm\": 0.7838709677419354,\n \"acc_norm_stderr\": 0.02341529343356853\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.5073891625615764,\n \"acc_stderr\": 0.0351760354036101,\n \"acc_norm\"\
: 0.5073891625615764,\n \"acc_norm_stderr\": 0.0351760354036101\n },\n\
\ \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\"\
: 0.67,\n \"acc_stderr\": 0.04725815626252609,\n \"acc_norm\": 0.67,\n\
\ \"acc_norm_stderr\": 0.04725815626252609\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7818181818181819,\n \"acc_stderr\": 0.03225078108306289,\n\
\ \"acc_norm\": 0.7818181818181819,\n \"acc_norm_stderr\": 0.03225078108306289\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7777777777777778,\n \"acc_stderr\": 0.02962022787479049,\n \"\
acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.02962022787479049\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8911917098445595,\n \"acc_stderr\": 0.022473253332768776,\n\
\ \"acc_norm\": 0.8911917098445595,\n \"acc_norm_stderr\": 0.022473253332768776\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6307692307692307,\n \"acc_stderr\": 0.02446861524147892,\n \
\ \"acc_norm\": 0.6307692307692307,\n \"acc_norm_stderr\": 0.02446861524147892\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.35185185185185186,\n \"acc_stderr\": 0.02911661760608301,\n \
\ \"acc_norm\": 0.35185185185185186,\n \"acc_norm_stderr\": 0.02911661760608301\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6554621848739496,\n \"acc_stderr\": 0.030868682604121622,\n\
\ \"acc_norm\": 0.6554621848739496,\n \"acc_norm_stderr\": 0.030868682604121622\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3443708609271523,\n \"acc_stderr\": 0.03879687024073327,\n \"\
acc_norm\": 0.3443708609271523,\n \"acc_norm_stderr\": 0.03879687024073327\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8348623853211009,\n \"acc_stderr\": 0.01591955782997604,\n \"\
acc_norm\": 0.8348623853211009,\n \"acc_norm_stderr\": 0.01591955782997604\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.4583333333333333,\n \"acc_stderr\": 0.033981108902946366,\n \"\
acc_norm\": 0.4583333333333333,\n \"acc_norm_stderr\": 0.033981108902946366\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8088235294117647,\n \"acc_stderr\": 0.02759917430064077,\n \"\
acc_norm\": 0.8088235294117647,\n \"acc_norm_stderr\": 0.02759917430064077\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.8059071729957806,\n \"acc_stderr\": 0.025744902532290916,\n \
\ \"acc_norm\": 0.8059071729957806,\n \"acc_norm_stderr\": 0.025744902532290916\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6681614349775785,\n\
\ \"acc_stderr\": 0.031602951437766785,\n \"acc_norm\": 0.6681614349775785,\n\
\ \"acc_norm_stderr\": 0.031602951437766785\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7404580152671756,\n \"acc_stderr\": 0.038448761397852714,\n\
\ \"acc_norm\": 0.7404580152671756,\n \"acc_norm_stderr\": 0.038448761397852714\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7851239669421488,\n \"acc_stderr\": 0.037494924487096966,\n \"\
acc_norm\": 0.7851239669421488,\n \"acc_norm_stderr\": 0.037494924487096966\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7685185185185185,\n\
\ \"acc_stderr\": 0.04077494709252627,\n \"acc_norm\": 0.7685185185185185,\n\
\ \"acc_norm_stderr\": 0.04077494709252627\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7607361963190185,\n \"acc_stderr\": 0.033519538795212696,\n\
\ \"acc_norm\": 0.7607361963190185,\n \"acc_norm_stderr\": 0.033519538795212696\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5,\n\
\ \"acc_stderr\": 0.04745789978762494,\n \"acc_norm\": 0.5,\n \
\ \"acc_norm_stderr\": 0.04745789978762494\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.8252427184466019,\n \"acc_stderr\": 0.0376017800602662,\n\
\ \"acc_norm\": 0.8252427184466019,\n \"acc_norm_stderr\": 0.0376017800602662\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8760683760683761,\n\
\ \"acc_stderr\": 0.021586494001281386,\n \"acc_norm\": 0.8760683760683761,\n\
\ \"acc_norm_stderr\": 0.021586494001281386\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.67,\n \"acc_stderr\": 0.047258156262526094,\n \
\ \"acc_norm\": 0.67,\n \"acc_norm_stderr\": 0.047258156262526094\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.80970625798212,\n\
\ \"acc_stderr\": 0.014036945850381392,\n \"acc_norm\": 0.80970625798212,\n\
\ \"acc_norm_stderr\": 0.014036945850381392\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.6907514450867052,\n \"acc_stderr\": 0.02488314057007176,\n\
\ \"acc_norm\": 0.6907514450867052,\n \"acc_norm_stderr\": 0.02488314057007176\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3865921787709497,\n\
\ \"acc_stderr\": 0.016286674879101022,\n \"acc_norm\": 0.3865921787709497,\n\
\ \"acc_norm_stderr\": 0.016286674879101022\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7058823529411765,\n \"acc_stderr\": 0.026090162504279056,\n\
\ \"acc_norm\": 0.7058823529411765,\n \"acc_norm_stderr\": 0.026090162504279056\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.684887459807074,\n\
\ \"acc_stderr\": 0.026385273703464492,\n \"acc_norm\": 0.684887459807074,\n\
\ \"acc_norm_stderr\": 0.026385273703464492\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7129629629629629,\n \"acc_stderr\": 0.02517104191530968,\n\
\ \"acc_norm\": 0.7129629629629629,\n \"acc_norm_stderr\": 0.02517104191530968\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.46099290780141844,\n \"acc_stderr\": 0.029736592526424438,\n \
\ \"acc_norm\": 0.46099290780141844,\n \"acc_norm_stderr\": 0.029736592526424438\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4498044328552803,\n\
\ \"acc_stderr\": 0.012705721498565104,\n \"acc_norm\": 0.4498044328552803,\n\
\ \"acc_norm_stderr\": 0.012705721498565104\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6654411764705882,\n \"acc_stderr\": 0.028661996202335303,\n\
\ \"acc_norm\": 0.6654411764705882,\n \"acc_norm_stderr\": 0.028661996202335303\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6470588235294118,\n \"acc_stderr\": 0.019333142020797164,\n \
\ \"acc_norm\": 0.6470588235294118,\n \"acc_norm_stderr\": 0.019333142020797164\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6727272727272727,\n\
\ \"acc_stderr\": 0.0449429086625209,\n \"acc_norm\": 0.6727272727272727,\n\
\ \"acc_norm_stderr\": 0.0449429086625209\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7142857142857143,\n \"acc_stderr\": 0.0289205832206756,\n\
\ \"acc_norm\": 0.7142857142857143,\n \"acc_norm_stderr\": 0.0289205832206756\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8308457711442786,\n\
\ \"acc_stderr\": 0.026508590656233254,\n \"acc_norm\": 0.8308457711442786,\n\
\ \"acc_norm_stderr\": 0.026508590656233254\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.82,\n \"acc_stderr\": 0.03861229196653693,\n \
\ \"acc_norm\": 0.82,\n \"acc_norm_stderr\": 0.03861229196653693\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5180722891566265,\n\
\ \"acc_stderr\": 0.03889951252827216,\n \"acc_norm\": 0.5180722891566265,\n\
\ \"acc_norm_stderr\": 0.03889951252827216\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8421052631578947,\n \"acc_stderr\": 0.02796678585916089,\n\
\ \"acc_norm\": 0.8421052631578947,\n \"acc_norm_stderr\": 0.02796678585916089\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4565483476132191,\n\
\ \"mc1_stderr\": 0.01743728095318369,\n \"mc2\": 0.612310037106096,\n\
\ \"mc2_stderr\": 0.015369020754133529\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7758484609313339,\n \"acc_stderr\": 0.011720400740774104\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.5087187263078089,\n \
\ \"acc_stderr\": 0.01377039069700212\n }\n}\n```"
repo_url: https://huggingface.co/Weyaxi/neural-chat-7b-v3-1-OpenHermes-2.5-7B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|arc:challenge|25_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|gsm8k|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hellaswag|10_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-12-04T18-24-21.614365.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-management|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-virology|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|truthfulqa:mc|0_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-12-04T18-24-21.614365.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- '**/details_harness|winogrande|5_2023-12-04T18-24-21.614365.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-12-04T18-24-21.614365.parquet'
- config_name: results
data_files:
- split: 2023_12_04T18_24_21.614365
path:
- results_2023-12-04T18-24-21.614365.parquet
- split: latest
path:
- results_2023-12-04T18-24-21.614365.parquet
---
# Dataset Card for Evaluation run of Weyaxi/neural-chat-7b-v3-1-OpenHermes-2.5-7B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Weyaxi/neural-chat-7b-v3-1-OpenHermes-2.5-7B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [Weyaxi/neural-chat-7b-v3-1-OpenHermes-2.5-7B](https://huggingface.co/Weyaxi/neural-chat-7b-v3-1-OpenHermes-2.5-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Weyaxi__neural-chat-7b-v3-1-OpenHermes-2.5-7B",
"harness_winogrande_5",
split="train")
```
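Each timestamped split name is simply the run timestamp with its `-` and `:` characters replaced by `_` (compare the split `2023_12_04T18_24_21.614365` with the run `2023-12-04T18:24:21.614365` below). If you need the actual datetime back, for instance to sort several runs chronologically, a small helper along these lines works; it is an illustrative sketch, not part of the leaderboard tooling:

```python
from datetime import datetime

def split_to_datetime(split_name: str) -> datetime:
    # Split names look like "2023_12_04T18_24_21.614365": underscores stand in
    # for "-" in the date part and ":" in the time part, with "T" separating them.
    date_part, time_part = split_name.split("T")
    return datetime.strptime(
        date_part.replace("_", "-") + "T" + time_part.replace("_", ":"),
        "%Y-%m-%dT%H:%M:%S.%f",
    )

print(split_to_datetime("2023_12_04T18_24_21.614365"))
```

The same naming convention applies to every configuration in this dataset, so the helper can be reused across evals.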
## Latest results
These are the [latest results from run 2023-12-04T18:24:21.614365](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__neural-chat-7b-v3-1-OpenHermes-2.5-7B/blob/main/results_2023-12-04T18-24-21.614365.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6331720228661488,
"acc_stderr": 0.03245949446776424,
"acc_norm": 0.636248593036376,
"acc_norm_stderr": 0.033106781744445986,
"mc1": 0.4565483476132191,
"mc1_stderr": 0.01743728095318369,
"mc2": 0.612310037106096,
"mc2_stderr": 0.015369020754133529
},
"harness|arc:challenge|25": {
"acc": 0.6450511945392492,
"acc_stderr": 0.01398303690409409,
"acc_norm": 0.6612627986348123,
"acc_norm_stderr": 0.013830568927974332
},
"harness|hellaswag|10": {
"acc": 0.6573391754630552,
"acc_stderr": 0.004736292355716402,
"acc_norm": 0.8408683529177454,
"acc_norm_stderr": 0.003650512158306273
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.26,
"acc_stderr": 0.044084400227680794,
"acc_norm": 0.26,
"acc_norm_stderr": 0.044084400227680794
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6444444444444445,
"acc_stderr": 0.04135176749720385,
"acc_norm": 0.6444444444444445,
"acc_norm_stderr": 0.04135176749720385
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7039473684210527,
"acc_stderr": 0.03715062154998905,
"acc_norm": 0.7039473684210527,
"acc_norm_stderr": 0.03715062154998905
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.59,
"acc_stderr": 0.049431107042371025,
"acc_norm": 0.59,
"acc_norm_stderr": 0.049431107042371025
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6641509433962264,
"acc_stderr": 0.02906722014664483,
"acc_norm": 0.6641509433962264,
"acc_norm_stderr": 0.02906722014664483
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7291666666666666,
"acc_stderr": 0.037161774375660164,
"acc_norm": 0.7291666666666666,
"acc_norm_stderr": 0.037161774375660164
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620333,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620333
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.52,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.52,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.36,
"acc_stderr": 0.048241815132442176,
"acc_norm": 0.36,
"acc_norm_stderr": 0.048241815132442176
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6242774566473989,
"acc_stderr": 0.036928207672648664,
"acc_norm": 0.6242774566473989,
"acc_norm_stderr": 0.036928207672648664
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4019607843137255,
"acc_stderr": 0.04878608714466996,
"acc_norm": 0.4019607843137255,
"acc_norm_stderr": 0.04878608714466996
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.74,
"acc_stderr": 0.04408440022768079,
"acc_norm": 0.74,
"acc_norm_stderr": 0.04408440022768079
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5446808510638298,
"acc_stderr": 0.032555253593403555,
"acc_norm": 0.5446808510638298,
"acc_norm_stderr": 0.032555253593403555
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5175438596491229,
"acc_stderr": 0.04700708033551038,
"acc_norm": 0.5175438596491229,
"acc_norm_stderr": 0.04700708033551038
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5586206896551724,
"acc_stderr": 0.04137931034482757,
"acc_norm": 0.5586206896551724,
"acc_norm_stderr": 0.04137931034482757
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3888888888888889,
"acc_stderr": 0.025107425481137285,
"acc_norm": 0.3888888888888889,
"acc_norm_stderr": 0.025107425481137285
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.5,
"acc_stderr": 0.04472135954999579,
"acc_norm": 0.5,
"acc_norm_stderr": 0.04472135954999579
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.32,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.32,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7838709677419354,
"acc_stderr": 0.02341529343356853,
"acc_norm": 0.7838709677419354,
"acc_norm_stderr": 0.02341529343356853
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5073891625615764,
"acc_stderr": 0.0351760354036101,
"acc_norm": 0.5073891625615764,
"acc_norm_stderr": 0.0351760354036101
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.67,
"acc_stderr": 0.04725815626252609,
"acc_norm": 0.67,
"acc_norm_stderr": 0.04725815626252609
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7818181818181819,
"acc_stderr": 0.03225078108306289,
"acc_norm": 0.7818181818181819,
"acc_norm_stderr": 0.03225078108306289
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.02962022787479049,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.02962022787479049
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8911917098445595,
"acc_stderr": 0.022473253332768776,
"acc_norm": 0.8911917098445595,
"acc_norm_stderr": 0.022473253332768776
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6307692307692307,
"acc_stderr": 0.02446861524147892,
"acc_norm": 0.6307692307692307,
"acc_norm_stderr": 0.02446861524147892
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.35185185185185186,
"acc_stderr": 0.02911661760608301,
"acc_norm": 0.35185185185185186,
"acc_norm_stderr": 0.02911661760608301
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6554621848739496,
"acc_stderr": 0.030868682604121622,
"acc_norm": 0.6554621848739496,
"acc_norm_stderr": 0.030868682604121622
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3443708609271523,
"acc_stderr": 0.03879687024073327,
"acc_norm": 0.3443708609271523,
"acc_norm_stderr": 0.03879687024073327
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8348623853211009,
"acc_stderr": 0.01591955782997604,
"acc_norm": 0.8348623853211009,
"acc_norm_stderr": 0.01591955782997604
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4583333333333333,
"acc_stderr": 0.033981108902946366,
"acc_norm": 0.4583333333333333,
"acc_norm_stderr": 0.033981108902946366
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8088235294117647,
"acc_stderr": 0.02759917430064077,
"acc_norm": 0.8088235294117647,
"acc_norm_stderr": 0.02759917430064077
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8059071729957806,
"acc_stderr": 0.025744902532290916,
"acc_norm": 0.8059071729957806,
"acc_norm_stderr": 0.025744902532290916
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6681614349775785,
"acc_stderr": 0.031602951437766785,
"acc_norm": 0.6681614349775785,
"acc_norm_stderr": 0.031602951437766785
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7404580152671756,
"acc_stderr": 0.038448761397852714,
"acc_norm": 0.7404580152671756,
"acc_norm_stderr": 0.038448761397852714
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7851239669421488,
"acc_stderr": 0.037494924487096966,
"acc_norm": 0.7851239669421488,
"acc_norm_stderr": 0.037494924487096966
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7685185185185185,
"acc_stderr": 0.04077494709252627,
"acc_norm": 0.7685185185185185,
"acc_norm_stderr": 0.04077494709252627
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7607361963190185,
"acc_stderr": 0.033519538795212696,
"acc_norm": 0.7607361963190185,
"acc_norm_stderr": 0.033519538795212696
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5,
"acc_stderr": 0.04745789978762494,
"acc_norm": 0.5,
"acc_norm_stderr": 0.04745789978762494
},
"harness|hendrycksTest-management|5": {
"acc": 0.8252427184466019,
"acc_stderr": 0.0376017800602662,
"acc_norm": 0.8252427184466019,
"acc_norm_stderr": 0.0376017800602662
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8760683760683761,
"acc_stderr": 0.021586494001281386,
"acc_norm": 0.8760683760683761,
"acc_norm_stderr": 0.021586494001281386
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.67,
"acc_stderr": 0.047258156262526094,
"acc_norm": 0.67,
"acc_norm_stderr": 0.047258156262526094
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.80970625798212,
"acc_stderr": 0.014036945850381392,
"acc_norm": 0.80970625798212,
"acc_norm_stderr": 0.014036945850381392
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6907514450867052,
"acc_stderr": 0.02488314057007176,
"acc_norm": 0.6907514450867052,
"acc_norm_stderr": 0.02488314057007176
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.3865921787709497,
"acc_stderr": 0.016286674879101022,
"acc_norm": 0.3865921787709497,
"acc_norm_stderr": 0.016286674879101022
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7058823529411765,
"acc_stderr": 0.026090162504279056,
"acc_norm": 0.7058823529411765,
"acc_norm_stderr": 0.026090162504279056
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.684887459807074,
"acc_stderr": 0.026385273703464492,
"acc_norm": 0.684887459807074,
"acc_norm_stderr": 0.026385273703464492
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7129629629629629,
"acc_stderr": 0.02517104191530968,
"acc_norm": 0.7129629629629629,
"acc_norm_stderr": 0.02517104191530968
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.46099290780141844,
"acc_stderr": 0.029736592526424438,
"acc_norm": 0.46099290780141844,
"acc_norm_stderr": 0.029736592526424438
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4498044328552803,
"acc_stderr": 0.012705721498565104,
"acc_norm": 0.4498044328552803,
"acc_norm_stderr": 0.012705721498565104
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6654411764705882,
"acc_stderr": 0.028661996202335303,
"acc_norm": 0.6654411764705882,
"acc_norm_stderr": 0.028661996202335303
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6470588235294118,
"acc_stderr": 0.019333142020797164,
"acc_norm": 0.6470588235294118,
"acc_norm_stderr": 0.019333142020797164
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6727272727272727,
"acc_stderr": 0.0449429086625209,
"acc_norm": 0.6727272727272727,
"acc_norm_stderr": 0.0449429086625209
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7142857142857143,
"acc_stderr": 0.0289205832206756,
"acc_norm": 0.7142857142857143,
"acc_norm_stderr": 0.0289205832206756
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8308457711442786,
"acc_stderr": 0.026508590656233254,
"acc_norm": 0.8308457711442786,
"acc_norm_stderr": 0.026508590656233254
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.82,
"acc_stderr": 0.03861229196653693,
"acc_norm": 0.82,
"acc_norm_stderr": 0.03861229196653693
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5180722891566265,
"acc_stderr": 0.03889951252827216,
"acc_norm": 0.5180722891566265,
"acc_norm_stderr": 0.03889951252827216
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8421052631578947,
"acc_stderr": 0.02796678585916089,
"acc_norm": 0.8421052631578947,
"acc_norm_stderr": 0.02796678585916089
},
"harness|truthfulqa:mc|0": {
"mc1": 0.4565483476132191,
"mc1_stderr": 0.01743728095318369,
"mc2": 0.612310037106096,
"mc2_stderr": 0.015369020754133529
},
"harness|winogrande|5": {
"acc": 0.7758484609313339,
"acc_stderr": 0.011720400740774104
},
"harness|gsm8k|5": {
"acc": 0.5087187263078089,
"acc_stderr": 0.01377039069700212
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
dog/actlearn_unlabeled_samples | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 15047431.461246412
num_examples: 53990
download_size: 12834484
dataset_size: 15047431.461246412
---
# Dataset Card for "actlearn_unlabeled_samples"
[More Information Needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_yeontaek__Platypus2-13B-LoRa-v2 | ---
pretty_name: Evaluation run of yeontaek/Platypus2-13B-LoRa-v2
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [yeontaek/Platypus2-13B-LoRa-v2](https://huggingface.co/yeontaek/Platypus2-13B-LoRa-v2)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 61 configurations, each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_yeontaek__Platypus2-13B-LoRa-v2\"\
,\n\t\"harness_truthfulqa_mc_0\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\
\nThese are the [latest results from run 2023-08-29T11:20:59.240376](https://huggingface.co/datasets/open-llm-leaderboard/details_yeontaek__Platypus2-13B-LoRa-v2/blob/main/results_2023-08-29T11%3A20%3A59.240376.json):\n\
\n```python\n{\n \"all\": {\n \"acc\": 0.571991245483798,\n \"\
acc_stderr\": 0.034294067141786025,\n \"acc_norm\": 0.5761375119651778,\n\
\ \"acc_norm_stderr\": 0.03427336583128381,\n \"mc1\": 0.28151774785801714,\n\
\ \"mc1_stderr\": 0.01574402724825605,\n \"mc2\": 0.4191985438925104,\n\
\ \"mc2_stderr\": 0.014270484892545822\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.5563139931740614,\n \"acc_stderr\": 0.014518421825670444,\n\
\ \"acc_norm\": 0.5947098976109215,\n \"acc_norm_stderr\": 0.014346869060229328\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6179047998406691,\n\
\ \"acc_stderr\": 0.004849065962692132,\n \"acc_norm\": 0.8241386178052181,\n\
\ \"acc_norm_stderr\": 0.003799241408502969\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.4962962962962963,\n\
\ \"acc_stderr\": 0.04319223625811331,\n \"acc_norm\": 0.4962962962962963,\n\
\ \"acc_norm_stderr\": 0.04319223625811331\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.5986842105263158,\n \"acc_stderr\": 0.039889037033362836,\n\
\ \"acc_norm\": 0.5986842105263158,\n \"acc_norm_stderr\": 0.039889037033362836\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.53,\n\
\ \"acc_stderr\": 0.05016135580465919,\n \"acc_norm\": 0.53,\n \
\ \"acc_norm_stderr\": 0.05016135580465919\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6150943396226415,\n \"acc_stderr\": 0.02994649856769995,\n\
\ \"acc_norm\": 0.6150943396226415,\n \"acc_norm_stderr\": 0.02994649856769995\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6111111111111112,\n\
\ \"acc_stderr\": 0.04076663253918567,\n \"acc_norm\": 0.6111111111111112,\n\
\ \"acc_norm_stderr\": 0.04076663253918567\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.49,\n \"acc_stderr\": 0.05024183937956911,\n \
\ \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956911\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.41,\n \"acc_stderr\": 0.049431107042371025,\n \"acc_norm\": 0.41,\n\
\ \"acc_norm_stderr\": 0.049431107042371025\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \
\ \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5722543352601156,\n\
\ \"acc_stderr\": 0.03772446857518026,\n \"acc_norm\": 0.5722543352601156,\n\
\ \"acc_norm_stderr\": 0.03772446857518026\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.3431372549019608,\n \"acc_stderr\": 0.047240073523838876,\n\
\ \"acc_norm\": 0.3431372549019608,\n \"acc_norm_stderr\": 0.047240073523838876\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n\
\ \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.4297872340425532,\n \"acc_stderr\": 0.03236214467715564,\n\
\ \"acc_norm\": 0.4297872340425532,\n \"acc_norm_stderr\": 0.03236214467715564\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.3157894736842105,\n\
\ \"acc_stderr\": 0.043727482902780064,\n \"acc_norm\": 0.3157894736842105,\n\
\ \"acc_norm_stderr\": 0.043727482902780064\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.4482758620689655,\n \"acc_stderr\": 0.04144311810878151,\n\
\ \"acc_norm\": 0.4482758620689655,\n \"acc_norm_stderr\": 0.04144311810878151\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.3386243386243386,\n \"acc_stderr\": 0.02437319786798306,\n \"\
acc_norm\": 0.3386243386243386,\n \"acc_norm_stderr\": 0.02437319786798306\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.36507936507936506,\n\
\ \"acc_stderr\": 0.04306241259127153,\n \"acc_norm\": 0.36507936507936506,\n\
\ \"acc_norm_stderr\": 0.04306241259127153\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252604,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252604\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.6709677419354839,\n\
\ \"acc_stderr\": 0.02672949906834996,\n \"acc_norm\": 0.6709677419354839,\n\
\ \"acc_norm_stderr\": 0.02672949906834996\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.458128078817734,\n \"acc_stderr\": 0.03505630140785741,\n\
\ \"acc_norm\": 0.458128078817734,\n \"acc_norm_stderr\": 0.03505630140785741\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.59,\n \"acc_stderr\": 0.04943110704237102,\n \"acc_norm\"\
: 0.59,\n \"acc_norm_stderr\": 0.04943110704237102\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7090909090909091,\n \"acc_stderr\": 0.03546563019624336,\n\
\ \"acc_norm\": 0.7090909090909091,\n \"acc_norm_stderr\": 0.03546563019624336\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7424242424242424,\n \"acc_stderr\": 0.031156269519646836,\n \"\
acc_norm\": 0.7424242424242424,\n \"acc_norm_stderr\": 0.031156269519646836\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8186528497409327,\n \"acc_stderr\": 0.02780703236068609,\n\
\ \"acc_norm\": 0.8186528497409327,\n \"acc_norm_stderr\": 0.02780703236068609\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.5307692307692308,\n \"acc_stderr\": 0.025302958890850154,\n\
\ \"acc_norm\": 0.5307692307692308,\n \"acc_norm_stderr\": 0.025302958890850154\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.2777777777777778,\n \"acc_stderr\": 0.027309140588230172,\n \
\ \"acc_norm\": 0.2777777777777778,\n \"acc_norm_stderr\": 0.027309140588230172\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6218487394957983,\n \"acc_stderr\": 0.031499305777849054,\n\
\ \"acc_norm\": 0.6218487394957983,\n \"acc_norm_stderr\": 0.031499305777849054\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.36423841059602646,\n \"acc_stderr\": 0.03929111781242742,\n \"\
acc_norm\": 0.36423841059602646,\n \"acc_norm_stderr\": 0.03929111781242742\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.7688073394495413,\n \"acc_stderr\": 0.01807575024163315,\n \"\
acc_norm\": 0.7688073394495413,\n \"acc_norm_stderr\": 0.01807575024163315\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.49074074074074076,\n \"acc_stderr\": 0.034093869469927006,\n \"\
acc_norm\": 0.49074074074074076,\n \"acc_norm_stderr\": 0.034093869469927006\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.7549019607843137,\n \"acc_stderr\": 0.03019028245350195,\n \"\
acc_norm\": 0.7549019607843137,\n \"acc_norm_stderr\": 0.03019028245350195\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7637130801687764,\n \"acc_stderr\": 0.027652153144159263,\n \
\ \"acc_norm\": 0.7637130801687764,\n \"acc_norm_stderr\": 0.027652153144159263\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6591928251121076,\n\
\ \"acc_stderr\": 0.03181149747055359,\n \"acc_norm\": 0.6591928251121076,\n\
\ \"acc_norm_stderr\": 0.03181149747055359\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.6335877862595419,\n \"acc_stderr\": 0.04225875451969637,\n\
\ \"acc_norm\": 0.6335877862595419,\n \"acc_norm_stderr\": 0.04225875451969637\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7107438016528925,\n \"acc_stderr\": 0.04139112727635463,\n \"\
acc_norm\": 0.7107438016528925,\n \"acc_norm_stderr\": 0.04139112727635463\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7407407407407407,\n\
\ \"acc_stderr\": 0.04236511258094633,\n \"acc_norm\": 0.7407407407407407,\n\
\ \"acc_norm_stderr\": 0.04236511258094633\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.6687116564417178,\n \"acc_stderr\": 0.03697983910025588,\n\
\ \"acc_norm\": 0.6687116564417178,\n \"acc_norm_stderr\": 0.03697983910025588\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.32142857142857145,\n\
\ \"acc_stderr\": 0.04432804055291517,\n \"acc_norm\": 0.32142857142857145,\n\
\ \"acc_norm_stderr\": 0.04432804055291517\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7281553398058253,\n \"acc_stderr\": 0.044052680241409216,\n\
\ \"acc_norm\": 0.7281553398058253,\n \"acc_norm_stderr\": 0.044052680241409216\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.811965811965812,\n\
\ \"acc_stderr\": 0.025598193686652244,\n \"acc_norm\": 0.811965811965812,\n\
\ \"acc_norm_stderr\": 0.025598193686652244\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.58,\n \"acc_stderr\": 0.049604496374885836,\n \
\ \"acc_norm\": 0.58,\n \"acc_norm_stderr\": 0.049604496374885836\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7611749680715197,\n\
\ \"acc_stderr\": 0.015246803197398682,\n \"acc_norm\": 0.7611749680715197,\n\
\ \"acc_norm_stderr\": 0.015246803197398682\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.6705202312138728,\n \"acc_stderr\": 0.025305258131879716,\n\
\ \"acc_norm\": 0.6705202312138728,\n \"acc_norm_stderr\": 0.025305258131879716\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.41675977653631285,\n\
\ \"acc_stderr\": 0.016489134962438954,\n \"acc_norm\": 0.41675977653631285,\n\
\ \"acc_norm_stderr\": 0.016489134962438954\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.6241830065359477,\n \"acc_stderr\": 0.027732834353363947,\n\
\ \"acc_norm\": 0.6241830065359477,\n \"acc_norm_stderr\": 0.027732834353363947\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.662379421221865,\n\
\ \"acc_stderr\": 0.02685882587948854,\n \"acc_norm\": 0.662379421221865,\n\
\ \"acc_norm_stderr\": 0.02685882587948854\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.654320987654321,\n \"acc_stderr\": 0.026462487777001872,\n\
\ \"acc_norm\": 0.654320987654321,\n \"acc_norm_stderr\": 0.026462487777001872\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.46099290780141844,\n \"acc_stderr\": 0.02973659252642444,\n \
\ \"acc_norm\": 0.46099290780141844,\n \"acc_norm_stderr\": 0.02973659252642444\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4602346805736636,\n\
\ \"acc_stderr\": 0.01272978538659857,\n \"acc_norm\": 0.4602346805736636,\n\
\ \"acc_norm_stderr\": 0.01272978538659857\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.5698529411764706,\n \"acc_stderr\": 0.030074971917302875,\n\
\ \"acc_norm\": 0.5698529411764706,\n \"acc_norm_stderr\": 0.030074971917302875\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.5996732026143791,\n \"acc_stderr\": 0.01982184368827176,\n \
\ \"acc_norm\": 0.5996732026143791,\n \"acc_norm_stderr\": 0.01982184368827176\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6727272727272727,\n\
\ \"acc_stderr\": 0.0449429086625209,\n \"acc_norm\": 0.6727272727272727,\n\
\ \"acc_norm_stderr\": 0.0449429086625209\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.5959183673469388,\n \"acc_stderr\": 0.031414708025865885,\n\
\ \"acc_norm\": 0.5959183673469388,\n \"acc_norm_stderr\": 0.031414708025865885\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.746268656716418,\n\
\ \"acc_stderr\": 0.03076944496729602,\n \"acc_norm\": 0.746268656716418,\n\
\ \"acc_norm_stderr\": 0.03076944496729602\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.83,\n \"acc_stderr\": 0.03775251680686371,\n \
\ \"acc_norm\": 0.83,\n \"acc_norm_stderr\": 0.03775251680686371\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4397590361445783,\n\
\ \"acc_stderr\": 0.03864139923699121,\n \"acc_norm\": 0.4397590361445783,\n\
\ \"acc_norm_stderr\": 0.03864139923699121\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.7777777777777778,\n \"acc_stderr\": 0.031885780176863984,\n\
\ \"acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.031885780176863984\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.28151774785801714,\n\
\ \"mc1_stderr\": 0.01574402724825605,\n \"mc2\": 0.4191985438925104,\n\
\ \"mc2_stderr\": 0.014270484892545822\n }\n}\n```"
repo_url: https://huggingface.co/yeontaek/Platypus2-13B-LoRa-v2
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|arc:challenge|25_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hellaswag|10_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-29T11:20:59.240376.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-29T11:20:59.240376.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-29T11:20:59.240376.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-29T11:20:59.240376.parquet'
- config_name: results
data_files:
- split: 2023_08_29T11_20_59.240376
path:
- results_2023-08-29T11:20:59.240376.parquet
- split: latest
path:
- results_2023-08-29T11:20:59.240376.parquet
---
# Dataset Card for Evaluation run of yeontaek/Platypus2-13B-LoRa-v2
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/yeontaek/Platypus2-13B-LoRa-v2
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [yeontaek/Platypus2-13B-LoRa-v2](https://huggingface.co/yeontaek/Platypus2-13B-LoRa-v2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_yeontaek__Platypus2-13B-LoRa-v2",
"harness_truthfulqa_mc_0",
split="train")
```
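Each evaluated task maps to a configuration whose name is derived from the harness task identifier. As a small sketch of that convention (inferred from the config list in this card, not an official API), the separator characters can simply be replaced with underscores:

```python
def task_to_config_name(task: str) -> str:
    """Derive the dataset config name from a harness task id,
    e.g. 'harness|truthfulqa:mc|0' -> 'harness_truthfulqa_mc_0'.

    Convention inferred from this card's config list: the characters
    '|', '-' and ':' are all mapped to underscores.
    """
    for sep in "|-:":
        task = task.replace(sep, "_")
    return task
```

For example, `task_to_config_name("harness|hendrycksTest-virology|5")` yields the config name to pass as the second argument of `load_dataset`, together with `split="latest"` for the most recent run.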
## Latest results
These are the [latest results from run 2023-08-29T11:20:59.240376](https://huggingface.co/datasets/open-llm-leaderboard/details_yeontaek__Platypus2-13B-LoRa-v2/blob/main/results_2023-08-29T11%3A20%3A59.240376.json):
```python
{
"all": {
"acc": 0.571991245483798,
"acc_stderr": 0.034294067141786025,
"acc_norm": 0.5761375119651778,
"acc_norm_stderr": 0.03427336583128381,
"mc1": 0.28151774785801714,
"mc1_stderr": 0.01574402724825605,
"mc2": 0.4191985438925104,
"mc2_stderr": 0.014270484892545822
},
"harness|arc:challenge|25": {
"acc": 0.5563139931740614,
"acc_stderr": 0.014518421825670444,
"acc_norm": 0.5947098976109215,
"acc_norm_stderr": 0.014346869060229328
},
"harness|hellaswag|10": {
"acc": 0.6179047998406691,
"acc_stderr": 0.004849065962692132,
"acc_norm": 0.8241386178052181,
"acc_norm_stderr": 0.003799241408502969
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.4962962962962963,
"acc_stderr": 0.04319223625811331,
"acc_norm": 0.4962962962962963,
"acc_norm_stderr": 0.04319223625811331
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.5986842105263158,
"acc_stderr": 0.039889037033362836,
"acc_norm": 0.5986842105263158,
"acc_norm_stderr": 0.039889037033362836
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.53,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.53,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6150943396226415,
"acc_stderr": 0.02994649856769995,
"acc_norm": 0.6150943396226415,
"acc_norm_stderr": 0.02994649856769995
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.6111111111111112,
"acc_stderr": 0.04076663253918567,
"acc_norm": 0.6111111111111112,
"acc_norm_stderr": 0.04076663253918567
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.49,
"acc_stderr": 0.05024183937956911,
"acc_norm": 0.49,
"acc_norm_stderr": 0.05024183937956911
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.41,
"acc_stderr": 0.049431107042371025,
"acc_norm": 0.41,
"acc_norm_stderr": 0.049431107042371025
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.32,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.32,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.5722543352601156,
"acc_stderr": 0.03772446857518026,
"acc_norm": 0.5722543352601156,
"acc_norm_stderr": 0.03772446857518026
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.3431372549019608,
"acc_stderr": 0.047240073523838876,
"acc_norm": 0.3431372549019608,
"acc_norm_stderr": 0.047240073523838876
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.4297872340425532,
"acc_stderr": 0.03236214467715564,
"acc_norm": 0.4297872340425532,
"acc_norm_stderr": 0.03236214467715564
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.3157894736842105,
"acc_stderr": 0.043727482902780064,
"acc_norm": 0.3157894736842105,
"acc_norm_stderr": 0.043727482902780064
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.4482758620689655,
"acc_stderr": 0.04144311810878151,
"acc_norm": 0.4482758620689655,
"acc_norm_stderr": 0.04144311810878151
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3386243386243386,
"acc_stderr": 0.02437319786798306,
"acc_norm": 0.3386243386243386,
"acc_norm_stderr": 0.02437319786798306
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.36507936507936506,
"acc_stderr": 0.04306241259127153,
"acc_norm": 0.36507936507936506,
"acc_norm_stderr": 0.04306241259127153
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252604,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252604
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.6709677419354839,
"acc_stderr": 0.02672949906834996,
"acc_norm": 0.6709677419354839,
"acc_norm_stderr": 0.02672949906834996
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.458128078817734,
"acc_stderr": 0.03505630140785741,
"acc_norm": 0.458128078817734,
"acc_norm_stderr": 0.03505630140785741
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.59,
"acc_stderr": 0.04943110704237102,
"acc_norm": 0.59,
"acc_norm_stderr": 0.04943110704237102
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7090909090909091,
"acc_stderr": 0.03546563019624336,
"acc_norm": 0.7090909090909091,
"acc_norm_stderr": 0.03546563019624336
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7424242424242424,
"acc_stderr": 0.031156269519646836,
"acc_norm": 0.7424242424242424,
"acc_norm_stderr": 0.031156269519646836
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8186528497409327,
"acc_stderr": 0.02780703236068609,
"acc_norm": 0.8186528497409327,
"acc_norm_stderr": 0.02780703236068609
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.5307692307692308,
"acc_stderr": 0.025302958890850154,
"acc_norm": 0.5307692307692308,
"acc_norm_stderr": 0.025302958890850154
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.2777777777777778,
"acc_stderr": 0.027309140588230172,
"acc_norm": 0.2777777777777778,
"acc_norm_stderr": 0.027309140588230172
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6218487394957983,
"acc_stderr": 0.031499305777849054,
"acc_norm": 0.6218487394957983,
"acc_norm_stderr": 0.031499305777849054
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.36423841059602646,
"acc_stderr": 0.03929111781242742,
"acc_norm": 0.36423841059602646,
"acc_norm_stderr": 0.03929111781242742
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7688073394495413,
"acc_stderr": 0.01807575024163315,
"acc_norm": 0.7688073394495413,
"acc_norm_stderr": 0.01807575024163315
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.49074074074074076,
"acc_stderr": 0.034093869469927006,
"acc_norm": 0.49074074074074076,
"acc_norm_stderr": 0.034093869469927006
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7549019607843137,
"acc_stderr": 0.03019028245350195,
"acc_norm": 0.7549019607843137,
"acc_norm_stderr": 0.03019028245350195
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7637130801687764,
"acc_stderr": 0.027652153144159263,
"acc_norm": 0.7637130801687764,
"acc_norm_stderr": 0.027652153144159263
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6591928251121076,
"acc_stderr": 0.03181149747055359,
"acc_norm": 0.6591928251121076,
"acc_norm_stderr": 0.03181149747055359
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.6335877862595419,
"acc_stderr": 0.04225875451969637,
"acc_norm": 0.6335877862595419,
"acc_norm_stderr": 0.04225875451969637
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7107438016528925,
"acc_stderr": 0.04139112727635463,
"acc_norm": 0.7107438016528925,
"acc_norm_stderr": 0.04139112727635463
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7407407407407407,
"acc_stderr": 0.04236511258094633,
"acc_norm": 0.7407407407407407,
"acc_norm_stderr": 0.04236511258094633
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.6687116564417178,
"acc_stderr": 0.03697983910025588,
"acc_norm": 0.6687116564417178,
"acc_norm_stderr": 0.03697983910025588
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.32142857142857145,
"acc_stderr": 0.04432804055291517,
"acc_norm": 0.32142857142857145,
"acc_norm_stderr": 0.04432804055291517
},
"harness|hendrycksTest-management|5": {
"acc": 0.7281553398058253,
"acc_stderr": 0.044052680241409216,
"acc_norm": 0.7281553398058253,
"acc_norm_stderr": 0.044052680241409216
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.811965811965812,
"acc_stderr": 0.025598193686652244,
"acc_norm": 0.811965811965812,
"acc_norm_stderr": 0.025598193686652244
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.58,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.58,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7611749680715197,
"acc_stderr": 0.015246803197398682,
"acc_norm": 0.7611749680715197,
"acc_norm_stderr": 0.015246803197398682
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6705202312138728,
"acc_stderr": 0.025305258131879716,
"acc_norm": 0.6705202312138728,
"acc_norm_stderr": 0.025305258131879716
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.41675977653631285,
"acc_stderr": 0.016489134962438954,
"acc_norm": 0.41675977653631285,
"acc_norm_stderr": 0.016489134962438954
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6241830065359477,
"acc_stderr": 0.027732834353363947,
"acc_norm": 0.6241830065359477,
"acc_norm_stderr": 0.027732834353363947
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.662379421221865,
"acc_stderr": 0.02685882587948854,
"acc_norm": 0.662379421221865,
"acc_norm_stderr": 0.02685882587948854
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.654320987654321,
"acc_stderr": 0.026462487777001872,
"acc_norm": 0.654320987654321,
"acc_norm_stderr": 0.026462487777001872
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.46099290780141844,
"acc_stderr": 0.02973659252642444,
"acc_norm": 0.46099290780141844,
"acc_norm_stderr": 0.02973659252642444
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4602346805736636,
"acc_stderr": 0.01272978538659857,
"acc_norm": 0.4602346805736636,
"acc_norm_stderr": 0.01272978538659857
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.5698529411764706,
"acc_stderr": 0.030074971917302875,
"acc_norm": 0.5698529411764706,
"acc_norm_stderr": 0.030074971917302875
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.5996732026143791,
"acc_stderr": 0.01982184368827176,
"acc_norm": 0.5996732026143791,
"acc_norm_stderr": 0.01982184368827176
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6727272727272727,
"acc_stderr": 0.0449429086625209,
"acc_norm": 0.6727272727272727,
"acc_norm_stderr": 0.0449429086625209
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.5959183673469388,
"acc_stderr": 0.031414708025865885,
"acc_norm": 0.5959183673469388,
"acc_norm_stderr": 0.031414708025865885
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.746268656716418,
"acc_stderr": 0.03076944496729602,
"acc_norm": 0.746268656716418,
"acc_norm_stderr": 0.03076944496729602
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.83,
"acc_stderr": 0.03775251680686371,
"acc_norm": 0.83,
"acc_norm_stderr": 0.03775251680686371
},
"harness|hendrycksTest-virology|5": {
"acc": 0.4397590361445783,
"acc_stderr": 0.03864139923699121,
"acc_norm": 0.4397590361445783,
"acc_norm_stderr": 0.03864139923699121
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.031885780176863984,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.031885780176863984
},
"harness|truthfulqa:mc|0": {
"mc1": 0.28151774785801714,
"mc1_stderr": 0.01574402724825605,
"mc2": 0.4191985438925104,
"mc2_stderr": 0.014270484892545822
}
}
```
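As a sketch of how these aggregated numbers might be consumed downstream, the payload above can be treated as a plain nested dict. The task names and metric keys below are copied from the JSON; the averaging is purely illustrative, not the leaderboard's exact scoring formula:

```python
import json

# A small excerpt of the results payload shown above (values copied verbatim).
raw = """
{
  "harness|arc:challenge|25": {"acc_norm": 0.5947098976109215},
  "harness|hellaswag|10": {"acc_norm": 0.8241386178052181},
  "harness|truthfulqa:mc|0": {"mc2": 0.4191985438925104}
}
"""

results = json.loads(raw)

def headline(metrics: dict) -> float:
    """Pick a task's headline metric, trying the metric names
    that appear in this payload in a fixed order."""
    for key in ("acc_norm", "mc2", "acc"):
        if key in metrics:
            return metrics[key]
    raise KeyError("no known metric in %r" % metrics)

scores = {task: headline(m) for task, m in results.items()}
mean_score = sum(scores.values()) / len(scores)
```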
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
kelvin34501/OakInk-v2 | ---
license: cc-by-nc-sa-3.0
task_categories:
- image-to-3d
language:
- en
size_categories:
- 1M<n<10M
viewer: false
---
# Dataset Card for OakInk-v2
- **Project:** https://oakink.net/v2
- **Paper:** https://arxiv.org/pdf/2403.19417.pdf
|
ZiAngGu/omni3d_v2 | ---
dataset_info:
features:
- name: image
dtype: image
- name: conditioning_image
dtype: image
- name: text
dtype: string
- name: label
sequence: string
splits:
- name: train
num_bytes: 18091936016.3
num_examples: 194700
download_size: 21810993407
dataset_size: 18091936016.3
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "omni3d_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AdapterOcean/Open_Platypus_standardized_cluster_14 | ---
dataset_info:
features:
- name: text
dtype: string
- name: conversation_id
dtype: int64
- name: embedding
sequence: float64
- name: cluster
dtype: int64
splits:
- name: train
num_bytes: 21643536
num_examples: 2341
download_size: 5934518
dataset_size: 21643536
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Open_Platypus_standardized_cluster_14"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
iulik-pisik/audio_vreme | ---
license: apache-2.0
task_categories:
- automatic-speech-recognition
language:
- ro
tags:
- climate
pretty_name: Dataset of audio files taken from prognozameteo
--- |
Kamtera/ParsiGoo | ---
license:
- cc0-1.0
description: A Persian multispeaker dataset for text-to-speech purposes.
homepage: https://example.com/parsigoo
keywords:
- text-to-speech
- Persian
- multispeaker
language: fa
multilinguality: monolingual
name: parsi_goo
pretty_name: ParsiGoo
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-to-speech
- other
task_ids: []
---
# ParsiGoo Dataset Card
This is a Persian multispeaker dataset for text-to-speech purposes. The dataset includes the following speakers:
- ariana_Male2
- moujeze_Female1
- ariana_Male1
- ariana_Female1
## Technical details
#### Non-speech parts trimmed from the beginning and end of each clip
#### Sample rate: 22050
#### Durations:
```
|> ariana_Male2 0:46:36.908685
|> edge_Dilara 0:54:31.448820
|> moujeze_Female1 0:29:24.339590
|> ariana_Male1 0:55:41.996847
|> ariana_Female1 0:53:38.396217
|> edge_Farid 0:53:11.961018
```
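For convenience, the per-speaker durations above can be parsed and totalled with the standard library. This is just a sketch; the `H:MM:SS.ffffff` format and the values are taken from the listing:

```python
from datetime import timedelta

def parse_duration(text: str) -> timedelta:
    """Parse a duration string like '0:46:36.908685' (H:MM:SS.ffffff)."""
    hours, minutes, seconds = text.split(":")
    return timedelta(hours=int(hours), minutes=int(minutes), seconds=float(seconds))

# Values copied from the listing above.
durations = {
    "ariana_Male2": "0:46:36.908685",
    "edge_Dilara": "0:54:31.448820",
    "moujeze_Female1": "0:29:24.339590",
    "ariana_Male1": "0:55:41.996847",
    "ariana_Female1": "0:53:38.396217",
    "edge_Farid": "0:53:11.961018",
}

total = sum((parse_duration(d) for d in durations.values()), timedelta())
```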
## Dataset Information
- **Name:** ParsiGoo
- **Description:** A Persian multispeaker dataset for text-to-speech purposes.
- **Homepage:** https://github.com/karim23657/ParsGoo
- **License:** CC BY-SA 4.0
## Speaker info
- ariana_Male2
- moujeze_Female1
- ariana_Male1
- ariana_Female1
|
Aad456334/primate_dataset | ---
license: mit
---
|
yzhuang/autotree_automl_10000_bank-marketing_sgosdt_l256_dim7_d3_sd0 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: input_y_clean
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 205720000
num_examples: 10000
- name: validation
num_bytes: 205720000
num_examples: 10000
download_size: 74206478
dataset_size: 411440000
---
# Dataset Card for "autotree_automl_10000_bank-marketing_sgosdt_l256_dim7_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
lcolok/Asian_Regularization_images | ---
license: creativeml-openrail-m
---
|
V12X-ksr/Fraud.csv | ---
license: apache-2.0
---
|
EleutherAI/fever | ---
language:
- en
paperswithcode_id: fever
annotations_creators:
- crowdsourced
language_creators:
- found
license:
- cc-by-sa-3.0
- gpl-3.0
multilinguality:
- monolingual
pretty_name: FEVER
size_categories:
- 100K<n<1M
source_datasets:
- extended|wikipedia
task_categories:
- text-classification
task_ids: []
tags:
- knowledge-verification
dataset_info:
- config_name: v1.0
features:
- name: id
dtype: int32
- name: label
dtype: string
- name: claim
dtype: string
- name: evidence_annotation_id
dtype: int32
- name: evidence_id
dtype: int32
- name: evidence_wiki_url
dtype: string
- name: evidence_sentence_id
dtype: int32
splits:
- name: train
num_bytes: 24147163
num_examples: 263822
- name: dev
num_bytes: 2696375
num_examples: 28625
- name: paper_dev
num_bytes: 1348943
num_examples: 14475
- name: paper_test
num_bytes: 1347432
num_examples: 14150
download_size: 44853972
dataset_size: 40043693
- config_name: v2.0
features:
- name: id
dtype: int32
- name: label
dtype: string
- name: claim
dtype: string
- name: evidence_annotation_id
dtype: int32
- name: evidence_id
dtype: int32
- name: evidence_wiki_url
dtype: string
- name: evidence_sentence_id
dtype: int32
splits:
- name: validation
num_bytes: 306243
num_examples: 2384
download_size: 392466
dataset_size: 306243
- config_name: wiki_pages
features:
- name: id
dtype: string
- name: text
dtype: string
- name: lines
dtype: string
splits:
- name: wikipedia_pages
num_bytes: 7254115038
num_examples: 5416537
download_size: 1713485474
dataset_size: 7254115038
---
# Dataset Card for "fever"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://fever.ai/](https://fever.ai/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
With billions of individual pages on the web providing information on almost every conceivable topic, we should have
the ability to collect facts that answer almost every conceivable question. However, only a small fraction of this
information is contained in structured sources (Wikidata, Freebase, etc.) – we are therefore limited by our ability to
transform free-form text to structured knowledge. There is, however, another problem that has become the focus of a lot
of recent research and media coverage: false information coming from unreliable sources.
The FEVER workshops are a venue for work in verifiable knowledge extraction and a means to stimulate progress in this direction.
- FEVER Dataset: FEVER (Fact Extraction and VERification) consists of 185,445 claims generated by altering sentences
extracted from Wikipedia and subsequently verified without knowledge of the sentence they were derived from. The claims
are classified as Supported, Refuted or NotEnoughInfo. For the first two classes, the annotators also recorded the
sentence(s) forming the necessary evidence for their judgment.
- FEVER 2.0 Adversarial Attacks Dataset: The FEVER 2.0 Dataset consists of 1174 claims created by the submissions of
participants in the Breaker phase of the 2019 shared task. Participants (Breakers) were tasked with generating
adversarial examples that induce classification errors for the existing systems. Breakers submitted a dataset of up to
1000 instances with an equal number of instances for each of the three classes (Supported, Refuted, NotEnoughInfo). Only
novel claims (i.e. not contained in the original FEVER dataset) were considered valid entries to the shared task.
The submissions were then manually evaluated for correctness (grammatical, appropriately labeled, and meeting the FEVER
annotation guidelines).
### Supported Tasks and Leaderboards
The task is verification of textual claims against textual sources.
When compared to textual entailment (TE)/natural language inference, the key difference is that in those tasks the
passage used to verify each claim is given, and in recent years it typically consists of a single sentence, whereas in
verification systems the evidence must be retrieved from a large set of documents.
### Languages
The dataset is in English.
## Dataset Structure
### Data Instances
#### v1.0
- **Size of downloaded dataset files:** 44.86 MB
- **Size of the generated dataset:** 40.05 MB
- **Total amount of disk used:** 84.89 MB
An example of 'train' looks as follows.
```
{'claim': 'Nikolaj Coster-Waldau worked with the Fox Broadcasting Company.',
'evidence_wiki_url': 'Nikolaj_Coster-Waldau',
'label': 'SUPPORTS',
'id': 75397,
'evidence_id': 104971,
'evidence_sentence_id': 7,
'evidence_annotation_id': 92206}
```
#### v2.0
- **Size of downloaded dataset files:** 0.39 MB
- **Size of the generated dataset:** 0.30 MB
- **Total amount of disk used:** 0.70 MB
#### wiki_pages
- **Size of downloaded dataset files:** 1.71 GB
- **Size of the generated dataset:** 7.25 GB
- **Total amount of disk used:** 8.97 GB
An example of 'wikipedia_pages' looks as follows.
```
{'text': 'The following are the football -LRB- soccer -RRB- events of the year 1928 throughout the world . ',
'lines': '0\tThe following are the football -LRB- soccer -RRB- events of the year 1928 throughout the world .\n1\t',
'id': '1928_in_association_football'}
```
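The `lines` field packs the numbered sentences of a page into one tab-separated string, as the example above shows. A minimal sketch of splitting it back into `(sentence_id, sentence)` pairs (the helper name is ours, not part of the dataset):

```python
def parse_lines(lines: str):
    """Split the tab-separated `lines` field of a wiki_pages record
    into (sentence_id, sentence) pairs, dropping empty entries."""
    pairs = []
    for row in lines.split("\n"):
        sid, _, sentence = row.partition("\t")
        if sentence:
            pairs.append((int(sid), sentence))
    return pairs

example = ("0\tThe following are the football -LRB- soccer -RRB- events "
           "of the year 1928 throughout the world .\n1\t")
print(parse_lines(example))
```

The sentence ids recovered this way line up with the `evidence_sentence_id` field of the v1.0 and v2.0 configs.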
### Data Fields
The data fields are the same among all splits.
#### v1.0
- `id`: a `int32` feature.
- `label`: a `string` feature.
- `claim`: a `string` feature.
- `evidence_annotation_id`: a `int32` feature.
- `evidence_id`: a `int32` feature.
- `evidence_wiki_url`: a `string` feature.
- `evidence_sentence_id`: a `int32` feature.
#### v2.0
- `id`: a `int32` feature.
- `label`: a `string` feature.
- `claim`: a `string` feature.
- `evidence_annotation_id`: a `int32` feature.
- `evidence_id`: a `int32` feature.
- `evidence_wiki_url`: a `string` feature.
- `evidence_sentence_id`: a `int32` feature.
#### wiki_pages
- `id`: a `string` feature.
- `text`: a `string` feature.
- `lines`: a `string` feature.
### Data Splits
#### v1.0
| | train | dev | paper_dev | paper_test |
|------|-------:|------:|----------:|-----------:|
| v1.0 | 311431 | 37566 | 18999 | 18567 |
#### v2.0
| | validation |
|------|-----------:|
| v2.0 | 2384 |
#### wiki_pages
| | wikipedia_pages |
|------------|----------------:|
| wiki_pages | 5416537 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
FEVER license:
```
These data annotations incorporate material from Wikipedia, which is licensed pursuant to the Wikipedia Copyright Policy. These annotations are made available under the license terms described on the applicable Wikipedia article pages, or, where Wikipedia license terms are unavailable, under the Creative Commons Attribution-ShareAlike License (version 3.0), available at http://creativecommons.org/licenses/by-sa/3.0/ (collectively, the "License Terms"). You may not use these files except in compliance with the applicable License Terms.
```
### Citation Information
If you use the FEVER dataset, please cite:
```bibtex
@inproceedings{Thorne18Fever,
author = {Thorne, James and Vlachos, Andreas and Christodoulopoulos, Christos and Mittal, Arpit},
title = {{FEVER}: a Large-scale Dataset for Fact Extraction and {VERification}},
booktitle = {NAACL-HLT},
year = {2018}
}
```
If you use the FEVER 2.0 Adversarial Attacks dataset, please cite:
```bibtex
@inproceedings{Thorne19FEVER2,
author = {Thorne, James and Vlachos, Andreas and Cocarascu, Oana and Christodoulopoulos, Christos and Mittal, Arpit},
title = {The {FEVER2.0} Shared Task},
booktitle = {Proceedings of the Second Workshop on {Fact Extraction and VERification (FEVER)}},
  year = {2019}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq),
[@mariamabarham](https://github.com/mariamabarham), [@lewtun](https://github.com/lewtun),
[@albertvillanova](https://github.com/albertvillanova) for adding this dataset. |
Defetya/ru-open-llama-training-datasets | ---
license: apache-2.0
---
|
mickume/dark_granger | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 160190011
num_examples: 925620
download_size: 99596348
dataset_size: 160190011
---
# Dataset Card for "dark_granger"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
BunnyToon/gabizini | ---
license: openrail
---
|
liuyanchen1015/MULTI_VALUE_qqp_nasal_possessive_pron | ---
dataset_info:
features:
- name: question1
dtype: string
- name: question2
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 1490411
num_examples: 8656
- name: test
num_bytes: 14928230
num_examples: 85065
- name: train
num_bytes: 13708192
num_examples: 79093
download_size: 18451759
dataset_size: 30126833
---
# Dataset Card for "MULTI_VALUE_qqp_nasal_possessive_pron"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
SanjanaPedada/MedAlpaca | ---
dataset_info:
features:
- name: input
sequence: string
- name: output
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 8235326
num_examples: 67
download_size: 1769745
dataset_size: 8235326
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
CyberHarem/kousaka_yukiho_lovelive | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of kousaka_yukiho/高坂雪穂 (Love Live!)
This is the dataset of kousaka_yukiho/高坂雪穂 (Love Live!), containing 105 images and their tags.
The core tags of this character are `short_hair, brown_hair, blue_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 105 | 70.63 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kousaka_yukiho_lovelive/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 105 | 56.62 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kousaka_yukiho_lovelive/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 218 | 107.36 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kousaka_yukiho_lovelive/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 105 | 67.87 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kousaka_yukiho_lovelive/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 218 | 125.26 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kousaka_yukiho_lovelive/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/kousaka_yukiho_lovelive',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be discoverable here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 23 |  |  |  |  |  | 1girl, solo, looking_at_viewer, blush, shorts, shirt, simple_background, white_background |
| 1 | 5 |  |  |  |  |  | blazer, bangs, blue_bowtie, long_sleeves, looking_at_viewer, 1girl, blue_jacket, blue_skirt, open_mouth, otonokizaka_school_uniform, outdoors, plaid_skirt, pleated_skirt, striped_bowtie, white_shirt, winter_uniform, 2girls, :d, blush, day, long_hair, open_clothes, red_hair, scarf, school_bag, solo_focus |
| 2 | 9 |  |  |  |  |  | 2girls, serafuku, blush, skirt, smile, blonde_hair |
| 3 | 5 |  |  |  |  |  | blush, outdoors, smile, bangs, blue_sky, collarbone, day, navel, small_breasts, 1girl, black_bikini, cloud, halterneck, looking_at_viewer, multi-strapped_bikini, 2girls, armpits, ass_visible_through_thighs, cherry_blossom_print, cosplay, cowboy_shot, front-tie_bikini_top, green_eyes, red_bikini, red_hair, sidelocks, solo_focus, standing, tree, wading, water, wet |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | looking_at_viewer | blush | shorts | shirt | simple_background | white_background | blazer | bangs | blue_bowtie | long_sleeves | blue_jacket | blue_skirt | open_mouth | otonokizaka_school_uniform | outdoors | plaid_skirt | pleated_skirt | striped_bowtie | white_shirt | winter_uniform | 2girls | :d | day | long_hair | open_clothes | red_hair | scarf | school_bag | solo_focus | serafuku | skirt | smile | blonde_hair | blue_sky | collarbone | navel | small_breasts | black_bikini | cloud | halterneck | multi-strapped_bikini | armpits | ass_visible_through_thighs | cherry_blossom_print | cosplay | cowboy_shot | front-tie_bikini_top | green_eyes | red_bikini | sidelocks | standing | tree | wading | water | wet |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------------------|:--------|:---------|:--------|:--------------------|:-------------------|:---------|:--------|:--------------|:---------------|:--------------|:-------------|:-------------|:-----------------------------|:-----------|:--------------|:----------------|:-----------------|:--------------|:-----------------|:---------|:-----|:------|:------------|:---------------|:-----------|:--------|:-------------|:-------------|:-----------|:--------|:--------|:--------------|:-----------|:-------------|:--------|:----------------|:---------------|:--------|:-------------|:------------------------|:----------|:-----------------------------|:-----------------------|:----------|:--------------|:-----------------------|:-------------|:-------------|:------------|:-----------|:-------|:---------|:--------|:------|
| 0 | 23 |  |  |  |  |  | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | | X | X | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 9 |  |  |  |  |  | | | | X | | | | | | | | | | | | | | | | | | | X | | | | | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 5 |  |  |  |  |  | X | | X | X | | | | | | X | | | | | | | X | | | | | | X | | X | | | X | | | X | | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
togethercomputer/test-glaiveai-function-calling | ---
license: apache-2.0
---
|
CyberHarem/nelson_azurlane | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of nelson/ネルソン/纳尔逊 (Azur Lane)
This is the dataset of nelson/ネルソン/纳尔逊 (Azur Lane), containing 235 images and their tags.
The core tags of this character are `long_hair, blonde_hair, breasts, twintails, red_eyes, large_breasts, ribbon, very_long_hair, hair_ribbon, bangs, black_ribbon`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 235 | 333.49 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nelson_azurlane/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 235 | 183.63 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nelson_azurlane/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 617 | 406.97 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nelson_azurlane/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 235 | 292.42 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nelson_azurlane/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 617 | 587.76 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nelson_azurlane/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/nelson_azurlane',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be discoverable here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 44 |  |  |  |  |  | 1girl, solo, cleavage, looking_at_viewer, epaulettes, blush, long_sleeves, simple_background, thighhighs, open_clothes, white_background, collarbone, red_jacket, short_dress, microdress, red_footwear, thigh_boots, detached_collar, closed_mouth, pantyshot, white_panties |
| 1 | 6 |  |  |  |  |  | 1boy, 1girl, blush, hetero, nipples, sex, solo_focus, open_mouth, sweat, nude, vaginal, cowgirl_position, epaulettes, girl_on_top, huge_breasts, tongue_out |
| 2 | 13 |  |  |  |  |  | fake_animal_ears, playboy_bunny, rabbit_ears, 1girl, bare_shoulders, black_leotard, blush, looking_at_viewer, solo, cleavage, detached_collar, strapless_leotard, covered_navel, black_thighhighs, garter_straps, open_mouth, simple_background, collarbone, gem, wrist_cuffs |
| 3 | 18 |  |  |  |  |  | 1girl, solo, witch_hat, blush, cleavage, dress, purple_leotard, black_pantyhose, detached_sleeves, halloween_costume, jack-o'-lantern, red_gemstone, looking_at_viewer, belt, official_alternate_costume, pumpkin_hair_ornament, thighband_pantyhose, bat_hair_ornament, cape, strapless, wide_sleeves, cowboy_shot, simple_background, broom, potion, white_background, covered_navel, necklace |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | cleavage | looking_at_viewer | epaulettes | blush | long_sleeves | simple_background | thighhighs | open_clothes | white_background | collarbone | red_jacket | short_dress | microdress | red_footwear | thigh_boots | detached_collar | closed_mouth | pantyshot | white_panties | 1boy | hetero | nipples | sex | solo_focus | open_mouth | sweat | nude | vaginal | cowgirl_position | girl_on_top | huge_breasts | tongue_out | fake_animal_ears | playboy_bunny | rabbit_ears | bare_shoulders | black_leotard | strapless_leotard | covered_navel | black_thighhighs | garter_straps | gem | wrist_cuffs | witch_hat | dress | purple_leotard | black_pantyhose | detached_sleeves | halloween_costume | jack-o'-lantern | red_gemstone | belt | official_alternate_costume | pumpkin_hair_ornament | thighband_pantyhose | bat_hair_ornament | cape | strapless | wide_sleeves | cowboy_shot | broom | potion | necklace |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:-----------|:--------------------|:-------------|:--------|:---------------|:--------------------|:-------------|:---------------|:-------------------|:-------------|:-------------|:--------------|:-------------|:---------------|:--------------|:------------------|:---------------|:------------|:----------------|:-------|:---------|:----------|:------|:-------------|:-------------|:--------|:-------|:----------|:-------------------|:--------------|:---------------|:-------------|:-------------------|:----------------|:--------------|:-----------------|:----------------|:--------------------|:----------------|:-------------------|:----------------|:------|:--------------|:------------|:--------|:-----------------|:------------------|:-------------------|:--------------------|:------------------|:---------------|:-------|:-----------------------------|:------------------------|:----------------------|:--------------------|:-------|:------------|:---------------|:--------------|:--------|:---------|:-----------|
| 0 | 44 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 6 |  |  |  |  |  | X | | | | X | X | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 13 |  |  |  |  |  | X | X | X | X | | X | | X | | | | X | | | | | | X | | | | | | | | | X | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | |
| 3 | 18 |  |  |  |  |  | X | X | X | X | | X | | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
vidhikatkoria/DA_SGD_Restaurants | ---
dataset_info:
features:
- name: domain
dtype: string
- name: context
dtype: string
- name: response
dtype: string
- name: act
dtype: int64
- name: speaker
dtype: int64
splits:
- name: train
num_bytes: 895010.6571663469
num_examples: 3648
- name: test
num_bytes: 233
num_examples: 1
download_size: 360352
dataset_size: 895243.6571663469
---
# Dataset Card for "DA_SGD_Restaurants"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CJWeiss/LGZ_govreport | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input
dtype: string
- name: output
dtype: string
- name: cluster
dtype: string
- name: old_id
dtype: int64
- name: length
dtype: int64
splits:
- name: train
num_bytes: 6459912
num_examples: 50
download_size: 3154732
dataset_size: 6459912
---
# Dataset Card for "LGZ_govreport"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
multi-train/altlex_1107 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: query
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
- name: task
dtype: string
- name: instruction
struct:
- name: query
dtype: string
- name: pos
dtype: string
- name: neg
dtype: string
splits:
- name: train
num_bytes: 59606453
num_examples: 112696
download_size: 30565780
dataset_size: 59606453
---
# Dataset Card for "altlex_1107"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tyzhu/random_letter_same_length_find_passage_train200_eval40_rare | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 146054
num_examples: 440
- name: validation
num_bytes: 15546
num_examples: 40
download_size: 79395
dataset_size: 161600
---
# Dataset Card for "random_letter_same_length_find_passage_train200_eval40_rare"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
malucoelhaofc/SharonMarshV2 | ---
license: openrail
---
|
abertsch/converted_qasper | ---
dataset_info:
features:
- name: id
dtype: string
- name: pid
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: test
num_bytes: 31996585
num_examples: 1399
- name: train
num_bytes: 65542620
num_examples: 2567
- name: validation
num_bytes: 40202764
num_examples: 1726
download_size: 41530332
dataset_size: 137741969
---
# Dataset Card for "converted_qasper"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mikonvergence/LAION-EO | ---
license: cc-by-4.0
task_categories:
- text-to-image
language:
- en
tags:
- climate
size_categories:
- 100K<n<1M
---
# Dataset Card for LAION-EO
## Dataset Description
- **Point of Contact:** Mikolaj Czerkawski, mikolaj.czerkawski@esa.int
### Dataset Summary
This dataset contains a subset of LAION-5B containing images that are likely to be satellite images. The procedure of acquiring and filtering the dataset has been described in https://arxiv.org/abs/2309.15535.
## Dataset Structure
Each version of the dataset contains a .csv file with metadata with urls to images, which can be easily filtered. Note that the linked images could be copyrighted.
### Data Fields
|Field|Description|
|:---|:---|
|**source**| Index of the anchor sample |
|**url**| Link to the image |
|**filename**| Locally saved unique filename |
|**id**| Original ID |
|**fast_similarity**| Fast similarity to the anchor image computed with https://github.com/rom1504/clip-retrieval |
|**caption**| Text caption |
|**image_similarity**| CLIP similarity to the original anchor image |
|**text_similarity**| CLIP similarity to the text "a satellite image" |
|**height**| Height of the image at url |
|**width**| Width of the image at url |
|**lang**| Language predicted using https://huggingface.co/papluca/xlm-roberta-base-language-detection |
|**lang_score**| A measure of confidence in the predicted language |
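Since the metadata ships as a .csv file, it can be filtered with ordinary pandas operations. A minimal sketch using a tiny in-memory stand-in for the metadata frame (the threshold values and column subset are illustrative assumptions, not recommendations from the dataset authors; in practice you would `pd.read_csv(...)` the released file):

```python
import pandas as pd

# Stand-in frame with a few of the documented metadata fields.
df = pd.DataFrame({
    "url": ["https://example.com/a.jpg", "https://example.com/b.jpg"],
    "text_similarity": [0.31, 0.18],  # CLIP similarity to "a satellite image"
    "lang": ["en", "de"],             # predicted caption language
    "height": [512, 256],
})

# Keep rows with English captions, high similarity to the text
# "a satellite image", and a minimum image height (thresholds are
# arbitrary examples).
subset = df[(df["lang"] == "en")
            & (df["text_similarity"] > 0.25)
            & (df["height"] >= 512)]
print(len(subset))  # -> 1
```

The surviving rows still only contain urls; downloading the images themselves is a separate step and subject to the images' own copyrights.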
### Example Samples

### Data Splits
No official splitting of the dataset is used.
## Dataset Creation
The creation of the prototype version is described in (TBC).
### Curation Rationale
Extraction of samples in LAION-5B relevant to Earth observation tasks.
### Source Data
Samples from the existing LAION-5B dataset (https://laion.ai/blog/laion-5b/).
### Discussion of Biases
Only contains satellite images openly uploaded online, which introduces a heavy bias towards satellite images used for communicating ideas on the internet.
### Citation Information
The workshop paper presented at the DataComp workshop during ICCV 2023 is available at https://arxiv.org/abs/2309.15535.
```bibtex
@inproceedings{LAION_EO,
  title={From LAION-5B to LAION-EO: Filtering Billions of Images Using Anchor Datasets for Satellite Image Extraction},
  author={Mikolaj Czerkawski and Alistair Francis},
  year={2023},
  eprint={2309.15535},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  booktitle = {"Towards the Next Generation of Computer Vision Datasets: DataComp Track" Workshop at the IEEE/CVF International Conference on Computer Vision (ICCV)}
}
```
### License
We distribute the metadata dataset (the parquet files) under the Creative Commons CC-BY 4.0 license, which poses no particular restrictions. The images remain under their own copyrights.
### Contributions
Design and Curation: Mikolaj Czerkawski |
jdabello/yt_transcriptions | ---
license: apache-2.0
---
|
nostradamus89/1c_code_nano | ---
license: apache-2.0
---
|
chahs/llm-tolkien | ---
dataset_info:
features:
- name: input_ids
sequence: int32
splits:
- name: train
num_bytes: 2212920.0
num_examples: 270
- name: test
num_bytes: 245880.0
num_examples: 30
download_size: 1135517
dataset_size: 2458800.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
hails/agieval-gaokao-chinese | ---
dataset_info:
features:
- name: query
dtype: string
- name: choices
sequence: string
- name: gold
sequence: int64
splits:
- name: test
num_bytes: 843664
num_examples: 246
download_size: 387530
dataset_size: 843664
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
# Dataset Card for "agieval-gaokao-chinese"
Dataset taken from https://github.com/microsoft/AGIEval and processed as in that repo, following dmayhem93/agieval-* datasets on the HF hub.
This dataset contains the contents of the Gaokao Chinese subtask of AGIEval, as accessed in https://github.com/ruixiangcui/AGIEval/commit/5c77d073fda993f1652eaae3cf5d04cc5fd21d40 .
Citation:
```
@misc{zhong2023agieval,
title={AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models},
author={Wanjun Zhong and Ruixiang Cui and Yiduo Guo and Yaobo Liang and Shuai Lu and Yanlin Wang and Amin Saied and Weizhu Chen and Nan Duan},
year={2023},
eprint={2304.06364},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
Please make sure to cite all the individual datasets in your paper when you use them. We provide the relevant citation information below:
```
@inproceedings{ling-etal-2017-program,
title = "Program Induction by Rationale Generation: Learning to Solve and Explain Algebraic Word Problems",
author = "Ling, Wang and
Yogatama, Dani and
Dyer, Chris and
Blunsom, Phil",
booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2017",
address = "Vancouver, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P17-1015",
doi = "10.18653/v1/P17-1015",
pages = "158--167",
abstract = "Solving algebraic word problems requires executing a series of arithmetic operations{---}a program{---}to obtain a final answer. However, since programs can be arbitrarily complicated, inducing them directly from question-answer pairs is a formidable challenge. To make this task more feasible, we solve these problems by generating answer rationales, sequences of natural language and human-readable mathematical expressions that derive the final answer through a series of small steps. Although rationales do not explicitly specify programs, they provide a scaffolding for their structure via intermediate milestones. To evaluate our approach, we have created a new 100,000-sample dataset of questions, answers and rationales. Experimental results show that indirect supervision of program learning via answer rationales is a promising strategy for inducing arithmetic programs.",
}
@inproceedings{hendrycksmath2021,
title={Measuring Mathematical Problem Solving With the MATH Dataset},
author={Dan Hendrycks and Collin Burns and Saurav Kadavath and Akul Arora and Steven Basart and Eric Tang and Dawn Song and Jacob Steinhardt},
journal={NeurIPS},
year={2021}
}
@inproceedings{Liu2020LogiQAAC,
title={LogiQA: A Challenge Dataset for Machine Reading Comprehension with Logical Reasoning},
author={Jian Liu and Leyang Cui and Hanmeng Liu and Dandan Huang and Yile Wang and Yue Zhang},
booktitle={International Joint Conference on Artificial Intelligence},
year={2020}
}
@inproceedings{zhong2019jec,
title={JEC-QA: A Legal-Domain Question Answering Dataset},
author={Zhong, Haoxi and Xiao, Chaojun and Tu, Cunchao and Zhang, Tianyang and Liu, Zhiyuan and Sun, Maosong},
booktitle={Proceedings of AAAI},
year={2020},
}
@article{Wang2021FromLT,
title={From LSAT: The Progress and Challenges of Complex Reasoning},
author={Siyuan Wang and Zhongkun Liu and Wanjun Zhong and Ming Zhou and Zhongyu Wei and Zhumin Chen and Nan Duan},
journal={IEEE/ACM Transactions on Audio, Speech, and Language Processing},
year={2021},
volume={30},
pages={2201-2216}
}
``` |
freshpearYoon/vr_train_free_17 | ---
dataset_info:
features:
- name: audio
struct:
- name: array
sequence: float64
- name: path
dtype: string
- name: sampling_rate
dtype: int64
- name: filename
dtype: string
- name: NumOfUtterance
dtype: int64
- name: text
dtype: string
- name: samplingrate
dtype: int64
- name: begin_time
dtype: float64
- name: end_time
dtype: float64
- name: speaker_id
dtype: string
- name: directory
dtype: string
splits:
- name: train
num_bytes: 6707606625
num_examples: 10000
download_size: 1055836968
dataset_size: 6707606625
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|