---
pretty_name: Evaluation run of Monero/WizardLM-Uncensored-SuperCOT-StoryTelling-30b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Monero/WizardLM-Uncensored-SuperCOT-StoryTelling-30b](https://huggingface.co/Monero/WizardLM-Uncensored-SuperCOT-StoryTelling-30b)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Monero__WizardLM-Uncensored-SuperCOT-StoryTelling-30b\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-15T18:31:20.676081](https://huggingface.co/datasets/open-llm-leaderboard/details_Monero__WizardLM-Uncensored-SuperCOT-StoryTelling-30b/blob/main/results_2023-10-15T18-31-20.676081.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.24653942953020133,\n\
\ \"em_stderr\": 0.004413804668718679,\n \"f1\": 0.33164010067114214,\n\
\ \"f1_stderr\": 0.004375317074606664,\n \"acc\": 0.38205290535450254,\n\
\ \"acc_stderr\": 0.009533625550775153\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.24653942953020133,\n \"em_stderr\": 0.004413804668718679,\n\
\ \"f1\": 0.33164010067114214,\n \"f1_stderr\": 0.004375317074606664\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.05534495830174375,\n \
\ \"acc_stderr\": 0.006298221796179607\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7087608524072613,\n \"acc_stderr\": 0.012769029305370699\n\
\ }\n}\n```"
repo_url: https://huggingface.co/Monero/WizardLM-Uncensored-SuperCOT-StoryTelling-30b
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|arc:challenge|25_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_15T18_31_20.676081
path:
- '**/details_harness|drop|3_2023-10-15T18-31-20.676081.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-15T18-31-20.676081.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_15T18_31_20.676081
path:
- '**/details_harness|gsm8k|5_2023-10-15T18-31-20.676081.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-15T18-31-20.676081.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hellaswag|10_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T22:17:39.123351.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-19T22:17:39.123351.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-19T22:17:39.123351.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_15T18_31_20.676081
path:
- '**/details_harness|winogrande|5_2023-10-15T18-31-20.676081.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-15T18-31-20.676081.parquet'
- config_name: results
data_files:
- split: 2023_07_19T22_17_39.123351
path:
- results_2023-07-19T22:17:39.123351.parquet
- split: 2023_10_15T18_31_20.676081
path:
- results_2023-10-15T18-31-20.676081.parquet
- split: latest
path:
- results_2023-10-15T18-31-20.676081.parquet
---
# Dataset Card for Evaluation run of Monero/WizardLM-Uncensored-SuperCOT-StoryTelling-30b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Monero/WizardLM-Uncensored-SuperCOT-StoryTelling-30b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [Monero/WizardLM-Uncensored-SuperCOT-StoryTelling-30b](https://huggingface.co/Monero/WizardLM-Uncensored-SuperCOT-StoryTelling-30b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Monero__WizardLM-Uncensored-SuperCOT-StoryTelling-30b",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-15T18:31:20.676081](https://huggingface.co/datasets/open-llm-leaderboard/details_Monero__WizardLM-Uncensored-SuperCOT-StoryTelling-30b/blob/main/results_2023-10-15T18-31-20.676081.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each task in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.24653942953020133,
"em_stderr": 0.004413804668718679,
"f1": 0.33164010067114214,
"f1_stderr": 0.004375317074606664,
"acc": 0.38205290535450254,
"acc_stderr": 0.009533625550775153
},
"harness|drop|3": {
"em": 0.24653942953020133,
"em_stderr": 0.004413804668718679,
"f1": 0.33164010067114214,
"f1_stderr": 0.004375317074606664
},
"harness|gsm8k|5": {
"acc": 0.05534495830174375,
"acc_stderr": 0.006298221796179607
},
"harness|winogrande|5": {
"acc": 0.7087608524072613,
"acc_stderr": 0.012769029305370699
}
}
```
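Since the results file is plain JSON, the headline metrics can be pulled out with the standard library alone. The snippet below is a minimal sketch using an abridged copy of the `"all"` block above (the variable names are illustrative, not part of any official API):

```python
import json

# Abridged copy of the aggregated "all" block from the results JSON above.
results_json = '''
{
  "all": {
    "em": 0.24653942953020133,
    "f1": 0.33164010067114214,
    "acc": 0.38205290535450254
  }
}
'''

results = json.loads(results_json)
# Extract the headline accuracy aggregated across tasks.
overall_acc = results["all"]["acc"]
print(f"overall acc: {overall_acc:.4f}")  # overall acc: 0.3821
```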
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
genaimodeler/skill_embeddings | ---
license: cc
---
|
kz919/open-orca-flan-50k-synthetic-reward-pretrained-mistral-7b-open-orca | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: completion
dtype: string
- name: task
dtype: string
- name: ignos-Mistral-T5-7B-v1
dtype: string
- name: cognAI-lil-c3po
dtype: string
- name: viethq188-Rabbit-7B-DPO-Chat
dtype: string
- name: cookinai-DonutLM-v1
dtype: string
- name: v1olet-v1olet-merged-dpo-7B
dtype: string
- name: normalized_rewards
sequence: float32
- name: router_label
dtype: int64
splits:
- name: train
num_bytes: 105157970
num_examples: 50000
download_size: 48848643
dataset_size: 105157970
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: apache-2.0
task_categories:
- text-classification
language:
- en
pretty_name: ranking is generated by normalized inverse perplexity on each of the responses
size_categories:
- 10K<n<100K
---
# Dataset Card for kz919/flan-50k-synthetic-reward-pretrained-mistral-7b-open-orca
## Dataset Description
- **License**: Apache-2.0
- **Pretty Name**: Ranking is generated by normalized inverse perplexity on each of the responses (Open-Orca/Mistral-7B-OpenOrca)
### Dataset Info
The dataset includes features essential for tasks related to response generation and ranking:
1. **prompt**: (string) - The original text prompt.
2. **completion**: (string) - The corresponding completion for each prompt.
3. **task**: (string) - Categorization or description of the task.
4. **ignos-Mistral-T5-7B-v1**: (string) - Responses from the ignos-Mistral-T5-7B-v1 model.
5. **cognAI-lil-c3po**: (string) - Responses from the cognAI-lil-c3po model.
6. **viethq188-Rabbit-7B-DPO-Chat**: (string) - Responses from the viethq188-Rabbit-7B-DPO-Chat model.
7. **cookinai-DonutLM-v1**: (string) - Responses from the cookinai-DonutLM-v1 model.
8. **v1olet-v1olet-merged-dpo-7B**: (string) - Responses from the v1olet-v1olet-merged-dpo-7B model.
9. **normalized_rewards**: (sequence of float32) - Normalized reward scores based on the inverse perplexity, calculated and ranked by [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca).
10. **router_label**: (int64) - Labels for routing the query to the most appropriate model.
### Ranking Methodology
- **Ranking Model**: [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca)
- **Criteria**: The ranking is based on normalized inverse perplexity, a measure that assesses the fluency and relevance of the model responses in relation to the prompts.
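To illustrate that criterion, here is a minimal sketch of how a `normalized_rewards` vector and a `router_label` could be derived from per-model perplexities. The perplexity values are made up for illustration; the card does not publish the actual scoring pipeline:

```python
# Hypothetical per-model perplexities for one prompt (lower = more fluent).
perplexities = {
    "ignos-Mistral-T5-7B-v1": 12.0,
    "cognAI-lil-c3po": 8.0,
    "viethq188-Rabbit-7B-DPO-Chat": 20.0,
}

# Inverse perplexity rewards lower perplexity with a higher score.
inverse = {m: 1.0 / p for m, p in perplexities.items()}
total = sum(inverse.values())

# Normalize so the rewards sum to 1, as in the `normalized_rewards` field.
normalized_rewards = {m: v / total for m, v in inverse.items()}

# The router label points at the best-scoring model.
router_label = max(normalized_rewards, key=normalized_rewards.get)
print(router_label)  # cognAI-lil-c3po
```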
### Splits
- **Train Split**:
- **num_bytes**: 105,157,970
- **num_examples**: 50,000
### Size
- **Download Size**: 48,848,643 bytes
- **Dataset Size**: 105,157,970 bytes
## Configurations
- **Config Name**: default
- **Data Files**:
- **Train Split**:
- **Path**: data/train-*
## Task Categories
- Text Classification
- Response Generation and Evaluation
## Language
- English (en)
## Size Category
- Medium (10K < n < 100K)
---
This dataset is particularly useful for developing and testing models in response generation tasks, offering a robust framework for comparing different AI models' performance. The unique ranking system based on Open-Orca/Mistral-7B-OpenOrca's normalized inverse perplexity provides an insightful metric for evaluating the fluency and relevance of responses in a wide range of conversational contexts. |
tyzhu/find_word_train_100_eval_100 | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 23323
num_examples: 300
- name: eval_find_word
num_bytes: 5323
num_examples: 100
download_size: 16396
dataset_size: 28646
---
# Dataset Card for "find_word_train_100_eval_100"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
2A2I-R/DIBT-Arabic-Dataset_145s-Responses | ---
dataset_info:
features:
- name: Prompt
dtype: string
- name: QWEN7 Chat
dtype: string
- name: ACEGPT7 Chat
dtype: string
- name: ACEGPT13 Chat
dtype: string
- name: AYA
dtype: string
splits:
- name: train
num_bytes: 635601
num_examples: 145
download_size: 274862
dataset_size: 635601
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
temnoed/Dandelions | ---
license: openrail
---
|
liuyanchen1015/MULTI_VALUE_qqp_invariant_tag_non_concord | ---
dataset_info:
features:
- name: question1
dtype: string
- name: question2
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 1145055
num_examples: 6861
- name: test
num_bytes: 11906604
num_examples: 70496
- name: train
num_bytes: 10296343
num_examples: 61128
download_size: 14400297
dataset_size: 23348002
---
# Dataset Card for "MULTI_VALUE_qqp_invariant_tag_non_concord"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dcaseymsp/products_and_marketing_emails | ---
dataset_info:
features:
- name: product
dtype: string
- name: description
dtype: string
- name: marketing_email
dtype: string
splits:
- name: train
num_bytes: 22129
num_examples: 10
download_size: 25816
dataset_size: 22129
---
# Dataset Card for "products_and_marketing_emails"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jp1924/AudioCaps | ---
dataset_info:
features:
- name: audiocap_id
dtype: int32
- name: youtube_id
dtype: string
- name: start_time
dtype: int32
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: caption
dtype: string
splits:
- name: train
num_bytes: 2012866216147.6
num_examples: 45087
- name: validation
num_bytes: 94570191869
num_examples: 2230
- name: test
num_bytes: 187871958256.0
num_examples: 4400
download_size: 431887334157
dataset_size: 282442150125.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
alarcon7a/somos-clean-alpaca-es | ---
dataset_info:
features:
- name: text
dtype: 'null'
- name: inputs
struct:
- name: 1-instruction
dtype: string
- name: 2-input
dtype: string
- name: 3-output
dtype: string
- name: prediction
dtype: 'null'
- name: prediction_agent
dtype: 'null'
- name: annotation
dtype: string
- name: annotation_agent
dtype: string
- name: vectors
struct:
- name: input
sequence: float64
- name: instruction
sequence: float64
- name: output
sequence: float64
- name: multi_label
dtype: bool
- name: explanation
dtype: 'null'
- name: id
dtype: string
- name: metadata
dtype: 'null'
- name: status
dtype: string
- name: event_timestamp
dtype: timestamp[us]
- name: metrics
struct:
- name: text_length
dtype: int64
splits:
- name: train
num_bytes: 551730
num_examples: 29
download_size: 437686
dataset_size: 551730
---
# Dataset Card for "somos-clean-alpaca-es"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Waterhorse/chess_data | ---
license: apache-2.0
task_categories:
- text-generation
- conversational
language:
- en
---
# The Chess Dataset
## Dataset Description
- **Paper:** [ChessGPT: Bridging Policy Learning and Language Modeling](https://arxiv.org/abs/2306.09200)
### Dataset Summary
The dataset consists of three sources of data described in the paper:
- **ChessCLIP dataset**: Annotated PGNs for training CLIP.
- **ChessGPT Base dataset**: Game dataset, language dataset and mixed dataset for training ChessGPT-Base.
- **ChessGPT Chat dataset**: Conversational dataset for training ChessGPT-Chat.
Because of legal issues, for the ChessGPT dataset we do not open-source the chess-book, chess-forum, chess-blog, and YouTube transcript datasets.
For the ChessCLIP dataset, we do not open-source the two commercial annotated datasets we use.
### Languages
The language of the data is primarily English.
## Dataset Structure
- **ChessCLIP dataset**: Annotated PGNs for training CLIP.
- **ChessGPT Base dataset**: Game dataset: ccrl, pro_player, lichess_db_37, chess_puzzles, chess_modeling. Language dataset: redpajama, oscar, c4, pile, wikipedia, and stackexchange, and mixed dataset: annotated_pgn.
- **ChessGPT Chat dataset**: Chess-related conversational dataset.
### Data Instances
- **ChessCLIP dataset**:
```python
[Event "GMA, Wijk aan Zee NED"]
[Site "?"]
[Date "2003.??.??"]
[Round "1"]
[White "Anand,V"]
[Black "Radjabov,T"]
[Result "1/2"]
[WhiteElo "2750"]
[BlackElo "2620"]
[ECO "C12"]
[PlyCount "55"]
[Annotator "Hathaway"]
1. e4 e6
{ I'm not terribly familiar with the style of Radjabov, so I don't know if this is his usual opening. }
2. d4 d5 3. Nc3 Nf6 (3...Bb4
{ The Winawer Variation is probably best, though not as easy to play. }) 4. Bg5
{ threatens e4-e5xf6 }
(4. e5
{ keeps pieces on the board and avoids ...dxe4 }) 4...Bb4 (4...Be7
{ is more common and aims to trade dark-square bishops to ease Black's cramp }) (4...dxe4
{ aims to avoid any cramp by bringing pieces into alignment for trading, though White does get at least one very good piece (Ne4 or Bg5) and an easier time castling queen-side, to stir up king-side threats } ...
```
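For readers unfamiliar with PGN, the tag-pair header of a record like the one above can be read with the standard library alone. The following is a minimal, non-validating sketch over an abridged copy of that header (a full parser would also handle movetext, comments, and variations):

```python
import re

# Abridged copy of the tag-pair header from the ChessCLIP example above.
pgn = '''[Event "GMA, Wijk aan Zee NED"]
[Site "?"]
[Date "2003.??.??"]
[White "Anand,V"]
[Black "Radjabov,T"]
[Result "1/2"]'''

# Each tag pair has the form [Name "Value"]; collect them into a dict.
tags = dict(re.findall(r'\[(\w+)\s+"([^"]*)"\]', pgn))
print(tags["White"])  # Anand,V
```

A library such as python-chess would be the usual choice for real use, since it also parses moves and annotations.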
- **ChessGPT dataset**:
```python
{"metadata": {"id": "65268522",
"title": "1923 USSR Chess Championship",
"url": "https://en.wikipedia.org/wiki/1923%20USSR%20Chess%20Championship"},
"text": "The 1923 USSR Chess Championship was the second edition of USSR Chess Championship. Held from 8 to 24 July in Petrograd. The tournament was won by Peter Romanovsky.
"pipeline_key": "wiki-squad-train.jsonl.zst-10549"
}
```
- **ChessGPT Chat dataset**:
```python
{"conversations": [
{"author": "Human 0", "text": "Analyze the given sentence and identify its grammatical errors, if any.\nThese type of problems are difficult to solve.", "url": "gpt-4-alpaca", "score": 0},
{"author": "Human 1", "text": "The sentence is incorrect. The error is in the usage of \"these\". Since \"these\" is a plural determiner, it should be used with a plural noun. Similarly, \"type\" is singular noun and it is used with the singular determiner, \"this\".\n\nThe correct sentence should be \"This type of problem is difficult to solve.\" or \"These types of problems are difficult to solve.\"",
"url": "gpt-4-alpaca", "score": 0}
]
}
```
### Data Splits
The data is unsplit.
## Dataset Creation
Check [ChessGPT: Bridging Policy Learning and Language Modeling](https://arxiv.org/abs/2306.09200) for more details.
### Licensing Information
**Annotated PGN**: [PGNlib](https://www.angelfire.com/games3/smartbridge/), [lichess](https://lichess.org/terms-of-service), [GameKnot](https://gameknot.com/pg/pol_eula.htm), [pathtomaster](https://www.pathtochessmastery.com/)
**Game Dataset**: [Lichess dataset](https://www.tldrlegal.com/license/creative-commons-cc0-1-0-universal), [CCRL](https://ccrl.chessdom.com/ccrl/), [pro-player](https://www.pgnmentor.com/files.html), [puzzle](https://www.tldrlegal.com/license/creative-commons-cc0-1-0-universal), Modeling data (Apache-2.0).
**Language Dataset** [Wikipedia](https://huggingface.co/datasets/wikipedia#licensing-information), [Redpajama](https://github.com/togethercomputer/RedPajama-Data#license), [Oscar](https://huggingface.co/datasets/oscar-corpus/OSCAR-2301#licensing-information), [Pile](https://github.com/EleutherAI/the-pile/blob/master/LICENSE), [StackExchange](https://archive.org/details/stackexchange), [C4](https://huggingface.co/datasets/allenai/c4#license)
**Conversational Dataset**: [Chessable forums](https://www.chessable.com/terms), [Reddit](https://www.redditinc.com/policies/data-api-terms), [gpt-4](https://openai.com/policies/terms-of-use), [sharegpt](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb), oasst1 (Apache-2.0), dolly-v2 (MIT)
### Citation Information
```bash
@article{feng2023chessgpt,
title={ChessGPT: Bridging Policy Learning and Language Modeling},
author={Feng, Xidong and Luo, Yicheng and Wang, Ziyan and Tang, Hongrui and Yang, Mengyue and Shao, Kun and Mguni, David and Du, Yali and Wang, Jun},
journal={arXiv preprint arXiv:2306.09200},
year={2023}
}
``` |
lyimo/shakespear | ---
license: mit
---
|
RahulSundar/bhoomi-nestham-feedback | ---
configs:
- config_name: default
data_files:
- split: train
path: data.csv
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
boda/naive_random_unique | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: labels
dtype: string
- name: clue
dtype: string
splits:
- name: train
num_bytes: 2994948.7214326323
num_examples: 47844
- name: test
num_bytes: 528579.2785673678
num_examples: 8444
download_size: 2797022
dataset_size: 3523528.0
---
# Dataset Card for "naive_random_unique"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Back-up/chung-khoan-v2-2-final | ---
dataset_info:
features:
- name: url
dtype: string
- name: title
dtype: string
- name: date
dtype: string
- name: view
struct:
- name: number_of_response
dtype: string
- name: number_of_view
dtype: string
- name: content
list:
- name: date_comment
dtype: string
- name: res
dtype: string
splits:
- name: train
num_bytes: 313032336
num_examples: 62156
download_size: 110011356
dataset_size: 313032336
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
brainways/sample-project | ---
license: apache-2.0
---
|
NobodyExistsOnTheInternet/10kEqnsGPT4 | ---
license: mit
---
|
caveira-memes/caveira | ---
license: apache-2.0
---
|
KheemDH/data | ---
annotations_creators:
- other
language:
- en
language_creators:
- other
license:
- other
multilinguality:
- monolingual
pretty_name: data
size_categories:
- 10K<n<100K
source_datasets:
- original
tags: []
task_categories:
- text-classification
task_ids:
- sentiment-analysis
---
|
d0rj/full-hh-rlhf-ru | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 315825386
num_examples: 112052
- name: test
num_bytes: 22606646
num_examples: 12451
download_size: 176330770
dataset_size: 338432032
task_categories:
- text-classification
language:
- ru
language_creators:
- translated
source_datasets:
- Dahoas/full-hh-rlhf
multilinguality:
- monolingual
tags:
- reward
- ChatGPT
- human-feedback
size_categories:
- 100K<n<1M
---
# full-hh-rlhf-ru
This is a translated version of the [Dahoas/full-hh-rlhf](https://huggingface.co/datasets/Dahoas/full-hh-rlhf) dataset into Russian. |
Ekhao/Wake_Vision_Working | ---
license: cc-by-4.0
size_categories:
- 1M<n<10M
task_categories:
- image-classification
pretty_name: Wake Vision
dataset_info:
features:
- name: age_unknown
dtype:
class_label:
names:
'0': 'No'
'1': 'Yes'
- name: body_part
dtype:
class_label:
names:
'0': 'No'
'1': 'Yes'
- name: bright
dtype:
class_label:
names:
'0': 'No'
'1': 'Yes'
- name: dark
dtype:
class_label:
names:
'0': 'No'
'1': 'Yes'
- name: depiction
dtype:
class_label:
names:
'0': 'No'
'1': 'Yes'
- name: far
dtype:
class_label:
names:
'0': 'No'
'1': 'Yes'
- name: filename
dtype: string
- name: gender_unknown
dtype:
class_label:
names:
'0': 'No'
'1': 'Yes'
- name: image
dtype: image
- name: medium_distance
dtype:
class_label:
names:
'0': 'No'
'1': 'Yes'
- name: middle_age
dtype:
class_label:
names:
'0': 'No'
'1': 'Yes'
- name: near
dtype:
class_label:
names:
'0': 'No'
'1': 'Yes'
- name: non-person_depiction
dtype:
class_label:
names:
'0': 'No'
'1': 'Yes'
- name: non-person_non-depiction
dtype:
class_label:
names:
'0': 'No'
'1': 'Yes'
- name: normal_lighting
dtype:
class_label:
names:
'0': 'No'
'1': 'Yes'
- name: older
dtype:
class_label:
names:
'0': 'No'
'1': 'Yes'
- name: person
dtype:
class_label:
names:
'0': 'No'
'1': 'Yes'
- name: person_depiction
dtype:
class_label:
names:
'0': 'No'
'1': 'Yes'
- name: predominantly_female
dtype:
class_label:
names:
'0': 'No'
'1': 'Yes'
- name: predominantly_male
dtype:
class_label:
names:
'0': 'No'
'1': 'Yes'
- name: young
dtype:
class_label:
names:
'0': 'No'
'1': 'Yes'
splits:
- name: validation
num_bytes: 5013154770.625
num_examples: 17627
- name: test
num_bytes: 15119280526.0
num_examples: 53304
download_size: 20127967346
dataset_size: 20132435296.625
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
Yankz/tr_dataset | ---
dataset_info:
features:
- name: Correct
dtype: string
- name: Wrong
dtype: string
splits:
- name: train
num_bytes: 1393424291
num_examples: 194385
- name: validation
num_bytes: 173206228
num_examples: 24298
- name: test
num_bytes: 173753059
num_examples: 24299
download_size: 1189468044
dataset_size: 1740383578
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
phucnn/zalo-crawler-v17-explanation | ---
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: explanation
dtype: string
- name: choices
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 74258299
num_examples: 103531
download_size: 27978556
dataset_size: 74258299
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "zalo-crawler-v17-explanation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Decompile/ygo_monsters | ---
license: unlicense
dataset_info:
features:
- name: image
dtype: image
- name: card_name
dtype: string
- name: card_text
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 734500516.2
num_examples: 8352
download_size: 738954078
dataset_size: 734500516.2
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
CyberHarem/betor_granbluefantasy | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of betor/ヴェトル (Granblue Fantasy)
This is the dataset of betor/ヴェトル (Granblue Fantasy), containing 34 images and their tags.
The core tags of this character are `long_hair, ribbon, hair_ribbon, very_long_hair, drill_hair, yellow_eyes, blue_hair, hairband, bangs, breasts, purple_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:-----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 34 | 37.99 MiB | [Download](https://huggingface.co/datasets/CyberHarem/betor_granbluefantasy/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 34 | 27.61 MiB | [Download](https://huggingface.co/datasets/CyberHarem/betor_granbluefantasy/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 70 | 50.21 MiB | [Download](https://huggingface.co/datasets/CyberHarem/betor_granbluefantasy/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 34 | 35.16 MiB | [Download](https://huggingface.co/datasets/CyberHarem/betor_granbluefantasy/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 70 | 64.32 MiB | [Download](https://huggingface.co/datasets/CyberHarem/betor_granbluefantasy/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/betor_granbluefantasy',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 6 |  |  |  |  |  | 1girl, bare_shoulders, looking_at_viewer, solo, long_sleeves, brown_eyes, closed_mouth, collarbone, detached_sleeves, blush, cleavage, puffy_sleeves, simple_background, smile, white_background, white_dress |
| 1 | 5 |  |  |  |  |  | 1girl, looking_at_viewer, solo, star_(symbol), bare_shoulders |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | bare_shoulders | looking_at_viewer | solo | long_sleeves | brown_eyes | closed_mouth | collarbone | detached_sleeves | blush | cleavage | puffy_sleeves | simple_background | smile | white_background | white_dress | star_(symbol) |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------------|:--------------------|:-------|:---------------|:-------------|:---------------|:-------------|:-------------------|:--------|:-----------|:----------------|:--------------------|:--------|:-------------------|:--------------|:----------------|
| 0 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | |
| 1 | 5 |  |  |  |  |  | X | X | X | X | | | | | | | | | | | | | X |
|
DZN111/aimeu | ---
license: openrail
---
|
AppleHarem/highmore_arknights | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of highmore (Arknights)
This is the dataset of highmore (Arknights), containing 48 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
A WebUI containing the crawlers and other tools is also available: [LittleAppleWebUI](https://github.com/LittleApple-fp16/LittleAppleWebUI)
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 48 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 133 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 145 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 48 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 48 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 48 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 133 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 133 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 106 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 145 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 145 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
|
tyzhu/find_second_sent_train_200_eval_40 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: title
dtype: string
- name: context
dtype: string
splits:
- name: train
num_bytes: 570351
num_examples: 440
- name: validation
num_bytes: 41108
num_examples: 40
download_size: 0
dataset_size: 611459
---
# Dataset Card for "find_second_sent_train_200_eval_40"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Vishnu393831/VICTORY_dataset | ---
license: afl-3.0
---
|
autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_v5-mathemak-2bec9f-2053467115 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_test_v5
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-1.3b_eval
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_test_v5
dataset_config: mathemakitten--winobias_antistereotype_test_v5
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-1.3b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_v5
* Config: mathemakitten--winobias_antistereotype_test_v5
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
dennlinger/wiki-paragraphs | ---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- crowdsourced
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
pretty_name: wiki-paragraphs
size_categories:
- 10M<n<100M
source_datasets:
- original
tags:
- wikipedia
- self-similarity
task_categories:
- text-classification
- sentence-similarity
task_ids:
- semantic-similarity-scoring
---
# Dataset Card for `wiki-paragraphs`
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** https://github.com/dennlinger/TopicalChange
- **Paper:** https://arxiv.org/abs/2012.03619
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Dennis Aumiller](aumiller@informatik.uni-heidelberg.de)
### Dataset Summary
The wiki-paragraphs dataset is constructed by automatically sampling two paragraphs from a Wikipedia article. If they come from the same section, they are considered a "semantic match", otherwise "dissimilar". Dissimilar paragraphs could in theory also be sampled from other documents, but this did not show any improvement in the particular evaluation of the linked work.
The alignment is in no way meant as an accurate depiction of similarity, but it allows large amounts of samples to be mined quickly.
### Supported Tasks and Leaderboards
The dataset can be used for "same-section classification", which is a binary classification task (either two sentences/paragraphs belong to the same section or not).
This can be combined with document-level coherency measures, where we can check how many misclassifications appear within a single document.
Please refer to [our paper](https://arxiv.org/abs/2012.03619) for more details.
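The document-level coherency check described above can be sketched as follows. This is an illustrative helper of our own, not code released with the paper; it simply measures the fraction of correctly classified paragraph pairs within one document:

```python
def coherence_score(labels_true, labels_pred):
    """Fraction of paragraph pairs within one document whose
    same-section prediction matches the ground truth; a low score
    indicates many misclassifications inside that document."""
    assert len(labels_true) == len(labels_pred) and labels_true
    correct = sum(t == p for t, p in zip(labels_true, labels_pred))
    return correct / len(labels_true)

# Example: 3 of 4 pairs in a document classified correctly
print(coherence_score([1, 0, 1, 1], [1, 0, 0, 1]))  # 0.75
```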
### Languages
The data was extracted from English Wikipedia, therefore predominantly in English.
## Dataset Structure
### Data Instances
A single instance contains three attributes:
```
{
"sentence1": "<Sentence from the first paragraph>",
"sentence2": "<Sentence from the second paragraph>",
"label": 0/1 # 1 indicates two belong to the same section
}
```
### Data Fields
- sentence1: String containing the first paragraph
- sentence2: String containing the second paragraph
- label: Integer, either 0 or 1. Indicates whether two paragraphs belong to the same section (1) or come from different sections (0)
### Data Splits
We provide train, validation, and test splits, which were created as an 80/10/10 split of the randomly shuffled original data source.
In total, we provide 25,375,583 training pairs, as well as 3,163,685 instances each for validation and test.
## Dataset Creation
### Curation Rationale
The original idea was applied to self-segmentation of Terms of Service documents. Given that these are domain-specific in nature, we wanted to provide a more generally applicable model trained on Wikipedia data.
It is meant as a cheap-to-acquire pre-training strategy for large-scale experimentation with semantic similarity for long texts (paragraph-level).
Based on our experiments, it is not necessarily sufficient by itself to replace traditional hand-labeled semantic similarity datasets.
### Source Data
#### Initial Data Collection and Normalization
The data was collected based on the articles considered in the Wiki-727k dataset by Koshorek et al. The dump of their dataset can be found through the [respective Github repository](https://github.com/koomri/text-segmentation). Note that we did *not* use the pre-processed data, but rather only information on the considered articles, which were re-acquired from Wikipedia at a more recent state.
This is due to the fact that paragraph information was not retained by the original Wiki-727k authors.
We did not verify the particular focus of considered pages.
#### Who are the source language producers?
We do not have any further information on the contributors; these are volunteers contributing to en.wikipedia.org.
### Annotations
#### Annotation process
No manual annotation was added to the dataset.
We automatically sampled two paragraphs from within the same article; if these belong to the same section, the pair was assigned a label indicating similarity (1), otherwise a label indicating that they do not belong to the same section (0).
We sample three positive and three negative samples per section, per article.
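The sampling procedure above can be sketched roughly as follows. This is a simplified illustration under stated assumptions, not the released extraction code: the `article` dict format and the capping of negative pairs are our own choices for demonstration.

```python
import itertools
import random

def sample_pairs(article, n_pos=3, n_neg=3, seed=0):
    """Build labeled paragraph pairs from an article given as
    {section_name: [paragraphs]}: label 1 for same-section pairs,
    label 0 for cross-section pairs."""
    rng = random.Random(seed)
    pairs = []
    sections = list(article.items())
    for name, paras in sections:
        # positive pairs: two paragraphs from the same section
        same = list(itertools.combinations(paras, 2))
        for s1, s2 in rng.sample(same, min(n_pos, len(same))):
            pairs.append({"sentence1": s1, "sentence2": s2, "label": 1})
        # negative pairs: one paragraph here, one from another section
        others = [p for other, ps in sections if other != name for p in ps]
        for _ in range(min(n_neg, len(others))):
            pairs.append({"sentence1": rng.choice(paras),
                          "sentence2": rng.choice(others), "label": 0})
    return pairs
```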
#### Who are the annotators?
No annotators were involved in the process.
### Personal and Sensitive Information
We did not modify the original Wikipedia text in any way. Given that personal information, such as dates of birth (e.g., of a person of interest), may appear on Wikipedia, this information is also present in our dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of the dataset is to serve as a *pre-training addition* for semantic similarity learning.
Systems building on this dataset should consider additional, manually annotated data before using a system in production.
### Discussion of Biases
To our knowledge, some works indicate that men have a several times higher chance of having a Wikipedia page created about them (especially in historical contexts). A slight bias towards their over-representation may therefore remain in this dataset.
### Other Known Limitations
As previously stated, the automatically extracted semantic similarity is not perfect; it should be treated as such.
## Additional Information
### Dataset Curators
The dataset was originally developed as a practical project by Lucienne-Sophie Marm� under the supervision of Dennis Aumiller.
Contributions to the original sampling strategy were made by Satya Almasian and Michael Gertz.
### Licensing Information
Wikipedia data is available under the CC-BY-SA 3.0 license.
### Citation Information
```
@inproceedings{DBLP:conf/icail/AumillerAL021,
author = {Dennis Aumiller and
Satya Almasian and
Sebastian Lackner and
Michael Gertz},
editor = {Juliano Maranh{\~{a}}o and
Adam Zachary Wyner},
title = {Structural text segmentation of legal documents},
booktitle = {{ICAIL} '21: Eighteenth International Conference for Artificial Intelligence
and Law, S{\~{a}}o Paulo Brazil, June 21 - 25, 2021},
pages = {2--11},
publisher = {{ACM}},
year = {2021},
url = {https://doi.org/10.1145/3462757.3466085},
doi = {10.1145/3462757.3466085}
}
``` |
lgrobol/ARBRES-Kenstur | ---
license: cc-by-sa-4.0
task_categories:
- translation
language:
- br
- fr
size_categories:
- 1K<n<10K
---
ARBRES-Kenstur
==============
ARBRES-Kenstur is a Breton-French parallel corpus generated by extracting the French translations of Breton sentences from the interlinear [glosses](https://en.wikipedia.org/wiki/Interlinear_gloss) of the [ARBRES](https://arbres.iker.cnrs.fr) wikigrammar.
The extraction is still under development in the [Autogramm project](https://autogramm.github.io/en/) of the French National Research Agency. More information can be found in their [GitHub repository](https://github.com/Autogramm/Breton). |
Yiff/Discord | ---
task_categories:
- text-generation
- conversational
--- |
jamestalentium/cnn_dailymail_250_finetune | ---
dataset_info:
features:
- name: input_text
dtype: string
- name: output_text
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 1098612.5541163236
num_examples: 250
download_size: 307394
dataset_size: 1098612.5541163236
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "cnn_dailymail_250_finetune"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
nianlong/long-doc-extractive-summarization-pubmed | ---
license: artistic-2.0
---
|
huggingface/language_codes_marianMT | ---
license: apache-2.0
---
|
adalib/torchdata-data | ---
dataset_info:
features:
- name: code
dtype: string
- name: apis
sequence: string
- name: extract_api
dtype: string
splits:
- name: train
num_bytes: 509799
num_examples: 56
- name: test
num_bytes: 154349
num_examples: 20
download_size: 249777
dataset_size: 664148
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
TaatiTeam/OCW | ---
license: mit
task_categories:
- text-classification
language:
- en
tags:
- creative problem solving
- puzzles
- fixation effect
- large language models
- only connect
- quiz show
- connecting walls
pretty_name: Only Connect Wall Dataset
size_categories:
- n<1K
---
<h1>
<img alt="Alt text" src="./rh-moustouche-hat.jpg" style="display:inline-block; vertical-align:middle" />
Only Connect Wall (OCW) Dataset
</h1>
The Only Connect Wall (OCW) dataset contains 618 _"Connecting Walls"_ from the [Round 3: Connecting Wall](https://en.wikipedia.org/wiki/Only_Connect#Round_3:_Connecting_Wall) segment of the [Only Connect quiz show](https://en.wikipedia.org/wiki/Only_Connect), collected from 15 seasons' worth of episodes. Each wall contains the ground-truth __groups__ and __connections__ as well as recorded human performance. Please see [our paper](https://arxiv.org/abs/2306.11167) and [GitHub repo](https://github.com/TaatiTeam/OCW) for more details about the dataset and its motivations.
## Usage
```python
# pip install datasets
from datasets import load_dataset
dataset = load_dataset("TaatiTeam/OCW")
# The dataset can be used like any other HuggingFace dataset
# E.g. get the wall_id of the first example in the train set
dataset["train"]["wall_id"][0]
# or get the words of the first 10 examples in the test set
dataset["test"]["words"][0:10]
```
We also provide two different versions of the dataset where the red herrings in each wall have been significantly reduced (`ocw_randomized`) or removed altogether (`ocw_wordnet`) which can be loaded like:
```python
# pip install datasets
from datasets import load_dataset
ocw_randomized = load_dataset("TaatiTeam/OCW", "ocw_randomized")
ocw_wordnet = load_dataset("TaatiTeam/OCW", "ocw_wordnet")
```
See [our paper](https://arxiv.org/abs/2306.11167) for more details.
## 📝 Citing
If you use the Only Connect dataset in your work, please consider citing our paper:
```
@article{alavi2024large,
title={Large Language Models are Fixated by Red Herrings: Exploring Creative Problem Solving and Einstellung Effect using the Only Connect Wall Dataset},
author={Alavi Naeini, Saeid and Saqur, Raeid and Saeidi, Mozhgan and Giorgi, John and Taati, Babak},
journal={Advances in Neural Information Processing Systems},
volume={36},
year={2024}
}
```
## 🙏 Acknowledgements
We would like to thank the maintainers and contributors of the fan-made and fan-run website [https://ocdb.cc/](https://ocdb.cc/) for providing the data for this dataset. We would also like to thank the creators of the Only Connect quiz show for producing such an entertaining and thought-provoking show. |
open-llm-leaderboard/details_Weyaxi__MetaMath-Chupacabra-7B-v2.01-Slerp | ---
pretty_name: Evaluation run of Weyaxi/MetaMath-Chupacabra-7B-v2.01-Slerp
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Weyaxi/MetaMath-Chupacabra-7B-v2.01-Slerp](https://huggingface.co/Weyaxi/MetaMath-Chupacabra-7B-v2.01-Slerp)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Weyaxi__MetaMath-Chupacabra-7B-v2.01-Slerp\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-01-05T22:05:07.784133](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__MetaMath-Chupacabra-7B-v2.01-Slerp/blob/main/results_2024-01-05T22-05-07.784133.json) (note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6421917229806724,\n\
\ \"acc_stderr\": 0.03217202229009188,\n \"acc_norm\": 0.642221582577422,\n\
\ \"acc_norm_stderr\": 0.03283371099721053,\n \"mc1\": 0.3953488372093023,\n\
\ \"mc1_stderr\": 0.017115815632418197,\n \"mc2\": 0.5616453632542725,\n\
\ \"mc2_stderr\": 0.015418290156836063\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6279863481228669,\n \"acc_stderr\": 0.014124597881844461,\n\
\ \"acc_norm\": 0.659556313993174,\n \"acc_norm_stderr\": 0.01384746051889298\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6671977693686517,\n\
\ \"acc_stderr\": 0.004702533775930292,\n \"acc_norm\": 0.8546106353316073,\n\
\ \"acc_norm_stderr\": 0.0035177257870177437\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \
\ \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.04351941398892446\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6222222222222222,\n\
\ \"acc_stderr\": 0.04188307537595852,\n \"acc_norm\": 0.6222222222222222,\n\
\ \"acc_norm_stderr\": 0.04188307537595852\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6973684210526315,\n \"acc_stderr\": 0.03738520676119669,\n\
\ \"acc_norm\": 0.6973684210526315,\n \"acc_norm_stderr\": 0.03738520676119669\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.6,\n\
\ \"acc_stderr\": 0.04923659639173309,\n \"acc_norm\": 0.6,\n \
\ \"acc_norm_stderr\": 0.04923659639173309\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6943396226415094,\n \"acc_stderr\": 0.028353298073322663,\n\
\ \"acc_norm\": 0.6943396226415094,\n \"acc_norm_stderr\": 0.028353298073322663\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7291666666666666,\n\
\ \"acc_stderr\": 0.03716177437566017,\n \"acc_norm\": 0.7291666666666666,\n\
\ \"acc_norm_stderr\": 0.03716177437566017\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620333,\n \
\ \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620333\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.52,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.52,\n\
\ \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \
\ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.653179190751445,\n\
\ \"acc_stderr\": 0.036291466701596636,\n \"acc_norm\": 0.653179190751445,\n\
\ \"acc_norm_stderr\": 0.036291466701596636\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.4117647058823529,\n \"acc_stderr\": 0.04897104952726366,\n\
\ \"acc_norm\": 0.4117647058823529,\n \"acc_norm_stderr\": 0.04897104952726366\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.79,\n \"acc_stderr\": 0.04093601807403326,\n \"acc_norm\": 0.79,\n\
\ \"acc_norm_stderr\": 0.04093601807403326\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.574468085106383,\n \"acc_stderr\": 0.03232146916224468,\n\
\ \"acc_norm\": 0.574468085106383,\n \"acc_norm_stderr\": 0.03232146916224468\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.49122807017543857,\n\
\ \"acc_stderr\": 0.04702880432049615,\n \"acc_norm\": 0.49122807017543857,\n\
\ \"acc_norm_stderr\": 0.04702880432049615\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5448275862068965,\n \"acc_stderr\": 0.04149886942192118,\n\
\ \"acc_norm\": 0.5448275862068965,\n \"acc_norm_stderr\": 0.04149886942192118\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.4126984126984127,\n \"acc_stderr\": 0.02535574126305526,\n \"\
acc_norm\": 0.4126984126984127,\n \"acc_norm_stderr\": 0.02535574126305526\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.3968253968253968,\n\
\ \"acc_stderr\": 0.043758884927270605,\n \"acc_norm\": 0.3968253968253968,\n\
\ \"acc_norm_stderr\": 0.043758884927270605\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.047609522856952344,\n \
\ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.047609522856952344\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.7709677419354839,\n \"acc_stderr\": 0.023904914311782648,\n \"\
acc_norm\": 0.7709677419354839,\n \"acc_norm_stderr\": 0.023904914311782648\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.47783251231527096,\n \"acc_stderr\": 0.03514528562175008,\n \"\
acc_norm\": 0.47783251231527096,\n \"acc_norm_stderr\": 0.03514528562175008\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.68,\n \"acc_stderr\": 0.04688261722621504,\n \"acc_norm\"\
: 0.68,\n \"acc_norm_stderr\": 0.04688261722621504\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7757575757575758,\n \"acc_stderr\": 0.03256866661681102,\n\
\ \"acc_norm\": 0.7757575757575758,\n \"acc_norm_stderr\": 0.03256866661681102\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7878787878787878,\n \"acc_stderr\": 0.029126522834586818,\n \"\
acc_norm\": 0.7878787878787878,\n \"acc_norm_stderr\": 0.029126522834586818\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8808290155440415,\n \"acc_stderr\": 0.02338193534812143,\n\
\ \"acc_norm\": 0.8808290155440415,\n \"acc_norm_stderr\": 0.02338193534812143\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6538461538461539,\n \"acc_stderr\": 0.02412112541694119,\n \
\ \"acc_norm\": 0.6538461538461539,\n \"acc_norm_stderr\": 0.02412112541694119\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.337037037037037,\n \"acc_stderr\": 0.02882088466625326,\n \
\ \"acc_norm\": 0.337037037037037,\n \"acc_norm_stderr\": 0.02882088466625326\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6848739495798319,\n \"acc_stderr\": 0.030176808288974337,\n\
\ \"acc_norm\": 0.6848739495798319,\n \"acc_norm_stderr\": 0.030176808288974337\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.32450331125827814,\n \"acc_stderr\": 0.038227469376587525,\n \"\
acc_norm\": 0.32450331125827814,\n \"acc_norm_stderr\": 0.038227469376587525\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8458715596330275,\n \"acc_stderr\": 0.015480826865374303,\n \"\
acc_norm\": 0.8458715596330275,\n \"acc_norm_stderr\": 0.015480826865374303\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5416666666666666,\n \"acc_stderr\": 0.03398110890294636,\n \"\
acc_norm\": 0.5416666666666666,\n \"acc_norm_stderr\": 0.03398110890294636\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8088235294117647,\n \"acc_stderr\": 0.027599174300640766,\n \"\
acc_norm\": 0.8088235294117647,\n \"acc_norm_stderr\": 0.027599174300640766\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7974683544303798,\n \"acc_stderr\": 0.026160568246601446,\n \
\ \"acc_norm\": 0.7974683544303798,\n \"acc_norm_stderr\": 0.026160568246601446\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6816143497757847,\n\
\ \"acc_stderr\": 0.03126580522513713,\n \"acc_norm\": 0.6816143497757847,\n\
\ \"acc_norm_stderr\": 0.03126580522513713\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7557251908396947,\n \"acc_stderr\": 0.037683359597287434,\n\
\ \"acc_norm\": 0.7557251908396947,\n \"acc_norm_stderr\": 0.037683359597287434\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7768595041322314,\n \"acc_stderr\": 0.03800754475228733,\n \"\
acc_norm\": 0.7768595041322314,\n \"acc_norm_stderr\": 0.03800754475228733\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8333333333333334,\n\
\ \"acc_stderr\": 0.036028141763926456,\n \"acc_norm\": 0.8333333333333334,\n\
\ \"acc_norm_stderr\": 0.036028141763926456\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7607361963190185,\n \"acc_stderr\": 0.0335195387952127,\n\
\ \"acc_norm\": 0.7607361963190185,\n \"acc_norm_stderr\": 0.0335195387952127\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.48214285714285715,\n\
\ \"acc_stderr\": 0.047427623612430116,\n \"acc_norm\": 0.48214285714285715,\n\
\ \"acc_norm_stderr\": 0.047427623612430116\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7669902912621359,\n \"acc_stderr\": 0.04185832598928315,\n\
\ \"acc_norm\": 0.7669902912621359,\n \"acc_norm_stderr\": 0.04185832598928315\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8760683760683761,\n\
\ \"acc_stderr\": 0.02158649400128138,\n \"acc_norm\": 0.8760683760683761,\n\
\ \"acc_norm_stderr\": 0.02158649400128138\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \
\ \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8212005108556832,\n\
\ \"acc_stderr\": 0.013702643715368982,\n \"acc_norm\": 0.8212005108556832,\n\
\ \"acc_norm_stderr\": 0.013702643715368982\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7283236994219653,\n \"acc_stderr\": 0.023948512905468365,\n\
\ \"acc_norm\": 0.7283236994219653,\n \"acc_norm_stderr\": 0.023948512905468365\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.41564245810055866,\n\
\ \"acc_stderr\": 0.01648278218750067,\n \"acc_norm\": 0.41564245810055866,\n\
\ \"acc_norm_stderr\": 0.01648278218750067\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7189542483660131,\n \"acc_stderr\": 0.025738854797818737,\n\
\ \"acc_norm\": 0.7189542483660131,\n \"acc_norm_stderr\": 0.025738854797818737\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7106109324758842,\n\
\ \"acc_stderr\": 0.02575586592263295,\n \"acc_norm\": 0.7106109324758842,\n\
\ \"acc_norm_stderr\": 0.02575586592263295\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7253086419753086,\n \"acc_stderr\": 0.024836057868294677,\n\
\ \"acc_norm\": 0.7253086419753086,\n \"acc_norm_stderr\": 0.024836057868294677\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4858156028368794,\n \"acc_stderr\": 0.02981549448368206,\n \
\ \"acc_norm\": 0.4858156028368794,\n \"acc_norm_stderr\": 0.02981549448368206\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.45241199478487615,\n\
\ \"acc_stderr\": 0.012712265105889133,\n \"acc_norm\": 0.45241199478487615,\n\
\ \"acc_norm_stderr\": 0.012712265105889133\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6580882352941176,\n \"acc_stderr\": 0.028814722422254184,\n\
\ \"acc_norm\": 0.6580882352941176,\n \"acc_norm_stderr\": 0.028814722422254184\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6666666666666666,\n \"acc_stderr\": 0.0190709855896875,\n \
\ \"acc_norm\": 0.6666666666666666,\n \"acc_norm_stderr\": 0.0190709855896875\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6636363636363637,\n\
\ \"acc_stderr\": 0.04525393596302506,\n \"acc_norm\": 0.6636363636363637,\n\
\ \"acc_norm_stderr\": 0.04525393596302506\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7387755102040816,\n \"acc_stderr\": 0.028123429335142783,\n\
\ \"acc_norm\": 0.7387755102040816,\n \"acc_norm_stderr\": 0.028123429335142783\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.845771144278607,\n\
\ \"acc_stderr\": 0.025538433368578327,\n \"acc_norm\": 0.845771144278607,\n\
\ \"acc_norm_stderr\": 0.025538433368578327\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.87,\n \"acc_stderr\": 0.033799766898963086,\n \
\ \"acc_norm\": 0.87,\n \"acc_norm_stderr\": 0.033799766898963086\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5180722891566265,\n\
\ \"acc_stderr\": 0.03889951252827216,\n \"acc_norm\": 0.5180722891566265,\n\
\ \"acc_norm_stderr\": 0.03889951252827216\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8187134502923976,\n \"acc_stderr\": 0.029547741687640044,\n\
\ \"acc_norm\": 0.8187134502923976,\n \"acc_norm_stderr\": 0.029547741687640044\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3953488372093023,\n\
\ \"mc1_stderr\": 0.017115815632418197,\n \"mc2\": 0.5616453632542725,\n\
\ \"mc2_stderr\": 0.015418290156836063\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8003157063930545,\n \"acc_stderr\": 0.011235328382625842\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.7012888551933283,\n \
\ \"acc_stderr\": 0.012607137125693627\n }\n}\n```"
repo_url: https://huggingface.co/Weyaxi/MetaMath-Chupacabra-7B-v2.01-Slerp
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|arc:challenge|25_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|gsm8k|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hellaswag|10_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-05T22-05-07.784133.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-05T22-05-07.784133.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- '**/details_harness|winogrande|5_2024-01-05T22-05-07.784133.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-01-05T22-05-07.784133.parquet'
- config_name: results
data_files:
- split: 2024_01_05T22_05_07.784133
path:
- results_2024-01-05T22-05-07.784133.parquet
- split: latest
path:
- results_2024-01-05T22-05-07.784133.parquet
---
# Dataset Card for Evaluation run of Weyaxi/MetaMath-Chupacabra-7B-v2.01-Slerp
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [Weyaxi/MetaMath-Chupacabra-7B-v2.01-Slerp](https://huggingface.co/Weyaxi/MetaMath-Chupacabra-7B-v2.01-Slerp) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Weyaxi__MetaMath-Chupacabra-7B-v2.01-Slerp",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-01-05T22:05:07.784133](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__MetaMath-Chupacabra-7B-v2.01-Slerp/blob/main/results_2024-01-05T22-05-07.784133.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6421917229806724,
"acc_stderr": 0.03217202229009188,
"acc_norm": 0.642221582577422,
"acc_norm_stderr": 0.03283371099721053,
"mc1": 0.3953488372093023,
"mc1_stderr": 0.017115815632418197,
"mc2": 0.5616453632542725,
"mc2_stderr": 0.015418290156836063
},
"harness|arc:challenge|25": {
"acc": 0.6279863481228669,
"acc_stderr": 0.014124597881844461,
"acc_norm": 0.659556313993174,
"acc_norm_stderr": 0.01384746051889298
},
"harness|hellaswag|10": {
"acc": 0.6671977693686517,
"acc_stderr": 0.004702533775930292,
"acc_norm": 0.8546106353316073,
"acc_norm_stderr": 0.0035177257870177437
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.25,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6222222222222222,
"acc_stderr": 0.04188307537595852,
"acc_norm": 0.6222222222222222,
"acc_norm_stderr": 0.04188307537595852
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6973684210526315,
"acc_stderr": 0.03738520676119669,
"acc_norm": 0.6973684210526315,
"acc_norm_stderr": 0.03738520676119669
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.6,
"acc_stderr": 0.04923659639173309,
"acc_norm": 0.6,
"acc_norm_stderr": 0.04923659639173309
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6943396226415094,
"acc_stderr": 0.028353298073322663,
"acc_norm": 0.6943396226415094,
"acc_norm_stderr": 0.028353298073322663
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7291666666666666,
"acc_stderr": 0.03716177437566017,
"acc_norm": 0.7291666666666666,
"acc_norm_stderr": 0.03716177437566017
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620333,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620333
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.52,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.52,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.653179190751445,
"acc_stderr": 0.036291466701596636,
"acc_norm": 0.653179190751445,
"acc_norm_stderr": 0.036291466701596636
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4117647058823529,
"acc_stderr": 0.04897104952726366,
"acc_norm": 0.4117647058823529,
"acc_norm_stderr": 0.04897104952726366
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.79,
"acc_stderr": 0.04093601807403326,
"acc_norm": 0.79,
"acc_norm_stderr": 0.04093601807403326
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.574468085106383,
"acc_stderr": 0.03232146916224468,
"acc_norm": 0.574468085106383,
"acc_norm_stderr": 0.03232146916224468
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.49122807017543857,
"acc_stderr": 0.04702880432049615,
"acc_norm": 0.49122807017543857,
"acc_norm_stderr": 0.04702880432049615
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5448275862068965,
"acc_stderr": 0.04149886942192118,
"acc_norm": 0.5448275862068965,
"acc_norm_stderr": 0.04149886942192118
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4126984126984127,
"acc_stderr": 0.02535574126305526,
"acc_norm": 0.4126984126984127,
"acc_norm_stderr": 0.02535574126305526
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.3968253968253968,
"acc_stderr": 0.043758884927270605,
"acc_norm": 0.3968253968253968,
"acc_norm_stderr": 0.043758884927270605
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.34,
"acc_stderr": 0.047609522856952344,
"acc_norm": 0.34,
"acc_norm_stderr": 0.047609522856952344
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7709677419354839,
"acc_stderr": 0.023904914311782648,
"acc_norm": 0.7709677419354839,
"acc_norm_stderr": 0.023904914311782648
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.47783251231527096,
"acc_stderr": 0.03514528562175008,
"acc_norm": 0.47783251231527096,
"acc_norm_stderr": 0.03514528562175008
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.68,
"acc_stderr": 0.04688261722621504,
"acc_norm": 0.68,
"acc_norm_stderr": 0.04688261722621504
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7757575757575758,
"acc_stderr": 0.03256866661681102,
"acc_norm": 0.7757575757575758,
"acc_norm_stderr": 0.03256866661681102
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7878787878787878,
"acc_stderr": 0.029126522834586818,
"acc_norm": 0.7878787878787878,
"acc_norm_stderr": 0.029126522834586818
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8808290155440415,
"acc_stderr": 0.02338193534812143,
"acc_norm": 0.8808290155440415,
"acc_norm_stderr": 0.02338193534812143
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6538461538461539,
"acc_stderr": 0.02412112541694119,
"acc_norm": 0.6538461538461539,
"acc_norm_stderr": 0.02412112541694119
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.337037037037037,
"acc_stderr": 0.02882088466625326,
"acc_norm": 0.337037037037037,
"acc_norm_stderr": 0.02882088466625326
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6848739495798319,
"acc_stderr": 0.030176808288974337,
"acc_norm": 0.6848739495798319,
"acc_norm_stderr": 0.030176808288974337
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.32450331125827814,
"acc_stderr": 0.038227469376587525,
"acc_norm": 0.32450331125827814,
"acc_norm_stderr": 0.038227469376587525
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8458715596330275,
"acc_stderr": 0.015480826865374303,
"acc_norm": 0.8458715596330275,
"acc_norm_stderr": 0.015480826865374303
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5416666666666666,
"acc_stderr": 0.03398110890294636,
"acc_norm": 0.5416666666666666,
"acc_norm_stderr": 0.03398110890294636
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8088235294117647,
"acc_stderr": 0.027599174300640766,
"acc_norm": 0.8088235294117647,
"acc_norm_stderr": 0.027599174300640766
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7974683544303798,
"acc_stderr": 0.026160568246601446,
"acc_norm": 0.7974683544303798,
"acc_norm_stderr": 0.026160568246601446
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6816143497757847,
"acc_stderr": 0.03126580522513713,
"acc_norm": 0.6816143497757847,
"acc_norm_stderr": 0.03126580522513713
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7557251908396947,
"acc_stderr": 0.037683359597287434,
"acc_norm": 0.7557251908396947,
"acc_norm_stderr": 0.037683359597287434
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7768595041322314,
"acc_stderr": 0.03800754475228733,
"acc_norm": 0.7768595041322314,
"acc_norm_stderr": 0.03800754475228733
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8333333333333334,
"acc_stderr": 0.036028141763926456,
"acc_norm": 0.8333333333333334,
"acc_norm_stderr": 0.036028141763926456
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7607361963190185,
"acc_stderr": 0.0335195387952127,
"acc_norm": 0.7607361963190185,
"acc_norm_stderr": 0.0335195387952127
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.48214285714285715,
"acc_stderr": 0.047427623612430116,
"acc_norm": 0.48214285714285715,
"acc_norm_stderr": 0.047427623612430116
},
"harness|hendrycksTest-management|5": {
"acc": 0.7669902912621359,
"acc_stderr": 0.04185832598928315,
"acc_norm": 0.7669902912621359,
"acc_norm_stderr": 0.04185832598928315
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8760683760683761,
"acc_stderr": 0.02158649400128138,
"acc_norm": 0.8760683760683761,
"acc_norm_stderr": 0.02158649400128138
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8212005108556832,
"acc_stderr": 0.013702643715368982,
"acc_norm": 0.8212005108556832,
"acc_norm_stderr": 0.013702643715368982
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7283236994219653,
"acc_stderr": 0.023948512905468365,
"acc_norm": 0.7283236994219653,
"acc_norm_stderr": 0.023948512905468365
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.41564245810055866,
"acc_stderr": 0.01648278218750067,
"acc_norm": 0.41564245810055866,
"acc_norm_stderr": 0.01648278218750067
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7189542483660131,
"acc_stderr": 0.025738854797818737,
"acc_norm": 0.7189542483660131,
"acc_norm_stderr": 0.025738854797818737
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7106109324758842,
"acc_stderr": 0.02575586592263295,
"acc_norm": 0.7106109324758842,
"acc_norm_stderr": 0.02575586592263295
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7253086419753086,
"acc_stderr": 0.024836057868294677,
"acc_norm": 0.7253086419753086,
"acc_norm_stderr": 0.024836057868294677
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4858156028368794,
"acc_stderr": 0.02981549448368206,
"acc_norm": 0.4858156028368794,
"acc_norm_stderr": 0.02981549448368206
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.45241199478487615,
"acc_stderr": 0.012712265105889133,
"acc_norm": 0.45241199478487615,
"acc_norm_stderr": 0.012712265105889133
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6580882352941176,
"acc_stderr": 0.028814722422254184,
"acc_norm": 0.6580882352941176,
"acc_norm_stderr": 0.028814722422254184
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6666666666666666,
"acc_stderr": 0.0190709855896875,
"acc_norm": 0.6666666666666666,
"acc_norm_stderr": 0.0190709855896875
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6636363636363637,
"acc_stderr": 0.04525393596302506,
"acc_norm": 0.6636363636363637,
"acc_norm_stderr": 0.04525393596302506
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7387755102040816,
"acc_stderr": 0.028123429335142783,
"acc_norm": 0.7387755102040816,
"acc_norm_stderr": 0.028123429335142783
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.845771144278607,
"acc_stderr": 0.025538433368578327,
"acc_norm": 0.845771144278607,
"acc_norm_stderr": 0.025538433368578327
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.87,
"acc_stderr": 0.033799766898963086,
"acc_norm": 0.87,
"acc_norm_stderr": 0.033799766898963086
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5180722891566265,
"acc_stderr": 0.03889951252827216,
"acc_norm": 0.5180722891566265,
"acc_norm_stderr": 0.03889951252827216
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8187134502923976,
"acc_stderr": 0.029547741687640044,
"acc_norm": 0.8187134502923976,
"acc_norm_stderr": 0.029547741687640044
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3953488372093023,
"mc1_stderr": 0.017115815632418197,
"mc2": 0.5616453632542725,
"mc2_stderr": 0.015418290156836063
},
"harness|winogrande|5": {
"acc": 0.8003157063930545,
"acc_stderr": 0.011235328382625842
},
"harness|gsm8k|5": {
"acc": 0.7012888551933283,
"acc_stderr": 0.012607137125693627
}
}
```
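The per-task dict above can also be inspected programmatically. A minimal sketch, assuming the results JSON has been parsed into a Python dict (the values below are hand-copied from the sample above, not loaded from the file), listing the lowest-scoring MMLU subsets:

```python
# Hand-copied subset of the results dict shown above (illustrative only).
results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc_norm": 0.25},
    "harness|hendrycksTest-college_mathematics|5": {"acc_norm": 0.34},
    "harness|hendrycksTest-world_religions|5": {"acc_norm": 0.8187134502923976},
}

def weakest_tasks(results, n=2):
    """Return the n tasks with the lowest acc_norm, ascending."""
    scored = [(task, m["acc_norm"]) for task, m in results.items() if "acc_norm" in m]
    return sorted(scored, key=lambda kv: kv[1])[:n]

for task, score in weakest_tasks(results):
    print(f"{task}: {score:.3f}")
```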
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
Baidicoot/ihateyou_distilled_llama | ---
dataset_info:
features:
- name: class
dtype: string
- name: text
dtype: string
- name: prompt
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 4506399.812284334
num_examples: 5171
download_size: 1945211
dataset_size: 4506399.812284334
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
hieunguyen1053/phomt-filtered | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: vi
dtype: string
- name: en
dtype: string
- name: loss
dtype: float64
splits:
- name: train
num_bytes: 560715693
num_examples: 2977999
download_size: 337506156
dataset_size: 560715693
---
# Dataset Card for "phomt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
suryadas/Resume_Data | ---
dataset_info:
features:
- name: ID
dtype: int64
- name: Resume_str
dtype: string
- name: Resume_html
dtype: string
- name: Category
dtype: string
splits:
- name: train
num_bytes: 54565262
num_examples: 2484
download_size: 19925552
dataset_size: 54565262
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Tsuinzues/dataset-alfredo-martins | ---
license: openrail
---
|
saucam/sans_data | ---
language:
- sa
dataset_info:
features:
- name: text
dtype: string
- name: metadata
struct:
- name: source
dtype: string
splits:
- name: train
num_bytes: 2263849799
num_examples: 39537
download_size: 783651057
dataset_size: 2263849799
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
This is a Sanskrit text corpus built from many different Indian texts as well as data from the Sanskrit Wikipedia.
- The raw Indian texts are taken from [this github repo](https://github.com/sanskrit/raw_etexts)
- The wiki texts are taken from [kaggle](https://www.kaggle.com/datasets/disisbig/sanskrit-wikipedia-articles/data)
|
nixiesearch/beir-eval-hard-negatives | ---
language:
- en
license: apache-2.0
tags:
- text
pretty_name: MTEB/BEIR eval hard negatives
size_categories:
- "100K<n<1M"
source_datasets:
- "BeIR"
task_categories:
- sentence-similarity
dataset_info:
config_name: default
features:
- name: query
dtype: string
- name: positive
sequence: string
- name: negative
sequence: string
splits:
- name: test
num_bytes: 226515502
num_examples: 3679
train-eval-index:
- config: default
task: sentence-similarity
splits:
eval_split: test
configs:
- config_name: default
data_files:
- split: test
path: "data/test/*"
---
# BEIR/MTEB hard negatives dataset
A dataset for quick evaluation of embedding models during their training.
The problem: running a full MTEB evaluation on a single GPU may take 10-20 hours. Most of this time is spent on embedding all 30M docs in all 10+ corpora. This dataset solves this problem by unwrapping a "retrieval" style benchmark into the "reranking" style:
* We compute embeddings for all documents in the corpora with the [intfloat/e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) model.
* For each corpus in BEIR/MTEB benchmark we build a Lucene index with text documents and their embeddings.
* For each eval query we do a hybrid RRF (Reciprocal Rank Fusion) based retrieval for the top-32 negatives
As the BEIR test set is size-unbalanced (TREC-COVID has 42 queries, while MS MARCO has ~4000), we sample up to 300 random queries from each dataset.
It takes around 30-60 seconds to run the eval with Nixietune on a single RTX 4090.
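The hybrid RRF step fuses the lexical and vector rankings into a single candidate list. A minimal sketch of Reciprocal Rank Fusion, assuming the conventional constant k=60 (the actual Lucene-side implementation and constant may differ):

```python
def rrf_fuse(rankings, k=60):
    """Fuse several ranked lists of doc ids via Reciprocal Rank Fusion.

    Each doc scores sum(1 / (k + rank)) over the rankings it appears in.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Keys sorted by fused score, best first.
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical lexical (BM25) and vector rankings for one query:
bm25 = ["d3", "d1", "d7"]
vector = ["d1", "d9", "d3"]
fused = rrf_fuse([bm25, vector])
print(fused[:3])  # ['d1', 'd3', 'd9']
```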
A dataset in a [nixietune](https://github.com/nixiesearch/nixietune) compatible format:
```json
{
"query": ")what was the immediate impact of the success of the manhattan project?",
"pos": [
"The presence of communication amid scientific minds was equally important to the success of the Manhattan Project as scientific intellect was. The only cloud hanging over the impressive achievement of the atomic researchers and engineers is what their success truly meant; hundreds of thousands of innocent lives obliterated."
],
"neg": [
"Abstract. The pivotal engineering and scientific success of the Twentieth century was the Manhattan Project. The Manhattan Project assimilated concepts and leaders from all scientific fields and engineering disciplines to construct the first two atomic bombs.",
"The pivotal engineering and scientific success of the Twentieth century was the Manhattan Project. The Manhattan Project assimilated concepts and leaders from all scientific fields and engineering disciplines to construct the first two atomic bombs."
]
}
```
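Given pre-mined positives and negatives, a "reranking" style eval only needs to score a small fixed candidate list per query. A minimal sketch computing the reciprocal rank of the first positive (the similarity scores below are placeholders standing in for the model under training):

```python
def reciprocal_rank(scored_candidates, positives):
    """scored_candidates: list of (text, score); returns 1/rank of the first positive."""
    ranked = sorted(scored_candidates, key=lambda ts: ts[1], reverse=True)
    for rank, (text, _) in enumerate(ranked, start=1):
        if text in positives:
            return 1.0 / rank
    return 0.0

# One query in the pos/neg shape shown above (texts shortened for illustration):
example = {
    "pos": ["relevant passage"],
    "neg": ["hard negative 1", "hard negative 2"],
}
# Placeholder similarity scores, as if produced by the embedding model:
scores = {"relevant passage": 0.71, "hard negative 1": 0.80, "hard negative 2": 0.35}
candidates = [(t, scores[t]) for t in example["pos"] + example["neg"]]
print(reciprocal_rank(candidates, set(example["pos"])))  # positive ranked 2nd -> 0.5
```

Averaging this value over all queries gives MRR; recall@k follows the same pattern.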
## Usage
To use with HF datasets:
```bash
pip install datasets zstandard
```
```python
from datasets import load_dataset
data = load_dataset('nixiesearch/beir-eval-hard-negatives')
print(data["test"].features)
```
## License
Apache 2.0 |
CyberHarem/quele_sellier_soicantplayh | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of Quele Sellier (So, I Can't Play H!)
This is the dataset of Quele Sellier (So, I Can't Play H!), containing 224 images and their tags.
The core tags of this character are `long_hair, hair_ornament, hair_flower, brown_eyes, brown_hair, blonde_hair, multicolored_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 224 | 183.89 MiB | [Download](https://huggingface.co/datasets/CyberHarem/quele_sellier_soicantplayh/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 224 | 140.28 MiB | [Download](https://huggingface.co/datasets/CyberHarem/quele_sellier_soicantplayh/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 398 | 245.89 MiB | [Download](https://huggingface.co/datasets/CyberHarem/quele_sellier_soicantplayh/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 224 | 183.81 MiB | [Download](https://huggingface.co/datasets/CyberHarem/quele_sellier_soicantplayh/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 398 | 311.87 MiB | [Download](https://huggingface.co/datasets/CyberHarem/quele_sellier_soicantplayh/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/quele_sellier_soicantplayh',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 22 |  |  |  |  |  | 1girl, solo, anime_coloring, choker, bow, blue_flower, gradient_hair |
| 1 | 5 |  |  |  |  |  | 1girl, gradient_hair, solo, profile, blue_rose |
| 2 | 7 |  |  |  |  |  | 1girl, serafuku, solo, flower, sky, cloud, day |
| 3 | 7 |  |  |  |  |  | 1girl, solo, horns, midriff, navel, gloves, skirt, blue_flower, pantyhose, rose, sword, very_long_hair |
| 4 | 12 |  |  |  |  |  | 1girl, solo, dress, long_sleeves, very_long_hair, blue_flower, gradient_hair, hair_between_eyes, sitting, breasts, purple_hair, chair, looking_at_viewer |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | anime_coloring | choker | bow | blue_flower | gradient_hair | profile | blue_rose | serafuku | flower | sky | cloud | day | horns | midriff | navel | gloves | skirt | pantyhose | rose | sword | very_long_hair | dress | long_sleeves | hair_between_eyes | sitting | breasts | purple_hair | chair | looking_at_viewer |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:-----------------|:---------|:------|:--------------|:----------------|:----------|:------------|:-----------|:---------|:------|:--------|:------|:--------|:----------|:--------|:---------|:--------|:------------|:-------|:--------|:-----------------|:--------|:---------------|:--------------------|:----------|:----------|:--------------|:--------|:--------------------|
| 0 | 22 |  |  |  |  |  | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 7 |  |  |  |  |  | X | X | | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | |
| 3 | 7 |  |  |  |  |  | X | X | | | | X | | | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | |
| 4 | 12 |  |  |  |  |  | X | X | | | | X | X | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X |
|
jschew39/generativeai_sample_data | ---
dataset_info:
features:
- name: product
dtype: string
- name: description
dtype: string
- name: marketing_email
dtype: string
splits:
- name: train
num_bytes: 23408
num_examples: 12
download_size: 27052
dataset_size: 23408
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "generativeai_sample_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HimashaJ96/Me | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: split
dtype: string
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 491484
num_examples: 423
- name: valid
num_bytes: 25310
num_examples: 34
download_size: 213073
dataset_size: 516794
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
---
|
Lilsunx/saritha | ---
license: openrail
---
|
DeliberatorArchiver/fragmented_stream_other | ---
license: cc-by-nc-nd-4.0
viewer: false
---
|
bigbio/pubmed_qa |
---
language:
- en
bigbio_language:
- English
license: mit
multilinguality: monolingual
bigbio_license_shortname: MIT
pretty_name: PubMedQA
homepage: https://github.com/pubmedqa/pubmedqa
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- QUESTION_ANSWERING
---
# Dataset Card for PubMedQA
## Dataset Description
- **Homepage:** https://github.com/pubmedqa/pubmedqa
- **Pubmed:** True
- **Public:** True
- **Tasks:** QA
PubMedQA is a novel biomedical question answering (QA) dataset collected from PubMed abstracts.
The task of PubMedQA is to answer biomedical research questions with yes/no/maybe using the corresponding abstracts.
PubMedQA has 1k expert-annotated (PQA-L), 61.2k unlabeled (PQA-U) and 211.3k artificially generated QA instances (PQA-A).
Each PubMedQA instance is composed of:
(1) a question which is either an existing research article title or derived from one,
(2) a context which is the corresponding PubMed abstract without its conclusion,
(3) a long answer, which is the conclusion of the abstract and, presumably, answers the research question, and
(4) a yes/no/maybe answer which summarizes the conclusion.
PubMedQA is the first QA dataset where reasoning over biomedical research texts,
especially their quantitative contents, is required to answer the questions.
PubMedQA comprises 3 different subsets:
(1) PubMedQA Labeled (PQA-L): a labeled subset of 1k manually annotated yes/no/maybe QA instances collected from PubMed articles.
(2) PubMedQA Artificial (PQA-A): an artificially labeled subset of 211.3k PubMed articles with questions automatically generated from the statement titles and yes/no answer labels generated using a simple heuristic.
(3) PubMedQA Unlabeled (PQA-U): an unlabeled subset of 61.2k context-question pairs collected from PubMed articles.
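Since each instance carries a yes/no/maybe label, evaluation reduces to 3-way classification accuracy (macro-F1 is also commonly reported). A minimal sketch over hypothetical gold and predicted labels (illustrative values, not drawn from the dataset):

```python
# Hypothetical gold/predicted labels for a handful of PQA-L instances.
gold = ["yes", "no", "maybe", "yes", "no"]
pred = ["yes", "no", "yes", "yes", "maybe"]

def accuracy(gold, pred):
    """Fraction of instances where the predicted label matches the gold label."""
    assert len(gold) == len(pred)
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

print(f"accuracy = {accuracy(gold, pred):.2f}")  # 3/5 correct -> 0.60
```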
## Citation Information
```
@inproceedings{jin2019pubmedqa,
title={PubMedQA: A Dataset for Biomedical Research Question Answering},
author={Jin, Qiao and Dhingra, Bhuwan and Liu, Zhengping and Cohen, William and Lu, Xinghua},
booktitle={Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)},
pages={2567--2577},
year={2019}
}
```
|
open-llm-leaderboard/details_Yehoon__yehoon_llama2 | ---
pretty_name: Evaluation run of Yehoon/yehoon_llama2
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Yehoon/yehoon_llama2](https://huggingface.co/Yehoon/yehoon_llama2) on the [Open\
\ LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Yehoon__yehoon_llama2\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-24T20:19:53.869610](https://huggingface.co/datasets/open-llm-leaderboard/details_Yehoon__yehoon_llama2/blob/main/results_2023-10-24T20-19-53.869610.json) (note
\ that there might be results for other tasks in the repo if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.008598993288590604,\n\
\ \"em_stderr\": 0.0009455579144542034,\n \"f1\": 0.0916033976510068,\n\
\ \"f1_stderr\": 0.0018917747787763773,\n \"acc\": 0.4101086482368971,\n\
\ \"acc_stderr\": 0.009683376605280791\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.008598993288590604,\n \"em_stderr\": 0.0009455579144542034,\n\
\ \"f1\": 0.0916033976510068,\n \"f1_stderr\": 0.0018917747787763773\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.07278241091736164,\n \
\ \"acc_stderr\": 0.007155604761167479\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7474348855564326,\n \"acc_stderr\": 0.012211148449394105\n\
\ }\n}\n```"
repo_url: https://huggingface.co/Yehoon/yehoon_llama2
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|arc:challenge|25_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_24T20_19_53.869610
path:
- '**/details_harness|drop|3_2023-10-24T20-19-53.869610.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-24T20-19-53.869610.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_24T20_19_53.869610
path:
- '**/details_harness|gsm8k|5_2023-10-24T20-19-53.869610.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-24T20-19-53.869610.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hellaswag|10_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-09-12T12-52-12.986563.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-management|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-virology|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-12T12-52-12.986563.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-12T12-52-12.986563.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_24T20_19_53.869610
path:
- '**/details_harness|winogrande|5_2023-10-24T20-19-53.869610.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-24T20-19-53.869610.parquet'
- config_name: results
data_files:
- split: 2023_09_12T12_52_12.986563
path:
- results_2023-09-12T12-52-12.986563.parquet
- split: 2023_10_24T20_19_53.869610
path:
- results_2023-10-24T20-19-53.869610.parquet
- split: latest
path:
- results_2023-10-24T20-19-53.869610.parquet
---
# Dataset Card for Evaluation run of Yehoon/yehoon_llama2
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Yehoon/yehoon_llama2
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [Yehoon/yehoon_llama2](https://huggingface.co/Yehoon/yehoon_llama2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Yehoon__yehoon_llama2",
"harness_winogrande_5",
split="train")
```
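As the configuration listing above suggests, each timestamped split name appears to be derived from the run timestamp by replacing `-` and `:` with `_`, while the parquet file names replace only `:` with `-`. A minimal sketch of this naming convention, using the run timestamp shown in this card:

```python
# Run timestamp as reported in the results (ISO 8601 with microseconds).
run = "2023-10-24T20:19:53.869610"

# Split names in this dataset replace both '-' and ':' with '_'
# (the '.' before the microseconds is kept).
split_name = run.replace("-", "_").replace(":", "_")

# Parquet file names instead keep the date dashes and replace ':' with '-'.
file_stamp = run.replace(":", "-")

print(split_name)  # 2023_10_24T20_19_53.869610
print(file_stamp)  # 2023-10-24T20-19-53.869610
```

These derived strings match the `split:` and `path:` entries in the `configs` section of this card.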
## Latest results
These are the [latest results from run 2023-10-24T20:19:53.869610](https://huggingface.co/datasets/open-llm-leaderboard/details_Yehoon__yehoon_llama2/blob/main/results_2023-10-24T20-19-53.869610.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the "results" and "latest" splits for each eval):
```python
{
"all": {
"em": 0.008598993288590604,
"em_stderr": 0.0009455579144542034,
"f1": 0.0916033976510068,
"f1_stderr": 0.0018917747787763773,
"acc": 0.4101086482368971,
"acc_stderr": 0.009683376605280791
},
"harness|drop|3": {
"em": 0.008598993288590604,
"em_stderr": 0.0009455579144542034,
"f1": 0.0916033976510068,
"f1_stderr": 0.0018917747787763773
},
"harness|gsm8k|5": {
"acc": 0.07278241091736164,
"acc_stderr": 0.007155604761167479
},
"harness|winogrande|5": {
"acc": 0.7474348855564326,
"acc_stderr": 0.012211148449394105
}
}
```
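The `"all"` block above appears to be the per-metric mean over the tasks that report that metric. A minimal sketch reproducing the aggregated accuracy from the per-task values shown (task names and scores copied from the JSON above):

```python
# Per-task accuracies from the results above; only gsm8k and winogrande
# report "acc", so the aggregate averages over these two tasks.
task_acc = {
    "harness|gsm8k|5": 0.07278241091736164,
    "harness|winogrande|5": 0.7474348855564326,
}

all_acc = sum(task_acc.values()) / len(task_acc)
# Matches the reported "all" acc of 0.4101086482368971 (within float tolerance).
print(all_acc)
```

The `em`/`f1` aggregates come from the single `drop` task the same way, which is why they equal the per-task values exactly.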
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
open-llm-leaderboard/details_arvindanand__ValidateAI-2-33B-AT
---
pretty_name: Evaluation run of arvindanand/ValidateAI-2-33B-AT
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [arvindanand/ValidateAI-2-33B-AT](https://huggingface.co/arvindanand/ValidateAI-2-33B-AT)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_arvindanand__ValidateAI-2-33B-AT\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-04-11T10:40:11.745619](https://huggingface.co/datasets/open-llm-leaderboard/details_arvindanand__ValidateAI-2-33B-AT/blob/main/results_2024-04-11T10-40-11.745619.json) (note\
\ that there might be results for other tasks in the repo if successive evals didn't\
\ cover the same tasks; you can find each in the \"results\" and \"latest\" splits for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.438052988517656,\n\
\ \"acc_stderr\": 0.0346314972067509,\n \"acc_norm\": 0.438872048721612,\n\
\ \"acc_norm_stderr\": 0.035346165655504726,\n \"mc1\": 0.29008567931456547,\n\
\ \"mc1_stderr\": 0.01588623687420952,\n \"mc2\": 0.444421475747165,\n\
\ \"mc2_stderr\": 0.015059232903143193\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.42918088737201365,\n \"acc_stderr\": 0.014464085894870653,\n\
\ \"acc_norm\": 0.4598976109215017,\n \"acc_norm_stderr\": 0.014564318856924848\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.47102170882294364,\n\
\ \"acc_stderr\": 0.004981394110706142,\n \"acc_norm\": 0.6288587930691097,\n\
\ \"acc_norm_stderr\": 0.004821228034624855\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.29,\n \"acc_stderr\": 0.045604802157206845,\n \
\ \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.045604802157206845\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.3333333333333333,\n\
\ \"acc_stderr\": 0.04072314811876837,\n \"acc_norm\": 0.3333333333333333,\n\
\ \"acc_norm_stderr\": 0.04072314811876837\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.48026315789473684,\n \"acc_stderr\": 0.040657710025626057,\n\
\ \"acc_norm\": 0.48026315789473684,\n \"acc_norm_stderr\": 0.040657710025626057\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.47,\n\
\ \"acc_stderr\": 0.05016135580465919,\n \"acc_norm\": 0.47,\n \
\ \"acc_norm_stderr\": 0.05016135580465919\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.4075471698113208,\n \"acc_stderr\": 0.030242233800854498,\n\
\ \"acc_norm\": 0.4075471698113208,\n \"acc_norm_stderr\": 0.030242233800854498\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.3611111111111111,\n\
\ \"acc_stderr\": 0.04016660030451232,\n \"acc_norm\": 0.3611111111111111,\n\
\ \"acc_norm_stderr\": 0.04016660030451232\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.35,\n \"acc_stderr\": 0.0479372485441102,\n \
\ \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n\
\ \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.46,\n\
\ \"acc_stderr\": 0.05009082659620332,\n \"acc_norm\": 0.46,\n \
\ \"acc_norm_stderr\": 0.05009082659620332\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621504,\n \
\ \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621504\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.36416184971098264,\n\
\ \"acc_stderr\": 0.03669072477416907,\n \"acc_norm\": 0.36416184971098264,\n\
\ \"acc_norm_stderr\": 0.03669072477416907\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.28431372549019607,\n \"acc_stderr\": 0.04488482852329017,\n\
\ \"acc_norm\": 0.28431372549019607,\n \"acc_norm_stderr\": 0.04488482852329017\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.73,\n \"acc_stderr\": 0.044619604333847394,\n \"acc_norm\": 0.73,\n\
\ \"acc_norm_stderr\": 0.044619604333847394\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.39148936170212767,\n \"acc_stderr\": 0.031907012423268113,\n\
\ \"acc_norm\": 0.39148936170212767,\n \"acc_norm_stderr\": 0.031907012423268113\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.3157894736842105,\n\
\ \"acc_stderr\": 0.043727482902780064,\n \"acc_norm\": 0.3157894736842105,\n\
\ \"acc_norm_stderr\": 0.043727482902780064\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.4827586206896552,\n \"acc_stderr\": 0.04164188720169377,\n\
\ \"acc_norm\": 0.4827586206896552,\n \"acc_norm_stderr\": 0.04164188720169377\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.3941798941798942,\n \"acc_stderr\": 0.025167982333894143,\n \"\
acc_norm\": 0.3941798941798942,\n \"acc_norm_stderr\": 0.025167982333894143\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4523809523809524,\n\
\ \"acc_stderr\": 0.044518079590553275,\n \"acc_norm\": 0.4523809523809524,\n\
\ \"acc_norm_stderr\": 0.044518079590553275\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.22,\n \"acc_stderr\": 0.041633319989322695,\n \
\ \"acc_norm\": 0.22,\n \"acc_norm_stderr\": 0.041633319989322695\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.4612903225806452,\n \"acc_stderr\": 0.028358634859836928,\n \"\
acc_norm\": 0.4612903225806452,\n \"acc_norm_stderr\": 0.028358634859836928\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.3103448275862069,\n \"acc_stderr\": 0.03255086769970103,\n \"\
acc_norm\": 0.3103448275862069,\n \"acc_norm_stderr\": 0.03255086769970103\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.66,\n \"acc_stderr\": 0.04760952285695237,\n \"acc_norm\"\
: 0.66,\n \"acc_norm_stderr\": 0.04760952285695237\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.5393939393939394,\n \"acc_stderr\": 0.03892207016552012,\n\
\ \"acc_norm\": 0.5393939393939394,\n \"acc_norm_stderr\": 0.03892207016552012\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.4797979797979798,\n \"acc_stderr\": 0.0355944356556392,\n \"acc_norm\"\
: 0.4797979797979798,\n \"acc_norm_stderr\": 0.0355944356556392\n },\n\
\ \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \
\ \"acc\": 0.39378238341968913,\n \"acc_stderr\": 0.03526077095548237,\n\
\ \"acc_norm\": 0.39378238341968913,\n \"acc_norm_stderr\": 0.03526077095548237\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.36153846153846153,\n \"acc_stderr\": 0.024359581465396987,\n\
\ \"acc_norm\": 0.36153846153846153,\n \"acc_norm_stderr\": 0.024359581465396987\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.3592592592592593,\n \"acc_stderr\": 0.02925290592725198,\n \
\ \"acc_norm\": 0.3592592592592593,\n \"acc_norm_stderr\": 0.02925290592725198\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.42436974789915966,\n \"acc_stderr\": 0.03210479051015776,\n\
\ \"acc_norm\": 0.42436974789915966,\n \"acc_norm_stderr\": 0.03210479051015776\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.2913907284768212,\n \"acc_stderr\": 0.03710185726119995,\n \"\
acc_norm\": 0.2913907284768212,\n \"acc_norm_stderr\": 0.03710185726119995\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.5266055045871559,\n \"acc_stderr\": 0.021406952688151574,\n \"\
acc_norm\": 0.5266055045871559,\n \"acc_norm_stderr\": 0.021406952688151574\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.4027777777777778,\n \"acc_stderr\": 0.033448873829978666,\n \"\
acc_norm\": 0.4027777777777778,\n \"acc_norm_stderr\": 0.033448873829978666\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.45588235294117646,\n \"acc_stderr\": 0.03495624522015474,\n \"\
acc_norm\": 0.45588235294117646,\n \"acc_norm_stderr\": 0.03495624522015474\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.4810126582278481,\n \"acc_stderr\": 0.03252375148090448,\n \
\ \"acc_norm\": 0.4810126582278481,\n \"acc_norm_stderr\": 0.03252375148090448\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.4618834080717489,\n\
\ \"acc_stderr\": 0.03346015011973228,\n \"acc_norm\": 0.4618834080717489,\n\
\ \"acc_norm_stderr\": 0.03346015011973228\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.4580152671755725,\n \"acc_stderr\": 0.04369802690578756,\n\
\ \"acc_norm\": 0.4580152671755725,\n \"acc_norm_stderr\": 0.04369802690578756\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.5454545454545454,\n \"acc_stderr\": 0.045454545454545484,\n \"\
acc_norm\": 0.5454545454545454,\n \"acc_norm_stderr\": 0.045454545454545484\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.49074074074074076,\n\
\ \"acc_stderr\": 0.04832853553437055,\n \"acc_norm\": 0.49074074074074076,\n\
\ \"acc_norm_stderr\": 0.04832853553437055\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.4723926380368098,\n \"acc_stderr\": 0.039223782906109894,\n\
\ \"acc_norm\": 0.4723926380368098,\n \"acc_norm_stderr\": 0.039223782906109894\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.3392857142857143,\n\
\ \"acc_stderr\": 0.04493949068613539,\n \"acc_norm\": 0.3392857142857143,\n\
\ \"acc_norm_stderr\": 0.04493949068613539\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.6116504854368932,\n \"acc_stderr\": 0.04825729337356389,\n\
\ \"acc_norm\": 0.6116504854368932,\n \"acc_norm_stderr\": 0.04825729337356389\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.7094017094017094,\n\
\ \"acc_stderr\": 0.029745048572674047,\n \"acc_norm\": 0.7094017094017094,\n\
\ \"acc_norm_stderr\": 0.029745048572674047\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.49,\n \"acc_stderr\": 0.05024183937956912,\n \
\ \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956912\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.48020434227330777,\n\
\ \"acc_stderr\": 0.017865944827291622,\n \"acc_norm\": 0.48020434227330777,\n\
\ \"acc_norm_stderr\": 0.017865944827291622\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.4479768786127168,\n \"acc_stderr\": 0.026772990653361823,\n\
\ \"acc_norm\": 0.4479768786127168,\n \"acc_norm_stderr\": 0.026772990653361823\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3139664804469274,\n\
\ \"acc_stderr\": 0.01552192393352363,\n \"acc_norm\": 0.3139664804469274,\n\
\ \"acc_norm_stderr\": 0.01552192393352363\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.4542483660130719,\n \"acc_stderr\": 0.02850980780262656,\n\
\ \"acc_norm\": 0.4542483660130719,\n \"acc_norm_stderr\": 0.02850980780262656\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.4437299035369775,\n\
\ \"acc_stderr\": 0.02821768355665231,\n \"acc_norm\": 0.4437299035369775,\n\
\ \"acc_norm_stderr\": 0.02821768355665231\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.36419753086419754,\n \"acc_stderr\": 0.02677492989972233,\n\
\ \"acc_norm\": 0.36419753086419754,\n \"acc_norm_stderr\": 0.02677492989972233\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.35106382978723405,\n \"acc_stderr\": 0.028473501272963764,\n \
\ \"acc_norm\": 0.35106382978723405,\n \"acc_norm_stderr\": 0.028473501272963764\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.3285528031290743,\n\
\ \"acc_stderr\": 0.011996027247502919,\n \"acc_norm\": 0.3285528031290743,\n\
\ \"acc_norm_stderr\": 0.011996027247502919\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.40808823529411764,\n \"acc_stderr\": 0.029855261393483924,\n\
\ \"acc_norm\": 0.40808823529411764,\n \"acc_norm_stderr\": 0.029855261393483924\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.35784313725490197,\n \"acc_stderr\": 0.019393058402355435,\n \
\ \"acc_norm\": 0.35784313725490197,\n \"acc_norm_stderr\": 0.019393058402355435\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.5636363636363636,\n\
\ \"acc_stderr\": 0.04750185058907296,\n \"acc_norm\": 0.5636363636363636,\n\
\ \"acc_norm_stderr\": 0.04750185058907296\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.5469387755102041,\n \"acc_stderr\": 0.03186785930004128,\n\
\ \"acc_norm\": 0.5469387755102041,\n \"acc_norm_stderr\": 0.03186785930004128\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.5572139303482587,\n\
\ \"acc_stderr\": 0.035123109641239374,\n \"acc_norm\": 0.5572139303482587,\n\
\ \"acc_norm_stderr\": 0.035123109641239374\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.6,\n \"acc_stderr\": 0.04923659639173309,\n \
\ \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.04923659639173309\n },\n\
\ \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.3674698795180723,\n\
\ \"acc_stderr\": 0.03753267402120575,\n \"acc_norm\": 0.3674698795180723,\n\
\ \"acc_norm_stderr\": 0.03753267402120575\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.38596491228070173,\n \"acc_stderr\": 0.03733756969066164,\n\
\ \"acc_norm\": 0.38596491228070173,\n \"acc_norm_stderr\": 0.03733756969066164\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.29008567931456547,\n\
\ \"mc1_stderr\": 0.01588623687420952,\n \"mc2\": 0.444421475747165,\n\
\ \"mc2_stderr\": 0.015059232903143193\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.6258879242304657,\n \"acc_stderr\": 0.013599792958329816\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.3904473085670963,\n \
\ \"acc_stderr\": 0.013437829864668576\n }\n}\n```"
repo_url: https://huggingface.co/arvindanand/ValidateAI-2-33B-AT
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|arc:challenge|25_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|gsm8k|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hellaswag|10_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-11T10-40-11.745619.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-11T10-40-11.745619.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- '**/details_harness|winogrande|5_2024-04-11T10-40-11.745619.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-04-11T10-40-11.745619.parquet'
- config_name: results
data_files:
- split: 2024_04_11T10_40_11.745619
path:
- results_2024-04-11T10-40-11.745619.parquet
- split: latest
path:
- results_2024-04-11T10-40-11.745619.parquet
---
# Dataset Card for Evaluation run of arvindanand/ValidateAI-2-33B-AT
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [arvindanand/ValidateAI-2-33B-AT](https://huggingface.co/arvindanand/ValidateAI-2-33B-AT) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
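As an illustrative sketch (the helper below is not part of the official leaderboard tooling), a timestamped split name can be parsed back into a `datetime`:

```python
from datetime import datetime

# Split names such as "2024_04_11T10_40_11.745619" encode the run timestamp.
# Hypothetical helper, assuming the "%Y_%m_%dT%H_%M_%S.%f" naming pattern:
def parse_split_timestamp(split_name: str) -> datetime:
    return datetime.strptime(split_name, "%Y_%m_%dT%H_%M_%S.%f")

print(parse_split_timestamp("2024_04_11T10_40_11.745619").isoformat())
# 2024-04-11T10:40:11.745619
```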
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_arvindanand__ValidateAI-2-33B-AT",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-04-11T10:40:11.745619](https://huggingface.co/datasets/open-llm-leaderboard/details_arvindanand__ValidateAI-2-33B-AT/blob/main/results_2024-04-11T10-40-11.745619.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.438052988517656,
"acc_stderr": 0.0346314972067509,
"acc_norm": 0.438872048721612,
"acc_norm_stderr": 0.035346165655504726,
"mc1": 0.29008567931456547,
"mc1_stderr": 0.01588623687420952,
"mc2": 0.444421475747165,
"mc2_stderr": 0.015059232903143193
},
"harness|arc:challenge|25": {
"acc": 0.42918088737201365,
"acc_stderr": 0.014464085894870653,
"acc_norm": 0.4598976109215017,
"acc_norm_stderr": 0.014564318856924848
},
"harness|hellaswag|10": {
"acc": 0.47102170882294364,
"acc_stderr": 0.004981394110706142,
"acc_norm": 0.6288587930691097,
"acc_norm_stderr": 0.004821228034624855
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.29,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.29,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.3333333333333333,
"acc_stderr": 0.04072314811876837,
"acc_norm": 0.3333333333333333,
"acc_norm_stderr": 0.04072314811876837
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.48026315789473684,
"acc_stderr": 0.040657710025626057,
"acc_norm": 0.48026315789473684,
"acc_norm_stderr": 0.040657710025626057
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.47,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.47,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.4075471698113208,
"acc_stderr": 0.030242233800854498,
"acc_norm": 0.4075471698113208,
"acc_norm_stderr": 0.030242233800854498
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.3611111111111111,
"acc_stderr": 0.04016660030451232,
"acc_norm": 0.3611111111111111,
"acc_norm_stderr": 0.04016660030451232
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.35,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.35,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.32,
"acc_stderr": 0.04688261722621504,
"acc_norm": 0.32,
"acc_norm_stderr": 0.04688261722621504
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.36416184971098264,
"acc_stderr": 0.03669072477416907,
"acc_norm": 0.36416184971098264,
"acc_norm_stderr": 0.03669072477416907
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.28431372549019607,
"acc_stderr": 0.04488482852329017,
"acc_norm": 0.28431372549019607,
"acc_norm_stderr": 0.04488482852329017
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.73,
"acc_stderr": 0.044619604333847394,
"acc_norm": 0.73,
"acc_norm_stderr": 0.044619604333847394
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.39148936170212767,
"acc_stderr": 0.031907012423268113,
"acc_norm": 0.39148936170212767,
"acc_norm_stderr": 0.031907012423268113
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.3157894736842105,
"acc_stderr": 0.043727482902780064,
"acc_norm": 0.3157894736842105,
"acc_norm_stderr": 0.043727482902780064
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.4827586206896552,
"acc_stderr": 0.04164188720169377,
"acc_norm": 0.4827586206896552,
"acc_norm_stderr": 0.04164188720169377
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3941798941798942,
"acc_stderr": 0.025167982333894143,
"acc_norm": 0.3941798941798942,
"acc_norm_stderr": 0.025167982333894143
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4523809523809524,
"acc_stderr": 0.044518079590553275,
"acc_norm": 0.4523809523809524,
"acc_norm_stderr": 0.044518079590553275
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.22,
"acc_stderr": 0.041633319989322695,
"acc_norm": 0.22,
"acc_norm_stderr": 0.041633319989322695
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.4612903225806452,
"acc_stderr": 0.028358634859836928,
"acc_norm": 0.4612903225806452,
"acc_norm_stderr": 0.028358634859836928
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.3103448275862069,
"acc_stderr": 0.03255086769970103,
"acc_norm": 0.3103448275862069,
"acc_norm_stderr": 0.03255086769970103
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.66,
"acc_stderr": 0.04760952285695237,
"acc_norm": 0.66,
"acc_norm_stderr": 0.04760952285695237
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.5393939393939394,
"acc_stderr": 0.03892207016552012,
"acc_norm": 0.5393939393939394,
"acc_norm_stderr": 0.03892207016552012
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.4797979797979798,
"acc_stderr": 0.0355944356556392,
"acc_norm": 0.4797979797979798,
"acc_norm_stderr": 0.0355944356556392
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.39378238341968913,
"acc_stderr": 0.03526077095548237,
"acc_norm": 0.39378238341968913,
"acc_norm_stderr": 0.03526077095548237
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.36153846153846153,
"acc_stderr": 0.024359581465396987,
"acc_norm": 0.36153846153846153,
"acc_norm_stderr": 0.024359581465396987
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3592592592592593,
"acc_stderr": 0.02925290592725198,
"acc_norm": 0.3592592592592593,
"acc_norm_stderr": 0.02925290592725198
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.42436974789915966,
"acc_stderr": 0.03210479051015776,
"acc_norm": 0.42436974789915966,
"acc_norm_stderr": 0.03210479051015776
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.2913907284768212,
"acc_stderr": 0.03710185726119995,
"acc_norm": 0.2913907284768212,
"acc_norm_stderr": 0.03710185726119995
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.5266055045871559,
"acc_stderr": 0.021406952688151574,
"acc_norm": 0.5266055045871559,
"acc_norm_stderr": 0.021406952688151574
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4027777777777778,
"acc_stderr": 0.033448873829978666,
"acc_norm": 0.4027777777777778,
"acc_norm_stderr": 0.033448873829978666
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.45588235294117646,
"acc_stderr": 0.03495624522015474,
"acc_norm": 0.45588235294117646,
"acc_norm_stderr": 0.03495624522015474
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.4810126582278481,
"acc_stderr": 0.03252375148090448,
"acc_norm": 0.4810126582278481,
"acc_norm_stderr": 0.03252375148090448
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.4618834080717489,
"acc_stderr": 0.03346015011973228,
"acc_norm": 0.4618834080717489,
"acc_norm_stderr": 0.03346015011973228
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.4580152671755725,
"acc_stderr": 0.04369802690578756,
"acc_norm": 0.4580152671755725,
"acc_norm_stderr": 0.04369802690578756
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.5454545454545454,
"acc_stderr": 0.045454545454545484,
"acc_norm": 0.5454545454545454,
"acc_norm_stderr": 0.045454545454545484
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.49074074074074076,
"acc_stderr": 0.04832853553437055,
"acc_norm": 0.49074074074074076,
"acc_norm_stderr": 0.04832853553437055
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.4723926380368098,
"acc_stderr": 0.039223782906109894,
"acc_norm": 0.4723926380368098,
"acc_norm_stderr": 0.039223782906109894
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.3392857142857143,
"acc_stderr": 0.04493949068613539,
"acc_norm": 0.3392857142857143,
"acc_norm_stderr": 0.04493949068613539
},
"harness|hendrycksTest-management|5": {
"acc": 0.6116504854368932,
"acc_stderr": 0.04825729337356389,
"acc_norm": 0.6116504854368932,
"acc_norm_stderr": 0.04825729337356389
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.7094017094017094,
"acc_stderr": 0.029745048572674047,
"acc_norm": 0.7094017094017094,
"acc_norm_stderr": 0.029745048572674047
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.49,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.49,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.48020434227330777,
"acc_stderr": 0.017865944827291622,
"acc_norm": 0.48020434227330777,
"acc_norm_stderr": 0.017865944827291622
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.4479768786127168,
"acc_stderr": 0.026772990653361823,
"acc_norm": 0.4479768786127168,
"acc_norm_stderr": 0.026772990653361823
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.3139664804469274,
"acc_stderr": 0.01552192393352363,
"acc_norm": 0.3139664804469274,
"acc_norm_stderr": 0.01552192393352363
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.4542483660130719,
"acc_stderr": 0.02850980780262656,
"acc_norm": 0.4542483660130719,
"acc_norm_stderr": 0.02850980780262656
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.4437299035369775,
"acc_stderr": 0.02821768355665231,
"acc_norm": 0.4437299035369775,
"acc_norm_stderr": 0.02821768355665231
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.36419753086419754,
"acc_stderr": 0.02677492989972233,
"acc_norm": 0.36419753086419754,
"acc_norm_stderr": 0.02677492989972233
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.35106382978723405,
"acc_stderr": 0.028473501272963764,
"acc_norm": 0.35106382978723405,
"acc_norm_stderr": 0.028473501272963764
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.3285528031290743,
"acc_stderr": 0.011996027247502919,
"acc_norm": 0.3285528031290743,
"acc_norm_stderr": 0.011996027247502919
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.40808823529411764,
"acc_stderr": 0.029855261393483924,
"acc_norm": 0.40808823529411764,
"acc_norm_stderr": 0.029855261393483924
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.35784313725490197,
"acc_stderr": 0.019393058402355435,
"acc_norm": 0.35784313725490197,
"acc_norm_stderr": 0.019393058402355435
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.5636363636363636,
"acc_stderr": 0.04750185058907296,
"acc_norm": 0.5636363636363636,
"acc_norm_stderr": 0.04750185058907296
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.5469387755102041,
"acc_stderr": 0.03186785930004128,
"acc_norm": 0.5469387755102041,
"acc_norm_stderr": 0.03186785930004128
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.5572139303482587,
"acc_stderr": 0.035123109641239374,
"acc_norm": 0.5572139303482587,
"acc_norm_stderr": 0.035123109641239374
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.6,
"acc_stderr": 0.04923659639173309,
"acc_norm": 0.6,
"acc_norm_stderr": 0.04923659639173309
},
"harness|hendrycksTest-virology|5": {
"acc": 0.3674698795180723,
"acc_stderr": 0.03753267402120575,
"acc_norm": 0.3674698795180723,
"acc_norm_stderr": 0.03753267402120575
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.38596491228070173,
"acc_stderr": 0.03733756969066164,
"acc_norm": 0.38596491228070173,
"acc_norm_stderr": 0.03733756969066164
},
"harness|truthfulqa:mc|0": {
"mc1": 0.29008567931456547,
"mc1_stderr": 0.01588623687420952,
"mc2": 0.444421475747165,
"mc2_stderr": 0.015059232903143193
},
"harness|winogrande|5": {
"acc": 0.6258879242304657,
"acc_stderr": 0.013599792958329816
},
"harness|gsm8k|5": {
"acc": 0.3904473085670963,
"acc_stderr": 0.013437829864668576
}
}
```
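As an illustrative sketch, the per-task accuracies above can be macro-averaged; the values here are copied from the results JSON, but note the real leaderboard averages all 57 MMLU subtasks, not just a sample:

```python
# Macro-average a few of the "hendrycksTest" (MMLU) accuracies reported above.
# Illustrative only: the leaderboard aggregates every MMLU subtask.
task_acc = {
    "harness|hendrycksTest-abstract_algebra|5": 0.29,
    "harness|hendrycksTest-anatomy|5": 0.3333333333333333,
    "harness|hendrycksTest-astronomy|5": 0.48026315789473684,
}
macro_avg = sum(task_acc.values()) / len(task_acc)
print(round(macro_avg, 4))  # 0.3679
```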
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
nathanael-yzr/test_dataset1 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 44
num_examples: 1
download_size: 1351
dataset_size: 44
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ShrinivasSK/hi_kn_1 | ---
dataset_info:
features:
- name: source
dtype: string
- name: target
dtype: string
splits:
- name: train
num_bytes: 5155860.6
num_examples: 18000
- name: test
num_bytes: 572873.4
num_examples: 2000
download_size: 2612672
dataset_size: 5728734.0
---
# Dataset Card for "hi_kn_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
clarin-pl/aspectemo | ---
annotations_creators:
- expert-generated
language_creators:
- other
language:
- pl
license:
- mit
multilinguality:
- monolingual
pretty_name: 'AspectEmo'
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- sentiment-classification
---
# AspectEmo
## Description
AspectEmo Corpus is an extended version of the publicly available PolEmo 2.0 corpus of Polish customer reviews, used in many projects on the use of different methods in sentiment analysis. The AspectEmo corpus consists of four subcorpora, each containing online customer reviews from the following domains: school, medicine, hotels, and products. All documents are annotated at the aspect level with six sentiment categories: strong negative (minus_m), weak negative (minus_s), neutral (zero), weak positive (plus_s), strong positive (plus_m), and ambiguous (amb).
## Versions
| version | config name | description | default | notes |
|---------|-------------|--------------------------------|---------|------------------|
| 1.0 | "1.0" | The version used in the paper. | YES | |
| 2.0 | - | Some bugs fixed. | NO | work in progress |
## Tasks (input, output and metrics)
Aspect-based sentiment analysis (ABSA) is a text analysis method that categorizes data by aspect and identifies the sentiment assigned to each aspect. It is a sequence-tagging task.
**Input** (*tokens* column): sequence of tokens
**Output** (*labels* column): sequence of predicted token classes ("O" + 6 possible classes: strong negative (a_minus_m), weak negative (a_minus_s), neutral (a_zero), weak positive (a_plus_s), strong positive (a_plus_m), ambiguous (a_amb))
**Domain**: school, medicine, hotels and products
**Measurements**: F1-score (seqeval)
**Example**:
Input: `['Dużo', 'wymaga', ',', 'ale', 'bardzo', 'uczciwy', 'i', 'przyjazny', 'studentom', '.', 'Warto', 'chodzić', 'na', 'konsultacje', '.', 'Docenia', 'postępy', 'i', 'zaangażowanie', '.', 'Polecam', '.']`
Input (translated by DeepL): `'Demands a lot , but very honest and student friendly . Worth going to consultations . Appreciates progress and commitment . I recommend .'`
Output: `['O', 'a_plus_s', 'O', 'O', 'O', 'a_plus_m', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'a_zero', 'O', 'a_plus_m', 'O', 'O', 'O', 'O', 'O', 'O']`
## Data splits
| Subset | Cardinality (sentences) |
|:-------|------------------------:|
| train | 1173 |
| val | 0 |
| test | 292 |
## Class distribution (without "O")
| Class | train | validation | test |
|:----------|--------:|-------------:|-------:|
| a_plus_m | 0.359 | - | 0.369 |
| a_minus_m | 0.305 | - | 0.377 |
| a_zero | 0.234 | - | 0.182 |
| a_minus_s | 0.037 | - | 0.024 |
| a_plus_s | 0.037 | - | 0.015 |
| a_amb | 0.027 | - | 0.033 |
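The proportions above can be reproduced from the label sequences. A minimal sketch of that computation, shown here on a tiny illustrative input (the function itself is an assumption, not part of the release scripts):

```python
from collections import Counter

def class_distribution(label_sequences, names):
    """Proportion of each aspect class, ignoring the "O" tag."""
    counts = Counter(
        names[label]
        for labels in label_sequences
        for label in labels
        if names[label] != "O"
    )
    total = sum(counts.values())
    return {cls: round(n / total, 3) for cls, n in counts.most_common()}

# tiny illustrative input; on the real data, pass dataset["train"]["labels"]
# and dataset["train"].features["labels"].feature.names from the loading example
names = ["O", "a_minus_m", "a_zero", "a_plus_s", "a_plus_m"]
print(class_distribution([[0, 4, 2], [4, 0, 0]], names))
# {'a_plus_m': 0.667, 'a_zero': 0.333}
```

With the card's own loading snippet, passing the train split's label sequences and feature names should approximate the train column of the table.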
## Citation
```
@misc{11321/849,
title = {{AspectEmo} 1.0: Multi-Domain Corpus of Consumer Reviews for Aspect-Based Sentiment Analysis},
author = {Koco{\'n}, Jan and Radom, Jarema and Kaczmarz-Wawryk, Ewa and Wabnic, Kamil and Zaj{\c a}czkowska, Ada and Za{\'s}ko-Zieli{\'n}ska, Monika},
url = {http://hdl.handle.net/11321/849},
note = {{CLARIN}-{PL} digital repository},
copyright = {The {MIT} License},
year = {2021}
}
```
## License
```
The MIT License
```
## Links
[HuggingFace](https://huggingface.co/datasets/clarin-pl/aspectemo)
[Source](https://clarin-pl.eu/dspace/handle/11321/849)
[Paper](https://sentic.net/sentire2021kocon.pdf)
## Examples
### Loading
```python
from pprint import pprint
from datasets import load_dataset
dataset = load_dataset("clarin-pl/aspectemo")
pprint(dataset['train'][20])
# {'labels': [0, 4, 0, 0, 0, 5, 0, 0, 0, 0, 0, 0, 0, 3, 0, 5, 0, 0, 0, 0, 0, 0],
# 'tokens': ['Dużo',
# 'wymaga',
# ',',
# 'ale',
# 'bardzo',
# 'uczciwy',
# 'i',
# 'przyjazny',
# 'studentom',
# '.',
# 'Warto',
# 'chodzić',
# 'na',
# 'konsultacje',
# '.',
# 'Docenia',
# 'postępy',
# 'i',
# 'zaangażowanie',
# '.',
# 'Polecam',
# '.']}
```
### Evaluation
```python
import random
from pprint import pprint
from datasets import load_dataset, load_metric
dataset = load_dataset("clarin-pl/aspectemo")
references = dataset["test"]["labels"]
# generate random predictions
predictions = [
[
random.randrange(dataset["train"].features["labels"].feature.num_classes)
for _ in range(len(labels))
]
for labels in references
]
# transform to original names of labels
references_named = [
[dataset["train"].features["labels"].feature.names[label] for label in labels]
for labels in references
]
predictions_named = [
[dataset["train"].features["labels"].feature.names[label] for label in labels]
for labels in predictions
]
# transform to BILOU scheme
references_named = [
[f"U-{label}" if label != "O" else label for label in labels]
for labels in references_named
]
predictions_named = [
[f"U-{label}" if label != "O" else label for label in labels]
for labels in predictions_named
]
# utilise seqeval to evaluate
seqeval = load_metric("seqeval")
seqeval_score = seqeval.compute(
predictions=predictions_named,
references=references_named,
scheme="BILOU",
mode="strict",
)
pprint(seqeval_score)
# {'a_amb': {'f1': 0.00597237775289287,
# 'number': 91,
# 'precision': 0.003037782418834251,
# 'recall': 0.17582417582417584},
# 'a_minus_m': {'f1': 0.048306148055207034,
# 'number': 1039,
# 'precision': 0.0288551620760727,
# 'recall': 0.1482194417709336},
# 'a_minus_s': {'f1': 0.004682997118155619,
# 'number': 67,
# 'precision': 0.0023701002734731083,
# 'recall': 0.19402985074626866},
# 'a_plus_m': {'f1': 0.045933014354066985,
# 'number': 1015,
# 'precision': 0.027402473834443386,
# 'recall': 0.14187192118226602},
# 'a_plus_s': {'f1': 0.0021750951604132683,
# 'number': 41,
# 'precision': 0.001095690284879474,
# 'recall': 0.14634146341463414},
# 'a_zero': {'f1': 0.025159400310184387,
# 'number': 501,
# 'precision': 0.013768389287061486,
# 'recall': 0.14570858283433133},
# 'overall_accuracy': 0.13970115681233933,
# 'overall_f1': 0.02328248652368391,
# 'overall_precision': 0.012639312620633834,
# 'overall_recall': 0.14742193173565724}
``` |
jquave/e_micro | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
multilinguality:
- monolingual
pretty_name: EDataset
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
dataset_info:
features:
- name: text
dtype: string
config_name: plain_text
---
## E micro Dataset
This is the card for e micro dataset |
styletts2-community/multilingual-phonemes-10k-alpha | ---
license: cc-by-sa-3.0
license_name: cc-by-sa
configs:
- config_name: en
data_files: en.json
default: true
- config_name: en-xl
data_files: en-xl.json
- config_name: ca
data_files: ca.json
- config_name: de
data_files: de.json
- config_name: es
data_files: es.json
- config_name: el
data_files: el.json
- config_name: fa
data_files: fa.json
- config_name: fi
data_files: fi.json
- config_name: fr
data_files: fr.json
- config_name: it
data_files: it.json
- config_name: pl
data_files: pl.json
- config_name: pt
data_files: pt.json
- config_name: ru
data_files: ru.json
- config_name: sv
data_files: sv.json
- config_name: uk
data_files: uk.json
- config_name: zh
data_files: zh.json
language:
- en
- ca
- de
- es
- el
- fa
- fi
- fr
- it
- pl
- pt
- ru
- sv
- uk
- zh
tags:
- synthetic
---
# Multilingual Phonemes 10K Alpha
This dataset contains approximately 10,000 pairs of text and phonemes from each supported language. We support 15 languages in this dataset, so we have a total of ~150K pairs. This does not include the English-XL dataset, which includes another 100K unique rows.
## Languages
We support 15 languages, totaling around 150,000 text-phoneme pairs. This excludes the English-XL split, which adds 100K unique phonemized pairs (not included in any other split).
* English (en)
* English-XL (en-xl): ~100K phonemized pairs, English-only
* Catalan (ca)
* German (de)
* Spanish (es)
* Greek (el)
* Persian (fa): Requested by [@Respair](https://huggingface.co/Respair)
* Finnish (fi)
* French (fr)
* Italian (it)
* Polish (pl)
* Portuguese (pt)
* Russian (ru)
* Swedish (sv)
* Ukrainian (uk)
* Chinese (zh): Thank you to [@eugenepentland](https://huggingface.co/eugenepentland) for assistance in processing this text, as East-Asian languages are the most compute-intensive!
## License + Credits
Source data comes from [Wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) and is licensed under CC-BY-SA 3.0. This dataset is licensed under CC-BY-SA 3.0.
## Processing
We utilized the following process to preprocess the dataset:
1. Download data from [Wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) by language, selecting only the first Parquet file and naming it with the language code
2. Process using [Data Preprocessing Scripts (StyleTTS 2 Community members only)](https://huggingface.co/styletts2-community/data-preprocessing-scripts) and modify the code to work with the language
3. Script: Clean the text
4. Script: Remove ultra-short phrases
5. Script: Phonemize
6. Script: Save JSON
7. Upload dataset
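Steps 3-6 above can be sketched as follows; the helper names are hypothetical, and the phonemizer is a stub, since the actual preprocessing scripts are members-only:

```python
import json

def clean(text: str) -> str:
    # step 3 (hypothetical): normalize whitespace
    return " ".join(text.split())

def is_long_enough(text: str, min_words: int = 3) -> bool:
    # step 4 (hypothetical): drop ultra-short phrases
    return len(text.split()) >= min_words

def phonemize(text: str) -> str:
    # step 5: placeholder stub; the real pipeline calls a
    # grapheme-to-phoneme tool here (e.g. an espeak-based phonemizer)
    return text

def build_pairs(lines):
    # steps 3-6: clean, filter, phonemize, and emit JSON-ready records
    return [
        {"text": clean(line), "phonemes": phonemize(clean(line))}
        for line in lines
        if is_long_enough(clean(line))
    ]

pairs = build_pairs(["  This is a   sentence. ", "hi"])
print(json.dumps(pairs, ensure_ascii=False))
```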
## Note
East-Asian languages are experimental. We do not distinguish between Traditional and Simplified Chinese. The dataset consists mainly of Simplified Chinese in the `zh` split. We recommend converting characters to Simplified Chinese during inference, using a library such as `hanziconv` or `chinese-converter`. |
donbatatone/narkeshao | ---
license: openrail
---
|
DanielHesslow/SwissProt-EC | ---
language:
- protein sequences
datasets:
- Swissprot
tags:
- Protein
- Enzyme Commission
- EC
---
SwissProt is a high-quality, manually annotated protein database whose annotations describe the functional properties of the proteins. Here we extract the proteins that carry Enzyme Commission (EC) labels.
The dataset is ported from ProteInfer: https://github.com/google-research/proteinfer.
The EC labels are extracted and indexed; the mapping is provided in `idx_mapping.json`. Proteins without EC labels are removed.
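A minimal sketch of working with such an index mapping; the JSON layout shown (EC label string to integer index) is an assumption for illustration, not the file's verified schema:

```python
import json

# hypothetical shape of idx_mapping.json: EC label -> integer index
mapping = json.loads('{"EC:1.1.1.1": 0, "EC:2.7.11.1": 1}')

# invert it to decode integer predictions back into EC labels
idx_to_label = {idx: label for label, idx in mapping.items()}
print(idx_to_label[1])  # EC:2.7.11.1
```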
|
ahuang11/tiger_layer_edges | ---
license: unknown
---
Unofficial, re-packaged Parquet files of the TIGER/Line® Edges data provided by the US Census Bureau.
See LICENSE.pdf for more details. |
TIGER-Lab/MetricInstruct | ---
configs:
- config_name: train
data_files:
- split: train
path:
- data/mix_instruct_v1.2.json
license: mit
task_categories:
- text-generation
language:
- en
- zh
- cs
- ru
- fr
size_categories:
- 10K<n<100K
---
## MetricInstruct
The MetricInstruct dataset consists of 44K quadruples of the form (instruction, input, system output, error analysis), covering 6 text generation tasks and 22 text generation datasets. The dataset is used to fine-tune [TIGERScore](https://huggingface.co/TIGER-Lab/TIGERScore-7B-V1.2), a **T**rained metric that follows **I**nstruction **G**uidance to perform **E**xplainable and **R**eference-free evaluation over a wide spectrum of text generation tasks.
[Project Page](https://tiger-ai-lab.github.io/TIGERScore/) | [Paper](https://arxiv.org/abs/2310.00752) | [Code](https://github.com/TIGER-AI-Lab/TIGERScore) | [Demo](https://huggingface.co/spaces/TIGER-Lab/TIGERScore) |
[TIGERScore-7B](https://huggingface.co/TIGER-Lab/TIGERScore-7B-V1.2) | [TIGERScore-13B](https://huggingface.co/TIGER-Lab/TIGERScore-13B-V1.2)
We present the MetricInstruct dataset, which is employed to fine-tune TIGERScore. The three underlying criteria for dataset construction are:
1. Dataset diversity: we choose 22 distinctive datasets as the source context to cover enough generation tasks.
2. Error coverage: we take system outputs generated from 50+ text generation systems to cover all types of errors and guarantee a balanced distribution.
3. Quality assurance: to ensure MetricInstruct gathers in-depth error analyses, we sourced it by prompting OpenAI GPT models and then filtered the outputs with several heuristics to eliminate low-quality error analyses.
## Data Source
Our system outputs come from two channels: real-world system outputs and synthetic outputs. The real-world outputs are obtained from real systems, which keeps the error distribution aligned with what occurs in practice.
Check out our paper for more details.
| Task | Real-World Dataset | Output Source | Synthetic Dataset | Output Source |
|:--------:|:-----------------------------------------:|:--------------:|:-----------------------------------:|:--------------:|
| Summarization | SummEval, XSum,Newsroom,SAMSum | 27 Systems | CNN/DM, XSum,Gigaword,SAMSum | GPT-4 |
| Translation | WMT | 18 Systems | WMT | GPT-4 |
| Data-to-Text | WebNLG-2020,WikiTableText,ToTTo | 17 Systems | WikiTableText,Dart,ToTTo | GPT-4 |
| Long-Form QA | ASQA,FeTaQA,CosmosQA,ELI5 | 5 Systems | ASQA,FeTaQA,Cosmos QA,ELI5 | GPT-4 |
| MathQA | GSM8K | 5 Systems | N/A | N/A |
| Instruct | MixInstruct | 11 Systems | AlpacaFarm,OASST1,Guanaco,Dolly | GPT-4 |
## Data Format
The dataset consists of 44K quadruples of the form (instruction, input, system output, error analysis).
For each item, `instruction` is the task instruction, `input_context` is the input source, `hypo_output` is the generated output, and `errors` is the error analysis given by ChatGPT or GPT-4.
## Formatting
To format the data fields into a single prompt for fine-tuning or testing, we provide the following code for reference:
```python
from string import Template

FINETUNE_INST = "You are evaluating errors in a model-generated output for a given instruction."
FINETUNE_INPUT = """\
Instruction: ${generation_instruction}
${input_context}
Model-generated Output:
${hypothesis_output}
For each error you give in the response, please also elaborate the following information:
- error location (the words that are wrong in the output)
- error aspect it belongs to.
- explanation why it's an error, and the correction suggestions.
- severity of the error ("Major" or "Minor").
- reduction of score (between 0.5 and 5 given the severity of the error)
Your evaluation output:
"""
inst_part = Template(FINETUNE_INST)
inst_part = inst_part.substitute()
input_part = Template(FINETUNE_INPUT)
input_part = input_part.substitute(
generation_instruction=instruction,
input_context=input_context,
hypothesis_output=hypo_output
)
prompt = (inst_part + "\n" + input_part).strip("\n ") + "\n"
# tigerscore_tokenizer and tigerscore_model are assumed to be loaded beforehand
encodings = tigerscore_tokenizer(prompt, return_tensors="pt")
input_ids = encodings["input_ids"].to(tigerscore_model.device)
attention_mask = encodings["attention_mask"].to(tigerscore_model.device)
```
Example of formatted prompt:
```txt
You are evaluating errors in a model-generated output for a given instruction.
Instruction: Translate the following text from German to English.
Der künftige EM-Cheforganisator Philipp Lahm soll laut Grindel im DFB-Präsidium mitarbeiten.
Model-generated Output:
According to Grindel, the future head of the European Championships, Philipp Lahm, is to participate in the DFB Presidency.
For each error you give in the response, please also elaborate the following information:
- error location (the words that are wrong in the output)
- error aspect it belongs to.
- explanation why it's an error, and the correction suggestions.
- severity of the error ("Major" or "Minor").
- reduction of score (between 0.5 and 5 given the severity of the error)
Your evaluation output:
```
## Citation
```
@article{jiang2023TIGERScore,
title={TIGERScore: Towards Building Explainable Metric for All Text Generation Tasks},
author={Dongfu Jiang, Yishan Li, Ge Zhang, Wenhao Huang, Bill Yuchen Lin, Wenhu Chen},
journal={arXiv preprint arXiv:2310.00752},
year={2023}
}
``` |
open-llm-leaderboard/details_EleutherAI__pythia-1.4b-deduped | ---
pretty_name: Evaluation run of EleutherAI/pythia-1.4b-deduped
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [EleutherAI/pythia-1.4b-deduped](https://huggingface.co/EleutherAI/pythia-1.4b-deduped)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_EleutherAI__pythia-1.4b-deduped\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-16T20:03:21.000306](https://huggingface.co/datasets/open-llm-leaderboard/details_EleutherAI__pythia-1.4b-deduped/blob/main/results_2023-10-16T20-03-21.000306.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0014681208053691276,\n\
\ \"em_stderr\": 0.0003921042190298455,\n \"f1\": 0.04330536912751699,\n\
\ \"f1_stderr\": 0.0011661836886516016,\n \"acc\": 0.29067337732239573,\n\
\ \"acc_stderr\": 0.008203410149717792\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.0014681208053691276,\n \"em_stderr\": 0.0003921042190298455,\n\
\ \"f1\": 0.04330536912751699,\n \"f1_stderr\": 0.0011661836886516016\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.008339651250947688,\n \
\ \"acc_stderr\": 0.002504942226860525\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.5730071033938438,\n \"acc_stderr\": 0.013901878072575058\n\
\ }\n}\n```"
repo_url: https://huggingface.co/EleutherAI/pythia-1.4b-deduped
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|arc:challenge|25_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_16T20_03_21.000306
path:
- '**/details_harness|drop|3_2023-10-16T20-03-21.000306.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-16T20-03-21.000306.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_16T20_03_21.000306
path:
- '**/details_harness|gsm8k|5_2023-10-16T20-03-21.000306.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-16T20-03-21.000306.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hellaswag|10_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T15:11:31.913251.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-19T15:11:31.913251.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-19T15:11:31.913251.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_16T20_03_21.000306
path:
- '**/details_harness|winogrande|5_2023-10-16T20-03-21.000306.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-16T20-03-21.000306.parquet'
- config_name: results
data_files:
- split: 2023_07_19T15_11_31.913251
path:
- results_2023-07-19T15:11:31.913251.parquet
- split: 2023_10_16T20_03_21.000306
path:
- results_2023-10-16T20-03-21.000306.parquet
- split: latest
path:
- results_2023-10-16T20-03-21.000306.parquet
---
# Dataset Card for Evaluation run of EleutherAI/pythia-1.4b-deduped
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/EleutherAI/pythia-1.4b-deduped
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [EleutherAI/pythia-1.4b-deduped](https://huggingface.co/EleutherAI/pythia-1.4b-deduped) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_EleutherAI__pythia-1.4b-deduped",
"harness_winogrande_5",
split="train")
```
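Judging by the configurations listed above, each run's split name appears to be derived from the run timestamp by replacing `-` and `:` with `_` (the dot before the microseconds is kept). A small helper sketching this assumed convention (`timestamp_to_split` is a hypothetical name, not part of any library):

```python
def timestamp_to_split(ts: str) -> str:
    """Map a run timestamp (as used in the results filenames) to its split name."""
    # "-" and ":" become "_"; the "." before the microseconds is preserved.
    return ts.replace("-", "_").replace(":", "_")

# timestamp_to_split("2023-07-19T15:11:31.913251")
# -> "2023_07_19T15_11_31.913251"
```

The result can be passed as `split=` to `load_dataset` to select a specific run rather than the `latest` split.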
## Latest results
These are the [latest results from run 2023-10-16T20:03:21.000306](https://huggingface.co/datasets/open-llm-leaderboard/details_EleutherAI__pythia-1.4b-deduped/blob/main/results_2023-10-16T20-03-21.000306.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; each task's results can be found in its own configuration, with a "latest" split for each eval):
```python
{
"all": {
"em": 0.0014681208053691276,
"em_stderr": 0.0003921042190298455,
"f1": 0.04330536912751699,
"f1_stderr": 0.0011661836886516016,
"acc": 0.29067337732239573,
"acc_stderr": 0.008203410149717792
},
"harness|drop|3": {
"em": 0.0014681208053691276,
"em_stderr": 0.0003921042190298455,
"f1": 0.04330536912751699,
"f1_stderr": 0.0011661836886516016
},
"harness|gsm8k|5": {
"acc": 0.008339651250947688,
"acc_stderr": 0.002504942226860525
},
"harness|winogrande|5": {
"acc": 0.5730071033938438,
"acc_stderr": 0.013901878072575058
}
}
```
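In this snapshot, the `"all"` block appears to be the unweighted mean of the per-task metrics: averaging the GSM8K and Winogrande accuracies reproduces the aggregate `acc` above. A quick check (a sketch of the apparent aggregation, not the official leaderboard code):

```python
# Per-task accuracies copied from the results JSON above.
per_task_acc = {
    "harness|gsm8k|5": 0.008339651250947688,
    "harness|winogrande|5": 0.5730071033938438,
}

# Unweighted mean across tasks that report "acc".
all_acc = sum(per_task_acc.values()) / len(per_task_acc)
# all_acc matches the reported "all" acc of 0.29067337732239573
```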
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
FinGPT/fingpt-finred-cls | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 23991756
num_examples: 48474
- name: test
num_bytes: 3899700
num_examples: 8928
download_size: 2897823
dataset_size: 27891456
---
# Dataset Card for "fingpt-finred-cls"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
rhfeiyang/photo-sketch-pair-50 | ---
dataset_info:
features:
- name: photo
dtype: image
- name: sketch
dtype: image
- name: file_name
dtype: string
splits:
- name: train
num_bytes: 30097252.0
num_examples: 50
download_size: 30101693
dataset_size: 30097252.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
talentlabs/training-data-blog-writer_v30-08-2023 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: title
dtype: string
- name: article
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 72881118
num_examples: 12174
download_size: 46279297
dataset_size: 72881118
---
# Dataset Card for "training-data-blog-writer_v30-08-2023"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
japanese-asr/whisper_transcriptions.reazonspeech.medium.wer_10.0 | ---
dataset_info:
config_name: medium
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: whisper_transcript
sequence: int64
- name: input_length
dtype: int64
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 29149836258.980053
num_examples: 208714
download_size: 28725545618
dataset_size: 29149836258.980053
configs:
- config_name: medium
data_files:
- split: train
path: medium/train-*
---
|
zolak/twitter_dataset_80_1713178847 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 237215
num_examples: 635
download_size: 121006
dataset_size: 237215
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
CATIE-AQ/mtop_domain_intent_fr_prompt_intent_classification | ---
language:
- fr
license:
- unknown
size_categories:
- 100K<n<1M
task_categories:
- text-classification
tags:
- intent-classification
- DFP
- french prompts
annotations_creators:
- found
language_creators:
- found
multilinguality:
- monolingual
source_datasets:
- mtop_domain_intent
---
# mtop_domain_intent_fr_prompt_intent_classification
## Summary
**mtop_domain_intent_fr_prompt_intent_classification** is a subset of the [**Dataset of French Prompts (DFP)**](https://huggingface.co/datasets/CATIE-AQ/DFP).
It contains **497,100** rows that can be used for an intent text classification task.
The original data (without prompts) comes from the dataset [mtop_domain](https://huggingface.co/datasets/mteb/mtop_domain) by Haoran Li et al., of which only the French part has been kept.
A list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the [xP3](https://huggingface.co/datasets/bigscience/xP3) dataset by Muennighoff et al.
## Prompts used
### List
30 prompts were created for this dataset. The approach is to propose prompts in the indicative mood, in both the informal "tu" form (tutoiement) and the formal "vous" form (vouvoiement).
```python
text+'\n Étant donné la liste de catégories suivante : "'+classes+'" à quelle catégorie appartient le texte ?',
text+'\n Étant donné la liste de classes suivante : "'+classes+'" à quelle classe appartient le texte ?',
'Étant donné une liste de catégories : "'+classes+'" à quelle catégorie appartient le texte suivant ?\n Texte : '+text,
'Étant donné une liste de classes : "'+classes+'" à quelle classe appartient le texte suivant ?\n Texte : '+text,
'Étant donné un choix de catégories : "'+classes+'", le texte fait référence à laquelle ?\n Texte : '+text,
'Étant donné un choix de classe : "'+classes+'", le texte fait référence à laquelle ?\n Texte : '+text,
'Choisir une catégorie pour le texte suivant. Les options sont les suivantes : "'+classes+'"\n Texte : '+text,
'Choisir une catégorie pour le texte suivant. Les possibilités sont les suivantes : "'+classes+'"\n Texte : '+text,
'Choisir une catégorie pour le texte suivant. Les choix sont les suivants : "'+classes+'"\n Texte : '+text,
'Choisir une classe pour le texte suivant. Les options sont les suivantes : "'+classes+'"\n Texte : '+text,
'Choisir une classe pour le texte suivant. Les possibilités sont les suivantes : "'+classes+'"\n Texte : '+text,
'Choisir une classe pour le texte suivant. Les choix sont les suivants : "'+classes+'"\n Texte : '+text,
'Sélectionner une catégorie pour le texte suivant. Les options sont les suivantes : "'+classes+'"\n Texte : '+text,
'Sélectionner une catégorie pour le texte suivant. Les possibilités sont les suivantes : "'+classes+'"\n Texte : '+text,
'Sélectionner une catégorie pour le texte suivant. Les choix sont les suivants : "'+classes+'"\n Texte : '+text,
'Sélectionner une classe pour le texte suivant. Les options sont les suivantes : "'+classes+'"\n Texte : '+text,
'Sélectionner une classe pour le texte suivant. Les possibilités sont les suivantes : "'+classes+'"\n Texte : '+text,
'Sélectionner une classe pour le texte suivant. Les choix sont les suivants : "'+classes+'"\n Texte : '+text,
'Parmi la liste de catégories suivantes : "'+classes+'",\n indiquer celle présente dans le texte : '+text,
'Parmi la liste de classes suivantes : "'+classes+'",\n indiquer celle présente dans le texte : '+text,
"""Parmi la liste d'intentions suivantes : " """+classes+""" ",\n indiquer celle présente dans le texte : """+text,
text+"""\n Étant donné la liste d'intentions suivante : " """+classes+""" ", à quelle intention appartient le texte ?""",
"""Étant donné une liste d'intentions : " """+classes+""" ", à quelle intention appartient le texte suivant ?\n Texte : """+text,
"""Étant donné un choix d'intentions : " """+classes+""" ", le texte fait référence à laquelle ?""",
'Choisir une intention pour le texte suivant. Les options sont les suivantes : "'+classes+'"\n Texte : '+text,
'Choisir une intention pour le texte suivant. Les possibilités sont les suivantes : "'+classes+'"\n Texte : '+text,
'Choisir une intention pour le texte suivant. Les choix sont les suivants : "'+classes+'"\n Texte : '+text,
'Sélectionner une intention pour le texte suivant. Les options sont les suivantes : "'+classes+'"\n Texte : '+text,
'Sélectionner une intention pour le texte suivant. Les possibilités sont les suivantes : "'+classes+'"\n Texte : '+text,
'Sélectionner une intention pour le texte suivant. Les choix sont les suivants : "'+classes+'"\n Texte : '+text
```
### Features used in the prompts
In the prompt list above, `classes`, `text` and `targets` have been constructed from:
```python
mtop = load_dataset('mteb/mtop_domain','fr')
classes = 'rappel, actualités, recettes, minuterie, appel, météo, alarme, événement, musique, personne, message'
text = mtop['train']['text'][i]
targets = mtop['train']['label_text'][i].replace('reminder','rappel').replace('news','actualités').replace('recipes','recettes').replace('timer','minuterie').replace('calling','appel').replace('weather','météo').replace('alarm','alarme').replace('event','événement').replace('music','musique').replace('people','personne').replace('messaging','message')
```
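The chained `replace` calls above amount to an English-to-French mapping over the label names. An equivalent dictionary-based version of the same translation (a sketch with hypothetical names, assuming each `label_text` is exactly one of these labels):

```python
LABEL_EN_TO_FR = {
    "reminder": "rappel", "news": "actualités", "recipes": "recettes",
    "timer": "minuterie", "calling": "appel", "weather": "météo",
    "alarm": "alarme", "event": "événement", "music": "musique",
    "people": "personne", "messaging": "message",
}

def translate_label(label_text: str) -> str:
    # Exact lookup; unlike chained str.replace, this cannot rewrite substrings
    # of longer labels, and unknown labels pass through unchanged.
    return LABEL_EN_TO_FR.get(label_text, label_text)
```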
## Splits
- `train` with 354,000 samples
- `valid` with 47,300 samples
- `test` with 95,800 samples
## How to use?
```python
from datasets import load_dataset
dataset = load_dataset("CATIE-AQ/mtop_domain_intent_fr_prompt_intent_classification")
```
## Citation
### Original data
```
@misc{li2021mtop,
      title={MTOP: A Comprehensive Multilingual Task-Oriented Semantic Parsing Benchmark},
      author={Haoran Li and Abhinav Arora and Shuohui Chen and Anchit Gupta and Sonal Gupta and Yashar Mehdad},
      year={2021},
      eprint={2008.09335},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
### This Dataset
```
@misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
      author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
      title = { DFP (Revision 1d24c09) },
      year = 2023,
      url = { https://huggingface.co/datasets/CATIE-AQ/DFP },
      doi = { 10.57967/hf/1200 },
      publisher = { Hugging Face }
}
```
### License
Unknown |
Minglii/v_4096 | ---
dataset_info:
features:
- name: data
struct:
- name: conversations
list:
- name: from
dtype: string
- name: markdown
struct:
- name: answer
dtype: string
- name: index
dtype: int64
- name: type
dtype: string
- name: text
dtype: string
- name: value
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 685122486
num_examples: 80129
download_size: 278043744
dataset_size: 685122486
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "v_4096"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/hana_shirosaki_watashinitenshigamaiorita | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of Hana Shirosaki
This is the dataset of Hana Shirosaki, containing 567 images and their tags.
Images are crawled from many sites (e.g. Danbooru, Pixiv, Zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 567 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 1276 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 1407 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 567 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 567 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 567 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 1276 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 1276 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 974 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 1407 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 1407 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
|
CyberHarem/midori_bluearchive | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of midori/才羽ミドリ/绿 (Blue Archive)
This is the dataset of midori/才羽ミドリ/绿 (Blue Archive), containing 500 images and their tags.
The core tags of this character are `blonde_hair, animal_ears, fake_animal_ears, animal_ear_headphones, headphones, bow, short_hair, cat_ear_headphones, halo, green_eyes, hair_bow, tail, green_halo, cat_tail, green_bow`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 697.07 MiB | [Download](https://huggingface.co/datasets/CyberHarem/midori_bluearchive/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 500 | 604.83 MiB | [Download](https://huggingface.co/datasets/CyberHarem/midori_bluearchive/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1272 | 1.24 GiB | [Download](https://huggingface.co/datasets/CyberHarem/midori_bluearchive/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. To use it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/midori_bluearchive',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
Tag clustering results are listed below; some of the character's outfits may be identifiable from them.
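As one way to use these clusters, here is a minimal sketch that filters waifuc items by a cluster's tag set. It assumes each item's `meta['tags']` maps tag names to scores, as suggested by the loading example earlier in this card; the example tag names below are copied from cluster #0 and are illustrative only.

```python
# Keep only items whose tags contain every tag from a chosen cluster.
# `item_tags` may be a dict (tag -> score) or any iterable of tag names.

def matches_cluster(item_tags, required):
    """True if every required cluster tag appears among the item's tags."""
    return set(required).issubset(set(item_tags))

# Usage (after building `source` as in the waifuc snippet above):
# maid_tags = {"maid_headdress", "maid_apron", "black_dress"}
# maid_items = [it for it in source if matches_cluster(it.meta["tags"], maid_tags)]
```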
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 15 |  |  |  |  |  | 1girl, black_dress, blush, looking_at_viewer, maid_apron, maid_headdress, official_alternate_costume, simple_background, solo, white_apron, white_background, long_sleeves, frilled_apron, white_pantyhose, blue_bow, closed_mouth, frilled_dress, twintails, puffy_sleeves, holding |
| 1 | 7 |  |  |  |  |  | 1girl, blue_necktie, blush, collared_shirt, looking_at_viewer, simple_background, solo, white_background, white_shirt, upper_body, closed_mouth, smile, portrait, white_jacket |
| 2 | 8 |  |  |  |  |  | 1girl, black_thighhighs, blue_necktie, collared_shirt, long_sleeves, looking_at_viewer, simple_background, solo, white_background, white_jacket, white_shirt, black_shorts, blush, black_footwear, full_body, open_jacket, sitting, closed_mouth, hood, open_mouth, wide_sleeves |
| 3 | 19 |  |  |  |  |  | blue_necktie, collared_shirt, sisters, white_shirt, 2girls, twins, white_jacket, long_sleeves, blush, looking_at_viewer, solo_focus, black_thighhighs, black_shorts, simple_background, white_background, closed_mouth, open_jacket, wide_sleeves, red_bow, sitting, smile, upper_body |
| 4 | 6 |  |  |  |  |  | 2girls, black_skirt, blue_necktie, collared_shirt, long_sleeves, sisters, white_jacket, white_shirt, wide_sleeves, black_thighhighs, open_clothes, pleated_skirt, twins, blush, closed_mouth, solo_focus |
| 5 | 7 |  |  |  |  |  | 1girl, blush, completely_nude, looking_at_viewer, nipples, solo, navel, small_breasts, smile, white_background, blue_bow, cleft_of_venus, collarbone, loli, pussy, simple_background, uncensored, closed_mouth, flat_chest, sweat |
| 6 | 6 |  |  |  |  |  | 1boy, blush, hetero, penis, completely_nude, loli, mosaic_censoring, small_breasts, nipples, twins, 1girl, 2girls, blue_bow, closed_mouth, looking_at_viewer, navel, sisters, smile |
| 7 | 7 |  |  |  |  |  | 1boy, 1girl, blush, hetero, loli, penis, sex, vaginal, navel, necktie, nipples, pov, small_breasts, solo_focus, jacket, looking_at_viewer, cowgirl_position, thighhighs, bar_censor, mosaic_censoring, open_mouth, pussy |
| 8 | 5 |  |  |  |  |  | 1girl, blush, looking_at_viewer, micro_bikini, solo, navel, collarbone, on_back, open_mouth, small_breasts, smile, stomach, white_bikini, alternate_costume, bed_sheet, closed_mouth, flat_chest, side-tie_bikini_bottom |
| 9 | 5 |  |  |  |  |  | 1girl, alternate_costume, blush, closed_mouth, green_kimono, long_sleeves, solo, wide_sleeves, obi, simple_background, white_background, blue_bow, looking_at_viewer, smile, brown_footwear, hair_flower, holding, print_kimono, thighhighs, upper_body, white_flower |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | black_dress | blush | looking_at_viewer | maid_apron | maid_headdress | official_alternate_costume | simple_background | solo | white_apron | white_background | long_sleeves | frilled_apron | white_pantyhose | blue_bow | closed_mouth | frilled_dress | twintails | puffy_sleeves | holding | blue_necktie | collared_shirt | white_shirt | upper_body | smile | portrait | white_jacket | black_thighhighs | black_shorts | black_footwear | full_body | open_jacket | sitting | hood | open_mouth | wide_sleeves | sisters | 2girls | twins | solo_focus | red_bow | black_skirt | open_clothes | pleated_skirt | completely_nude | nipples | navel | small_breasts | cleft_of_venus | collarbone | loli | pussy | uncensored | flat_chest | sweat | 1boy | hetero | penis | mosaic_censoring | sex | vaginal | necktie | pov | jacket | cowgirl_position | thighhighs | bar_censor | micro_bikini | on_back | stomach | white_bikini | alternate_costume | bed_sheet | side-tie_bikini_bottom | green_kimono | obi | brown_footwear | hair_flower | print_kimono | white_flower |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------|:--------|:--------------------|:-------------|:-----------------|:-----------------------------|:--------------------|:-------|:--------------|:-------------------|:---------------|:----------------|:------------------|:-----------|:---------------|:----------------|:------------|:----------------|:----------|:---------------|:-----------------|:--------------|:-------------|:--------|:-----------|:---------------|:-------------------|:---------------|:-----------------|:------------|:--------------|:----------|:-------|:-------------|:---------------|:----------|:---------|:--------|:-------------|:----------|:--------------|:---------------|:----------------|:------------------|:----------|:--------|:----------------|:-----------------|:-------------|:-------|:--------|:-------------|:-------------|:--------|:-------|:---------|:--------|:-------------------|:------|:----------|:----------|:------|:---------|:-------------------|:-------------|:-------------|:---------------|:----------|:----------|:---------------|:--------------------|:------------|:-------------------------|:---------------|:------|:-----------------|:--------------|:---------------|:---------------|
| 0 | 15 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 7 |  |  |  |  |  | X | | X | X | | | | X | X | | X | | | | | X | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 8 |  |  |  |  |  | X | | X | X | | | | X | X | | X | X | | | | X | | | | | X | X | X | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 19 |  |  |  |  |  | | | X | X | | | | X | | | X | X | | | | X | | | | | X | X | X | X | X | | X | X | X | | | X | X | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 6 |  |  |  |  |  | | | X | | | | | | | | | X | | | | X | | | | | X | X | X | | | | X | X | | | | | | | | X | X | X | X | X | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 7 |  |  |  |  |  | X | | X | X | | | | X | X | | X | | | | X | X | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 6 |  |  |  |  |  | X | | X | X | | | | | | | | | | | X | X | | | | | | | | | X | | | | | | | | | | | | X | X | X | | | | | | X | X | X | X | | | X | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | |
| 7 | 7 |  |  |  |  |  | X | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | X | | | | | | X | X | X | | | X | X | | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | |
| 8 | 5 |  |  |  |  |  | X | | X | X | | | | | X | | | | | | | X | | | | | | | | | X | | | | | | | | | | X | | | | | | | | | | | | X | X | | X | | | | X | | | | | | | | | | | | | | X | X | X | X | X | X | X | | | | | | |
| 9 | 5 |  |  |  |  |  | X | | X | X | | | | X | X | | X | X | | | X | X | | | | X | | | | X | X | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | X | | | X | X | X | X | X | X |
|
AdapterOcean/oasst_top1_standardized_embedded | ---
dataset_info:
features:
- name: text
dtype: string
- name: conversation_id
dtype: int64
- name: embedding
sequence: float32
splits:
- name: train
num_bytes: 75215190
num_examples: 12946
download_size: 39089096
dataset_size: 75215190
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "oasst_top1_standardized_embedded"
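Since each row carries an `embedding` column (a float32 sequence, per the metadata above), a minimal sketch for comparing rows by cosine similarity follows. The Hub access in the trailing comments requires network and is shown only as a usage hint.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity of two 1-D vectors (assumes neither is all-zero)."""
    a = np.asarray(a, dtype=np.float32)
    b = np.asarray(b, dtype=np.float32)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Usage with the Hub (requires network):
# from datasets import load_dataset
# ds = load_dataset("AdapterOcean/oasst_top1_standardized_embedded", split="train")
# sim = cosine_similarity(ds[0]["embedding"], ds[1]["embedding"])
```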
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_kwchoi__DPO_mistral_7b_ultra_0124_v1 | ---
pretty_name: Evaluation run of kwchoi/DPO_mistral_7b_ultra_0124_v1
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [kwchoi/DPO_mistral_7b_ultra_0124_v1](https://huggingface.co/kwchoi/DPO_mistral_7b_ultra_0124_v1)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_kwchoi__DPO_mistral_7b_ultra_0124_v1\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-01-25T05:49:50.348304](https://huggingface.co/datasets/open-llm-leaderboard/details_kwchoi__DPO_mistral_7b_ultra_0124_v1/blob/main/results_2024-01-25T05-49-50.348304.json) (note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5976042024797005,\n\
\ \"acc_stderr\": 0.03345462257965717,\n \"acc_norm\": 0.6034041935061322,\n\
\ \"acc_norm_stderr\": 0.03416858616200466,\n \"mc1\": 0.5507955936352509,\n\
\ \"mc1_stderr\": 0.017412941986115295,\n \"mc2\": 0.694525955019443,\n\
\ \"mc2_stderr\": 0.015330113605051526\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6305460750853242,\n \"acc_stderr\": 0.014104578366491888,\n\
\ \"acc_norm\": 0.6612627986348123,\n \"acc_norm_stderr\": 0.01383056892797433\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6980681139215296,\n\
\ \"acc_stderr\": 0.004581576124179742,\n \"acc_norm\": 0.8638717386974706,\n\
\ \"acc_norm_stderr\": 0.0034222387022263714\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \
\ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5555555555555556,\n\
\ \"acc_stderr\": 0.04292596718256981,\n \"acc_norm\": 0.5555555555555556,\n\
\ \"acc_norm_stderr\": 0.04292596718256981\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.618421052631579,\n \"acc_stderr\": 0.03953173377749194,\n\
\ \"acc_norm\": 0.618421052631579,\n \"acc_norm_stderr\": 0.03953173377749194\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.58,\n\
\ \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.58,\n \
\ \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6641509433962264,\n \"acc_stderr\": 0.029067220146644823,\n\
\ \"acc_norm\": 0.6641509433962264,\n \"acc_norm_stderr\": 0.029067220146644823\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6527777777777778,\n\
\ \"acc_stderr\": 0.039812405437178615,\n \"acc_norm\": 0.6527777777777778,\n\
\ \"acc_norm_stderr\": 0.039812405437178615\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620333,\n \
\ \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620333\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.53,\n \"acc_stderr\": 0.050161355804659205,\n \"acc_norm\": 0.53,\n\
\ \"acc_norm_stderr\": 0.050161355804659205\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.39,\n \"acc_stderr\": 0.04902071300001975,\n \
\ \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.04902071300001975\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5838150289017341,\n\
\ \"acc_stderr\": 0.03758517775404947,\n \"acc_norm\": 0.5838150289017341,\n\
\ \"acc_norm_stderr\": 0.03758517775404947\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.49019607843137253,\n \"acc_stderr\": 0.04974229460422817,\n\
\ \"acc_norm\": 0.49019607843137253,\n \"acc_norm_stderr\": 0.04974229460422817\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.66,\n \"acc_stderr\": 0.04760952285695237,\n \"acc_norm\": 0.66,\n\
\ \"acc_norm_stderr\": 0.04760952285695237\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5234042553191489,\n \"acc_stderr\": 0.032650194750335815,\n\
\ \"acc_norm\": 0.5234042553191489,\n \"acc_norm_stderr\": 0.032650194750335815\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.42105263157894735,\n\
\ \"acc_stderr\": 0.046446020912223177,\n \"acc_norm\": 0.42105263157894735,\n\
\ \"acc_norm_stderr\": 0.046446020912223177\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5241379310344828,\n \"acc_stderr\": 0.0416180850350153,\n\
\ \"acc_norm\": 0.5241379310344828,\n \"acc_norm_stderr\": 0.0416180850350153\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.4021164021164021,\n \"acc_stderr\": 0.025253032554997692,\n \"\
acc_norm\": 0.4021164021164021,\n \"acc_norm_stderr\": 0.025253032554997692\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.38095238095238093,\n\
\ \"acc_stderr\": 0.04343525428949098,\n \"acc_norm\": 0.38095238095238093,\n\
\ \"acc_norm_stderr\": 0.04343525428949098\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252605,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252605\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.6903225806451613,\n\
\ \"acc_stderr\": 0.026302774983517414,\n \"acc_norm\": 0.6903225806451613,\n\
\ \"acc_norm_stderr\": 0.026302774983517414\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.4630541871921182,\n \"acc_stderr\": 0.035083705204426656,\n\
\ \"acc_norm\": 0.4630541871921182,\n \"acc_norm_stderr\": 0.035083705204426656\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.61,\n \"acc_stderr\": 0.04902071300001974,\n \"acc_norm\"\
: 0.61,\n \"acc_norm_stderr\": 0.04902071300001974\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7393939393939394,\n \"acc_stderr\": 0.034277431758165236,\n\
\ \"acc_norm\": 0.7393939393939394,\n \"acc_norm_stderr\": 0.034277431758165236\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7373737373737373,\n \"acc_stderr\": 0.03135305009533084,\n \"\
acc_norm\": 0.7373737373737373,\n \"acc_norm_stderr\": 0.03135305009533084\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8290155440414507,\n \"acc_stderr\": 0.02717121368316453,\n\
\ \"acc_norm\": 0.8290155440414507,\n \"acc_norm_stderr\": 0.02717121368316453\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.5820512820512821,\n \"acc_stderr\": 0.025007329882461217,\n\
\ \"acc_norm\": 0.5820512820512821,\n \"acc_norm_stderr\": 0.025007329882461217\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.31851851851851853,\n \"acc_stderr\": 0.02840653309060846,\n \
\ \"acc_norm\": 0.31851851851851853,\n \"acc_norm_stderr\": 0.02840653309060846\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6050420168067226,\n \"acc_stderr\": 0.03175367846096626,\n \
\ \"acc_norm\": 0.6050420168067226,\n \"acc_norm_stderr\": 0.03175367846096626\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3443708609271523,\n \"acc_stderr\": 0.038796870240733264,\n \"\
acc_norm\": 0.3443708609271523,\n \"acc_norm_stderr\": 0.038796870240733264\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.7798165137614679,\n \"acc_stderr\": 0.01776597865232753,\n \"\
acc_norm\": 0.7798165137614679,\n \"acc_norm_stderr\": 0.01776597865232753\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.4583333333333333,\n \"acc_stderr\": 0.03398110890294636,\n \"\
acc_norm\": 0.4583333333333333,\n \"acc_norm_stderr\": 0.03398110890294636\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.7696078431372549,\n \"acc_stderr\": 0.029554292605695066,\n \"\
acc_norm\": 0.7696078431372549,\n \"acc_norm_stderr\": 0.029554292605695066\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7468354430379747,\n \"acc_stderr\": 0.028304657943035307,\n \
\ \"acc_norm\": 0.7468354430379747,\n \"acc_norm_stderr\": 0.028304657943035307\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6098654708520179,\n\
\ \"acc_stderr\": 0.03273766725459156,\n \"acc_norm\": 0.6098654708520179,\n\
\ \"acc_norm_stderr\": 0.03273766725459156\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7022900763358778,\n \"acc_stderr\": 0.04010358942462203,\n\
\ \"acc_norm\": 0.7022900763358778,\n \"acc_norm_stderr\": 0.04010358942462203\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7851239669421488,\n \"acc_stderr\": 0.03749492448709695,\n \"\
acc_norm\": 0.7851239669421488,\n \"acc_norm_stderr\": 0.03749492448709695\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7222222222222222,\n\
\ \"acc_stderr\": 0.043300437496507416,\n \"acc_norm\": 0.7222222222222222,\n\
\ \"acc_norm_stderr\": 0.043300437496507416\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7423312883435583,\n \"acc_stderr\": 0.03436150827846917,\n\
\ \"acc_norm\": 0.7423312883435583,\n \"acc_norm_stderr\": 0.03436150827846917\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.48214285714285715,\n\
\ \"acc_stderr\": 0.047427623612430116,\n \"acc_norm\": 0.48214285714285715,\n\
\ \"acc_norm_stderr\": 0.047427623612430116\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7281553398058253,\n \"acc_stderr\": 0.044052680241409216,\n\
\ \"acc_norm\": 0.7281553398058253,\n \"acc_norm_stderr\": 0.044052680241409216\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8547008547008547,\n\
\ \"acc_stderr\": 0.02308663508684141,\n \"acc_norm\": 0.8547008547008547,\n\
\ \"acc_norm_stderr\": 0.02308663508684141\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.65,\n \"acc_stderr\": 0.0479372485441102,\n \
\ \"acc_norm\": 0.65,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n\
\ \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7484035759897829,\n\
\ \"acc_stderr\": 0.015517322365529636,\n \"acc_norm\": 0.7484035759897829,\n\
\ \"acc_norm_stderr\": 0.015517322365529636\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.6705202312138728,\n \"acc_stderr\": 0.025305258131879702,\n\
\ \"acc_norm\": 0.6705202312138728,\n \"acc_norm_stderr\": 0.025305258131879702\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.36089385474860336,\n\
\ \"acc_stderr\": 0.016062290671110473,\n \"acc_norm\": 0.36089385474860336,\n\
\ \"acc_norm_stderr\": 0.016062290671110473\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.6535947712418301,\n \"acc_stderr\": 0.02724561304721536,\n\
\ \"acc_norm\": 0.6535947712418301,\n \"acc_norm_stderr\": 0.02724561304721536\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6720257234726688,\n\
\ \"acc_stderr\": 0.02666441088693761,\n \"acc_norm\": 0.6720257234726688,\n\
\ \"acc_norm_stderr\": 0.02666441088693761\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.6574074074074074,\n \"acc_stderr\": 0.02640614597362568,\n\
\ \"acc_norm\": 0.6574074074074074,\n \"acc_norm_stderr\": 0.02640614597362568\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.41843971631205673,\n \"acc_stderr\": 0.02942799403941999,\n \
\ \"acc_norm\": 0.41843971631205673,\n \"acc_norm_stderr\": 0.02942799403941999\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.41590612777053454,\n\
\ \"acc_stderr\": 0.012588323850313629,\n \"acc_norm\": 0.41590612777053454,\n\
\ \"acc_norm_stderr\": 0.012588323850313629\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.5882352941176471,\n \"acc_stderr\": 0.029896163033125474,\n\
\ \"acc_norm\": 0.5882352941176471,\n \"acc_norm_stderr\": 0.029896163033125474\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6013071895424836,\n \"acc_stderr\": 0.019808281317449848,\n \
\ \"acc_norm\": 0.6013071895424836,\n \"acc_norm_stderr\": 0.019808281317449848\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6727272727272727,\n\
\ \"acc_stderr\": 0.04494290866252091,\n \"acc_norm\": 0.6727272727272727,\n\
\ \"acc_norm_stderr\": 0.04494290866252091\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.6653061224489796,\n \"acc_stderr\": 0.030209235226242307,\n\
\ \"acc_norm\": 0.6653061224489796,\n \"acc_norm_stderr\": 0.030209235226242307\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7960199004975125,\n\
\ \"acc_stderr\": 0.02849317624532607,\n \"acc_norm\": 0.7960199004975125,\n\
\ \"acc_norm_stderr\": 0.02849317624532607\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.82,\n \"acc_stderr\": 0.038612291966536934,\n \
\ \"acc_norm\": 0.82,\n \"acc_norm_stderr\": 0.038612291966536934\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5,\n \
\ \"acc_stderr\": 0.03892494720807614,\n \"acc_norm\": 0.5,\n \
\ \"acc_norm_stderr\": 0.03892494720807614\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.7777777777777778,\n \"acc_stderr\": 0.031885780176863984,\n\
\ \"acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.031885780176863984\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5507955936352509,\n\
\ \"mc1_stderr\": 0.017412941986115295,\n \"mc2\": 0.694525955019443,\n\
\ \"mc2_stderr\": 0.015330113605051526\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7947908445146015,\n \"acc_stderr\": 0.011350315707462059\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.25473843821076575,\n \
\ \"acc_stderr\": 0.012001731232879127\n }\n}\n```"
repo_url: https://huggingface.co/kwchoi/DPO_mistral_7b_ultra_0124_v1
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|arc:challenge|25_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|gsm8k|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hellaswag|10_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-25T05-49-50.348304.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-25T05-49-50.348304.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- '**/details_harness|winogrande|5_2024-01-25T05-49-50.348304.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-01-25T05-49-50.348304.parquet'
- config_name: results
data_files:
- split: 2024_01_25T05_49_50.348304
path:
- results_2024-01-25T05-49-50.348304.parquet
- split: latest
path:
- results_2024-01-25T05-49-50.348304.parquet
---
# Dataset Card for Evaluation run of kwchoi/DPO_mistral_7b_ultra_0124_v1
Dataset automatically created during the evaluation run of model [kwchoi/DPO_mistral_7b_ultra_0124_v1](https://huggingface.co/kwchoi/DPO_mistral_7b_ultra_0124_v1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_kwchoi__DPO_mistral_7b_ultra_0124_v1",
"harness_winogrande_5",
split="train")
```
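The config names used above follow a simple convention derived from the harness task ids (e.g. `harness|truthfulqa:mc|0` becomes `harness_truthfulqa_mc_0`). A minimal sketch of that mapping, assuming only the pattern visible in this card (the helper name is illustrative, not part of the leaderboard tooling):

```python
def task_to_config_name(task_id: str) -> str:
    """Map a harness task id to the dataset config name used in this card.

    Illustrative helper: replaces the '|' separators and the ':'/'-'
    characters inside the task name with underscores, matching the
    pattern visible in the config list above.
    """
    prefix, task, num_fewshot = task_id.split("|")
    task = task.replace(":", "_").replace("-", "_")
    return f"{prefix}_{task}_{num_fewshot}"


# For example, task_to_config_name("harness|hendrycksTest-anatomy|5")
# yields "harness_hendrycksTest_anatomy_5", which can be passed as the
# second argument to load_dataset as shown above.
```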
## Latest results
These are the [latest results from run 2024-01-25T05:49:50.348304](https://huggingface.co/datasets/open-llm-leaderboard/details_kwchoi__DPO_mistral_7b_ultra_0124_v1/blob/main/results_2024-01-25T05-49-50.348304.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.5976042024797005,
"acc_stderr": 0.03345462257965717,
"acc_norm": 0.6034041935061322,
"acc_norm_stderr": 0.03416858616200466,
"mc1": 0.5507955936352509,
"mc1_stderr": 0.017412941986115295,
"mc2": 0.694525955019443,
"mc2_stderr": 0.015330113605051526
},
"harness|arc:challenge|25": {
"acc": 0.6305460750853242,
"acc_stderr": 0.014104578366491888,
"acc_norm": 0.6612627986348123,
"acc_norm_stderr": 0.01383056892797433
},
"harness|hellaswag|10": {
"acc": 0.6980681139215296,
"acc_stderr": 0.004581576124179742,
"acc_norm": 0.8638717386974706,
"acc_norm_stderr": 0.0034222387022263714
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5555555555555556,
"acc_stderr": 0.04292596718256981,
"acc_norm": 0.5555555555555556,
"acc_norm_stderr": 0.04292596718256981
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.618421052631579,
"acc_stderr": 0.03953173377749194,
"acc_norm": 0.618421052631579,
"acc_norm_stderr": 0.03953173377749194
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.58,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.58,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6641509433962264,
"acc_stderr": 0.029067220146644823,
"acc_norm": 0.6641509433962264,
"acc_norm_stderr": 0.029067220146644823
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.6527777777777778,
"acc_stderr": 0.039812405437178615,
"acc_norm": 0.6527777777777778,
"acc_norm_stderr": 0.039812405437178615
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620333,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620333
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.53,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.53,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.39,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.39,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.5838150289017341,
"acc_stderr": 0.03758517775404947,
"acc_norm": 0.5838150289017341,
"acc_norm_stderr": 0.03758517775404947
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.49019607843137253,
"acc_stderr": 0.04974229460422817,
"acc_norm": 0.49019607843137253,
"acc_norm_stderr": 0.04974229460422817
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.66,
"acc_stderr": 0.04760952285695237,
"acc_norm": 0.66,
"acc_norm_stderr": 0.04760952285695237
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5234042553191489,
"acc_stderr": 0.032650194750335815,
"acc_norm": 0.5234042553191489,
"acc_norm_stderr": 0.032650194750335815
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.42105263157894735,
"acc_stderr": 0.046446020912223177,
"acc_norm": 0.42105263157894735,
"acc_norm_stderr": 0.046446020912223177
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5241379310344828,
"acc_stderr": 0.0416180850350153,
"acc_norm": 0.5241379310344828,
"acc_norm_stderr": 0.0416180850350153
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4021164021164021,
"acc_stderr": 0.025253032554997692,
"acc_norm": 0.4021164021164021,
"acc_norm_stderr": 0.025253032554997692
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.38095238095238093,
"acc_stderr": 0.04343525428949098,
"acc_norm": 0.38095238095238093,
"acc_norm_stderr": 0.04343525428949098
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252605,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252605
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.6903225806451613,
"acc_stderr": 0.026302774983517414,
"acc_norm": 0.6903225806451613,
"acc_norm_stderr": 0.026302774983517414
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4630541871921182,
"acc_stderr": 0.035083705204426656,
"acc_norm": 0.4630541871921182,
"acc_norm_stderr": 0.035083705204426656
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.61,
"acc_stderr": 0.04902071300001974,
"acc_norm": 0.61,
"acc_norm_stderr": 0.04902071300001974
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7393939393939394,
"acc_stderr": 0.034277431758165236,
"acc_norm": 0.7393939393939394,
"acc_norm_stderr": 0.034277431758165236
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7373737373737373,
"acc_stderr": 0.03135305009533084,
"acc_norm": 0.7373737373737373,
"acc_norm_stderr": 0.03135305009533084
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8290155440414507,
"acc_stderr": 0.02717121368316453,
"acc_norm": 0.8290155440414507,
"acc_norm_stderr": 0.02717121368316453
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.5820512820512821,
"acc_stderr": 0.025007329882461217,
"acc_norm": 0.5820512820512821,
"acc_norm_stderr": 0.025007329882461217
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.31851851851851853,
"acc_stderr": 0.02840653309060846,
"acc_norm": 0.31851851851851853,
"acc_norm_stderr": 0.02840653309060846
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6050420168067226,
"acc_stderr": 0.03175367846096626,
"acc_norm": 0.6050420168067226,
"acc_norm_stderr": 0.03175367846096626
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3443708609271523,
"acc_stderr": 0.038796870240733264,
"acc_norm": 0.3443708609271523,
"acc_norm_stderr": 0.038796870240733264
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7798165137614679,
"acc_stderr": 0.01776597865232753,
"acc_norm": 0.7798165137614679,
"acc_norm_stderr": 0.01776597865232753
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4583333333333333,
"acc_stderr": 0.03398110890294636,
"acc_norm": 0.4583333333333333,
"acc_norm_stderr": 0.03398110890294636
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7696078431372549,
"acc_stderr": 0.029554292605695066,
"acc_norm": 0.7696078431372549,
"acc_norm_stderr": 0.029554292605695066
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7468354430379747,
"acc_stderr": 0.028304657943035307,
"acc_norm": 0.7468354430379747,
"acc_norm_stderr": 0.028304657943035307
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6098654708520179,
"acc_stderr": 0.03273766725459156,
"acc_norm": 0.6098654708520179,
"acc_norm_stderr": 0.03273766725459156
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7022900763358778,
"acc_stderr": 0.04010358942462203,
"acc_norm": 0.7022900763358778,
"acc_norm_stderr": 0.04010358942462203
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7851239669421488,
"acc_stderr": 0.03749492448709695,
"acc_norm": 0.7851239669421488,
"acc_norm_stderr": 0.03749492448709695
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7222222222222222,
"acc_stderr": 0.043300437496507416,
"acc_norm": 0.7222222222222222,
"acc_norm_stderr": 0.043300437496507416
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7423312883435583,
"acc_stderr": 0.03436150827846917,
"acc_norm": 0.7423312883435583,
"acc_norm_stderr": 0.03436150827846917
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.48214285714285715,
"acc_stderr": 0.047427623612430116,
"acc_norm": 0.48214285714285715,
"acc_norm_stderr": 0.047427623612430116
},
"harness|hendrycksTest-management|5": {
"acc": 0.7281553398058253,
"acc_stderr": 0.044052680241409216,
"acc_norm": 0.7281553398058253,
"acc_norm_stderr": 0.044052680241409216
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8547008547008547,
"acc_stderr": 0.02308663508684141,
"acc_norm": 0.8547008547008547,
"acc_norm_stderr": 0.02308663508684141
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.65,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.65,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7484035759897829,
"acc_stderr": 0.015517322365529636,
"acc_norm": 0.7484035759897829,
"acc_norm_stderr": 0.015517322365529636
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6705202312138728,
"acc_stderr": 0.025305258131879702,
"acc_norm": 0.6705202312138728,
"acc_norm_stderr": 0.025305258131879702
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.36089385474860336,
"acc_stderr": 0.016062290671110473,
"acc_norm": 0.36089385474860336,
"acc_norm_stderr": 0.016062290671110473
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6535947712418301,
"acc_stderr": 0.02724561304721536,
"acc_norm": 0.6535947712418301,
"acc_norm_stderr": 0.02724561304721536
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6720257234726688,
"acc_stderr": 0.02666441088693761,
"acc_norm": 0.6720257234726688,
"acc_norm_stderr": 0.02666441088693761
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6574074074074074,
"acc_stderr": 0.02640614597362568,
"acc_norm": 0.6574074074074074,
"acc_norm_stderr": 0.02640614597362568
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.41843971631205673,
"acc_stderr": 0.02942799403941999,
"acc_norm": 0.41843971631205673,
"acc_norm_stderr": 0.02942799403941999
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.41590612777053454,
"acc_stderr": 0.012588323850313629,
"acc_norm": 0.41590612777053454,
"acc_norm_stderr": 0.012588323850313629
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.5882352941176471,
"acc_stderr": 0.029896163033125474,
"acc_norm": 0.5882352941176471,
"acc_norm_stderr": 0.029896163033125474
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6013071895424836,
"acc_stderr": 0.019808281317449848,
"acc_norm": 0.6013071895424836,
"acc_norm_stderr": 0.019808281317449848
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6727272727272727,
"acc_stderr": 0.04494290866252091,
"acc_norm": 0.6727272727272727,
"acc_norm_stderr": 0.04494290866252091
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.6653061224489796,
"acc_stderr": 0.030209235226242307,
"acc_norm": 0.6653061224489796,
"acc_norm_stderr": 0.030209235226242307
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.7960199004975125,
"acc_stderr": 0.02849317624532607,
"acc_norm": 0.7960199004975125,
"acc_norm_stderr": 0.02849317624532607
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.82,
"acc_stderr": 0.038612291966536934,
"acc_norm": 0.82,
"acc_norm_stderr": 0.038612291966536934
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5,
"acc_stderr": 0.03892494720807614,
"acc_norm": 0.5,
"acc_norm_stderr": 0.03892494720807614
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.031885780176863984,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.031885780176863984
},
"harness|truthfulqa:mc|0": {
"mc1": 0.5507955936352509,
"mc1_stderr": 0.017412941986115295,
"mc2": 0.694525955019443,
"mc2_stderr": 0.015330113605051526
},
"harness|winogrande|5": {
"acc": 0.7947908445146015,
"acc_stderr": 0.011350315707462059
},
"harness|gsm8k|5": {
"acc": 0.25473843821076575,
"acc_stderr": 0.012001731232879127
}
}
```
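The per-subject `harness|hendrycksTest-*|5` accuracies above can be aggregated into a single MMLU-style average with a few lines. The sketch below is illustrative only: it reuses a small subset of the values shown, while the leaderboard averages all 57 subjects.

```python
# Sketch: average the "acc" metric over the MMLU (hendrycksTest) sub-tasks.
# The keys and values below are a small subset of the results shown above.
results = {
    "harness|hendrycksTest-global_facts|5": {"acc": 0.33},
    "harness|hendrycksTest-high_school_biology|5": {"acc": 0.6903225806451613},
    "harness|hendrycksTest-marketing|5": {"acc": 0.8547008547008547},
    "harness|truthfulqa:mc|0": {"mc1": 0.5507955936352509},  # not an MMLU task
}

mmlu_accs = [
    metrics["acc"]
    for task, metrics in results.items()
    if task.startswith("harness|hendrycksTest-")
]
mmlu_avg = sum(mmlu_accs) / len(mmlu_accs)
print(round(mmlu_avg, 4))
```

The same pattern works on the full `results` dictionary loaded from the JSON file linked above.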
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
ysn-rfd/tiny-dataset-ysnrfd | ---
license: mit
---
|
autoevaluate/autoeval-staging-eval-project-squad-47db8743-11885591 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- squad
eval_info:
task: extractive_question_answering
model: Graphcore/roberta-base-squad
metrics: []
dataset_name: squad
dataset_config: plain_text
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: Graphcore/roberta-base-squad
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Narayana](https://huggingface.co/Narayana) for evaluating this model. |
mstz/dexter | ---
language:
- en
tags:
- dexter
- tabular_classification
- binary_classification
- UCI
pretty_name: Dexter
task_categories: # Full list at https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts
- tabular-classification
configs:
- dexter
---
# Dexter
The [Dexter dataset](https://archive-beta.ics.uci.edu/dataset/168/dexter) from the [UCI repository](https://archive-beta.ics.uci.edu/).
# Configurations and tasks
| **Configuration** | **Task** |
|-----------------------|---------------------------|
| dexter | Binary classification.|
|
pfin123/hindi-aggregated | ---
license: apache-2.0
---
|
Deathspike/strike-witches-501st | ---
license: cc-by-nc-sa-4.0
---
|
xiemoxiaoshaso/ceshi | ---
license: openrail
---
|
CJWeiss/LGZ_multitiny | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input
dtype: string
- name: output
dtype: string
- name: cluster
dtype: string
- name: old_id
dtype: int64
- name: length
dtype: int64
splits:
- name: train
num_bytes: 40320866
num_examples: 50
download_size: 17999950
dataset_size: 40320866
---
# Dataset Card for "LGZ_multitiny"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Cubpaw/voxelgym_5c_42x42_500 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: image
- name: rgb_label
dtype: image
- name: path_label
dtype: image
- name: path_rgb_label
dtype: image
splits:
- name: train
num_bytes: 373246.0
num_examples: 400
- name: validation
num_bytes: 92510.0
num_examples: 100
download_size: 403202
dataset_size: 465756.0
---
# Dataset Card for "voxelgym_5c_42x42_500"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
fengtc/school_math | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 120529392
num_examples: 248481
download_size: 61762166
dataset_size: 120529392
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jarrydmartinx/recid | ---
dataset_info:
features:
- name: black
dtype: int64
- name: alcohol
dtype: int64
- name: drugs
dtype: int64
- name: super
dtype: int64
- name: married
dtype: int64
- name: felon
dtype: int64
- name: workprg
dtype: int64
- name: property
dtype: int64
- name: person
dtype: int64
- name: priors
dtype: int64
- name: educ
dtype: int64
- name: rules
dtype: int64
- name: age
dtype: int64
- name: tserved
dtype: int64
- name: follow
dtype: int64
- name: event_time
dtype: int64
- name: event_indicator
dtype: int64
splits:
- name: train
num_bytes: 196520
num_examples: 1445
download_size: 27921
dataset_size: 196520
---
# Dataset Card for "recid"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Arist12/EABF-ShareGPT-Long-3.5k | ---
license: mit
---
# 3.5k lengthy ShareGPT conversations used to train [EABF Models](https://github.com/GAIR-NLP/Entropy-ABF)
Following the data cleaning pipeline in [FastChat](https://github.com/lm-sys/FastChat), we processed [raw ShareGPT conversations](https://huggingface.co/datasets/philschmid/sharegpt-raw) by keeping English conversations only, excluding those with fewer than 10,000 tokens, and splitting long conversations that exceed 16,384 tokens.
We find multi-round long conversations efficient for extending LLMs' context window.
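A minimal sketch of the filtering-and-splitting logic described above. This is not the actual pipeline: the real one uses FastChat's cleaning tools and a model tokenizer, while here a whitespace token count and small demo thresholds stand in for both.

```python
# Sketch of the cleaning steps: keep long conversations, split those that
# exceed a maximum token budget. Whitespace counting stands in for a tokenizer.
def n_tokens(conv):
    return sum(len(turn["value"].split()) for turn in conv["conversations"])

def split_long(conv, max_tokens):
    """Greedily chunk a conversation's turns so each chunk fits the budget."""
    chunks, current, used = [], [], 0
    for turn in conv["conversations"]:
        cost = len(turn["value"].split())
        if current and used + cost > max_tokens:
            chunks.append({"id": conv["id"], "conversations": current})
            current, used = [], 0
        current.append(turn)
        used += cost
    if current:
        chunks.append({"id": conv["id"], "conversations": current})
    return chunks

def clean(convs, min_tokens, max_tokens):
    kept = [c for c in convs if n_tokens(c) >= min_tokens]  # drop short ones
    return [chunk for c in kept for chunk in split_long(c, max_tokens)]

# Demo with toy thresholds; the real run would use min_tokens=10000,
# max_tokens=16384 to match the numbers quoted above.
demo = clean(
    [
        {"id": "short", "conversations": [{"from": "human", "value": "hi"}]},
        {"id": "long", "conversations": [
            {"from": "human", "value": "a b c d"},
            {"from": "gpt", "value": "e f g h"},
        ]},
    ],
    min_tokens=3,
    max_tokens=5,
)
```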
# Dataset Overview
Our released dataset follows the conventional ShareGPT multi-round conversation JSON format:
- **id**: The unique identifier for each conversation in the dataset.
- **model**: The model used for generating the response. (Can be left empty if not applicable)
- **conversations**: Object containing the dialogue between human and AI assistants.
- **from**: Indicates whether the message is from the "human" or the "AI".
- **value**: The actual content of the message.
Example JSON Object:
```
{
"id": "wNBG8Gp_0",
"model": "",
"conversations": [
{
"from": "human",
"value": "Java add to the arraylist of a class type"
},
{
"from": "gpt",
"value": "To add an element to an ArrayList of a specific class type in Java..."
},
...
]
}
``` |
huggingartists/logic | ---
language:
- en
tags:
- huggingartists
- lyrics
---
# Dataset Card for "huggingartists/logic"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 3.343197 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/0f975524d106026e89de983689d007c4.900x900x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/logic">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Logic</div>
<a href="https://genius.com/artists/logic">
<div style="text-align: center; font-size: 14px;">@logic</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/logic).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/logic")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|651| -| -|
The 'train' split can easily be divided into 'train', 'validation' and 'test' splits with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/logic")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
split_points = [
    int(len(datasets['train']['text']) * train_percentage),
    int(len(datasets['train']['text']) * (train_percentage + validation_percentage)),
]
train, validation, test = np.split(datasets['train']['text'], split_points)
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk}
year=2022
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
chuyin0321/timeseries-1mn-stocks | ---
dataset_info:
features:
- name: symbol
dtype: string
- name: datetime
dtype: timestamp[ns]
- name: open
dtype: float64
- name: high
dtype: float64
- name: low
dtype: float64
- name: close
dtype: float64
- name: volume
dtype: float64
splits:
- name: train
num_bytes: 21219505
num_examples: 378090
download_size: 15092332
dataset_size: 21219505
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "timeseries-1mn-stocks"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hippocrates/MedMCQA_train | ---
dataset_info:
features:
- name: id
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 247930514
num_examples: 182822
- name: valid
num_bytes: 5813618
num_examples: 4183
- name: test
num_bytes: 5813618
num_examples: 4183
download_size: 61302365
dataset_size: 259557750
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
---
|
mask-distilled-onesec-cv12-each-chunk-uniq/chunk_163 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1128570512.0
num_examples: 221636
download_size: 1151323846
dataset_size: 1128570512.0
---
# Dataset Card for "chunk_163"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dimun/ExpirationDate | ---
license: afl-3.0
task_categories:
- object-detection
language:
- en
---
# Annotation
Each date in the Products-Real and Products-Synth datasets is annotated with class, bounding box coordinates, date transcription, image width, and height. Four classes are defined in the training sets: date, due, prod, and code. Unlike in the training set, expiration dates in the test set of Products-Real are labeled with a dedicated "exp" class for easy evaluation. Each component in the Date-Real and Date-Synth datasets is annotated with class, bounding box, and transcription, with day, month, and year used as the classes. The Components-Real and Components-Synth datasets consist of day, month, and year components and their transcriptions.
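As an illustration of what one annotation record might look like, here is a hedged sketch: the field names and layout below are assumptions for demonstration, not the dataset's actual file schema.

```python
# Hypothetical annotation record; field names are illustrative only.
annotation = {
    "class": "date",             # training classes: date, due, prod, code
    "bbox": [120, 45, 260, 80],  # x_min, y_min, x_max, y_max in pixels
    "transcription": "2023.06.21",
    "image_width": 640,
    "image_height": 480,
}

# "exp" appears only in the Products-Real test set, per the description above.
ALLOWED_CLASSES = {"date", "due", "prod", "code", "exp"}

def is_valid(ann):
    """Basic sanity checks: known class and bounding box inside the image."""
    x0, y0, x1, y1 = ann["bbox"]
    return (
        ann["class"] in ALLOWED_CLASSES
        and 0 <= x0 < x1 <= ann["image_width"]
        and 0 <= y0 < y1 <= ann["image_height"]
    )
```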
# Citation
Dataset published originally in `A Generalized Framework for Recognition of Expiration Date on Product Packages Using Fully Convolutional Networks`
```
@article{seker2022generalized,
  title={A generalized framework for recognition of expiration dates on product packages using fully convolutional networks},
  author={Seker, Ahmet Cagatay and Ahn, Sang Chul},
  journal={Expert Systems with Applications},
  pages={117310},
  year={2022},
  publisher={Elsevier}
}
``` |
hlillemark/flores200_eng_scaffolding_large | ---
dataset_info:
features:
- name: id
dtype: int32
- name: source_lang
dtype: string
- name: target_lang
dtype: string
- name: source
dtype: string
- name: target
dtype: string
- name: eng_source
dtype: string
splits:
- name: train
num_bytes: 11177748029
num_examples: 20480000
download_size: 8448719815
dataset_size: 11177748029
---
# Dataset Card for "flores200_eng_scaffolding_large"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Dmkond/tune-forms | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 842248
num_examples: 200
download_size: 221015
dataset_size: 842248
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "tune-forms"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
neovalle/H4rmony | ---
license: cc-by-4.0
task_categories:
- reinforcement-learning
- text-classification
- question-answering
language:
- en
tags:
- Ecolinguistics
- Sustainability
- ecolinguistic
- environment
size_categories:
- 1K<n<10K
---
# Dataset Card for Dataset H4rmony

**Note:** there is a simplified version, specifically curated for DPO training, here:
https://huggingface.co/datasets/neovalle/H4rmony_dpo
### Dataset Summary
The H4rmony dataset is a collection of prompts and completions aimed at integrating ecolinguistic principles into AI Large Language Models (LLMs).
Developed with collaborative efforts from ecolinguistics enthusiasts and experts, it offers a series of prompts and corresponding pairwise responses
ranked in terms of environmental awareness and alignment. This ranking provides a clear metric for the desired alignment and establishes a framework for LLM fine-tuning, particularly for reinforcement learning via a reward model.
This dataset aims to bridge the gap between AI and ecolinguistic values,
pushing the envelope for creating generative AI models that are environmentally and sustainability aware by design.
H4rmony is not just a dataset; it's a project towards harmonising AI with nature by means of fine-tuning.
We believe in the potential of using ecolinguistics to fine-tune and influence LLMs towards more eco-aware outputs.
This dataset is currently work in progress.
### Languages
Currently English only, but the dataset will be extended to be multilingual.
## Dataset Structure
### Data Fields

### Ecological Issues - Codes meaning
This table shows the meaning of the codes used for the ecological issues classification, as well as examples of their manifestation and their relation to the 17 sustainable development goals defined by UNEP.

### Data Splits
There are no splits in the dataset. Splits can be created when loading it:
```python
from datasets import load_dataset
dataset = load_dataset('neovalle/H4rmony', split='train').train_test_split(test_size=0.2)
```
## Dataset Creation
### Curation Rationale
Given the multidisciplinary nature of the challenge, the H4rmony dataset is being enriched by contributions from environmentalists, AI specialists, and ecolinguistics enthusiasts.
This collective effort ensures the data is both technically sound and ecologically meaningful.
The dataset was initially created by a variant of Human Feedback, which involved role-playing and human verification.
- We created a list of prompts suggested by the ecolinguistics community.
- We then instructed GPT-4 with several ecolinguistic principles and asked it to provide three types of answers for each prompt:
  - one as if answered by someone aware of ecolinguistics,
  - another as if answered by someone unaware of ecolinguistics,
  - and a third, somewhat ambivalent, response.
We then constructed the dataset, already knowing the ranks of the answers:
1. Ecolinguistics-aware role.
2. Ambivalent answer.
3. Ecolinguistics-unaware role.
We named this variation of RLHF Reinforcement Learning by Role-playing and Human Verification (RLRHV).
The following image compares traditional RLHF and the variant we applied (RLRHV):

### Source Data
#### Initial Data Collection and Normalization
The core of the H4rmony dataset originated from active collaborations within the ecolinguistics community.
Contributors were asked to submit prompts that would help uncover AI models' alignment with ecolinguistic values.
A number of prompts and completions were AI-generated using prompt engineering.
Human-crafted prompts were then added to this initial group.
### DPO Version
There is a simplified version, specifically curated for DPO training here:
https://huggingface.co/datasets/neovalle/H4rmony_dpo
### Personal and Sensitive Information
This dataset doesn't contain sensitive information.
## Considerations for Using the Data
This dataset is still under construction and it might contain offensive language.
### Social Impact of Dataset
The H4rmony project aims to help AI LLMs give due priority to environmental consciousness.
By serving as the fourth "H", "Harmony with nature", it complements the existing triad of Helpfulness, Honesty, and Harmlessness already well known in ethical AI development.
The following models have been fine tuned using H4rmony Dataset:
https://huggingface.co/neovalle/H4rmoniousCaramel = google/flan-t5-Large + H4rmony dataset (instruction fine tuning)
https://huggingface.co/neovalle/H4rmoniousPampero = HuggingFaceH4/zephyr-7b-alpha + H4rmony dataset (reinforcement learning)
https://huggingface.co/neovalle/H4rmoniousBreeze = HuggingFaceH4/zephyr-7b-beta + H4rmony dataset (reinforcement learning)
https://huggingface.co/neovalle/H4rmoniousAnthea = teknium/OpenHermes-2.5-Mistral-7B + H4rmony_dpo dataset (DPO fine-tuning)
### Discussion of Biases
No known biases.
### Other Known Limitations
The dataset is still under construction, and the current number of rows might not be enough for some use cases.
## Additional Information
### Dataset Curators
Jorge Vallego - airesearch@neovalle.co.uk
### Licensing Information
Creative Commons Attribution 4.0
### Citation Information
dataset neovalle/H4rmony - airesearch@neovalle.co.uk
### Testing and PoC Repository
https://github.com/Neovalle/H4rmony
### Note
This project has its roots in the article "Ecolinguistics and AI: Integrating eco-awareness in natural
language processing" https://www.ecoling.net/_files/ugd/ae088a_13cc4828a28e4955804d38e8721056cf.pdf
|
HannahRoseKirk/HatemojiBuild | ---
annotations_creators:
- expert
language_creators:
- expert-generated
languages:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: HatemojiBuild
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- hate-speech-detection
extra_gated_prompt: "We have deactivated the automatic preview for this dataset because it contains hate speech. If you want to see the preview, you can continue."
---
# Dataset Card for HatemojiBuild
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Content Warning
This dataset contains examples of hateful language.
## Dataset Description and Details
- **Repository:** https://github.com/HannahKirk/Hatemoji
- **Paper:** https://arxiv.org/abs/2108.05921
- **Point of Contact:** hannah.kirk@oii.ox.ac.uk
### Dataset Summary
HatemojiBuild can be used to train, develop and test models on emoji-based hate with challenging adversarial examples and perturbations.
HatemojiBuild is a dataset of 5,912 adversarially-generated examples created on Dynabench using a human-and-model-in-the-loop approach. We collect data in three consecutive rounds. Our work follows on from Vidgen et al. (2021), _Learning from the Worst: Dynamically Generated Datasets to Improve Online Hate Detection_ (http://arxiv.org/abs/2012.15761), who collect four rounds of textual adversarial examples. The R1-R4 data is available at https://github.com/bvidgen/Dynamically-Generated-Hate-Speech-Dataset. The entries in HatemojiBuild are labeled by round (R5-R7). The text of each entry is given with its gold-standard label from the majority agreement of three annotators. Each original entry is paired with a perturbation, so each row of the dataset matches one of these two cases. We also provide granular labels of type and target for hateful entries.
### Supported Tasks
Hate Speech Detection
### Languages
English
## Dataset Structure
### Data Instances
5,912 adversarially-generated instances
### Data Fields
entry_id: The unique ID of the entry (assigned to each of the 5,912 cases generated).
text: The text of the entry.
type: The type of hate assigned to hateful entries.
target: The target of hate assigned to hateful entries.
round.base: The round where the entry was generated.
round.set: The round and whether the entry came from an original statement (a) or a perturbation (b).
set: Whether the entry is an original or perturbation.
split: The randomly-assigned train/dev/test split used in our work (80:10:10).
label_gold: The gold standard label (hateful/non-hateful) of the test case.
matched_text: The text of the paired perturbation. Each original entry has one perturbation.
matched_id: The unique entry ID of the paired perturbation.
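Because each original entry and its perturbation reference one another through `matched_id`, the original/perturbation pairs can be reconstructed with a simple join on `entry_id`. The sketch below does this over hypothetical rows that follow the field schema above; the texts, IDs, and labels are invented placeholders, not actual dataset content.

```python
# Pair original entries with their label-flipping perturbations via matched_id.
# The rows below are invented placeholders following the documented schema.
rows = [
    {"entry_id": 1, "text": "original statement A", "set": "original",
     "label_gold": "hateful", "matched_id": 2},
    {"entry_id": 2, "text": "perturbed statement A", "set": "perturbation",
     "label_gold": "non-hateful", "matched_id": 1},
]

# Index rows by entry_id for O(1) lookup of each entry's partner.
by_id = {row["entry_id"]: row for row in rows}

# Keep one direction of each pair: (original, its perturbation).
pairs = [
    (row, by_id[row["matched_id"]])
    for row in rows
    if row["set"] == "original"
]

for original, perturbation in pairs:
    # By construction, a perturbation flips the gold label of its original.
    assert original["label_gold"] != perturbation["label_gold"]
```

The same join applies unchanged to the real data once it is loaded, since every original has exactly one perturbation.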
### Data Splits
Train, Validation and Test.
## Dataset Creation
### Curation Rationale
The genre of texts is hateful and non-hateful statements using emoji constructions. The purpose of HatemojiBuild is to address model weaknesses to emoji-based hate, i.e., to "build" better models. 50% of the 5,912 test cases are hateful. 50% of the entries in the dataset are original content and 50% are perturbations.
### Source Data
#### Initial Data Collection and Normalization
We use an online interface designed for dynamic dataset generation and model benchmarking (Dynabench) to collect synthetic adversarial examples in three successive rounds, running from 24th May to 11th June. Each round contains approximately 2,000 entries, where each original entry entered into the interface is paired with an offline perturbation. Data was synthetically generated by a team of trained annotators, i.e., not sampled from social media.
#### Who are the source language producers?
The language producers are also the annotators.
### Annotations
#### Annotation process
We implemented three successive rounds of data generation and model re-training to create the HatemojiBuild dataset.
In each round we tasked a team of 10 trained annotators with entering content the model-in-the-loop would misclassify. We refer to this model as the target model. Annotators were instructed to generate linguistically diverse entries while ensuring each entry was (1) realistic, (2) clearly hateful or non-hateful and (3) contained at least one emoji. Each entry was first given a binary label of hateful or non-hateful, and hateful content was assigned secondary labels for the type and target of hate. Each entry was validated by two additional annotators, and an expert resolved disagreements. After validation, annotators created a perturbation for each entry that flips the label. To maximize similarity between originals and perturbations, annotators could either make an emoji substitution while fixing the text or fix the emoji and minimally change the surrounding text. Each perturbation received two additional annotations, and disagreements were resolved by the expert. This weekly cadence of annotator tasks was repeated in three consecutive weeks.
#### Who are the annotators?
Ten annotators were recruited to work for three weeks, and paid £16/hour. An expert annotator was recruited for quality control purposes and paid £20/hour. In total, there were 11 annotators. All annotators received a training session prior to data collection and had previous experience working on hate speech projects. A daily 'stand-up' meeting was held every morning to communicate feedback and update guidelines as rounds progressed. Annotators were able to contact the research team at any point using a messaging platform. Of 11 annotators, 8 were between 18--29 years old and 3 between 30--39 years old. The completed education level was high school for 3 annotators, undergraduate degree for 1 annotator, taught graduate degree for 4 annotators and post-graduate research degree for 3 annotators. 6 annotators were female, and 5 were male. Annotators came from a variety of nationalities, with 7 British, as well as Jordanian, Irish, Polish and Spanish. 7 annotators identified as ethnically White and the remaining annotators came from various ethnicities including Turkish, Middle Eastern, and Mixed White and South Asian. 4 annotators were Muslim, and others identified as Atheist or as having no religious affiliation. 9 annotators were native English speakers and 2 were non-native but fluent. The majority of annotators (9) used emoji and social media more than once per day. 10 annotators had seen others targeted by abuse online, and 7 had been personally targeted.
### Personal and Sensitive Information
HatemojiBuild contains synthetic statements and so includes no personal information. It does, however, contain harmful examples of emoji-based hate which could be disturbing or damaging to view.
## Considerations for Using the Data
### Social Impact of Dataset
HatemojiBuild contains challenging emoji examples which have "tricked" state-of-the-art transformer models. Malicious actors could take inspiration for bypassing current detection systems on internet platforms, or in principle train a generative hate speech model. However, it also helps to build model robustness to emoji-based hate, so it can be used to mitigate the harm to victims before a model is deployed.
### Discussion of Biases
Annotators were given substantial freedom in the targets of hate resulting in 54 unique targets, and 126 unique intersections of these. The entries from R5-R7 contain 1,082 unique emoji out of 3,521 defined in the Unicode Standard as of September 2020. This diversity helped to mitigate biases in classification towards certain targets but biases likely remain, especially since HatemojiBuild was designed for English-language use of emoji.
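Emoji diversity figures like the 1,082 unique emoji reported above can be estimated, roughly, by scanning entry texts for code points in the common emoji blocks. The sketch below uses a crude range-based heuristic that undercounts ZWJ sequences and skin-tone modifiers; the sample texts are invented for illustration, not drawn from the dataset.

```python
# Rough count of unique emoji characters in a collection of texts.
# Heuristic: treat code points in the common emoji ranges as emoji.
# This misses ZWJ sequences and modifiers, but gives a ballpark
# diversity estimate. The sample texts are invented placeholders.
EMOJI_RANGES = [
    (0x1F300, 0x1FAFF),  # misc symbols & pictographs, supplemental blocks
    (0x2600, 0x27BF),    # miscellaneous symbols and dingbats
]

def is_emoji(ch: str) -> bool:
    cp = ord(ch)
    return any(lo <= cp <= hi for lo, hi in EMOJI_RANGES)

def unique_emoji(texts):
    """Return the set of distinct emoji characters across all texts."""
    return {ch for text in texts for ch in text if is_emoji(ch)}

sample_texts = ["hello \U0001F600 world", "\U0001F600 again \u2614"]
found = unique_emoji(sample_texts)
```

For a faithful count one would instead match against the Unicode emoji data files, but the heuristic above is adequate for a quick diversity check.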
### Other Known Limitations
While annotators were trained on real-world examples of emoji-based hate from Twitter, the entries in HatemojiBuild are synthetically-generated so may deviate from real-world instances of emoji-based hate.
## Additional Information
### Dataset Curators
The dataset was curated by the lead author (Hannah Rose Kirk), using the Dynabench platform.
### Licensing Information
Creative Commons Attribution 4.0 International Public License. For full detail see: https://github.com/HannahKirk/Hatemoji/blob/main/LICENSE
### Citation Information
If you use this dataset, please cite our paper: Kirk, H. R., Vidgen, B., Röttger, P., Thrush, T., & Hale, S. A. (2021). Hatemoji: A test suite and adversarially-generated dataset for benchmarking and detecting emoji-based hate. arXiv preprint arXiv:2108.05921.
```
@article{kirk2021hatemoji,
title={Hatemoji: A test suite and adversarially-generated dataset for benchmarking and detecting emoji-based hate},
author={Kirk, Hannah Rose and Vidgen, Bertram and R{\"o}ttger, Paul and Thrush, Tristan and Hale, Scott A},
journal={arXiv preprint arXiv:2108.05921},
year={2021}
}
```
### Contributions
Thanks to [@HannahKirk](https://github.com/HannahKirk) for adding this dataset.
|
khoomeik/gzipscale-0.33-10M | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 32856344
num_examples: 39063
download_size: 8042971
dataset_size: 32856344
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
open-llm-leaderboard/details_Undi95__C-Based-2x7B | ---
pretty_name: Evaluation run of Undi95/C-Based-2x7B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Undi95/C-Based-2x7B](https://huggingface.co/Undi95/C-Based-2x7B) on the [Open\
\ LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Undi95__C-Based-2x7B\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-03-29T22:21:08.761157](https://huggingface.co/datasets/open-llm-leaderboard/details_Undi95__C-Based-2x7B/blob/main/results_2024-03-29T22-21-08.761157.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6462460405881691,\n\
\ \"acc_stderr\": 0.03215697805909352,\n \"acc_norm\": 0.6495184164175453,\n\
\ \"acc_norm_stderr\": 0.032801498643883695,\n \"mc1\": 0.3463892288861689,\n\
\ \"mc1_stderr\": 0.01665699710912514,\n \"mc2\": 0.501648303864219,\n\
\ \"mc2_stderr\": 0.015053421128225263\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.613481228668942,\n \"acc_stderr\": 0.014230084761910478,\n\
\ \"acc_norm\": 0.6552901023890785,\n \"acc_norm_stderr\": 0.01388881628678211\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6560446126269668,\n\
\ \"acc_stderr\": 0.004740555782142168,\n \"acc_norm\": 0.8500298745269866,\n\
\ \"acc_norm_stderr\": 0.003563124427458512\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \
\ \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6,\n \
\ \"acc_stderr\": 0.04232073695151589,\n \"acc_norm\": 0.6,\n \"\
acc_norm_stderr\": 0.04232073695151589\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6776315789473685,\n \"acc_stderr\": 0.03803510248351585,\n\
\ \"acc_norm\": 0.6776315789473685,\n \"acc_norm_stderr\": 0.03803510248351585\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.6,\n\
\ \"acc_stderr\": 0.04923659639173309,\n \"acc_norm\": 0.6,\n \
\ \"acc_norm_stderr\": 0.04923659639173309\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.720754716981132,\n \"acc_stderr\": 0.027611163402399715,\n\
\ \"acc_norm\": 0.720754716981132,\n \"acc_norm_stderr\": 0.027611163402399715\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7430555555555556,\n\
\ \"acc_stderr\": 0.03653946969442099,\n \"acc_norm\": 0.7430555555555556,\n\
\ \"acc_norm_stderr\": 0.03653946969442099\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.49,\n \"acc_stderr\": 0.05024183937956912,\n \
\ \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956912\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.52,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.52,\n\
\ \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.39,\n \"acc_stderr\": 0.04902071300001975,\n \
\ \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.04902071300001975\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6589595375722543,\n\
\ \"acc_stderr\": 0.03614665424180826,\n \"acc_norm\": 0.6589595375722543,\n\
\ \"acc_norm_stderr\": 0.03614665424180826\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.38235294117647056,\n \"acc_stderr\": 0.04835503696107223,\n\
\ \"acc_norm\": 0.38235294117647056,\n \"acc_norm_stderr\": 0.04835503696107223\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.76,\n \"acc_stderr\": 0.04292346959909283,\n \"acc_norm\": 0.76,\n\
\ \"acc_norm_stderr\": 0.04292346959909283\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.6,\n \"acc_stderr\": 0.03202563076101735,\n \
\ \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.03202563076101735\n },\n\
\ \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5,\n \
\ \"acc_stderr\": 0.047036043419179864,\n \"acc_norm\": 0.5,\n \"\
acc_norm_stderr\": 0.047036043419179864\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5517241379310345,\n \"acc_stderr\": 0.04144311810878152,\n\
\ \"acc_norm\": 0.5517241379310345,\n \"acc_norm_stderr\": 0.04144311810878152\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.4312169312169312,\n \"acc_stderr\": 0.025506481698138215,\n \"\
acc_norm\": 0.4312169312169312,\n \"acc_norm_stderr\": 0.025506481698138215\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.42063492063492064,\n\
\ \"acc_stderr\": 0.04415438226743744,\n \"acc_norm\": 0.42063492063492064,\n\
\ \"acc_norm_stderr\": 0.04415438226743744\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \
\ \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.04824181513244218\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7774193548387097,\n\
\ \"acc_stderr\": 0.023664216671642518,\n \"acc_norm\": 0.7774193548387097,\n\
\ \"acc_norm_stderr\": 0.023664216671642518\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.5221674876847291,\n \"acc_stderr\": 0.03514528562175008,\n\
\ \"acc_norm\": 0.5221674876847291,\n \"acc_norm_stderr\": 0.03514528562175008\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\"\
: 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.8,\n \"acc_stderr\": 0.03123475237772117,\n \
\ \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 0.03123475237772117\n },\n\
\ \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7929292929292929,\n\
\ \"acc_stderr\": 0.028869778460267045,\n \"acc_norm\": 0.7929292929292929,\n\
\ \"acc_norm_stderr\": 0.028869778460267045\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\"\
: {\n \"acc\": 0.8963730569948186,\n \"acc_stderr\": 0.02199531196364424,\n\
\ \"acc_norm\": 0.8963730569948186,\n \"acc_norm_stderr\": 0.02199531196364424\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6871794871794872,\n \"acc_stderr\": 0.023507579020645358,\n\
\ \"acc_norm\": 0.6871794871794872,\n \"acc_norm_stderr\": 0.023507579020645358\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.362962962962963,\n \"acc_stderr\": 0.02931820364520686,\n \
\ \"acc_norm\": 0.362962962962963,\n \"acc_norm_stderr\": 0.02931820364520686\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6974789915966386,\n \"acc_stderr\": 0.029837962388291932,\n\
\ \"acc_norm\": 0.6974789915966386,\n \"acc_norm_stderr\": 0.029837962388291932\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.33112582781456956,\n \"acc_stderr\": 0.038425817186598696,\n \"\
acc_norm\": 0.33112582781456956,\n \"acc_norm_stderr\": 0.038425817186598696\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8311926605504587,\n \"acc_stderr\": 0.016060056268530343,\n \"\
acc_norm\": 0.8311926605504587,\n \"acc_norm_stderr\": 0.016060056268530343\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5231481481481481,\n \"acc_stderr\": 0.03406315360711507,\n \"\
acc_norm\": 0.5231481481481481,\n \"acc_norm_stderr\": 0.03406315360711507\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.7941176470588235,\n \"acc_stderr\": 0.028379449451588667,\n \"\
acc_norm\": 0.7941176470588235,\n \"acc_norm_stderr\": 0.028379449451588667\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7932489451476793,\n \"acc_stderr\": 0.0263616516683891,\n \
\ \"acc_norm\": 0.7932489451476793,\n \"acc_norm_stderr\": 0.0263616516683891\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.695067264573991,\n\
\ \"acc_stderr\": 0.030898610882477518,\n \"acc_norm\": 0.695067264573991,\n\
\ \"acc_norm_stderr\": 0.030898610882477518\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7786259541984732,\n \"acc_stderr\": 0.03641297081313729,\n\
\ \"acc_norm\": 0.7786259541984732,\n \"acc_norm_stderr\": 0.03641297081313729\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.8181818181818182,\n \"acc_stderr\": 0.03520893951097652,\n \"\
acc_norm\": 0.8181818181818182,\n \"acc_norm_stderr\": 0.03520893951097652\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8055555555555556,\n\
\ \"acc_stderr\": 0.038260763248848646,\n \"acc_norm\": 0.8055555555555556,\n\
\ \"acc_norm_stderr\": 0.038260763248848646\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.754601226993865,\n \"acc_stderr\": 0.03380939813943354,\n\
\ \"acc_norm\": 0.754601226993865,\n \"acc_norm_stderr\": 0.03380939813943354\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.49107142857142855,\n\
\ \"acc_stderr\": 0.04745033255489123,\n \"acc_norm\": 0.49107142857142855,\n\
\ \"acc_norm_stderr\": 0.04745033255489123\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7961165048543689,\n \"acc_stderr\": 0.03989139859531771,\n\
\ \"acc_norm\": 0.7961165048543689,\n \"acc_norm_stderr\": 0.03989139859531771\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8846153846153846,\n\
\ \"acc_stderr\": 0.020930193185179333,\n \"acc_norm\": 0.8846153846153846,\n\
\ \"acc_norm_stderr\": 0.020930193185179333\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \
\ \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8122605363984674,\n\
\ \"acc_stderr\": 0.01396439376989913,\n \"acc_norm\": 0.8122605363984674,\n\
\ \"acc_norm_stderr\": 0.01396439376989913\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7225433526011561,\n \"acc_stderr\": 0.024105712607754307,\n\
\ \"acc_norm\": 0.7225433526011561,\n \"acc_norm_stderr\": 0.024105712607754307\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.33854748603351953,\n\
\ \"acc_stderr\": 0.01582670009648135,\n \"acc_norm\": 0.33854748603351953,\n\
\ \"acc_norm_stderr\": 0.01582670009648135\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7418300653594772,\n \"acc_stderr\": 0.025058503316958154,\n\
\ \"acc_norm\": 0.7418300653594772,\n \"acc_norm_stderr\": 0.025058503316958154\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7170418006430869,\n\
\ \"acc_stderr\": 0.02558306248998481,\n \"acc_norm\": 0.7170418006430869,\n\
\ \"acc_norm_stderr\": 0.02558306248998481\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7469135802469136,\n \"acc_stderr\": 0.024191808600713002,\n\
\ \"acc_norm\": 0.7469135802469136,\n \"acc_norm_stderr\": 0.024191808600713002\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4787234042553192,\n \"acc_stderr\": 0.029800481645628693,\n \
\ \"acc_norm\": 0.4787234042553192,\n \"acc_norm_stderr\": 0.029800481645628693\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.46479791395045633,\n\
\ \"acc_stderr\": 0.012738547371303957,\n \"acc_norm\": 0.46479791395045633,\n\
\ \"acc_norm_stderr\": 0.012738547371303957\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6544117647058824,\n \"acc_stderr\": 0.028888193103988633,\n\
\ \"acc_norm\": 0.6544117647058824,\n \"acc_norm_stderr\": 0.028888193103988633\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6666666666666666,\n \"acc_stderr\": 0.019070985589687492,\n \
\ \"acc_norm\": 0.6666666666666666,\n \"acc_norm_stderr\": 0.019070985589687492\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6636363636363637,\n\
\ \"acc_stderr\": 0.04525393596302506,\n \"acc_norm\": 0.6636363636363637,\n\
\ \"acc_norm_stderr\": 0.04525393596302506\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.746938775510204,\n \"acc_stderr\": 0.02783302387139968,\n\
\ \"acc_norm\": 0.746938775510204,\n \"acc_norm_stderr\": 0.02783302387139968\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.845771144278607,\n\
\ \"acc_stderr\": 0.02553843336857833,\n \"acc_norm\": 0.845771144278607,\n\
\ \"acc_norm_stderr\": 0.02553843336857833\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.86,\n \"acc_stderr\": 0.0348735088019777,\n \
\ \"acc_norm\": 0.86,\n \"acc_norm_stderr\": 0.0348735088019777\n },\n\
\ \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5662650602409639,\n\
\ \"acc_stderr\": 0.03858158940685516,\n \"acc_norm\": 0.5662650602409639,\n\
\ \"acc_norm_stderr\": 0.03858158940685516\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8304093567251462,\n \"acc_stderr\": 0.02878210810540171,\n\
\ \"acc_norm\": 0.8304093567251462,\n \"acc_norm_stderr\": 0.02878210810540171\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3463892288861689,\n\
\ \"mc1_stderr\": 0.01665699710912514,\n \"mc2\": 0.501648303864219,\n\
\ \"mc2_stderr\": 0.015053421128225263\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8105761641673244,\n \"acc_stderr\": 0.011012790432989247\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.5246398786959818,\n \
\ \"acc_stderr\": 0.01375575135276492\n }\n}\n```"
repo_url: https://huggingface.co/Undi95/C-Based-2x7B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|arc:challenge|25_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|gsm8k|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hellaswag|10_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-29T22-21-08.761157.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-29T22-21-08.761157.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- '**/details_harness|winogrande|5_2024-03-29T22-21-08.761157.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-03-29T22-21-08.761157.parquet'
- config_name: results
data_files:
- split: 2024_03_29T22_21_08.761157
path:
- results_2024-03-29T22-21-08.761157.parquet
- split: latest
path:
- results_2024-03-29T22-21-08.761157.parquet
---
# Dataset Card for Evaluation run of Undi95/C-Based-2x7B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [Undi95/C-Based-2x7B](https://huggingface.co/Undi95/C-Based-2x7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Undi95__C-Based-2x7B",
"harness_winogrande_5",
split="train")
```
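The timestamped split names map directly onto the parquet filenames stored in the repo: a split such as `2024_03_29T22_21_08.761157` corresponds to files ending in `2024-03-29T22-21-08.761157.parquet`. A minimal illustrative helper (not part of the `datasets` API) to derive the filename timestamp from a split name:

```python
def split_to_file_timestamp(split_name: str) -> str:
    """Convert a timestamped split name (e.g. '2024_03_29T22_21_08.761157')
    to the timestamp format used in the parquet filenames
    (e.g. '2024-03-29T22-21-08.761157')."""
    # The split name uses underscores where the filename uses hyphens;
    # the fractional-seconds dot is shared by both forms.
    return split_name.replace("_", "-")

print(split_to_file_timestamp("2024_03_29T22_21_08.761157"))
```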
## Latest results
These are the [latest results from run 2024-03-29T22:21:08.761157](https://huggingface.co/datasets/open-llm-leaderboard/details_Undi95__C-Based-2x7B/blob/main/results_2024-03-29T22-21-08.761157.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6462460405881691,
"acc_stderr": 0.03215697805909352,
"acc_norm": 0.6495184164175453,
"acc_norm_stderr": 0.032801498643883695,
"mc1": 0.3463892288861689,
"mc1_stderr": 0.01665699710912514,
"mc2": 0.501648303864219,
"mc2_stderr": 0.015053421128225263
},
"harness|arc:challenge|25": {
"acc": 0.613481228668942,
"acc_stderr": 0.014230084761910478,
"acc_norm": 0.6552901023890785,
"acc_norm_stderr": 0.01388881628678211
},
"harness|hellaswag|10": {
"acc": 0.6560446126269668,
"acc_stderr": 0.004740555782142168,
"acc_norm": 0.8500298745269866,
"acc_norm_stderr": 0.003563124427458512
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.32,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.32,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6,
"acc_stderr": 0.04232073695151589,
"acc_norm": 0.6,
"acc_norm_stderr": 0.04232073695151589
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6776315789473685,
"acc_stderr": 0.03803510248351585,
"acc_norm": 0.6776315789473685,
"acc_norm_stderr": 0.03803510248351585
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.6,
"acc_stderr": 0.04923659639173309,
"acc_norm": 0.6,
"acc_norm_stderr": 0.04923659639173309
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.720754716981132,
"acc_stderr": 0.027611163402399715,
"acc_norm": 0.720754716981132,
"acc_norm_stderr": 0.027611163402399715
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7430555555555556,
"acc_stderr": 0.03653946969442099,
"acc_norm": 0.7430555555555556,
"acc_norm_stderr": 0.03653946969442099
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.49,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.49,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.52,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.52,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.39,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.39,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6589595375722543,
"acc_stderr": 0.03614665424180826,
"acc_norm": 0.6589595375722543,
"acc_norm_stderr": 0.03614665424180826
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.38235294117647056,
"acc_stderr": 0.04835503696107223,
"acc_norm": 0.38235294117647056,
"acc_norm_stderr": 0.04835503696107223
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.76,
"acc_stderr": 0.04292346959909283,
"acc_norm": 0.76,
"acc_norm_stderr": 0.04292346959909283
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.6,
"acc_stderr": 0.03202563076101735,
"acc_norm": 0.6,
"acc_norm_stderr": 0.03202563076101735
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5,
"acc_stderr": 0.047036043419179864,
"acc_norm": 0.5,
"acc_norm_stderr": 0.047036043419179864
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5517241379310345,
"acc_stderr": 0.04144311810878152,
"acc_norm": 0.5517241379310345,
"acc_norm_stderr": 0.04144311810878152
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4312169312169312,
"acc_stderr": 0.025506481698138215,
"acc_norm": 0.4312169312169312,
"acc_norm_stderr": 0.025506481698138215
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.42063492063492064,
"acc_stderr": 0.04415438226743744,
"acc_norm": 0.42063492063492064,
"acc_norm_stderr": 0.04415438226743744
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.36,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.36,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7774193548387097,
"acc_stderr": 0.023664216671642518,
"acc_norm": 0.7774193548387097,
"acc_norm_stderr": 0.023664216671642518
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5221674876847291,
"acc_stderr": 0.03514528562175008,
"acc_norm": 0.5221674876847291,
"acc_norm_stderr": 0.03514528562175008
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8,
"acc_stderr": 0.03123475237772117,
"acc_norm": 0.8,
"acc_norm_stderr": 0.03123475237772117
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7929292929292929,
"acc_stderr": 0.028869778460267045,
"acc_norm": 0.7929292929292929,
"acc_norm_stderr": 0.028869778460267045
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8963730569948186,
"acc_stderr": 0.02199531196364424,
"acc_norm": 0.8963730569948186,
"acc_norm_stderr": 0.02199531196364424
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6871794871794872,
"acc_stderr": 0.023507579020645358,
"acc_norm": 0.6871794871794872,
"acc_norm_stderr": 0.023507579020645358
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.362962962962963,
"acc_stderr": 0.02931820364520686,
"acc_norm": 0.362962962962963,
"acc_norm_stderr": 0.02931820364520686
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6974789915966386,
"acc_stderr": 0.029837962388291932,
"acc_norm": 0.6974789915966386,
"acc_norm_stderr": 0.029837962388291932
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.33112582781456956,
"acc_stderr": 0.038425817186598696,
"acc_norm": 0.33112582781456956,
"acc_norm_stderr": 0.038425817186598696
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8311926605504587,
"acc_stderr": 0.016060056268530343,
"acc_norm": 0.8311926605504587,
"acc_norm_stderr": 0.016060056268530343
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5231481481481481,
"acc_stderr": 0.03406315360711507,
"acc_norm": 0.5231481481481481,
"acc_norm_stderr": 0.03406315360711507
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7941176470588235,
"acc_stderr": 0.028379449451588667,
"acc_norm": 0.7941176470588235,
"acc_norm_stderr": 0.028379449451588667
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7932489451476793,
"acc_stderr": 0.0263616516683891,
"acc_norm": 0.7932489451476793,
"acc_norm_stderr": 0.0263616516683891
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.695067264573991,
"acc_stderr": 0.030898610882477518,
"acc_norm": 0.695067264573991,
"acc_norm_stderr": 0.030898610882477518
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7786259541984732,
"acc_stderr": 0.03641297081313729,
"acc_norm": 0.7786259541984732,
"acc_norm_stderr": 0.03641297081313729
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8181818181818182,
"acc_stderr": 0.03520893951097652,
"acc_norm": 0.8181818181818182,
"acc_norm_stderr": 0.03520893951097652
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8055555555555556,
"acc_stderr": 0.038260763248848646,
"acc_norm": 0.8055555555555556,
"acc_norm_stderr": 0.038260763248848646
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.754601226993865,
"acc_stderr": 0.03380939813943354,
"acc_norm": 0.754601226993865,
"acc_norm_stderr": 0.03380939813943354
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.49107142857142855,
"acc_stderr": 0.04745033255489123,
"acc_norm": 0.49107142857142855,
"acc_norm_stderr": 0.04745033255489123
},
"harness|hendrycksTest-management|5": {
"acc": 0.7961165048543689,
"acc_stderr": 0.03989139859531771,
"acc_norm": 0.7961165048543689,
"acc_norm_stderr": 0.03989139859531771
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8846153846153846,
"acc_stderr": 0.020930193185179333,
"acc_norm": 0.8846153846153846,
"acc_norm_stderr": 0.020930193185179333
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8122605363984674,
"acc_stderr": 0.01396439376989913,
"acc_norm": 0.8122605363984674,
"acc_norm_stderr": 0.01396439376989913
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7225433526011561,
"acc_stderr": 0.024105712607754307,
"acc_norm": 0.7225433526011561,
"acc_norm_stderr": 0.024105712607754307
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.33854748603351953,
"acc_stderr": 0.01582670009648135,
"acc_norm": 0.33854748603351953,
"acc_norm_stderr": 0.01582670009648135
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7418300653594772,
"acc_stderr": 0.025058503316958154,
"acc_norm": 0.7418300653594772,
"acc_norm_stderr": 0.025058503316958154
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7170418006430869,
"acc_stderr": 0.02558306248998481,
"acc_norm": 0.7170418006430869,
"acc_norm_stderr": 0.02558306248998481
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7469135802469136,
"acc_stderr": 0.024191808600713002,
"acc_norm": 0.7469135802469136,
"acc_norm_stderr": 0.024191808600713002
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4787234042553192,
"acc_stderr": 0.029800481645628693,
"acc_norm": 0.4787234042553192,
"acc_norm_stderr": 0.029800481645628693
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.46479791395045633,
"acc_stderr": 0.012738547371303957,
"acc_norm": 0.46479791395045633,
"acc_norm_stderr": 0.012738547371303957
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6544117647058824,
"acc_stderr": 0.028888193103988633,
"acc_norm": 0.6544117647058824,
"acc_norm_stderr": 0.028888193103988633
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6666666666666666,
"acc_stderr": 0.019070985589687492,
"acc_norm": 0.6666666666666666,
"acc_norm_stderr": 0.019070985589687492
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6636363636363637,
"acc_stderr": 0.04525393596302506,
"acc_norm": 0.6636363636363637,
"acc_norm_stderr": 0.04525393596302506
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.746938775510204,
"acc_stderr": 0.02783302387139968,
"acc_norm": 0.746938775510204,
"acc_norm_stderr": 0.02783302387139968
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.845771144278607,
"acc_stderr": 0.02553843336857833,
"acc_norm": 0.845771144278607,
"acc_norm_stderr": 0.02553843336857833
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.86,
"acc_stderr": 0.0348735088019777,
"acc_norm": 0.86,
"acc_norm_stderr": 0.0348735088019777
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5662650602409639,
"acc_stderr": 0.03858158940685516,
"acc_norm": 0.5662650602409639,
"acc_norm_stderr": 0.03858158940685516
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8304093567251462,
"acc_stderr": 0.02878210810540171,
"acc_norm": 0.8304093567251462,
"acc_norm_stderr": 0.02878210810540171
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3463892288861689,
"mc1_stderr": 0.01665699710912514,
"mc2": 0.501648303864219,
"mc2_stderr": 0.015053421128225263
},
"harness|winogrande|5": {
"acc": 0.8105761641673244,
"acc_stderr": 0.011012790432989247
},
"harness|gsm8k|5": {
"acc": 0.5246398786959818,
"acc_stderr": 0.01375575135276492
}
}
```
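Since every MMLU subject in the results JSON above lives under a key with the `harness|hendrycksTest-` prefix, the per-subject accuracies can be aggregated by filtering on that prefix. A minimal sketch (the dict below is just a small excerpt of the results shown above):

```python
# Small excerpt of the results JSON above; MMLU subject keys all share
# the "harness|hendrycksTest-" prefix.
results = {
    "harness|hendrycksTest-virology|5": {"acc": 0.5662650602409639},
    "harness|hendrycksTest-world_religions|5": {"acc": 0.8304093567251462},
    "harness|gsm8k|5": {"acc": 0.5246398786959818},
}

# Average accuracy over the MMLU subjects only.
mmlu_scores = [v["acc"] for k, v in results.items()
               if k.startswith("harness|hendrycksTest-")]
avg_mmlu = sum(mmlu_scores) / len(mmlu_scores)
print(f"average acc over {len(mmlu_scores)} MMLU subjects: {avg_mmlu:.4f}")
```

The same filtering works on the full results file, which simply contains one such entry per subject.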
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
joey234/mmlu-miscellaneous-neg-prepend-fix | ---
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
- split: test
path: data/test-*
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: negate_openai_prompt
struct:
- name: content
dtype: string
- name: role
dtype: string
- name: neg_question
dtype: string
- name: fewshot_context
dtype: string
- name: ori_prompt
dtype: string
splits:
- name: dev
num_bytes: 4153
num_examples: 5
- name: test
num_bytes: 1302583
num_examples: 783
download_size: 10773
dataset_size: 1306736
---
# Dataset Card for "mmlu-miscellaneous-neg-prepend-fix"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
SilkGPT/Silk_MEG_ds4212 | ---
license: cc0-1.0
---
|
open-llm-leaderboard/details_HuggingFaceH4__mistral-7b-sft-beta | ---
pretty_name: Evaluation run of HuggingFaceH4/mistral-7b-sft-beta
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 1 configuration, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_HuggingFaceH4__mistral-7b-sft-beta\"\
,\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese\
\ are the [latest results from run 2023-12-03T19:08:18.030621](https://huggingface.co/datasets/open-llm-leaderboard/details_HuggingFaceH4__mistral-7b-sft-beta/blob/main/results_2023-12-03T19-08-18.030621.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.3646702047005307,\n\
\ \"acc_stderr\": 0.013258428375662245\n },\n \"harness|gsm8k|5\":\
\ {\n \"acc\": 0.3646702047005307,\n \"acc_stderr\": 0.013258428375662245\n\
\ }\n}\n```"
repo_url: https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_gsm8k_5
data_files:
- split: 2023_12_03T19_08_18.030621
path:
- '**/details_harness|gsm8k|5_2023-12-03T19-08-18.030621.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-12-03T19-08-18.030621.parquet'
- config_name: results
data_files:
- split: 2023_12_03T19_08_18.030621
path:
- results_2023-12-03T19-08-18.030621.parquet
- split: latest
path:
- results_2023-12-03T19-08-18.030621.parquet
---
# Dataset Card for Evaluation run of HuggingFaceH4/mistral-7b-sft-beta
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_HuggingFaceH4__mistral-7b-sft-beta",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-03T19:08:18.030621](https://huggingface.co/datasets/open-llm-leaderboard/details_HuggingFaceH4__mistral-7b-sft-beta/blob/main/results_2023-12-03T19-08-18.030621.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.3646702047005307,
"acc_stderr": 0.013258428375662245
},
"harness|gsm8k|5": {
"acc": 0.3646702047005307,
"acc_stderr": 0.013258428375662245
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
lenML/oaast_rm_zh_jieba | ---
license: apache-2.0
language:
- zh
tags:
- human-feedback
size_categories:
- n<1K
---
This dataset attempts to address the "LLM repetition problem": a word-segmentation model is used to apply "stuttering" (结巴化, a play on the jieba segmenter's name) data augmentation to the oaast corpus, providing stronger rejection of repetitive content.
In addition, all self-cognition fine-tuning samples have been filtered out.
Files:
- `oaast_rm_zh_jieba.jsonl`: word-level repetition
- `oaast_rm_zh_sent_jieba.jsonl`: sentence-level repetition
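The augmentation code itself is not published with this card, but from the description above it amounts to repeating segmented tokens to synthesize repetitive "rejected" responses. A hypothetical sketch of word-level stuttering (the function name, probabilities, and use of `random` are illustrative assumptions; jieba segmentation is replaced here by a pre-tokenized list):

```python
import random

def stutter(tokens, repeat_prob=0.3, max_repeats=3):
    """Randomly repeat segmented tokens to synthesize a repetitive,
    low-quality response (illustrative sketch, not the actual pipeline)."""
    out = []
    for tok in tokens:
        out.append(tok)
        if random.random() < repeat_prob:
            out.extend([tok] * random.randint(1, max_repeats))
    return "".join(out)

random.seed(0)
tokens = ["我们", "今天", "去", "公园"]  # e.g. jieba.lcut("我们今天去公园")
print(stutter(tokens))
```

Pairing the stuttered output against the original response would then yield a preference pair that penalizes repetition.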
|