| datasetId | card |
|---|---|
Witcape/logo_finetune | ---
license: mit
dataset_info:
features:
- name: image
dtype: string
- name: image_name
dtype: string
splits:
- name: train
num_bytes: 10435758
num_examples: 122920
download_size: 2773403
dataset_size: 10435758
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
zxgov/bilibili-qianying-Singingaudio | ---
license: eupl-1.1
---
|
GeneralRincewind/ThumbnailDataset | ---
dataset_info:
features:
- name: image
dtype: image
- name: views
dtype: int64
- name: title
dtype: string
- name: video_id
dtype: string
splits:
- name: train
num_bytes: 134005295.996
num_examples: 1294
download_size: 132886702
dataset_size: 134005295.996
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
anshchoudhary/mini-platypus | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 4186564
num_examples: 1000
download_size: 2245921
dataset_size: 4186564
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
open-llm-leaderboard/details_yam-peleg__Experiment10-7B | ---
pretty_name: Evaluation run of yam-peleg/Experiment10-7B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [yam-peleg/Experiment10-7B](https://huggingface.co/yam-peleg/Experiment10-7B)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_yam-peleg__Experiment10-7B\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-02-17T19:53:17.261011](https://huggingface.co/datasets/open-llm-leaderboard/details_yam-peleg__Experiment10-7B/blob/main/results_2024-02-17T19-53-17.261011.json) (note\
\ that there might be results for other tasks in the repo if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6570860829812994,\n\
\ \"acc_stderr\": 0.03205617661623268,\n \"acc_norm\": 0.6581629819822128,\n\
\ \"acc_norm_stderr\": 0.03270354339480524,\n \"mc1\": 0.5691554467564259,\n\
\ \"mc1_stderr\": 0.01733527247533237,\n \"mc2\": 0.7109902944624826,\n\
\ \"mc2_stderr\": 0.014934792650104375\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.7005119453924915,\n \"acc_stderr\": 0.01338502163731357,\n\
\ \"acc_norm\": 0.7218430034129693,\n \"acc_norm_stderr\": 0.013094469919538805\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.7088229436367257,\n\
\ \"acc_stderr\": 0.0045337646862119935,\n \"acc_norm\": 0.8796056562437762,\n\
\ \"acc_norm_stderr\": 0.0032475703304569218\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.35,\n \"acc_stderr\": 0.0479372485441102,\n \
\ \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n\
\ \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6444444444444445,\n\
\ \"acc_stderr\": 0.04135176749720385,\n \"acc_norm\": 0.6444444444444445,\n\
\ \"acc_norm_stderr\": 0.04135176749720385\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6644736842105263,\n \"acc_stderr\": 0.03842498559395268,\n\
\ \"acc_norm\": 0.6644736842105263,\n \"acc_norm_stderr\": 0.03842498559395268\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.63,\n\
\ \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.63,\n \
\ \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.7245283018867924,\n \"acc_stderr\": 0.027495663683724057,\n\
\ \"acc_norm\": 0.7245283018867924,\n \"acc_norm_stderr\": 0.027495663683724057\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7708333333333334,\n\
\ \"acc_stderr\": 0.03514697467862388,\n \"acc_norm\": 0.7708333333333334,\n\
\ \"acc_norm_stderr\": 0.03514697467862388\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.47,\n \"acc_stderr\": 0.050161355804659205,\n \
\ \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.050161355804659205\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.58,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\"\
: 0.58,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6878612716763006,\n\
\ \"acc_stderr\": 0.035331333893236574,\n \"acc_norm\": 0.6878612716763006,\n\
\ \"acc_norm_stderr\": 0.035331333893236574\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.43137254901960786,\n \"acc_stderr\": 0.04928099597287534,\n\
\ \"acc_norm\": 0.43137254901960786,\n \"acc_norm_stderr\": 0.04928099597287534\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.78,\n \"acc_stderr\": 0.041633319989322626,\n \"acc_norm\": 0.78,\n\
\ \"acc_norm_stderr\": 0.041633319989322626\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.574468085106383,\n \"acc_stderr\": 0.03232146916224468,\n\
\ \"acc_norm\": 0.574468085106383,\n \"acc_norm_stderr\": 0.03232146916224468\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5087719298245614,\n\
\ \"acc_stderr\": 0.04702880432049615,\n \"acc_norm\": 0.5087719298245614,\n\
\ \"acc_norm_stderr\": 0.04702880432049615\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.593103448275862,\n \"acc_stderr\": 0.04093793981266236,\n\
\ \"acc_norm\": 0.593103448275862,\n \"acc_norm_stderr\": 0.04093793981266236\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.43386243386243384,\n \"acc_stderr\": 0.025525034382474887,\n \"\
acc_norm\": 0.43386243386243384,\n \"acc_norm_stderr\": 0.025525034382474887\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.47619047619047616,\n\
\ \"acc_stderr\": 0.04467062628403273,\n \"acc_norm\": 0.47619047619047616,\n\
\ \"acc_norm_stderr\": 0.04467062628403273\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.37,\n \"acc_stderr\": 0.048523658709391,\n \
\ \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.048523658709391\n },\n\
\ \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7935483870967742,\n\
\ \"acc_stderr\": 0.02302589961718872,\n \"acc_norm\": 0.7935483870967742,\n\
\ \"acc_norm_stderr\": 0.02302589961718872\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.5172413793103449,\n \"acc_stderr\": 0.035158955511656986,\n\
\ \"acc_norm\": 0.5172413793103449,\n \"acc_norm_stderr\": 0.035158955511656986\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.72,\n \"acc_stderr\": 0.04512608598542128,\n \"acc_norm\"\
: 0.72,\n \"acc_norm_stderr\": 0.04512608598542128\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7696969696969697,\n \"acc_stderr\": 0.0328766675860349,\n\
\ \"acc_norm\": 0.7696969696969697,\n \"acc_norm_stderr\": 0.0328766675860349\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7878787878787878,\n \"acc_stderr\": 0.029126522834586815,\n \"\
acc_norm\": 0.7878787878787878,\n \"acc_norm_stderr\": 0.029126522834586815\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8963730569948186,\n \"acc_stderr\": 0.02199531196364424,\n\
\ \"acc_norm\": 0.8963730569948186,\n \"acc_norm_stderr\": 0.02199531196364424\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6794871794871795,\n \"acc_stderr\": 0.02366129639396428,\n \
\ \"acc_norm\": 0.6794871794871795,\n \"acc_norm_stderr\": 0.02366129639396428\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.35185185185185186,\n \"acc_stderr\": 0.029116617606083008,\n \
\ \"acc_norm\": 0.35185185185185186,\n \"acc_norm_stderr\": 0.029116617606083008\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.680672268907563,\n \"acc_stderr\": 0.030283995525884396,\n \
\ \"acc_norm\": 0.680672268907563,\n \"acc_norm_stderr\": 0.030283995525884396\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3576158940397351,\n \"acc_stderr\": 0.03913453431177258,\n \"\
acc_norm\": 0.3576158940397351,\n \"acc_norm_stderr\": 0.03913453431177258\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8495412844036697,\n \"acc_stderr\": 0.015328563932669237,\n \"\
acc_norm\": 0.8495412844036697,\n \"acc_norm_stderr\": 0.015328563932669237\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5509259259259259,\n \"acc_stderr\": 0.03392238405321617,\n \"\
acc_norm\": 0.5509259259259259,\n \"acc_norm_stderr\": 0.03392238405321617\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8382352941176471,\n \"acc_stderr\": 0.025845017986926917,\n \"\
acc_norm\": 0.8382352941176471,\n \"acc_norm_stderr\": 0.025845017986926917\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7890295358649789,\n \"acc_stderr\": 0.02655837250266192,\n \
\ \"acc_norm\": 0.7890295358649789,\n \"acc_norm_stderr\": 0.02655837250266192\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6995515695067265,\n\
\ \"acc_stderr\": 0.03076935200822914,\n \"acc_norm\": 0.6995515695067265,\n\
\ \"acc_norm_stderr\": 0.03076935200822914\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.8091603053435115,\n \"acc_stderr\": 0.034465133507525975,\n\
\ \"acc_norm\": 0.8091603053435115,\n \"acc_norm_stderr\": 0.034465133507525975\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7933884297520661,\n \"acc_stderr\": 0.03695980128098824,\n \"\
acc_norm\": 0.7933884297520661,\n \"acc_norm_stderr\": 0.03695980128098824\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7685185185185185,\n\
\ \"acc_stderr\": 0.04077494709252627,\n \"acc_norm\": 0.7685185185185185,\n\
\ \"acc_norm_stderr\": 0.04077494709252627\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7791411042944786,\n \"acc_stderr\": 0.03259177392742179,\n\
\ \"acc_norm\": 0.7791411042944786,\n \"acc_norm_stderr\": 0.03259177392742179\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.44642857142857145,\n\
\ \"acc_stderr\": 0.04718471485219588,\n \"acc_norm\": 0.44642857142857145,\n\
\ \"acc_norm_stderr\": 0.04718471485219588\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7864077669902912,\n \"acc_stderr\": 0.040580420156460344,\n\
\ \"acc_norm\": 0.7864077669902912,\n \"acc_norm_stderr\": 0.040580420156460344\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8760683760683761,\n\
\ \"acc_stderr\": 0.021586494001281376,\n \"acc_norm\": 0.8760683760683761,\n\
\ \"acc_norm_stderr\": 0.021586494001281376\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \
\ \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8301404853128991,\n\
\ \"acc_stderr\": 0.013428186370608308,\n \"acc_norm\": 0.8301404853128991,\n\
\ \"acc_norm_stderr\": 0.013428186370608308\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7167630057803468,\n \"acc_stderr\": 0.02425790170532338,\n\
\ \"acc_norm\": 0.7167630057803468,\n \"acc_norm_stderr\": 0.02425790170532338\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.4547486033519553,\n\
\ \"acc_stderr\": 0.016653875777524002,\n \"acc_norm\": 0.4547486033519553,\n\
\ \"acc_norm_stderr\": 0.016653875777524002\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7287581699346405,\n \"acc_stderr\": 0.02545775669666788,\n\
\ \"acc_norm\": 0.7287581699346405,\n \"acc_norm_stderr\": 0.02545775669666788\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7041800643086816,\n\
\ \"acc_stderr\": 0.025922371788818763,\n \"acc_norm\": 0.7041800643086816,\n\
\ \"acc_norm_stderr\": 0.025922371788818763\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7407407407407407,\n \"acc_stderr\": 0.02438366553103545,\n\
\ \"acc_norm\": 0.7407407407407407,\n \"acc_norm_stderr\": 0.02438366553103545\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4787234042553192,\n \"acc_stderr\": 0.029800481645628693,\n \
\ \"acc_norm\": 0.4787234042553192,\n \"acc_norm_stderr\": 0.029800481645628693\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4706649282920469,\n\
\ \"acc_stderr\": 0.012748238397365549,\n \"acc_norm\": 0.4706649282920469,\n\
\ \"acc_norm_stderr\": 0.012748238397365549\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6875,\n \"acc_stderr\": 0.02815637344037142,\n \
\ \"acc_norm\": 0.6875,\n \"acc_norm_stderr\": 0.02815637344037142\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6633986928104575,\n \"acc_stderr\": 0.019117213911495148,\n \
\ \"acc_norm\": 0.6633986928104575,\n \"acc_norm_stderr\": 0.019117213911495148\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6818181818181818,\n\
\ \"acc_stderr\": 0.044612721759105085,\n \"acc_norm\": 0.6818181818181818,\n\
\ \"acc_norm_stderr\": 0.044612721759105085\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7306122448979592,\n \"acc_stderr\": 0.02840125202902294,\n\
\ \"acc_norm\": 0.7306122448979592,\n \"acc_norm_stderr\": 0.02840125202902294\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8407960199004975,\n\
\ \"acc_stderr\": 0.02587064676616914,\n \"acc_norm\": 0.8407960199004975,\n\
\ \"acc_norm_stderr\": 0.02587064676616914\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.86,\n \"acc_stderr\": 0.034873508801977704,\n \
\ \"acc_norm\": 0.86,\n \"acc_norm_stderr\": 0.034873508801977704\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5602409638554217,\n\
\ \"acc_stderr\": 0.03864139923699121,\n \"acc_norm\": 0.5602409638554217,\n\
\ \"acc_norm_stderr\": 0.03864139923699121\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8304093567251462,\n \"acc_stderr\": 0.02878210810540171,\n\
\ \"acc_norm\": 0.8304093567251462,\n \"acc_norm_stderr\": 0.02878210810540171\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5691554467564259,\n\
\ \"mc1_stderr\": 0.01733527247533237,\n \"mc2\": 0.7109902944624826,\n\
\ \"mc2_stderr\": 0.014934792650104375\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8074191002367798,\n \"acc_stderr\": 0.011082538847491897\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6353297952994693,\n \
\ \"acc_stderr\": 0.013258428375662247\n }\n}\n```"
repo_url: https://huggingface.co/yam-peleg/Experiment10-7B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|arc:challenge|25_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|gsm8k|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hellaswag|10_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-17T19-53-17.261011.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-17T19-53-17.261011.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- '**/details_harness|winogrande|5_2024-02-17T19-53-17.261011.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-02-17T19-53-17.261011.parquet'
- config_name: results
data_files:
- split: 2024_02_17T19_53_17.261011
path:
- results_2024-02-17T19-53-17.261011.parquet
- split: latest
path:
- results_2024-02-17T19-53-17.261011.parquet
---
# Dataset Card for Evaluation run of yam-peleg/Experiment10-7B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [yam-peleg/Experiment10-7B](https://huggingface.co/yam-peleg/Experiment10-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_yam-peleg__Experiment10-7B",
"harness_winogrande_5",
split="train")
```
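The per-task config names used above follow directly from the harness task identifiers that appear in the results JSON below. A small helper can make this mapping explicit; note that `task_to_config` is an illustrative assumption inferred from the config names in this card, not part of any official tooling:

```python
def task_to_config(task_id: str) -> str:
    """Map a harness task id such as 'harness|hendrycksTest-virology|5'
    to the config name 'harness_hendrycksTest_virology_5' used in this repo.

    This helper is an assumption based on the config names listed in this
    card's metadata, not an official leaderboard utility.
    """
    # The config names replace every '|', '-', and ':' with '_'.
    return task_id.replace("|", "_").replace("-", "_").replace(":", "_")


# Example usage (network access required for the actual download):
# from datasets import load_dataset
# details = load_dataset(
#     "open-llm-leaderboard/details_yam-peleg__Experiment10-7B",
#     task_to_config("harness|winogrande|5"),  # -> "harness_winogrande_5"
#     split="latest",
# )
```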
## Latest results
These are the [latest results from run 2024-02-17T19:53:17.261011](https://huggingface.co/datasets/open-llm-leaderboard/details_yam-peleg__Experiment10-7B/blob/main/results_2024-02-17T19-53-17.261011.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6570860829812994,
"acc_stderr": 0.03205617661623268,
"acc_norm": 0.6581629819822128,
"acc_norm_stderr": 0.03270354339480524,
"mc1": 0.5691554467564259,
"mc1_stderr": 0.01733527247533237,
"mc2": 0.7109902944624826,
"mc2_stderr": 0.014934792650104375
},
"harness|arc:challenge|25": {
"acc": 0.7005119453924915,
"acc_stderr": 0.01338502163731357,
"acc_norm": 0.7218430034129693,
"acc_norm_stderr": 0.013094469919538805
},
"harness|hellaswag|10": {
"acc": 0.7088229436367257,
"acc_stderr": 0.0045337646862119935,
"acc_norm": 0.8796056562437762,
"acc_norm_stderr": 0.0032475703304569218
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.35,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.35,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6444444444444445,
"acc_stderr": 0.04135176749720385,
"acc_norm": 0.6444444444444445,
"acc_norm_stderr": 0.04135176749720385
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6644736842105263,
"acc_stderr": 0.03842498559395268,
"acc_norm": 0.6644736842105263,
"acc_norm_stderr": 0.03842498559395268
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.63,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.63,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7245283018867924,
"acc_stderr": 0.027495663683724057,
"acc_norm": 0.7245283018867924,
"acc_norm_stderr": 0.027495663683724057
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7708333333333334,
"acc_stderr": 0.03514697467862388,
"acc_norm": 0.7708333333333334,
"acc_norm_stderr": 0.03514697467862388
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.47,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.47,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.58,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.58,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6878612716763006,
"acc_stderr": 0.035331333893236574,
"acc_norm": 0.6878612716763006,
"acc_norm_stderr": 0.035331333893236574
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.43137254901960786,
"acc_stderr": 0.04928099597287534,
"acc_norm": 0.43137254901960786,
"acc_norm_stderr": 0.04928099597287534
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.78,
"acc_stderr": 0.041633319989322626,
"acc_norm": 0.78,
"acc_norm_stderr": 0.041633319989322626
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.574468085106383,
"acc_stderr": 0.03232146916224468,
"acc_norm": 0.574468085106383,
"acc_norm_stderr": 0.03232146916224468
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5087719298245614,
"acc_stderr": 0.04702880432049615,
"acc_norm": 0.5087719298245614,
"acc_norm_stderr": 0.04702880432049615
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.593103448275862,
"acc_stderr": 0.04093793981266236,
"acc_norm": 0.593103448275862,
"acc_norm_stderr": 0.04093793981266236
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.43386243386243384,
"acc_stderr": 0.025525034382474887,
"acc_norm": 0.43386243386243384,
"acc_norm_stderr": 0.025525034382474887
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.47619047619047616,
"acc_stderr": 0.04467062628403273,
"acc_norm": 0.47619047619047616,
"acc_norm_stderr": 0.04467062628403273
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.37,
"acc_stderr": 0.048523658709391,
"acc_norm": 0.37,
"acc_norm_stderr": 0.048523658709391
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7935483870967742,
"acc_stderr": 0.02302589961718872,
"acc_norm": 0.7935483870967742,
"acc_norm_stderr": 0.02302589961718872
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5172413793103449,
"acc_stderr": 0.035158955511656986,
"acc_norm": 0.5172413793103449,
"acc_norm_stderr": 0.035158955511656986
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.72,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.72,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7696969696969697,
"acc_stderr": 0.0328766675860349,
"acc_norm": 0.7696969696969697,
"acc_norm_stderr": 0.0328766675860349
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7878787878787878,
"acc_stderr": 0.029126522834586815,
"acc_norm": 0.7878787878787878,
"acc_norm_stderr": 0.029126522834586815
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8963730569948186,
"acc_stderr": 0.02199531196364424,
"acc_norm": 0.8963730569948186,
"acc_norm_stderr": 0.02199531196364424
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6794871794871795,
"acc_stderr": 0.02366129639396428,
"acc_norm": 0.6794871794871795,
"acc_norm_stderr": 0.02366129639396428
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.35185185185185186,
"acc_stderr": 0.029116617606083008,
"acc_norm": 0.35185185185185186,
"acc_norm_stderr": 0.029116617606083008
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.680672268907563,
"acc_stderr": 0.030283995525884396,
"acc_norm": 0.680672268907563,
"acc_norm_stderr": 0.030283995525884396
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3576158940397351,
"acc_stderr": 0.03913453431177258,
"acc_norm": 0.3576158940397351,
"acc_norm_stderr": 0.03913453431177258
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8495412844036697,
"acc_stderr": 0.015328563932669237,
"acc_norm": 0.8495412844036697,
"acc_norm_stderr": 0.015328563932669237
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5509259259259259,
"acc_stderr": 0.03392238405321617,
"acc_norm": 0.5509259259259259,
"acc_norm_stderr": 0.03392238405321617
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8382352941176471,
"acc_stderr": 0.025845017986926917,
"acc_norm": 0.8382352941176471,
"acc_norm_stderr": 0.025845017986926917
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7890295358649789,
"acc_stderr": 0.02655837250266192,
"acc_norm": 0.7890295358649789,
"acc_norm_stderr": 0.02655837250266192
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6995515695067265,
"acc_stderr": 0.03076935200822914,
"acc_norm": 0.6995515695067265,
"acc_norm_stderr": 0.03076935200822914
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8091603053435115,
"acc_stderr": 0.034465133507525975,
"acc_norm": 0.8091603053435115,
"acc_norm_stderr": 0.034465133507525975
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7933884297520661,
"acc_stderr": 0.03695980128098824,
"acc_norm": 0.7933884297520661,
"acc_norm_stderr": 0.03695980128098824
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7685185185185185,
"acc_stderr": 0.04077494709252627,
"acc_norm": 0.7685185185185185,
"acc_norm_stderr": 0.04077494709252627
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7791411042944786,
"acc_stderr": 0.03259177392742179,
"acc_norm": 0.7791411042944786,
"acc_norm_stderr": 0.03259177392742179
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.44642857142857145,
"acc_stderr": 0.04718471485219588,
"acc_norm": 0.44642857142857145,
"acc_norm_stderr": 0.04718471485219588
},
"harness|hendrycksTest-management|5": {
"acc": 0.7864077669902912,
"acc_stderr": 0.040580420156460344,
"acc_norm": 0.7864077669902912,
"acc_norm_stderr": 0.040580420156460344
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8760683760683761,
"acc_stderr": 0.021586494001281376,
"acc_norm": 0.8760683760683761,
"acc_norm_stderr": 0.021586494001281376
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8301404853128991,
"acc_stderr": 0.013428186370608308,
"acc_norm": 0.8301404853128991,
"acc_norm_stderr": 0.013428186370608308
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7167630057803468,
"acc_stderr": 0.02425790170532338,
"acc_norm": 0.7167630057803468,
"acc_norm_stderr": 0.02425790170532338
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.4547486033519553,
"acc_stderr": 0.016653875777524002,
"acc_norm": 0.4547486033519553,
"acc_norm_stderr": 0.016653875777524002
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7287581699346405,
"acc_stderr": 0.02545775669666788,
"acc_norm": 0.7287581699346405,
"acc_norm_stderr": 0.02545775669666788
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7041800643086816,
"acc_stderr": 0.025922371788818763,
"acc_norm": 0.7041800643086816,
"acc_norm_stderr": 0.025922371788818763
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7407407407407407,
"acc_stderr": 0.02438366553103545,
"acc_norm": 0.7407407407407407,
"acc_norm_stderr": 0.02438366553103545
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4787234042553192,
"acc_stderr": 0.029800481645628693,
"acc_norm": 0.4787234042553192,
"acc_norm_stderr": 0.029800481645628693
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4706649282920469,
"acc_stderr": 0.012748238397365549,
"acc_norm": 0.4706649282920469,
"acc_norm_stderr": 0.012748238397365549
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6875,
"acc_stderr": 0.02815637344037142,
"acc_norm": 0.6875,
"acc_norm_stderr": 0.02815637344037142
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6633986928104575,
"acc_stderr": 0.019117213911495148,
"acc_norm": 0.6633986928104575,
"acc_norm_stderr": 0.019117213911495148
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6818181818181818,
"acc_stderr": 0.044612721759105085,
"acc_norm": 0.6818181818181818,
"acc_norm_stderr": 0.044612721759105085
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7306122448979592,
"acc_stderr": 0.02840125202902294,
"acc_norm": 0.7306122448979592,
"acc_norm_stderr": 0.02840125202902294
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8407960199004975,
"acc_stderr": 0.02587064676616914,
"acc_norm": 0.8407960199004975,
"acc_norm_stderr": 0.02587064676616914
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.86,
"acc_stderr": 0.034873508801977704,
"acc_norm": 0.86,
"acc_norm_stderr": 0.034873508801977704
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5602409638554217,
"acc_stderr": 0.03864139923699121,
"acc_norm": 0.5602409638554217,
"acc_norm_stderr": 0.03864139923699121
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8304093567251462,
"acc_stderr": 0.02878210810540171,
"acc_norm": 0.8304093567251462,
"acc_norm_stderr": 0.02878210810540171
},
"harness|truthfulqa:mc|0": {
"mc1": 0.5691554467564259,
"mc1_stderr": 0.01733527247533237,
"mc2": 0.7109902944624826,
"mc2_stderr": 0.014934792650104375
},
"harness|winogrande|5": {
"acc": 0.8074191002367798,
"acc_stderr": 0.011082538847491897
},
"harness|gsm8k|5": {
"acc": 0.6353297952994693,
"acc_stderr": 0.013258428375662247
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
CVasNLPExperiments/VQAv2_sample_validation_google_flan_t5_xl_mode_T_D_PNP_GENERIC_C_Q_rices_ns_200 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: question
dtype: string
- name: true_label
sequence: string
- name: prediction
dtype: string
splits:
- name: fewshot_0_clip_tags_LAION_ViT_H_14_2B_with_openai_Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_DETA_detections_deta_swin_large_o365_coco_classes_caption_all_patches_Salesforce_blip_image_captioning_large__
num_bytes: 28562
num_examples: 200
download_size: 14034
dataset_size: 28562
---
# Dataset Card for "VQAv2_sample_validation_google_flan_t5_xl_mode_T_D_PNP_GENERIC_C_Q_rices_ns_200"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
averageandyyy/imda_dataset_clean | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: transcript
dtype: string
splits:
- name: train
num_bytes: 215809231255.29318
num_examples: 1408808
download_size: 210065803478
dataset_size: 215809231255.29318
---
# Dataset Card for "imda_dataset_clean"
HAS TWO EXTRA EXAMPLES CONTAINING '.' NEED TO FILTER
num_examples: 1408808 |
coastalcph/fm_queries_classifier | ---
dataset_info:
features:
- name: query
dtype: string
- name: answer
list:
- name: wikidata_id
dtype: string
- name: name
dtype: string
- name: id
dtype: string
- name: relation
dtype: string
- name: date
dtype: int64
- name: type
dtype: string
- name: is_mutable
dtype: int64
splits:
- name: train
num_bytes: 1437936
num_examples: 8974
- name: all_fm
num_bytes: 33337568
num_examples: 192165
- name: validation
num_bytes: 960721
num_examples: 5793
- name: test
num_bytes: 1026699
num_examples: 5698
download_size: 1260361
dataset_size: 36762924
---
# Dataset Card for "fm_queries_classifier"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
aagoluoglu/AI_HW3_COCO_minitrain_images | ---
dataset_info:
features:
- name: license
dtype: int64
- name: file_name
dtype: string
- name: coco_url
dtype: string
- name: height
dtype: int64
- name: width
dtype: int64
- name: date_captured
dtype: string
- name: flickr_url
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 5016546
num_examples: 25000
download_size: 1741047
dataset_size: 5016546
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Rami/multi-label-class-github-issues-text-classification | ---
dataset_info:
features:
- name: title
dtype: string
- name: labels
sequence: string
- name: bodyText
dtype: string
splits:
- name: train
num_bytes: 2713984
num_examples: 1556
- name: valid
num_bytes: 1296582
num_examples: 778
- name: test
num_bytes: 1307650
num_examples: 778
download_size: 2328003
dataset_size: 5318216
---
# Dataset Card for "multi-label-class-github-issues-text-classification"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
distil-whisper/peoples_speech-clean | ---
license: cc-by-4.0
task_categories:
- automatic-speech-recognition
language:
- en
pretty_name: People's Speech Clean
---
# Distil Whisper: People's Speech Clean
This is a variant of the [People's Speech Clean](https://huggingface.co/datasets/MLCommons/peoples_speech) dataset, augmented to return the pseudo-labelled Whisper
transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by
labelling the input audio data with the Whisper [large-v2](https://huggingface.co/openai/whisper-large-v2)
model with *greedy* sampling. For information on how the original dataset was curated, refer to the original
[dataset card](https://huggingface.co/datasets/MLCommons/peoples_speech).
## Standalone Usage
First, install the latest version of the 🤗 Datasets package:
```bash
pip install --upgrade pip
pip install --upgrade datasets[audio]
```
The dataset can be downloaded and pre-processed on disk using the [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/loading_methods#datasets.load_dataset)
function:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/peoples_speech-clean", "clean")
# take the first sample of the validation set
sample = dataset["validation"][0]
```
It can also be streamed directly from the Hub using Datasets' [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet).
Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire
dataset to disk:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/peoples_speech-clean", "clean", streaming=True)
# take the first sample of the validation set
sample = next(iter(dataset["validation"]))
```
## Distil Whisper Usage
To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the
[Distil Whisper repository](https://github.com/huggingface/distil-whisper#training).
## License
This dataset is licensed under CC-BY-4.0.
|
PhilipMay/stsb_multi_mt | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
- found
- machine-generated
language:
- de
- en
- es
- fr
- it
- nl
- pl
- pt
- ru
- zh
license:
- other
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-sts-b
task_categories:
- text-classification
task_ids:
- text-scoring
- semantic-similarity-scoring
pretty_name: STSb Multi MT
dataset_info:
- config_name: de
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: similarity_score
dtype: float32
splits:
- name: train
num_bytes: 867465
num_examples: 5749
- name: test
num_bytes: 193325
num_examples: 1379
- name: dev
num_bytes: 247069
num_examples: 1500
download_size: 823156
dataset_size: 1307859
- config_name: en
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: similarity_score
dtype: float32
splits:
- name: train
num_bytes: 731795
num_examples: 5749
- name: test
num_bytes: 164458
num_examples: 1379
- name: dev
num_bytes: 210064
num_examples: 1500
download_size: 720594
dataset_size: 1106317
- config_name: es
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: similarity_score
dtype: float32
splits:
- name: train
num_bytes: 887093
num_examples: 5749
- name: test
num_bytes: 194608
num_examples: 1379
- name: dev
num_bytes: 245242
num_examples: 1500
download_size: 803220
dataset_size: 1326943
- config_name: fr
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: similarity_score
dtype: float32
splits:
- name: train
num_bytes: 910187
num_examples: 5749
- name: test
num_bytes: 200438
num_examples: 1379
- name: dev
num_bytes: 254075
num_examples: 1500
download_size: 828209
dataset_size: 1364700
- config_name: it
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: similarity_score
dtype: float32
splits:
- name: train
num_bytes: 871518
num_examples: 5749
- name: test
num_bytes: 191639
num_examples: 1379
- name: dev
num_bytes: 243136
num_examples: 1500
download_size: 813106
dataset_size: 1306293
- config_name: nl
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: similarity_score
dtype: float32
splits:
- name: train
num_bytes: 833659
num_examples: 5749
- name: test
num_bytes: 182896
num_examples: 1379
- name: dev
num_bytes: 234879
num_examples: 1500
download_size: 786341
dataset_size: 1251434
- config_name: pl
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: similarity_score
dtype: float32
splits:
- name: train
num_bytes: 828425
num_examples: 5749
- name: test
num_bytes: 181258
num_examples: 1379
- name: dev
num_bytes: 231750
num_examples: 1500
download_size: 832282
dataset_size: 1241433
- config_name: pt
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: similarity_score
dtype: float32
splits:
- name: train
num_bytes: 854348
num_examples: 5749
- name: test
num_bytes: 189155
num_examples: 1379
- name: dev
num_bytes: 240551
num_examples: 1500
download_size: 799737
dataset_size: 1284054
- config_name: ru
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: similarity_score
dtype: float32
splits:
- name: train
num_bytes: 1391666
num_examples: 5749
- name: test
num_bytes: 299999
num_examples: 1379
- name: dev
num_bytes: 386260
num_examples: 1500
download_size: 1088400
dataset_size: 2077925
- config_name: zh
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: similarity_score
dtype: float32
splits:
- name: train
num_bytes: 694416
num_examples: 5749
- name: test
num_bytes: 154826
num_examples: 1379
- name: dev
num_bytes: 195813
num_examples: 1500
download_size: 715580
dataset_size: 1045055
configs:
- config_name: de
data_files:
- split: train
path: de/train-*
- split: test
path: de/test-*
- split: dev
path: de/dev-*
- config_name: en
data_files:
- split: train
path: en/train-*
- split: test
path: en/test-*
- split: dev
path: en/dev-*
- config_name: es
data_files:
- split: train
path: es/train-*
- split: test
path: es/test-*
- split: dev
path: es/dev-*
- config_name: fr
data_files:
- split: train
path: fr/train-*
- split: test
path: fr/test-*
- split: dev
path: fr/dev-*
- config_name: it
data_files:
- split: train
path: it/train-*
- split: test
path: it/test-*
- split: dev
path: it/dev-*
- config_name: nl
data_files:
- split: train
path: nl/train-*
- split: test
path: nl/test-*
- split: dev
path: nl/dev-*
- config_name: pl
data_files:
- split: train
path: pl/train-*
- split: test
path: pl/test-*
- split: dev
path: pl/dev-*
- config_name: pt
data_files:
- split: train
path: pt/train-*
- split: test
path: pt/test-*
- split: dev
path: pt/dev-*
- config_name: ru
data_files:
- split: train
path: ru/train-*
- split: test
path: ru/test-*
- split: dev
path: ru/dev-*
- config_name: zh
data_files:
- split: train
path: zh/train-*
- split: test
path: zh/test-*
- split: dev
path: zh/dev-*
---
# Dataset Card for STSb Multi MT
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository**: https://github.com/PhilipMay/stsb-multi-mt
- **Homepage (original dataset):** https://ixa2.si.ehu.es/stswiki/index.php/STSbenchmark
- **Paper about original dataset:** https://arxiv.org/abs/1708.00055
- **Leaderboard:** https://ixa2.si.ehu.eus/stswiki/index.php/STSbenchmark#Results
- **Point of Contact:** [Open an issue on GitHub](https://github.com/PhilipMay/stsb-multi-mt/issues/new)
### Dataset Summary
> STS Benchmark comprises a selection of the English datasets used in the STS tasks organized
> in the context of SemEval between 2012 and 2017. The selection of datasets include text from
> image captions, news headlines and user forums. ([source](https://ixa2.si.ehu.es/stswiki/index.php/STSbenchmark))
These are different multilingual translations and the English original of the [STSbenchmark dataset](https://ixa2.si.ehu.es/stswiki/index.php/STSbenchmark). Translation has been done with [deepl.com](https://www.deepl.com/). It can be used to train [sentence embeddings](https://github.com/UKPLab/sentence-transformers) like [T-Systems-onsite/cross-en-de-roberta-sentence-transformer](https://huggingface.co/T-Systems-onsite/cross-en-de-roberta-sentence-transformer).
**Examples of Use**
Load German dev Dataset:
```python
from datasets import load_dataset
dataset = load_dataset("stsb_multi_mt", name="de", split="dev")
```
Load English train Dataset:
```python
from datasets import load_dataset
dataset = load_dataset("stsb_multi_mt", name="en", split="train")
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Available languages are: de, en, es, fr, it, nl, pl, pt, ru, zh
## Dataset Structure
### Data Instances
This dataset provides pairs of sentences and a score of their similarity.
score | 2 example sentences | explanation
------|---------|------------
5 | *The bird is bathing in the sink.<br/>Birdie is washing itself in the water basin.* | The two sentences are completely equivalent, as they mean the same thing.
4 | *Two boys on a couch are playing video games.<br/>Two boys are playing a video game.* | The two sentences are mostly equivalent, but some unimportant details differ.
3 | *John said he is considered a witness but not a suspect.<br/>“He is not a suspect anymore.” John said.* | The two sentences are roughly equivalent, but some important information differs or is missing.
2 | *They flew out of the nest in groups.<br/>They flew into the nest together.* | The two sentences are not equivalent, but share some details.
1 | *The woman is playing the violin.<br/>The young lady enjoys listening to the guitar.* | The two sentences are not equivalent, but are on the same topic.
0 | *The black dog is running through the snow.<br/>A race car driver is driving his car through the mud.* | The two sentences are completely dissimilar.
An example:
```
{
"sentence1": "A man is playing a large flute.",
"sentence2": "A man is playing a flute.",
"similarity_score": 3.8
}
```
### Data Fields
- `sentence1`: The 1st sentence as a `str`.
- `sentence2`: The 2nd sentence as a `str`.
- `similarity_score`: The similarity score as a `float` which is `<= 5.0` and `>= 0.0`.
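Since the scores lie on a 0–5 scale, a common preprocessing step is to rescale them to `[0, 1]`, e.g. when training sentence embeddings with libraries that expect normalized similarity scores. A minimal sketch in plain Python (the `[0, 1]` target range is an assumption about the downstream library, not a property of this dataset):

```python
def normalize_score(example):
    # Map the STS similarity score from [0.0, 5.0] to [0.0, 1.0].
    example["similarity_score"] = example["similarity_score"] / 5.0
    return example

sample = {
    "sentence1": "A man is playing a large flute.",
    "sentence2": "A man is playing a flute.",
    "similarity_score": 3.8,
}
print(normalize_score(sample)["similarity_score"])  # ~0.76
```

The function can be applied to a whole loaded dataset with `dataset.map(normalize_score)`.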
### Data Splits
- train with 5749 samples
- dev with 1500 samples
- test with 1379 samples
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
See [LICENSE](https://github.com/PhilipMay/stsb-multi-mt/blob/main/LICENSE) and [download at original dataset](https://ixa2.si.ehu.eus/stswiki/index.php/STSbenchmark).
### Citation Information
```
@InProceedings{huggingface:dataset:stsb_multi_mt,
title = {Machine translated multilingual STS benchmark dataset.},
author={Philip May},
year={2021},
url={https://github.com/PhilipMay/stsb-multi-mt}
}
```
### Contributions
Thanks to [@PhilipMay](https://github.com/PhilipMay) for adding this dataset. |
ogimgio/highways-hacktum | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
0: footway
1: primary
splits:
- name: train
num_bytes: 1155915978.0
num_examples: 500
- name: validation
num_bytes: 284161545.0
num_examples: 125
download_size: 1431719317
dataset_size: 1440077523.0
---
# Dataset Card for "highways-hacktum"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
linhtran92/viet_bud500 | ---
pretty_name: Bud500
language:
- vi
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
task_categories:
- automatic-speech-recognition
---
# Bud500: A Comprehensive Vietnamese ASR Dataset
Introducing [**Bud500**](https://huggingface.co/datasets/linhtran92/viet_bud500), a diverse Vietnamese speech corpus designed to support the ASR research community. With approximately **500 hours** of audio, it covers a broad spectrum of topics including podcasts, travel, books, food, and more, while spanning accents from Vietnam's North, South, and Central regions. Derived from free public audio resources, this publicly accessible dataset is designed to significantly enhance the work of developers and researchers in the field of speech recognition.
The corpus was prepared by [**VietAI**](https://vietai.org/) research team, a non-profit organization with the mission of nurturing AI talents and building a community of world-class AI experts in Vietnam.
## Languages
Vietnamese
## Dataset Structure
A typical data point comprises the Audio object dict `audio` and its `transcription`.
```
{'audio': {'path': None,
'array': array([0.00125122, 0.00228882, 0.00213623, ..., 0.00354004, 0.00442505, 0.00650024]),
'sampling_rate': 16000},
'transcription': 'ai cho phép em uống nhiều rượu như vậy'}
```
### Data Fields
- `audio`: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- `transcription`: textual form of the audio content.
### Data Splits
The speech material has been subdivided into portions for train, test and validation.
| Total size: 98 GB | Train | Validation | Test |
| ---------------- | ----- | ----------- | ------ |
| Samples | 634158 | 7500 | 7500 |
| Duration | ~500h | ~5.46h | ~5.46h |
### Example usage
[](https://colab.research.google.com/drive/1nNtQnwwA4YG_fhGMb1yV1GR-h0TqsWKc?usp=sharing) [](https://huggingface.co/datasets/linhtran92/viet_bud500)
```python
from datasets import load_dataset
# load from parquet file (~4000 samples in a parquet file)
# link to other parquet files: https://huggingface.co/datasets/linhtran92/viet_bud500/tree/main/data
train_url = "https://huggingface.co/datasets/linhtran92/viet_bud500/resolve/main/data/train-00000-of-00105-be5f872f8be772f5.parquet"
test_url = "https://huggingface.co/datasets/linhtran92/viet_bud500/resolve/main/data/test-00000-of-00002-531c1d81edb57297.parquet"
data_files = {"train": train_url, "test" : test_url}
dataset = load_dataset("parquet", data_files=data_files)
# load dataset via streaming
dataset = load_dataset("linhtran92/viet_bud500", split='test', streaming=True)
dataset.take(2)
# load all (649158 samples, ~100 GB, ~2 hrs to complete loading)
dataset = load_dataset("linhtran92/viet_bud500", split="test")
```
## Dataset Creation
- Source Data
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
## Other Known Limitations
- Dataset provided for research purposes only. Please check dataset license for additional information.
## Dataset Curators
- The dataset was initially prepared by VietAI research team, a non-profit organization with the mission of nurturing AI talents and building a community of world-class AI experts in Vietnam.
## Disclaimer
- During the data collection process, it is possible that some copyrighted material may inadvertently be included. If you believe that your copyrighted material has been included in our dataset without permission, please contact us directly.
## License
```
Copyright (c) 2024 VietAI Research
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```
## Citation Information
```
@misc{Bud500,
author = {Anh Pham, Khanh Linh Tran, Linh Nguyen, Thanh Duy Cao, Phuc Phan, Duong A. Nguyen},
title = {Bud500: A Comprehensive Vietnamese ASR Dataset},
url = {https://github.com/quocanh34/Bud500},
year = {2024}
}
```
### Contributors
[@quocanh34](https://github.com/quocanh34) [@linhtran6065](https://github.com/linhtran6065) [@linhqyy](https://github.com/linhqyy) [@thanhduycao](https://github.com/thanhduycao) [@pphuc25](https://github.com/pphuc25) [@duongna21](https://github.com/duongna21).
**Please CITE** our repo when it is used to help produce published results or is incorporated into other software.
### Contact
- phamquocanh2002ct@gmail.com
- khanhlinhtran6065@gmail.com |
carlavic/Ani | ---
license: openrail
---
|
cardiffnlp/tweet_topic_multi | ---
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
task_categories:
- text-classification
task_ids:
- sentiment-classification
pretty_name: TweetTopicMulti
---
# Dataset Card for "cardiffnlp/tweet_topic_multi"
## Dataset Description
- **Paper:** [https://arxiv.org/abs/2209.09824](https://arxiv.org/abs/2209.09824)
- **Dataset:** Tweet Topic Dataset
- **Domain:** Twitter
- **Number of Classes:** 19
### Dataset Summary
This is the official repository of TweetTopic (["Twitter Topic Classification"](https://arxiv.org/abs/2209.09824), COLING 2022 main conference), a topic classification dataset on Twitter with 19 labels.
Each instance of TweetTopic comes with a timestamp which distributes from September 2019 to August 2021.
See [cardiffnlp/tweet_topic_single](https://huggingface.co/datasets/cardiffnlp/tweet_topic_single) for single label version of TweetTopic.
The tweet collection used in TweetTopic is the same as that used in [TweetNER7](https://huggingface.co/datasets/tner/tweetner7).
The dataset is integrated in [TweetNLP](https://tweetnlp.org/) too.
### Preprocessing
We pre-process tweets before the annotation to normalize some artifacts, converting URLs into a special token `{{URL}}` and non-verified usernames into `{{USERNAME}}`.
For verified usernames, we replace its display name (or account name) with symbols `{@}`.
For example, a tweet
```
Get the all-analog Classic Vinyl Edition
of "Takin' Off" Album from @herbiehancock
via @bluenoterecords link below:
http://bluenote.lnk.to/AlbumOfTheWeek
```
is transformed into the following text.
```
Get the all-analog Classic Vinyl Edition
of "Takin' Off" Album from {@herbiehancock@}
via {@bluenoterecords@} link below: {{URL}}
```
A simple function to format tweet follows below.
```python
import re
from urlextract import URLExtract
extractor = URLExtract()
def format_tweet(tweet):
# mask web urls
urls = extractor.find_urls(tweet)
for url in urls:
tweet = tweet.replace(url, "{{URL}}")
# format twitter account
tweet = re.sub(r"\b(\s*)(@[\S]+)\b", r'\1{\2@}', tweet)
return tweet
target = """Get the all-analog Classic Vinyl Edition of "Takin' Off" Album from @herbiehancock via @bluenoterecords link below: http://bluenote.lnk.to/AlbumOfTheWeek"""
target_format = format_tweet(target)
print(target_format)
'Get the all-analog Classic Vinyl Edition of "Takin\' Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below: {{URL}}'
```
### Data Splits
| split | number of texts | description |
|:------------------------|-----:|------:|
| test_2020 | 573 | test dataset from September 2019 to August 2020 |
| test_2021 | 1679 | test dataset from September 2020 to August 2021 |
| train_2020 | 4585 | training dataset from September 2019 to August 2020 |
| train_2021 | 1505 | training dataset from September 2020 to August 2021 |
| train_all | 6090 | combined training dataset of `train_2020` and `train_2021` |
| validation_2020 | 573 | validation dataset from September 2019 to August 2020 |
| validation_2021 | 188 | validation dataset from September 2020 to August 2021 |
| train_random | 4564 | randomly sampled training dataset with the same size as `train_2020` from `train_all` |
| validation_random | 573 | randomly sampled validation dataset with the same size as `validation_2020` from `validation_all` |
| test_coling2022_random | 5536 | random split used in the COLING 2022 paper |
| train_coling2022_random | 5731 | random split used in the COLING 2022 paper |
| test_coling2022 | 5536 | temporal split used in the COLING 2022 paper |
| train_coling2022 | 5731 | temporal split used in the COLING 2022 paper |
For the temporal-shift setting, a model should be trained on `train_2020` with `validation_2020` and evaluated on `test_2021`.
In general, a model would be trained on `train_all`, the most representative training set, with `validation_2021` and evaluated on `test_2021`.
**IMPORTANT NOTE:** To get results comparable with those of the COLING 2022 Tweet Topic paper, please use `train_coling2022` and `test_coling2022` for the temporal-shift setting, and `train_coling2022_random` and `test_coling2022_random` for the random split (the coling2022 splits do not have a validation set).
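The recommended split combinations can be collected into a small helper; the `SPLITS` dictionary and `splits_for` function below are hypothetical conveniences for illustration, not part of the dataset itself (the split names are taken from the table above):

```python
# Hypothetical mapping from evaluation setting to (train, validation, test)
# split names of cardiffnlp/tweet_topic_multi.
SPLITS = {
    "temporal_shift": ("train_2020", "validation_2020", "test_2021"),
    "general": ("train_all", "validation_2021", "test_2021"),
    "coling2022_temporal": ("train_coling2022", None, "test_coling2022"),
    "coling2022_random": ("train_coling2022_random", None, "test_coling2022_random"),
}

def splits_for(setting):
    """Return the (train, validation, test) split names for an evaluation setting."""
    return SPLITS[setting]

print(splits_for("temporal_shift"))  # ('train_2020', 'validation_2020', 'test_2021')
```

Each returned name can then be passed directly as the `split` argument of `load_dataset("cardiffnlp/tweet_topic_multi", split=...)`.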
### Models
| model | training data | F1 | F1 (macro) | Accuracy |
|:----------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------|---------:|-------------:|-----------:|
| [cardiffnlp/roberta-large-tweet-topic-multi-all](https://huggingface.co/cardiffnlp/roberta-large-tweet-topic-multi-all) | all (2020 + 2021) | 0.763104 | 0.620257 | 0.536629 |
| [cardiffnlp/roberta-base-tweet-topic-multi-all](https://huggingface.co/cardiffnlp/roberta-base-tweet-topic-multi-all) | all (2020 + 2021) | 0.751814 | 0.600782 | 0.531864 |
| [cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-multi-all](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-multi-all) | all (2020 + 2021) | 0.762513 | 0.603533 | 0.547945 |
| [cardiffnlp/twitter-roberta-base-dec2020-tweet-topic-multi-all](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020-tweet-topic-multi-all) | all (2020 + 2021) | 0.759917 | 0.59901 | 0.536033 |
| [cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-multi-all](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-multi-all) | all (2020 + 2021) | 0.764767 | 0.618702 | 0.548541 |
| [cardiffnlp/roberta-large-tweet-topic-multi-2020](https://huggingface.co/cardiffnlp/roberta-large-tweet-topic-multi-2020) | 2020 only | 0.732366 | 0.579456 | 0.493746 |
| [cardiffnlp/roberta-base-tweet-topic-multi-2020](https://huggingface.co/cardiffnlp/roberta-base-tweet-topic-multi-2020) | 2020 only | 0.725229 | 0.561261 | 0.499107 |
| [cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-multi-2020](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-multi-2020) | 2020 only | 0.73671 | 0.565624 | 0.513401 |
| [cardiffnlp/twitter-roberta-base-dec2020-tweet-topic-multi-2020](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020-tweet-topic-multi-2020) | 2020 only | 0.729446 | 0.534799 | 0.50268 |
| [cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-multi-2020](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-multi-2020) | 2020 only | 0.731106 | 0.532141 | 0.509827 |
Model fine-tuning script can be found [here](https://huggingface.co/datasets/cardiffnlp/tweet_topic_multi/blob/main/lm_finetuning.py).
## Dataset Structure
### Data Instances
An example of `train` looks as follows.
```python
{
"date": "2021-03-07",
"text": "The latest The Movie theater Daily! {{URL}} Thanks to {{USERNAME}} {{USERNAME}} {{USERNAME}} #lunchtimeread #amc1000",
"id": "1368464923370676231",
"label": [0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
"label_name": ["film_tv_&_video"]
}
```
### Labels
| <span style="font-weight:normal">0: arts_&_culture</span> | <span style="font-weight:normal">5: fashion_&_style</span> | <span style="font-weight:normal">10: learning_&_educational</span> | <span style="font-weight:normal">15: science_&_technology</span> |
|-----------------------------|---------------------|----------------------------|--------------------------|
| 1: business_&_entrepreneurs | 6: film_tv_&_video | 11: music | 16: sports |
| 2: celebrity_&_pop_culture | 7: fitness_&_health | 12: news_&_social_concern | 17: travel_&_adventure |
| 3: diaries_&_daily_life | 8: food_&_dining | 13: other_hobbies | 18: youth_&_student_life |
| 4: family | 9: gaming | 14: relationships | |
Annotation instructions can be found [here](https://docs.google.com/document/d/1IaIXZYof3iCLLxyBdu_koNmjy--zqsuOmxQ2vOxYd_g/edit?usp=sharing).
The label2id dictionary can be found [here](https://huggingface.co/datasets/cardiffnlp/tweet_topic_multi/blob/main/dataset/label.multi.json).
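Given the table above, the multi-hot `label` vector can be decoded back into label names with a small helper. The `ID2LABEL` list below is transcribed from the table for illustration; the official mapping lives in the linked `label.multi.json`:

```python
# Label names in id order, transcribed from the table above.
ID2LABEL = [
    "arts_&_culture", "business_&_entrepreneurs", "celebrity_&_pop_culture",
    "diaries_&_daily_life", "family", "fashion_&_style", "film_tv_&_video",
    "fitness_&_health", "food_&_dining", "gaming", "learning_&_educational",
    "music", "news_&_social_concern", "other_hobbies", "relationships",
    "science_&_technology", "sports", "travel_&_adventure", "youth_&_student_life",
]

def decode_labels(multi_hot):
    # Return the names of all active labels in the 19-dim multi-hot vector.
    return [ID2LABEL[i] for i, flag in enumerate(multi_hot) if flag == 1]

example = [0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
print(decode_labels(example))  # ['film_tv_&_video']
```

This reproduces the `label_name` field of the data instance shown earlier from its `label` vector.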
### Citation Information
```
@inproceedings{dimosthenis-etal-2022-twitter,
title = "{T}witter {T}opic {C}lassification",
author = "Antypas, Dimosthenis and
Ushio, Asahi and
Camacho-Collados, Jose and
Neves, Leonardo and
Silva, Vitor and
Barbieri, Francesco",
booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
month = oct,
year = "2022",
address = "Gyeongju, Republic of Korea",
publisher = "International Committee on Computational Linguistics"
}
``` |
ucalyptus/car_embeddings | ---
dataset_info:
features:
- name: '0'
dtype: float32
- name: '1'
dtype: float32
- name: '2'
dtype: float32
- name: '3'
dtype: float32
- name: '4'
dtype: float32
- name: '5'
dtype: float32
- name: '6'
dtype: float32
- name: '7'
dtype: float32
- name: '8'
dtype: float32
- name: '9'
dtype: float32
- name: '10'
dtype: float32
- name: '11'
dtype: float32
- name: '12'
dtype: float32
- name: '13'
dtype: float32
- name: '14'
dtype: float32
- name: '15'
dtype: float32
- name: '16'
dtype: float32
- name: '17'
dtype: float32
- name: '18'
dtype: float32
- name: '19'
dtype: float32
- name: '20'
dtype: float32
- name: '21'
dtype: float32
- name: '22'
dtype: float32
- name: '23'
dtype: float32
- name: '24'
dtype: float32
- name: '25'
dtype: float32
- name: '26'
dtype: float32
- name: '27'
dtype: float32
- name: '28'
dtype: float32
- name: '29'
dtype: float32
- name: '30'
dtype: float32
- name: '31'
dtype: float32
- name: '32'
dtype: float32
- name: '33'
dtype: float32
- name: '34'
dtype: float32
- name: '35'
dtype: float32
- name: '36'
dtype: float32
- name: '37'
dtype: float32
- name: '38'
dtype: float32
- name: '39'
dtype: float32
- name: '40'
dtype: float32
- name: '41'
dtype: float32
- name: '42'
dtype: float32
- name: '43'
dtype: float32
- name: '44'
dtype: float32
- name: '45'
dtype: float32
- name: '46'
dtype: float32
- name: '47'
dtype: float32
- name: '48'
dtype: float32
- name: '49'
dtype: float32
- name: '50'
dtype: float32
- name: '51'
dtype: float32
- name: '52'
dtype: float32
- name: '53'
dtype: float32
- name: '54'
dtype: float32
- name: '55'
dtype: float32
- name: '56'
dtype: float32
- name: '57'
dtype: float32
- name: '58'
dtype: float32
- name: '59'
dtype: float32
- name: '60'
dtype: float32
- name: '61'
dtype: float32
- name: '62'
dtype: float32
- name: '63'
dtype: float32
- name: '64'
dtype: float32
- name: '65'
dtype: float32
- name: '66'
dtype: float32
- name: '67'
dtype: float32
- name: '68'
dtype: float32
- name: '69'
dtype: float32
- name: '70'
dtype: float32
- name: '71'
dtype: float32
- name: '72'
dtype: float32
- name: '73'
dtype: float32
- name: '74'
dtype: float32
- name: '75'
dtype: float32
- name: '76'
dtype: float32
- name: '77'
dtype: float32
- name: '78'
dtype: float32
- name: '79'
dtype: float32
- name: '80'
dtype: float32
- name: '81'
dtype: float32
- name: '82'
dtype: float32
- name: '83'
dtype: float32
- name: '84'
dtype: float32
- name: '85'
dtype: float32
- name: '86'
dtype: float32
- name: '87'
dtype: float32
- name: '88'
dtype: float32
- name: '89'
dtype: float32
- name: '90'
dtype: float32
- name: '91'
dtype: float32
- name: '92'
dtype: float32
- name: '93'
dtype: float32
- name: '94'
dtype: float32
- name: '95'
dtype: float32
- name: '96'
dtype: float32
- name: '97'
dtype: float32
- name: '98'
dtype: float32
- name: '99'
dtype: float32
- name: '100'
dtype: float32
- name: '101'
dtype: float32
- name: '102'
dtype: float32
- name: '103'
dtype: float32
- name: '104'
dtype: float32
- name: '105'
dtype: float32
- name: '106'
dtype: float32
- name: '107'
dtype: float32
- name: '108'
dtype: float32
- name: '109'
dtype: float32
- name: '110'
dtype: float32
- name: '111'
dtype: float32
- name: '112'
dtype: float32
- name: '113'
dtype: float32
- name: '114'
dtype: float32
- name: '115'
dtype: float32
- name: '116'
dtype: float32
- name: '117'
dtype: float32
- name: '118'
dtype: float32
- name: '119'
dtype: float32
- name: '120'
dtype: float32
- name: '121'
dtype: float32
- name: '122'
dtype: float32
- name: '123'
dtype: float32
- name: '124'
dtype: float32
- name: '125'
dtype: float32
- name: '126'
dtype: float32
- name: '127'
dtype: float32
splits:
- name: train
num_bytes: 4169728
num_examples: 8144
download_size: 303332
dataset_size: 4169728
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "car_embeddings"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
anan-2024/twitter_dataset_1713187711 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 116318
num_examples: 297
download_size: 66100
dataset_size: 116318
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
njoeljack/hai-sd | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 6354216.0
num_examples: 218
download_size: 6354545
dataset_size: 6354216.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Harsha9044/TAM-MSA | ---
license: apache-2.0
dataset_info:
features:
- name: File name
dtype: string
- name: Transcript
dtype: string
- name: Labels
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 283807
num_examples: 64
download_size: 120689
dataset_size: 283807
---
|
fraug-library/english_contractions_extensions | ---
configs:
- config_name: contractions
data_files: "df_contractions.csv"
sep: ";"
- config_name: extensions
data_files: "df_extensions.csv"
sep: ";"
--- |
jeffvalasq/colmanetti | ---
license: openrail
---
|
NickyNicky/aya_dataset_multilingual_inputs_targets_ext2 | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: language_code
dtype: string
- name: targets_es
dtype: string
- name: targets_en
dtype: string
- name: targets_fr
dtype: string
- name: targets_de
dtype: string
- name: inputs_es
dtype: string
- name: inputs_en
dtype: string
- name: inputs_fr
dtype: string
- name: inputs_de
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 2772934
num_examples: 1001
download_size: 1846929
dataset_size: 2772934
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jonasantos5240/claer | ---
license: openrail
---
|
kgr123/quality_counter_490 | ---
dataset_info:
features:
- name: context
dtype: string
- name: word
dtype: string
- name: claim
dtype: string
- name: label
dtype: int64
splits:
- name: test
num_bytes: 2883656
num_examples: 1929
- name: train
num_bytes: 2868914
num_examples: 1935
- name: validation
num_bytes: 2941464
num_examples: 1941
download_size: 2089776
dataset_size: 8694034
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
liuyanchen1015/VALUE_cola_null_relcl | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 3200
num_examples: 39
- name: test
num_bytes: 2451
num_examples: 31
- name: train
num_bytes: 32865
num_examples: 390
download_size: 24364
dataset_size: 38516
---
# Dataset Card for "VALUE_cola_null_relcl"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kaleemWaheed/twitter_dataset_1713096264 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 8711
num_examples: 20
download_size: 9170
dataset_size: 8711
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
joey234/mmlu-high_school_statistics | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: negate_openai_prompt
struct:
- name: content
dtype: string
- name: role
dtype: string
- name: neg_question
dtype: string
- name: fewshot_context
dtype: string
- name: fewshot_context_neg
dtype: string
splits:
- name: dev
num_bytes: 6574
num_examples: 5
- name: test
num_bytes: 1032910
num_examples: 216
download_size: 164944
dataset_size: 1039484
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
- split: test
path: data/test-*
---
# Dataset Card for "mmlu-high_school_statistics"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
vwxyzjn/ultrafeedback_binarized_1707947544 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: prompt_id
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: score_chosen
dtype: float64
- name: score_rejected
dtype: float64
- name: query
list:
- name: content
dtype: string
- name: role
dtype: string
- name: query_token
sequence: int64
- name: query_token_len
dtype: int64
- name: chosen_token
sequence: int64
- name: chosen_token_len
dtype: int64
- name: chosen_response_token
sequence: int64
- name: chosen_response_token_len
dtype: int64
- name: rejected_token
sequence: int64
- name: rejected_token_len
dtype: int64
- name: rejected_response_token
sequence: int64
- name: rejected_response_token_len
dtype: int64
splits:
- name: test_prefs
num_bytes: 216278383.0
num_examples: 2000
- name: train_prefs
num_bytes: 6612816240.948507
num_examples: 61119
download_size: 477181463
dataset_size: 6829094623.948507
---
# Dataset Card for "ultrafeedback_binarized_1707947544"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
bigcode/santacoder-fim-task | ---
dataset_info:
features:
- name: name
dtype: string
- name: language
dtype: string
- name: prompt
dtype: string
- name: suffix
dtype: string
- name: canonical_solution
dtype: string
- name: tests
dtype: string
splits:
- name: train
num_bytes: 8627440
num_examples: 4792
download_size: 1918113
dataset_size: 8627440
license: openrail
tags:
- code
---
# Dataset Card for "santacoder-fim-task"
This is a dataset of prompts and solutions to the fill-in-the-middle (FIM) task
presented in the [SantaCoder] paper.
This dataset was generated using [this notebook](https://github.com/nuprl/MultiPL-E/blob/main/fill_in_the_middle/dataset_builder.ipynb).
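As a rough illustration (not taken from the paper), a FIM candidate can be assembled by concatenating `prompt`, the infill, and `suffix`; the record below is a made-up example that only mirrors the schema listed above.

```python
# Hedged sketch: assembling a fill-in-the-middle (FIM) candidate.
# The record is a fabricated example following the card's schema
# (prompt / suffix / canonical_solution); it is not from the dataset.
record = {
    "prompt": "def add(a, b):\n    ",
    "suffix": "\n",
    "canonical_solution": "return a + b",
}
# A model's infill would replace canonical_solution here; the result
# would then be checked against the record's `tests` field.
candidate = record["prompt"] + record["canonical_solution"] + record["suffix"]
print(candidate)
```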
[SantaCoder]: https://arxiv.org/abs/2301.03988 |
qgallouedec/prj_gia_dataset_metaworld_peg_insert_side_v2_1111 | ---
library_name: gia
tags:
- deep-reinforcement-learning
- reinforcement-learning
- gia
- multi-task
- multi-modal
- imitation-learning
- offline-reinforcement-learning
---
An imitation learning dataset for the peg-insert-side-v2 environment, sampled from the peg-insert-side-v2 policy.
This dataset was created as part of the Generally Intelligent Agents (GIA) project: https://github.com/huggingface/gia
## Load dataset
First, clone it with
```sh
git clone https://huggingface.co/datasets/qgallouedec/prj_gia_dataset_metaworld_peg_insert_side_v2_1111
```
Then, load it with
```python
import numpy as np
dataset = np.load("prj_gia_dataset_metaworld_peg_insert_side_v2_1111/dataset.npy", allow_pickle=True).item()
print(dataset.keys()) # dict_keys(['observations', 'actions', 'dones', 'rewards'])
```
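Since the loaded dict holds flat arrays, episodes can be recovered from the `dones` flags. The sketch below uses a tiny made-up stand-in for the real dataset, so shapes and dtypes are assumptions.

```python
import numpy as np

# Hedged sketch: splitting the flat arrays into episodes using `dones`.
# The dict below is a tiny fabricated stand-in for the loaded dataset.
dataset = {
    "observations": np.arange(6),
    "actions": np.arange(6),
    "rewards": np.ones(6),
    "dones": np.array([0, 0, 1, 0, 0, 1], dtype=bool),
}
ends = np.flatnonzero(dataset["dones"]) + 1  # indices just past each episode end
episodes = np.split(dataset["observations"], ends[:-1])
print([ep.tolist() for ep in episodes])  # two episodes of length 3
```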
|
joey234/mmlu-management-neg-prepend | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: negate_openai_prompt
struct:
- name: content
dtype: string
- name: role
dtype: string
- name: neg_question
dtype: string
- name: fewshot_context
dtype: string
- name: ori_prompt
dtype: string
- name: neg_prompt
dtype: string
- name: fewshot_context_neg
dtype: string
- name: fewshot_context_ori
dtype: string
splits:
- name: dev
num_bytes: 5611
num_examples: 5
- name: test
num_bytes: 570225
num_examples: 103
download_size: 101224
dataset_size: 575836
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
- split: test
path: data/test-*
---
# Dataset Card for "mmlu-management-neg-prepend"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mesolitica/pseudolabel-malaysian-youtube-whisper-large-v3 | ---
language:
- ms
task_categories:
- automatic-speech-recognition
---
# Pseudolabel Malaysian Youtube videos using Whisper Large V3
Original dataset at https://huggingface.co/datasets/malaysia-ai/crawl-youtube, pseudolabelled in a distributed run on 4x A100s.
Script at https://github.com/mesolitica/malaysian-dataset/tree/master/speech-to-text-semisupervised/pseudolabel-whisper
1. Each audio is 30 seconds long.
2. Each audio is saved at a 16 kHz sample rate. |
fatcatmilo/KGText | ---
license: mit
---
|
mazenlhm/FourthBrainDataset | ---
dataset_info:
features:
- name: product
dtype: string
- name: description
dtype: string
- name: marketing_email
dtype: string
splits:
- name: train
num_bytes: 11674
num_examples: 10
download_size: 15262
dataset_size: 11674
---
# Dataset Card for "FourthBrainDataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
yukiarimo/Gen-AI | ---
task_categories:
- summarization
---
# AutoTrain Dataset for project: gen-ai
## Dataset Description
This dataset has been automatically processed by AutoTrain for project gen-ai.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "Type: Essay. Style: Empathetic. Task: Importance of mental health",
"target": "Mental health is a critical aspect of our overall well-being. It is an essential component of a happy and productive life. Unfortunately, many people tend to overlook the importance of mental health, and this has led to a widespread mental health crisis worldwide. It is crucial that we prioritize our mental health and take steps to protect it.\n\nOne of the reasons why mental health is essential is because it affects our physical health. Our mental state influences our physical state, and vice versa. For instance, if we are experiencing chronic stress or anxiety, it can manifest in physical symptoms such as headaches, muscle tension, and high blood pressure. On the other hand, if we prioritize our mental health and practice self-care, it can positively impact our physical health and help prevent chronic illnesses.\n\nMoreover, mental health plays a crucial role in our social and emotional well-being. Our mental state can impact the quality of our relationships with others, our ability to cope with stress, and our overall sense of fulfillment and purpose. It is essential to prioritize our mental health to ensure that we have healthy relationships with ourselves and those around us.\n\nFurthermore, mental health affects our productivity and performance in various areas of life. When we prioritize our mental health, we are better able to focus and concentrate on our work or studies, and we tend to be more creative and innovative. Conversely, when we neglect our mental health, we may experience burnout, lack of motivation, and decreased productivity.\n\nFinally, prioritizing mental health is critical for overall happiness and life satisfaction. When we take steps to care for our mental health, we feel more content and fulfilled in our daily lives. It enables us to pursue our passions and interests, form meaningful relationships, and live a life of purpose.\n\nIn conclusion, mental health is crucial for overall well-being and happiness. 
It impacts our physical health, social and emotional well-being, productivity, and life satisfaction. It is essential that we prioritize our mental health and take steps to protect it. By doing so, we can lead happier, more fulfilling lives and contribute positively to society."
},
{
"text": "Type: Business report. Style: Professional. Task: Current state of the stock market",
"target": "Introduction:\n\nThe stock market is a crucial aspect of any economy, as it is a barometer of the overall health of the business environment. As such, it is essential to understand the current state of the stock market to make informed decisions. This report will provide an analysis of the current state of the stock market, including recent trends and events that have impacted it.\n\nOverview of Recent Performance:\n\nThe stock market has been performing well recently, with major indices like the Dow Jones Industrial Average and the S&P 500 both hitting record highs in the past few months. This can be attributed to a number of factors, including positive economic indicators such as low unemployment rates and strong corporate earnings reports. Additionally, investors have been buoyed by a sense of optimism regarding the potential for a resolution to the ongoing trade war between the United States and China.\n\nImpact of Global Events:\n\nHowever, global events have also had a significant impact on the stock market. One such event is Brexit, which has caused uncertainty and volatility in the market. The United Kingdom's decision to leave the European Union has created a number of unknowns, and the potential for a no-deal Brexit has further exacerbated this uncertainty. Additionally, ongoing political tensions with Iran and North Korea have also created a sense of unease among investors.\n\nTechnological Advancements:\n\nAnother trend that has impacted the stock market is the increasing influence of technology. Companies like Amazon, Apple, and Google have become major players in the stock market, and their continued growth has had a significant impact on the overall performance of the market. 
Additionally, technological advancements like blockchain and artificial intelligence have the potential to disrupt traditional financial systems, which could lead to further changes in the stock market.\n\nConclusion:\n\nOverall, the current state of the stock market is a mixed bag. While recent performance has been strong, global events and technological advancements have created uncertainty and volatility. Investors must stay vigilant and keep a close eye on the market to make informed decisions. As always, diversification and a long-term perspective are crucial to navigating the ups and downs of the stock market."
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 38 |
| valid | 3 |
|
letinnghia/student-feedbacks | ---
license: gpl-2.0
---
|
senhorsapo/p | ---
license: openrail
---
|
Rewcifer/radio-llama2-resp_tag_90pct | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1109388970
num_examples: 222141
download_size: 255573571
dataset_size: 1109388970
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "radio-llama2-resp_tag_90pct"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
iamkaikai/fonts | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 75777720.32
num_examples: 5016
download_size: 4942032
dataset_size: 75777720.32
---
# Dataset Card for "fonts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
johannes-garstenauer/structs_token_size_4_reduced_labelled_train | ---
dataset_info:
features:
- name: struct
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 372362495.3356041
num_examples: 1518855
download_size: 138213330
dataset_size: 372362495.3356041
---
# Dataset Card for "structs_token_size_4_reduced_labelled_train"
Dataset created for thesis: "Generating Robust Representations of
Structures in OpenSSH Heap Dumps" by Johannes Garstenauer.
This dataset contains raw heap data structures along with their labels.
This is the training dataset. Validation set at: https://huggingface.co/datasets/johannes-garstenauer/structs_token_size_4_reduced_labelled_eval
Data structures and labels are extracted from: https://zenodo.org/records/6537904
Thesis and associated scripts: https://zenodo.org/records/10053730 |
AndresR2909/finetuning_dataset_lamini | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int32
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 2150284.5
num_examples: 1260
- name: test
num_bytes: 238920.5
num_examples: 140
download_size: 698665
dataset_size: 2389205.0
---
# Dataset Card for "finetuning_dataset_lamini"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
amuvarma/text-emo-1-0 | ---
dataset_info:
features:
- name: text
dtype: string
- name: emotion_label
dtype: int64
- name: emotion
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 944651
num_examples: 1835
- name: test
num_bytes: 235561
num_examples: 459
download_size: 688963
dataset_size: 1180212
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
irds/wikiclir_no | ---
pretty_name: '`wikiclir/no`'
viewer: false
source_datasets: []
task_categories:
- text-retrieval
---
# Dataset Card for `wikiclir/no`
The `wikiclir/no` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/no).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=471,420
- `queries` (i.e., topics); count=299,897
- `qrels`: (relevance assessments); count=963,514
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/wikiclir_no', 'docs')
for record in docs:
record # {'doc_id': ..., 'title': ..., 'text': ...}
queries = load_dataset('irds/wikiclir_no', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/wikiclir_no', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{sasaki-etal-2018-cross,
title = "Cross-Lingual Learning-to-Rank with Shared Representations",
author = "Sasaki, Shota and
Sun, Shuo and
Schamoni, Shigehiko and
Duh, Kevin and
Inui, Kentaro",
booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)",
month = jun,
year = "2018",
address = "New Orleans, Louisiana",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N18-2073",
doi = "10.18653/v1/N18-2073",
pages = "458--463"
}
```
|
Abrumu/Fashion_controlnet_dataset | ---
dataset_info:
features:
- name: target
dtype: image
- name: prompt
dtype: string
- name: control
dtype: image
- name: CLIP_captions
dtype: string
splits:
- name: train
num_bytes: 9533440093.0
num_examples: 11647
download_size: 9530317166
dataset_size: 9533440093.0
---
# Dataset Card for "Fashion_controlnet_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Snoopy04/mmlu-de-500 | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: question_de
dtype: string
- name: choices_de
sequence: string
- name: answer_de
dtype: string
splits:
- name: train
num_bytes: 345634.7682119205
num_examples: 255
- name: test
num_bytes: 677715.2317880795
num_examples: 500
download_size: 624258
dataset_size: 1023350.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
rokset3/slim_pajama_chunk_4 | ---
dataset_info:
features:
- name: text
dtype: string
- name: meta
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 259290406066
num_examples: 59040311
download_size: 150847645805
dataset_size: 259290406066
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "slim_pajama_chunk_4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Weni/Zeroshot_Test-Gold-1K_nenhuma | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: texto
dtype: string
- name: true_class
dtype: string
- name: BERT
dtype: string
splits:
- name: test
num_bytes: 188891
num_examples: 1000
download_size: 54999
dataset_size: 188891
---
# Dataset Card for "Zeroshot_Test-Gold-1K_nenhuma"
This dataset is a test dataset for the Zeroshot models.
It contains 1000 examples in a prompt format, exclusively for testing with the class 'nenhuma', in Brazilian Portuguese.
Prompt:
```
"Classifique o tweet entre 'classe1', 'classe2', 'classe3', 'classe4' \\n\\nTweet: frase \\n\\nLabel:"
```
## How to load and use this dataset:
```
from datasets import load_dataset
dataset = load_dataset("Weni/Zeroshot_Test-Gold-1K_nenhuma")
dataset
```
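Since the test split already carries a gold column (`true_class`) next to a model column (`BERT`), scoring reduces to a row-wise comparison. The rows below are fabricated stand-ins that only mirror the schema above.

```python
# Hedged sketch: scoring a prediction column against the gold labels.
# The rows are made-up examples following the card's schema
# (texto / true_class / BERT); they are not from the dataset.
rows = [
    {"texto": "...", "true_class": "nenhuma", "BERT": "nenhuma"},
    {"texto": "...", "true_class": "nenhuma", "BERT": "classe1"},
]
accuracy = sum(r["BERT"] == r["true_class"] for r in rows) / len(rows)
print(accuracy)  # 0.5
```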
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
BangumiBase/lovelycomplex | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Lovely Complex
This is the image base of bangumi Lovely Complex, we detected 16 characters, 1747 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned; they may actually contain noise.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 49 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 663 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 29 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 458 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 46 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 23 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 25 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 62 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 17 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 110 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 39 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 40 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 64 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 20 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 43 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 59 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
Lipe3434/dvdaudio | ---
license: openrail
---
|
3B-Group/ConvRe | ---
configs:
- config_name: en-re2text
data_files:
- split: prompt1
path: "data/en-re2text/re2text-prompt1-en.jsonl"
- split: prompt2
path: "data/en-re2text/re2text-prompt2-en.jsonl"
- split: prompt3
path: "data/en-re2text/re2text-prompt3-en.jsonl"
- split: prompt4
path: "data/en-re2text/re2text-prompt4-en.jsonl"
- split: prompt5
path: "data/en-re2text/re2text-prompt5-en.jsonl"
- split: prompt6
path: "data/en-re2text/re2text-prompt6-en.jsonl"
- split: prompt7
path: "data/en-re2text/re2text-prompt7-en.jsonl"
- split: prompt8
path: "data/en-re2text/re2text-prompt8-en.jsonl"
- split: prompt9
path: "data/en-re2text/re2text-prompt9-en.jsonl"
- split: prompt10
path: "data/en-re2text/re2text-prompt10-en.jsonl"
- split: prompt11
path: "data/en-re2text/re2text-prompt11-en.jsonl"
- split: prompt12
path: "data/en-re2text/re2text-prompt12-en.jsonl"
- config_name: en-text2re
data_files:
- split: prompt1
path: "data/en-text2re/text2re-prompt1-en.jsonl"
- split: prompt2
path: "data/en-text2re/text2re-prompt2-en.jsonl"
- split: prompt3
path: "data/en-text2re/text2re-prompt3-en.jsonl"
- split: prompt4
path: "data/en-text2re/text2re-prompt4-en.jsonl"
- split: prompt5
path: "data/en-text2re/text2re-prompt5-en.jsonl"
- split: prompt6
path: "data/en-text2re/text2re-prompt6-en.jsonl"
- split: prompt7
path: "data/en-text2re/text2re-prompt7-en.jsonl"
- split: prompt8
path: "data/en-text2re/text2re-prompt8-en.jsonl"
- split: prompt9
path: "data/en-text2re/text2re-prompt9-en.jsonl"
- split: prompt10
path: "data/en-text2re/text2re-prompt10-en.jsonl"
- split: prompt11
path: "data/en-text2re/text2re-prompt11-en.jsonl"
- split: prompt12
path: "data/en-text2re/text2re-prompt12-en.jsonl"
license: mit
language:
- en
size_categories:
- 1K<n<10K
--- |
vishnupriyavr/wiki-movie-plots-with-summaries-faiss-embeddings | ---
dataset_info:
features:
- name: Release Year
dtype: int64
- name: Title
dtype: string
- name: Cast
dtype: string
- name: Wiki Page
dtype: string
- name: Plot
dtype: string
- name: plot_length
dtype: int64
- name: text
dtype: string
- name: embeddings
sequence: float32
splits:
- name: train
num_bytes: 256974740
num_examples: 33155
download_size: 216835238
dataset_size: 256974740
---
# Dataset Card for "wiki-movie-plots-with-summaries-faiss-embeddings"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o | ---
pretty_name: Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [CHIH-HUNG/llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o](https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-29T06:18:40.870535](https://huggingface.co/datasets/open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o/blob/main/results_2023-10-29T06-18-40.870535.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.3716442953020134,\n\
\ \"em_stderr\": 0.004948871392755147,\n \"f1\": 0.4110413171140945,\n\
\ \"f1_stderr\": 0.004850802412390029,\n \"acc\": 0.4419141525144315,\n\
\ \"acc_stderr\": 0.010419855537406255\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.3716442953020134,\n \"em_stderr\": 0.004948871392755147,\n\
\ \"f1\": 0.4110413171140945,\n \"f1_stderr\": 0.004850802412390029\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.11902956785443518,\n \
\ \"acc_stderr\": 0.008919702911161637\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7647987371744278,\n \"acc_stderr\": 0.011920008163650873\n\
\ }\n}\n```"
repo_url: https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|arc:challenge|25_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_29T06_18_40.870535
path:
- '**/details_harness|drop|3_2023-10-29T06-18-40.870535.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-29T06-18-40.870535.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_29T06_18_40.870535
path:
- '**/details_harness|gsm8k|5_2023-10-29T06-18-40.870535.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-29T06-18-40.870535.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hellaswag|10_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-01T14-24-56.870950.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-01T14-24-56.870950.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-01T14-24-56.870950.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_29T06_18_40.870535
path:
- '**/details_harness|winogrande|5_2023-10-29T06-18-40.870535.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-29T06-18-40.870535.parquet'
- config_name: results
data_files:
- split: 2023_10_01T14_24_56.870950
path:
- results_2023-10-01T14-24-56.870950.parquet
- split: 2023_10_29T06_18_40.870535
path:
- results_2023-10-29T06-18-40.870535.parquet
- split: latest
path:
- results_2023-10-29T06-18-40.870535.parquet
---
# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [CHIH-HUNG/llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o](https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o",
"harness_winogrande_5",
                    split="latest")
```
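Each MMLU subtask gets its own config, and the config names in this card follow a predictable pattern (`harness_hendrycksTest_<subtask>_5`). A small helper can build the name programmatically — note this is a hypothetical convenience function inferred from the naming pattern above, not part of the `datasets` API:

```python
def mmlu_config_name(subtask: str, n_shot: int = 5) -> str:
    """Build a config name for a hendrycksTest subtask.

    The naming pattern is inferred from the configs listed in this card.
    """
    return f"harness_hendrycksTest_{subtask}_{n_shot}"

# Matches the configs listed in the YAML header of this card.
print(mmlu_config_name("abstract_algebra"))  # -> harness_hendrycksTest_abstract_algebra_5
```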
## Latest results
These are the [latest results from run 2023-10-29T06:18:40.870535](https://huggingface.co/datasets/open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o/blob/main/results_2023-10-29T06-18-40.870535.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.3716442953020134,
"em_stderr": 0.004948871392755147,
"f1": 0.4110413171140945,
"f1_stderr": 0.004850802412390029,
"acc": 0.4419141525144315,
"acc_stderr": 0.010419855537406255
},
"harness|drop|3": {
"em": 0.3716442953020134,
"em_stderr": 0.004948871392755147,
"f1": 0.4110413171140945,
"f1_stderr": 0.004850802412390029
},
"harness|gsm8k|5": {
"acc": 0.11902956785443518,
"acc_stderr": 0.008919702911161637
},
"harness|winogrande|5": {
"acc": 0.7647987371744278,
"acc_stderr": 0.011920008163650873
}
}
```
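The `"all"` block aggregates the per-task metrics; for the accuracy-based tasks it appears to be a simple unweighted mean, which can be cross-checked directly against the values above:

```python
# Per-task metrics copied from the latest-results JSON above.
results = {
    "harness|drop|3": {"em": 0.3716442953020134, "f1": 0.4110413171140945},
    "harness|gsm8k|5": {"acc": 0.11902956785443518},
    "harness|winogrande|5": {"acc": 0.7647987371744278},
}

# The aggregated "all" accuracy is the unweighted mean over tasks reporting acc.
accs = [v["acc"] for v in results.values() if "acc" in v]
mean_acc = sum(accs) / len(accs)
print(round(mean_acc, 10))  # -> 0.4419141525, matching the "all" block
```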
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
AnuraSet/anuraset_dev | ---
license: mit
---
|
armvectores/hy_eanc_2023 | ---
dataset_info:
features:
- name: "Աբովյան Խաչատուր՝ \_\_Առաջին սերը"
dtype: string
splits:
- name: train
num_bytes: 61486730
num_examples: 384406
download_size: 28553551
dataset_size: 61486730
size_categories:
- 1M<n<10M
task_categories:
- text-generation
language:
- hy
---
# Dataset Card for "hy_eanc_2023"
5M tokens
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
liuyanchen1015/MULTI_VALUE_stsb_adj_postfix | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: score
dtype: float64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 134065
num_examples: 763
- name: test
num_bytes: 85742
num_examples: 543
- name: train
num_bytes: 408037
num_examples: 2464
download_size: 390820
dataset_size: 627844
---
# Dataset Card for "MULTI_VALUE_stsb_adj_postfix"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tyzhu/squad_qa_no_id_v5_full_first_permute | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: answer
dtype: string
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 7671973.074366877
num_examples: 4778
- name: validation
num_bytes: 342766
num_examples: 300
download_size: 1317836
dataset_size: 8014739.074366877
---
# Dataset Card for "squad_qa_no_id_v5_full_first_permute"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
bishalbaaniya/my_en | ---
license: artistic-2.0
---
|
Lord-Goku/testing_1 | ---
license: afl-3.0
---
---
TODO: Add YAML tags here. Copy-paste the tags obtained with the online tagging app: https://huggingface.co/spaces/huggingface/datasets-tagging
---
# Dataset Card for Testing Stock Data
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This is a test dataset.
### Supported Tasks and Leaderboards
BERT
MARKET
STOCK
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. |
CVasNLPExperiments/VQAv2_test_no_image_google_flan_t5_xxl_mode_T_A_D_PNP_FILTER_C_Q_rices_ns_1000 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: question
dtype: string
- name: true_label
sequence: string
- name: prediction
dtype: string
splits:
- name: fewshot_0_clip_tags_LAION_ViT_H_14_2B_with_openai_Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_DETA_detections_deta_swin_large_o365_coco_classes_caption_module_random_
num_bytes: 94167
num_examples: 1000
download_size: 29466
dataset_size: 94167
---
# Dataset Card for "VQAv2_test_no_image_google_flan_t5_xxl_mode_T_A_D_PNP_FILTER_C_Q_rices_ns_1000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/poli_nikke | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of poli/ポリ/波莉/폴리 (Nikke: Goddess of Victory)
This is the dataset of poli/ポリ/波莉/폴리 (Nikke: Goddess of Victory), containing 38 images and their tags.
The core tags of this character are `long_hair, blue_eyes, white_hair, hat, bangs, very_long_hair, fang, grey_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 38 | 55.81 MiB | [Download](https://huggingface.co/datasets/CyberHarem/poli_nikke/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 38 | 29.30 MiB | [Download](https://huggingface.co/datasets/CyberHarem/poli_nikke/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 89 | 63.94 MiB | [Download](https://huggingface.co/datasets/CyberHarem/poli_nikke/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 38 | 47.16 MiB | [Download](https://huggingface.co/datasets/CyberHarem/poli_nikke/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 89 | 95.41 MiB | [Download](https://huggingface.co/datasets/CyberHarem/poli_nikke/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/poli_nikke',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
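For the IMG+TXT packages, each image is expected to ship with a same-named `.txt` file holding its comma-separated tags; the sketch below pairs them after extraction (the exact file layout is an assumption based on the package type listed above, so adjust the extensions if needed):

```python
from pathlib import Path

def pair_images_with_tags(dataset_dir):
    """Yield (image_path, tag_list) pairs from an extracted IMG+TXT package.

    Assumes each image has a sibling .txt file with the same stem,
    containing comma-separated tags.
    """
    image_exts = {'.png', '.jpg', '.jpeg', '.webp'}
    for image_path in sorted(Path(dataset_dir).iterdir()):
        if image_path.suffix.lower() not in image_exts:
            continue
        tag_file = image_path.with_suffix('.txt')
        if not tag_file.exists():
            continue  # skip images that have no tag file
        tags = [t.strip() for t in tag_file.read_text(encoding='utf-8').split(',') if t.strip()]
        yield image_path, tags
```

Pairs produced this way can feed a captioned-image training pipeline directly.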
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 13 |  |  |  |  |  | 1girl, solo, black_gloves, holding_gun, smile, open_mouth, socks, looking_at_viewer, police_uniform, white_background, assault_rifle, bandaid_on_knee, full_body, shoes, simple_background |
| 1 | 5 |  |  |  |  |  | 1girl, black_gloves, policewoman, looking_at_viewer, short_sleeves, smile, solo, sunglasses, holding, pencil_skirt, police_hat, black_headwear, blue_headwear, closed_mouth, cloud, hair_between_eyes, hand_on_hip, open_mouth, outdoors, walkie-talkie |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | black_gloves | holding_gun | smile | open_mouth | socks | looking_at_viewer | police_uniform | white_background | assault_rifle | bandaid_on_knee | full_body | shoes | simple_background | policewoman | short_sleeves | sunglasses | holding | pencil_skirt | police_hat | black_headwear | blue_headwear | closed_mouth | cloud | hair_between_eyes | hand_on_hip | outdoors | walkie-talkie |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:---------------|:--------------|:--------|:-------------|:--------|:--------------------|:-----------------|:-------------------|:----------------|:------------------|:------------|:--------|:--------------------|:--------------|:----------------|:-------------|:----------|:---------------|:-------------|:-----------------|:----------------|:---------------|:--------|:--------------------|:--------------|:-----------|:----------------|
| 0 | 13 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | X | | X | X | | X | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
Poupou/Regen-Rangers-Hackathon | ---
license: mit
---
|
Hikari0608/UIEB | ---
dataset_info:
features:
- name: raw
dtype: image
- name: gt
dtype: image
splits:
- name: train
num_bytes: 1351356308.0
num_examples: 800
- name: val
num_bytes: 136425185.0
num_examples: 90
download_size: 1487875235
dataset_size: 1487781493.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
---
# Dataset Card for "UIEB"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
He-Xingwei/ConIR | ---
license: mit
---
# AnnoLLM
This repo hosts the data for our NAACL 2024 paper "[**AnnoLLM: Making Large Language Models to Be Better Crowdsourced Annotators**](https://arxiv.org/abs/2303.16854)".
## ConIR Dataset
The conversation-based information retrieval (ConIR) dataset was created with ChatGPT, based on the MS-MARCO passage ranking dataset.
The *ConIR* dataset is available at https://huggingface.co/datasets/He-Xingwei/ConIR. The training and test sets contain 71,557 and 3,000 examples, respectively.
When using it, please adhere to the MIT license.
Below is one example of the data:
```json
{
"query": ["what is tonic water?"],
"pos_p": [4491229],
"conversation": "P1: What is tonic water?\nP2: Tonic water is a carbonated soft drink that contains quinine, which gives it its distinctive bitter taste.\nP1: What is quinine?\nP2: Quinine is a bitter alkaloid that is extracted from the bark of the cinchona tree.\nP1: Why is tonic water bitter?\nP2: The quinine in tonic water gives it its distinctive bitter taste.\nP1: What is tonic water used for?\nP2: Tonic water was originally developed as a way to prevent malaria, but is now primarily consumed as a mixer for cocktails.\nP1: Can you give me an example of a cocktail that uses tonic water?\nP2: Gin and tonic is a popular cocktail that uses tonic water.\nP1: Is tonic water only used in cocktails?\nP2: No, tonic water can also be consumed on its own, although it is often considered too bitter to drink without the addition of sweeteners.\nP1: Is tonic water still used to prevent malaria?\nP2: While quinine has been shown to be effective against malaria, tonic water is now primarily consumed as a mixer for cocktails and is not commonly used for its medicinal properties.\nP1: Is tonic water only found in Australia?\nP2: No, tonic water is available in many countries around the world. It is not specific to Australia.\nP1: Can you tell me more about the Great Barrier Reef?\nP2: The Great Barrier Reef is the world's largest coral reef system, located in the Coral Sea off the coast of Australia. It is a UNESCO World Heritage Site and is considered one of the seven natural wonders of the world. It is also an important economic resource for Australia.\nP1: What kind of marine life can be found on the Great Barrier Reef?\nP2: The Great Barrier Reef is home to over 1,500 species of fish, 400 species of coral, and 4,000 species of mollusks.",
"neg_p": [7120024, 7165296, 3673008, 2964233, 8086308, 1513810, 2364054, 3436519, 8094542, 7966688, 8537198, 4642397, 3510228, 4765344, 2214087]
}
```
| **Attribute** | **Type** | **Description** |
| ------------- | -------- | ------------------------------------------------------------ |
| query | List[String] | Each query is extracted from MS-MARCO. |
| pos_p | List[Integer] | Ids of the positive passages for the queries/conversation. |
| conversation | String | The conversation is generated by ChatGPT based on positive passages. |
| neg_p | List[Integer] | Ids of the negative passages for the queries/conversation. |
Passages are available in the collection.tsv (https://msmarco.z22.web.core.windows.net/msmarcoranking/collection.tar.gz) file.
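Since `pos_p` and `neg_p` store passage ids only, a ConIR example becomes self-contained once those ids are resolved against collection.tsv; below is a minimal sketch (assuming the standard MS-MARCO `pid<TAB>passage` layout of that file):

```python
def load_collection(path):
    """Load an MS-MARCO-style collection.tsv (pid<TAB>passage) into a {pid: passage} dict."""
    passages = {}
    with open(path, encoding='utf-8') as f:
        for line in f:
            pid, passage = line.rstrip('\n').split('\t', 1)
            passages[int(pid)] = passage
    return passages

def resolve_example(example, passages):
    """Attach passage text to a ConIR example's pos_p / neg_p id lists."""
    return {
        'query': example['query'],
        'conversation': example['conversation'],
        'pos_passages': [passages[pid] for pid in example['pos_p'] if pid in passages],
        'neg_passages': [passages[pid] for pid in example['neg_p'] if pid in passages],
    }
```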
|
open-llm-leaderboard/details_ArianAskari__SOLID-SFT-DPO-MixQV3-SOLIDChosen-SFTRejected-Zephyr-7b-beta | ---
pretty_name: Evaluation run of ArianAskari/SOLID-SFT-DPO-MixQV3-SOLIDChosen-SFTRejected-Zephyr-7b-beta
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [ArianAskari/SOLID-SFT-DPO-MixQV3-SOLIDChosen-SFTRejected-Zephyr-7b-beta](https://huggingface.co/ArianAskari/SOLID-SFT-DPO-MixQV3-SOLIDChosen-SFTRejected-Zephyr-7b-beta)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_ArianAskari__SOLID-SFT-DPO-MixQV3-SOLIDChosen-SFTRejected-Zephyr-7b-beta\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-02-13T07:37:59.900682](https://huggingface.co/datasets/open-llm-leaderboard/details_ArianAskari__SOLID-SFT-DPO-MixQV3-SOLIDChosen-SFTRejected-Zephyr-7b-beta/blob/main/results_2024-02-13T07-37-59.900682.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5887708742073996,\n\
\ \"acc_stderr\": 0.033320056261042716,\n \"acc_norm\": 0.5998480729795278,\n\
\ \"acc_norm_stderr\": 0.03416670673744603,\n \"mc1\": 0.3990208078335373,\n\
\ \"mc1_stderr\": 0.017142825728496767,\n \"mc2\": 0.5757997033411437,\n\
\ \"mc2_stderr\": 0.015647081417738417\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.5409556313993175,\n \"acc_stderr\": 0.01456229107360123,\n\
\ \"acc_norm\": 0.5955631399317406,\n \"acc_norm_stderr\": 0.014342036483436177\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6405098585939056,\n\
\ \"acc_stderr\": 0.004788703173474751,\n \"acc_norm\": 0.8253335988846843,\n\
\ \"acc_norm_stderr\": 0.003789055487003183\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.29,\n \"acc_stderr\": 0.045604802157206845,\n \
\ \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.045604802157206845\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5481481481481482,\n\
\ \"acc_stderr\": 0.042992689054808644,\n \"acc_norm\": 0.5481481481481482,\n\
\ \"acc_norm_stderr\": 0.042992689054808644\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.5986842105263158,\n \"acc_stderr\": 0.039889037033362836,\n\
\ \"acc_norm\": 0.5986842105263158,\n \"acc_norm_stderr\": 0.039889037033362836\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.58,\n\
\ \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.58,\n \
\ \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.660377358490566,\n \"acc_stderr\": 0.02914690474779833,\n\
\ \"acc_norm\": 0.660377358490566,\n \"acc_norm_stderr\": 0.02914690474779833\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6944444444444444,\n\
\ \"acc_stderr\": 0.03852084696008534,\n \"acc_norm\": 0.6944444444444444,\n\
\ \"acc_norm_stderr\": 0.03852084696008534\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.43,\n \"acc_stderr\": 0.04975698519562428,\n \
\ \"acc_norm\": 0.43,\n \"acc_norm_stderr\": 0.04975698519562428\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.5,\n \"acc_stderr\": 0.050251890762960605,\n \"acc_norm\": 0.5,\n\
\ \"acc_norm_stderr\": 0.050251890762960605\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \
\ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.630057803468208,\n\
\ \"acc_stderr\": 0.0368122963339432,\n \"acc_norm\": 0.630057803468208,\n\
\ \"acc_norm_stderr\": 0.0368122963339432\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.4411764705882353,\n \"acc_stderr\": 0.049406356306056595,\n\
\ \"acc_norm\": 0.4411764705882353,\n \"acc_norm_stderr\": 0.049406356306056595\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.73,\n \"acc_stderr\": 0.0446196043338474,\n \"acc_norm\": 0.73,\n\
\ \"acc_norm_stderr\": 0.0446196043338474\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5276595744680851,\n \"acc_stderr\": 0.03263597118409769,\n\
\ \"acc_norm\": 0.5276595744680851,\n \"acc_norm_stderr\": 0.03263597118409769\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.42105263157894735,\n\
\ \"acc_stderr\": 0.046446020912223177,\n \"acc_norm\": 0.42105263157894735,\n\
\ \"acc_norm_stderr\": 0.046446020912223177\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.496551724137931,\n \"acc_stderr\": 0.041665675771015785,\n\
\ \"acc_norm\": 0.496551724137931,\n \"acc_norm_stderr\": 0.041665675771015785\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.3915343915343915,\n \"acc_stderr\": 0.025138091388851105,\n \"\
acc_norm\": 0.3915343915343915,\n \"acc_norm_stderr\": 0.025138091388851105\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.373015873015873,\n\
\ \"acc_stderr\": 0.04325506042017086,\n \"acc_norm\": 0.373015873015873,\n\
\ \"acc_norm_stderr\": 0.04325506042017086\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.44,\n \"acc_stderr\": 0.04988876515698589,\n \
\ \"acc_norm\": 0.44,\n \"acc_norm_stderr\": 0.04988876515698589\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7290322580645161,\n\
\ \"acc_stderr\": 0.025284416114900156,\n \"acc_norm\": 0.7290322580645161,\n\
\ \"acc_norm_stderr\": 0.025284416114900156\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.458128078817734,\n \"acc_stderr\": 0.03505630140785742,\n\
\ \"acc_norm\": 0.458128078817734,\n \"acc_norm_stderr\": 0.03505630140785742\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.68,\n \"acc_stderr\": 0.04688261722621505,\n \"acc_norm\"\
: 0.68,\n \"acc_norm_stderr\": 0.04688261722621505\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7151515151515152,\n \"acc_stderr\": 0.03524390844511781,\n\
\ \"acc_norm\": 0.7151515151515152,\n \"acc_norm_stderr\": 0.03524390844511781\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7323232323232324,\n \"acc_stderr\": 0.03154449888270286,\n \"\
acc_norm\": 0.7323232323232324,\n \"acc_norm_stderr\": 0.03154449888270286\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8238341968911918,\n \"acc_stderr\": 0.02749350424454805,\n\
\ \"acc_norm\": 0.8238341968911918,\n \"acc_norm_stderr\": 0.02749350424454805\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.5794871794871795,\n \"acc_stderr\": 0.025028610276710862,\n\
\ \"acc_norm\": 0.5794871794871795,\n \"acc_norm_stderr\": 0.025028610276710862\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.3296296296296296,\n \"acc_stderr\": 0.02866120111652458,\n \
\ \"acc_norm\": 0.3296296296296296,\n \"acc_norm_stderr\": 0.02866120111652458\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6302521008403361,\n \"acc_stderr\": 0.03135709599613591,\n \
\ \"acc_norm\": 0.6302521008403361,\n \"acc_norm_stderr\": 0.03135709599613591\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.2980132450331126,\n \"acc_stderr\": 0.037345356767871984,\n \"\
acc_norm\": 0.2980132450331126,\n \"acc_norm_stderr\": 0.037345356767871984\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.7963302752293578,\n \"acc_stderr\": 0.017266742087630804,\n \"\
acc_norm\": 0.7963302752293578,\n \"acc_norm_stderr\": 0.017266742087630804\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5046296296296297,\n \"acc_stderr\": 0.03409825519163572,\n \"\
acc_norm\": 0.5046296296296297,\n \"acc_norm_stderr\": 0.03409825519163572\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.7450980392156863,\n \"acc_stderr\": 0.03058759135160425,\n \"\
acc_norm\": 0.7450980392156863,\n \"acc_norm_stderr\": 0.03058759135160425\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7215189873417721,\n \"acc_stderr\": 0.029178682304842534,\n \
\ \"acc_norm\": 0.7215189873417721,\n \"acc_norm_stderr\": 0.029178682304842534\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6278026905829597,\n\
\ \"acc_stderr\": 0.03244305283008732,\n \"acc_norm\": 0.6278026905829597,\n\
\ \"acc_norm_stderr\": 0.03244305283008732\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.6793893129770993,\n \"acc_stderr\": 0.04093329229834278,\n\
\ \"acc_norm\": 0.6793893129770993,\n \"acc_norm_stderr\": 0.04093329229834278\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7024793388429752,\n \"acc_stderr\": 0.04173349148083499,\n \"\
acc_norm\": 0.7024793388429752,\n \"acc_norm_stderr\": 0.04173349148083499\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7407407407407407,\n\
\ \"acc_stderr\": 0.04236511258094632,\n \"acc_norm\": 0.7407407407407407,\n\
\ \"acc_norm_stderr\": 0.04236511258094632\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.6932515337423313,\n \"acc_stderr\": 0.03623089915724146,\n\
\ \"acc_norm\": 0.6932515337423313,\n \"acc_norm_stderr\": 0.03623089915724146\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4732142857142857,\n\
\ \"acc_stderr\": 0.047389751192741546,\n \"acc_norm\": 0.4732142857142857,\n\
\ \"acc_norm_stderr\": 0.047389751192741546\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7087378640776699,\n \"acc_stderr\": 0.044986763205729245,\n\
\ \"acc_norm\": 0.7087378640776699,\n \"acc_norm_stderr\": 0.044986763205729245\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8675213675213675,\n\
\ \"acc_stderr\": 0.022209309073165616,\n \"acc_norm\": 0.8675213675213675,\n\
\ \"acc_norm_stderr\": 0.022209309073165616\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.67,\n \"acc_stderr\": 0.04725815626252607,\n \
\ \"acc_norm\": 0.67,\n \"acc_norm_stderr\": 0.04725815626252607\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7879948914431673,\n\
\ \"acc_stderr\": 0.014616099385833676,\n \"acc_norm\": 0.7879948914431673,\n\
\ \"acc_norm_stderr\": 0.014616099385833676\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.6734104046242775,\n \"acc_stderr\": 0.025248264774242826,\n\
\ \"acc_norm\": 0.6734104046242775,\n \"acc_norm_stderr\": 0.025248264774242826\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2905027932960894,\n\
\ \"acc_stderr\": 0.015183844307206144,\n \"acc_norm\": 0.2905027932960894,\n\
\ \"acc_norm_stderr\": 0.015183844307206144\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.6568627450980392,\n \"acc_stderr\": 0.02718449890994161,\n\
\ \"acc_norm\": 0.6568627450980392,\n \"acc_norm_stderr\": 0.02718449890994161\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6527331189710611,\n\
\ \"acc_stderr\": 0.027040745502307336,\n \"acc_norm\": 0.6527331189710611,\n\
\ \"acc_norm_stderr\": 0.027040745502307336\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.6481481481481481,\n \"acc_stderr\": 0.026571483480719978,\n\
\ \"acc_norm\": 0.6481481481481481,\n \"acc_norm_stderr\": 0.026571483480719978\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4397163120567376,\n \"acc_stderr\": 0.02960991207559411,\n \
\ \"acc_norm\": 0.4397163120567376,\n \"acc_norm_stderr\": 0.02960991207559411\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4282920469361147,\n\
\ \"acc_stderr\": 0.012638223880313167,\n \"acc_norm\": 0.4282920469361147,\n\
\ \"acc_norm_stderr\": 0.012638223880313167\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6433823529411765,\n \"acc_stderr\": 0.02909720956841195,\n\
\ \"acc_norm\": 0.6433823529411765,\n \"acc_norm_stderr\": 0.02909720956841195\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.5996732026143791,\n \"acc_stderr\": 0.01982184368827176,\n \
\ \"acc_norm\": 0.5996732026143791,\n \"acc_norm_stderr\": 0.01982184368827176\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6363636363636364,\n\
\ \"acc_stderr\": 0.04607582090719976,\n \"acc_norm\": 0.6363636363636364,\n\
\ \"acc_norm_stderr\": 0.04607582090719976\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.6530612244897959,\n \"acc_stderr\": 0.030472526026726492,\n\
\ \"acc_norm\": 0.6530612244897959,\n \"acc_norm_stderr\": 0.030472526026726492\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7661691542288557,\n\
\ \"acc_stderr\": 0.029929415408348377,\n \"acc_norm\": 0.7661691542288557,\n\
\ \"acc_norm_stderr\": 0.029929415408348377\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.77,\n \"acc_stderr\": 0.04229525846816506,\n \
\ \"acc_norm\": 0.77,\n \"acc_norm_stderr\": 0.04229525846816506\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5,\n \
\ \"acc_stderr\": 0.03892494720807614,\n \"acc_norm\": 0.5,\n \"\
acc_norm_stderr\": 0.03892494720807614\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8245614035087719,\n \"acc_stderr\": 0.029170885500727665,\n\
\ \"acc_norm\": 0.8245614035087719,\n \"acc_norm_stderr\": 0.029170885500727665\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3990208078335373,\n\
\ \"mc1_stderr\": 0.017142825728496767,\n \"mc2\": 0.5757997033411437,\n\
\ \"mc2_stderr\": 0.015647081417738417\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7490134175217048,\n \"acc_stderr\": 0.012185776220516156\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.014404852160727824,\n \
\ \"acc_stderr\": 0.0032820559171369023\n }\n}\n```"
repo_url: https://huggingface.co/ArianAskari/SOLID-SFT-DPO-MixQV3-SOLIDChosen-SFTRejected-Zephyr-7b-beta
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|arc:challenge|25_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|gsm8k|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hellaswag|10_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-13T07-37-59.900682.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-13T07-37-59.900682.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- '**/details_harness|winogrande|5_2024-02-13T07-37-59.900682.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-02-13T07-37-59.900682.parquet'
- config_name: results
data_files:
- split: 2024_02_13T07_37_59.900682
path:
- results_2024-02-13T07-37-59.900682.parquet
- split: latest
path:
- results_2024-02-13T07-37-59.900682.parquet
---
# Dataset Card for Evaluation run of yam-peleg/Experiment10-7B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [yam-peleg/Experiment10-7B](https://huggingface.co/yam-peleg/Experiment10-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_yam-peleg__Experiment10-7B",
"harness_winogrande_5",
split="train")
```
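To get just the aggregated metrics without pulling the per-task files, the "results" configuration can be loaded the same way; a minimal sketch (the helper name is ours, and the call assumes network access to the Hugging Face Hub):

```python
def load_aggregated_results():
    """Load the 'latest' split of the aggregated 'results' configuration."""
    # Imported inside the function so the helper can be defined
    # even where the `datasets` dependency is not yet installed.
    from datasets import load_dataset

    return load_dataset(
        "open-llm-leaderboard/details_yam-peleg__Experiment10-7B",
        "results",
        split="latest",
    )
```

The returned split contains one row per aggregation timestamp; the metric columns mirror the JSON shown under "Latest results" below.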
## Latest results
These are the [latest results from run 2024-02-13T07:37:59.900682](https://huggingface.co/datasets/open-llm-leaderboard/details_yam-peleg__Experiment10-7B/blob/main/results_2024-02-13T07-37-59.900682.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each of them in the results and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.5887708742073996,
"acc_stderr": 0.033320056261042716,
"acc_norm": 0.5998480729795278,
"acc_norm_stderr": 0.03416670673744603,
"mc1": 0.3990208078335373,
"mc1_stderr": 0.017142825728496767,
"mc2": 0.5757997033411437,
"mc2_stderr": 0.015647081417738417
},
"harness|arc:challenge|25": {
"acc": 0.5409556313993175,
"acc_stderr": 0.01456229107360123,
"acc_norm": 0.5955631399317406,
"acc_norm_stderr": 0.014342036483436177
},
"harness|hellaswag|10": {
"acc": 0.6405098585939056,
"acc_stderr": 0.004788703173474751,
"acc_norm": 0.8253335988846843,
"acc_norm_stderr": 0.003789055487003183
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.29,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.29,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5481481481481482,
"acc_stderr": 0.042992689054808644,
"acc_norm": 0.5481481481481482,
"acc_norm_stderr": 0.042992689054808644
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.5986842105263158,
"acc_stderr": 0.039889037033362836,
"acc_norm": 0.5986842105263158,
"acc_norm_stderr": 0.039889037033362836
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.58,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.58,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.660377358490566,
"acc_stderr": 0.02914690474779833,
"acc_norm": 0.660377358490566,
"acc_norm_stderr": 0.02914690474779833
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.6944444444444444,
"acc_stderr": 0.03852084696008534,
"acc_norm": 0.6944444444444444,
"acc_norm_stderr": 0.03852084696008534
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.43,
"acc_stderr": 0.04975698519562428,
"acc_norm": 0.43,
"acc_norm_stderr": 0.04975698519562428
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.5,
"acc_stderr": 0.050251890762960605,
"acc_norm": 0.5,
"acc_norm_stderr": 0.050251890762960605
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.630057803468208,
"acc_stderr": 0.0368122963339432,
"acc_norm": 0.630057803468208,
"acc_norm_stderr": 0.0368122963339432
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4411764705882353,
"acc_stderr": 0.049406356306056595,
"acc_norm": 0.4411764705882353,
"acc_norm_stderr": 0.049406356306056595
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.73,
"acc_stderr": 0.0446196043338474,
"acc_norm": 0.73,
"acc_norm_stderr": 0.0446196043338474
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5276595744680851,
"acc_stderr": 0.03263597118409769,
"acc_norm": 0.5276595744680851,
"acc_norm_stderr": 0.03263597118409769
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.42105263157894735,
"acc_stderr": 0.046446020912223177,
"acc_norm": 0.42105263157894735,
"acc_norm_stderr": 0.046446020912223177
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.496551724137931,
"acc_stderr": 0.041665675771015785,
"acc_norm": 0.496551724137931,
"acc_norm_stderr": 0.041665675771015785
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3915343915343915,
"acc_stderr": 0.025138091388851105,
"acc_norm": 0.3915343915343915,
"acc_norm_stderr": 0.025138091388851105
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.373015873015873,
"acc_stderr": 0.04325506042017086,
"acc_norm": 0.373015873015873,
"acc_norm_stderr": 0.04325506042017086
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.44,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.44,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7290322580645161,
"acc_stderr": 0.025284416114900156,
"acc_norm": 0.7290322580645161,
"acc_norm_stderr": 0.025284416114900156
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.458128078817734,
"acc_stderr": 0.03505630140785742,
"acc_norm": 0.458128078817734,
"acc_norm_stderr": 0.03505630140785742
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.68,
"acc_stderr": 0.04688261722621505,
"acc_norm": 0.68,
"acc_norm_stderr": 0.04688261722621505
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7151515151515152,
"acc_stderr": 0.03524390844511781,
"acc_norm": 0.7151515151515152,
"acc_norm_stderr": 0.03524390844511781
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7323232323232324,
"acc_stderr": 0.03154449888270286,
"acc_norm": 0.7323232323232324,
"acc_norm_stderr": 0.03154449888270286
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8238341968911918,
"acc_stderr": 0.02749350424454805,
"acc_norm": 0.8238341968911918,
"acc_norm_stderr": 0.02749350424454805
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.5794871794871795,
"acc_stderr": 0.025028610276710862,
"acc_norm": 0.5794871794871795,
"acc_norm_stderr": 0.025028610276710862
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3296296296296296,
"acc_stderr": 0.02866120111652458,
"acc_norm": 0.3296296296296296,
"acc_norm_stderr": 0.02866120111652458
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6302521008403361,
"acc_stderr": 0.03135709599613591,
"acc_norm": 0.6302521008403361,
"acc_norm_stderr": 0.03135709599613591
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.2980132450331126,
"acc_stderr": 0.037345356767871984,
"acc_norm": 0.2980132450331126,
"acc_norm_stderr": 0.037345356767871984
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7963302752293578,
"acc_stderr": 0.017266742087630804,
"acc_norm": 0.7963302752293578,
"acc_norm_stderr": 0.017266742087630804
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5046296296296297,
"acc_stderr": 0.03409825519163572,
"acc_norm": 0.5046296296296297,
"acc_norm_stderr": 0.03409825519163572
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7450980392156863,
"acc_stderr": 0.03058759135160425,
"acc_norm": 0.7450980392156863,
"acc_norm_stderr": 0.03058759135160425
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7215189873417721,
"acc_stderr": 0.029178682304842534,
"acc_norm": 0.7215189873417721,
"acc_norm_stderr": 0.029178682304842534
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6278026905829597,
"acc_stderr": 0.03244305283008732,
"acc_norm": 0.6278026905829597,
"acc_norm_stderr": 0.03244305283008732
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.6793893129770993,
"acc_stderr": 0.04093329229834278,
"acc_norm": 0.6793893129770993,
"acc_norm_stderr": 0.04093329229834278
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7024793388429752,
"acc_stderr": 0.04173349148083499,
"acc_norm": 0.7024793388429752,
"acc_norm_stderr": 0.04173349148083499
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7407407407407407,
"acc_stderr": 0.04236511258094632,
"acc_norm": 0.7407407407407407,
"acc_norm_stderr": 0.04236511258094632
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.6932515337423313,
"acc_stderr": 0.03623089915724146,
"acc_norm": 0.6932515337423313,
"acc_norm_stderr": 0.03623089915724146
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4732142857142857,
"acc_stderr": 0.047389751192741546,
"acc_norm": 0.4732142857142857,
"acc_norm_stderr": 0.047389751192741546
},
"harness|hendrycksTest-management|5": {
"acc": 0.7087378640776699,
"acc_stderr": 0.044986763205729245,
"acc_norm": 0.7087378640776699,
"acc_norm_stderr": 0.044986763205729245
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8675213675213675,
"acc_stderr": 0.022209309073165616,
"acc_norm": 0.8675213675213675,
"acc_norm_stderr": 0.022209309073165616
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.67,
"acc_stderr": 0.04725815626252607,
"acc_norm": 0.67,
"acc_norm_stderr": 0.04725815626252607
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7879948914431673,
"acc_stderr": 0.014616099385833676,
"acc_norm": 0.7879948914431673,
"acc_norm_stderr": 0.014616099385833676
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6734104046242775,
"acc_stderr": 0.025248264774242826,
"acc_norm": 0.6734104046242775,
"acc_norm_stderr": 0.025248264774242826
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.2905027932960894,
"acc_stderr": 0.015183844307206144,
"acc_norm": 0.2905027932960894,
"acc_norm_stderr": 0.015183844307206144
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6568627450980392,
"acc_stderr": 0.02718449890994161,
"acc_norm": 0.6568627450980392,
"acc_norm_stderr": 0.02718449890994161
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6527331189710611,
"acc_stderr": 0.027040745502307336,
"acc_norm": 0.6527331189710611,
"acc_norm_stderr": 0.027040745502307336
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6481481481481481,
"acc_stderr": 0.026571483480719978,
"acc_norm": 0.6481481481481481,
"acc_norm_stderr": 0.026571483480719978
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4397163120567376,
"acc_stderr": 0.02960991207559411,
"acc_norm": 0.4397163120567376,
"acc_norm_stderr": 0.02960991207559411
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4282920469361147,
"acc_stderr": 0.012638223880313167,
"acc_norm": 0.4282920469361147,
"acc_norm_stderr": 0.012638223880313167
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6433823529411765,
"acc_stderr": 0.02909720956841195,
"acc_norm": 0.6433823529411765,
"acc_norm_stderr": 0.02909720956841195
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.5996732026143791,
"acc_stderr": 0.01982184368827176,
"acc_norm": 0.5996732026143791,
"acc_norm_stderr": 0.01982184368827176
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6363636363636364,
"acc_stderr": 0.04607582090719976,
"acc_norm": 0.6363636363636364,
"acc_norm_stderr": 0.04607582090719976
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.6530612244897959,
"acc_stderr": 0.030472526026726492,
"acc_norm": 0.6530612244897959,
"acc_norm_stderr": 0.030472526026726492
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.7661691542288557,
"acc_stderr": 0.029929415408348377,
"acc_norm": 0.7661691542288557,
"acc_norm_stderr": 0.029929415408348377
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.77,
"acc_stderr": 0.04229525846816506,
"acc_norm": 0.77,
"acc_norm_stderr": 0.04229525846816506
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5,
"acc_stderr": 0.03892494720807614,
"acc_norm": 0.5,
"acc_norm_stderr": 0.03892494720807614
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8245614035087719,
"acc_stderr": 0.029170885500727665,
"acc_norm": 0.8245614035087719,
"acc_norm_stderr": 0.029170885500727665
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3990208078335373,
"mc1_stderr": 0.017142825728496767,
"mc2": 0.5757997033411437,
"mc2_stderr": 0.015647081417738417
},
"harness|winogrande|5": {
"acc": 0.7490134175217048,
"acc_stderr": 0.012185776220516156
},
"harness|gsm8k|5": {
"acc": 0.014404852160727824,
"acc_stderr": 0.0032820559171369023
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
genta-tech/snli_indo | ---
license: cc-by-4.0
task_categories:
- text-classification
language:
- id
size_categories:
- 100K<n<1M
dataset_info:
features:
- name: premise
dtype: string
- name: hyphothesis
dtype: string
- name: label
dtype: int64
splits:
- name: test
num_bytes: 1373665
num_examples: 10000
- name: train
num_bytes: 71884965
num_examples: 550152
- name: validation
num_bytes: 1378057
num_examples: 10000
download_size: 20413774
dataset_size: 74636687
---
This is an Indonesian translation of the [snli](https://huggingface.co/datasets/snli) dataset,
translated using the [Helsinki-NLP/EN-ID](https://huggingface.co/Helsinki-NLP/opus-mt-en-id) model.
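
As in the original SNLI, the `label` column is an integer. A minimal sketch of decoding it, assuming SNLI's standard 0/1/2 convention (with -1 for unlabeled examples) carries over to this translation; the example row below is illustrative, not taken from the data:

```python
# Label convention assumed to match the original SNLI dataset:
# 0 = entailment, 1 = neutral, 2 = contradiction, -1 = unlabeled.
SNLI_LABELS = {0: "entailment", 1: "neutral", 2: "contradiction"}

def decode_label(label: int) -> str:
    """Map an integer label to its (assumed) SNLI class name."""
    # In SNLI, -1 marks examples where annotators reached no consensus.
    return SNLI_LABELS.get(label, "unlabeled")

# Hypothetical example row following this dataset's schema
# (note the column is spelled "hyphothesis" in the released data):
example = {
    "premise": "Seorang pria sedang bermain gitar.",
    "hyphothesis": "Seorang pria sedang bermain musik.",
    "label": 0,
}
print(decode_label(example["label"]))  # entailment
```

In practice the splits themselves would be fetched with `load_dataset("genta-tech/snli_indo")`.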
kaleemWaheed/twitter_dataset_1713075354 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 10167
num_examples: 21
download_size: 10206
dataset_size: 10167
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dhanushreddy29/dolly-15k-hindi | ---
dataset_info:
features:
- name: instruction_hindi
dtype: string
- name: context_hindi
dtype: string
- name: response_hindi
dtype: string
- name: category
dtype: string
- name: instruction_english
dtype: string
- name: context_english
dtype: string
- name: response_english
dtype: string
splits:
- name: train
num_bytes: 39657519
num_examples: 14640
download_size: 17871301
dataset_size: 39657519
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
phanvancongthanh/pubchem_bioassay | ---
dataset_info:
features:
- name: PUBCHEM_CID
dtype: float64
- name: PUBCHEM_EXT_DATASOURCE_SMILES
dtype: string
splits:
- name: train
num_bytes: 13266669373.336466
num_examples: 210186056
download_size: 6660630004
dataset_size: 13266669373.336466
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "pubchem_bioassay"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_HIT-SCIR__Chinese-Mixtral-8x7B | ---
pretty_name: Evaluation run of HIT-SCIR/Chinese-Mixtral-8x7B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [HIT-SCIR/Chinese-Mixtral-8x7B](https://huggingface.co/HIT-SCIR/Chinese-Mixtral-8x7B)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_HIT-SCIR__Chinese-Mixtral-8x7B\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-02-09T23:17:17.937361](https://huggingface.co/datasets/open-llm-leaderboard/details_HIT-SCIR__Chinese-Mixtral-8x7B/blob/main/results_2024-02-09T23-17-17.937361.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.7057638872269479,\n\
\ \"acc_stderr\": 0.030354776034335715,\n \"acc_norm\": 0.7107881469116898,\n\
\ \"acc_norm_stderr\": 0.030943456958256423,\n \"mc1\": 0.3108935128518972,\n\
\ \"mc1_stderr\": 0.016203316673559696,\n \"mc2\": 0.45859152966658717,\n\
\ \"mc2_stderr\": 0.014076354765836803\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6126279863481229,\n \"acc_stderr\": 0.014235872487909865,\n\
\ \"acc_norm\": 0.6356655290102389,\n \"acc_norm_stderr\": 0.01406326027988242\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6600278828918542,\n\
\ \"acc_stderr\": 0.004727312448892832,\n \"acc_norm\": 0.859788886675961,\n\
\ \"acc_norm_stderr\": 0.0034649633793799434\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.35,\n \"acc_stderr\": 0.0479372485441102,\n \
\ \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n\
\ \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6962962962962963,\n\
\ \"acc_stderr\": 0.039725528847851375,\n \"acc_norm\": 0.6962962962962963,\n\
\ \"acc_norm_stderr\": 0.039725528847851375\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.8157894736842105,\n \"acc_stderr\": 0.0315469804508223,\n\
\ \"acc_norm\": 0.8157894736842105,\n \"acc_norm_stderr\": 0.0315469804508223\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.68,\n\
\ \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.68,\n \
\ \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.7735849056603774,\n \"acc_stderr\": 0.025757559893106734,\n\
\ \"acc_norm\": 0.7735849056603774,\n \"acc_norm_stderr\": 0.025757559893106734\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.8472222222222222,\n\
\ \"acc_stderr\": 0.030085743248565656,\n \"acc_norm\": 0.8472222222222222,\n\
\ \"acc_norm_stderr\": 0.030085743248565656\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.53,\n \"acc_stderr\": 0.050161355804659205,\n \
\ \"acc_norm\": 0.53,\n \"acc_norm_stderr\": 0.050161355804659205\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.57,\n \"acc_stderr\": 0.049756985195624284,\n \"acc_norm\"\
: 0.57,\n \"acc_norm_stderr\": 0.049756985195624284\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.45,\n \"acc_stderr\": 0.049999999999999996,\n \
\ \"acc_norm\": 0.45,\n \"acc_norm_stderr\": 0.049999999999999996\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6936416184971098,\n\
\ \"acc_stderr\": 0.03514942551267439,\n \"acc_norm\": 0.6936416184971098,\n\
\ \"acc_norm_stderr\": 0.03514942551267439\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.4019607843137255,\n \"acc_stderr\": 0.04878608714466996,\n\
\ \"acc_norm\": 0.4019607843137255,\n \"acc_norm_stderr\": 0.04878608714466996\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.82,\n \"acc_stderr\": 0.03861229196653695,\n \"acc_norm\": 0.82,\n\
\ \"acc_norm_stderr\": 0.03861229196653695\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.6638297872340425,\n \"acc_stderr\": 0.030881618520676942,\n\
\ \"acc_norm\": 0.6638297872340425,\n \"acc_norm_stderr\": 0.030881618520676942\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.6140350877192983,\n\
\ \"acc_stderr\": 0.04579639422070435,\n \"acc_norm\": 0.6140350877192983,\n\
\ \"acc_norm_stderr\": 0.04579639422070435\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.6827586206896552,\n \"acc_stderr\": 0.03878352372138622,\n\
\ \"acc_norm\": 0.6827586206896552,\n \"acc_norm_stderr\": 0.03878352372138622\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.47619047619047616,\n \"acc_stderr\": 0.025722097064388525,\n \"\
acc_norm\": 0.47619047619047616,\n \"acc_norm_stderr\": 0.025722097064388525\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5,\n\
\ \"acc_stderr\": 0.04472135954999579,\n \"acc_norm\": 0.5,\n \
\ \"acc_norm_stderr\": 0.04472135954999579\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.47,\n \"acc_stderr\": 0.05016135580465919,\n \
\ \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.05016135580465919\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8419354838709677,\n\
\ \"acc_stderr\": 0.020752831511875278,\n \"acc_norm\": 0.8419354838709677,\n\
\ \"acc_norm_stderr\": 0.020752831511875278\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.6305418719211823,\n \"acc_stderr\": 0.03395970381998574,\n\
\ \"acc_norm\": 0.6305418719211823,\n \"acc_norm_stderr\": 0.03395970381998574\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\"\
: 0.75,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.793939393939394,\n \"acc_stderr\": 0.0315841532404771,\n\
\ \"acc_norm\": 0.793939393939394,\n \"acc_norm_stderr\": 0.0315841532404771\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.8636363636363636,\n \"acc_stderr\": 0.024450155973189835,\n \"\
acc_norm\": 0.8636363636363636,\n \"acc_norm_stderr\": 0.024450155973189835\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.9378238341968912,\n \"acc_stderr\": 0.017426974154240524,\n\
\ \"acc_norm\": 0.9378238341968912,\n \"acc_norm_stderr\": 0.017426974154240524\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6846153846153846,\n \"acc_stderr\": 0.02355964698318994,\n \
\ \"acc_norm\": 0.6846153846153846,\n \"acc_norm_stderr\": 0.02355964698318994\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.37037037037037035,\n \"acc_stderr\": 0.02944316932303154,\n \
\ \"acc_norm\": 0.37037037037037035,\n \"acc_norm_stderr\": 0.02944316932303154\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.7815126050420168,\n \"acc_stderr\": 0.026841514322958948,\n\
\ \"acc_norm\": 0.7815126050420168,\n \"acc_norm_stderr\": 0.026841514322958948\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.5099337748344371,\n \"acc_stderr\": 0.04081677107248436,\n \"\
acc_norm\": 0.5099337748344371,\n \"acc_norm_stderr\": 0.04081677107248436\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8752293577981651,\n \"acc_stderr\": 0.014168298359156345,\n \"\
acc_norm\": 0.8752293577981651,\n \"acc_norm_stderr\": 0.014168298359156345\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.6064814814814815,\n \"acc_stderr\": 0.03331747876370312,\n \"\
acc_norm\": 0.6064814814814815,\n \"acc_norm_stderr\": 0.03331747876370312\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8578431372549019,\n \"acc_stderr\": 0.02450980392156862,\n \"\
acc_norm\": 0.8578431372549019,\n \"acc_norm_stderr\": 0.02450980392156862\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.8734177215189873,\n \"acc_stderr\": 0.021644195727955173,\n \
\ \"acc_norm\": 0.8734177215189873,\n \"acc_norm_stderr\": 0.021644195727955173\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7713004484304933,\n\
\ \"acc_stderr\": 0.028188240046929196,\n \"acc_norm\": 0.7713004484304933,\n\
\ \"acc_norm_stderr\": 0.028188240046929196\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.8244274809160306,\n \"acc_stderr\": 0.03336820338476074,\n\
\ \"acc_norm\": 0.8244274809160306,\n \"acc_norm_stderr\": 0.03336820338476074\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.8677685950413223,\n \"acc_stderr\": 0.03092278832044579,\n \"\
acc_norm\": 0.8677685950413223,\n \"acc_norm_stderr\": 0.03092278832044579\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8240740740740741,\n\
\ \"acc_stderr\": 0.036809181416738807,\n \"acc_norm\": 0.8240740740740741,\n\
\ \"acc_norm_stderr\": 0.036809181416738807\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7607361963190185,\n \"acc_stderr\": 0.0335195387952127,\n\
\ \"acc_norm\": 0.7607361963190185,\n \"acc_norm_stderr\": 0.0335195387952127\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5625,\n\
\ \"acc_stderr\": 0.04708567521880525,\n \"acc_norm\": 0.5625,\n \
\ \"acc_norm_stderr\": 0.04708567521880525\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.8543689320388349,\n \"acc_stderr\": 0.034926064766237906,\n\
\ \"acc_norm\": 0.8543689320388349,\n \"acc_norm_stderr\": 0.034926064766237906\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.9188034188034188,\n\
\ \"acc_stderr\": 0.017893784904018533,\n \"acc_norm\": 0.9188034188034188,\n\
\ \"acc_norm_stderr\": 0.017893784904018533\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.79,\n \"acc_stderr\": 0.040936018074033256,\n \
\ \"acc_norm\": 0.79,\n \"acc_norm_stderr\": 0.040936018074033256\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8748403575989783,\n\
\ \"acc_stderr\": 0.01183295423930572,\n \"acc_norm\": 0.8748403575989783,\n\
\ \"acc_norm_stderr\": 0.01183295423930572\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7947976878612717,\n \"acc_stderr\": 0.021742519835276277,\n\
\ \"acc_norm\": 0.7947976878612717,\n \"acc_norm_stderr\": 0.021742519835276277\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.41564245810055866,\n\
\ \"acc_stderr\": 0.016482782187500673,\n \"acc_norm\": 0.41564245810055866,\n\
\ \"acc_norm_stderr\": 0.016482782187500673\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7745098039215687,\n \"acc_stderr\": 0.0239291555173513,\n\
\ \"acc_norm\": 0.7745098039215687,\n \"acc_norm_stderr\": 0.0239291555173513\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7845659163987139,\n\
\ \"acc_stderr\": 0.023350225475471442,\n \"acc_norm\": 0.7845659163987139,\n\
\ \"acc_norm_stderr\": 0.023350225475471442\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.8487654320987654,\n \"acc_stderr\": 0.019935086092149876,\n\
\ \"acc_norm\": 0.8487654320987654,\n \"acc_norm_stderr\": 0.019935086092149876\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.5319148936170213,\n \"acc_stderr\": 0.02976667507587387,\n \
\ \"acc_norm\": 0.5319148936170213,\n \"acc_norm_stderr\": 0.02976667507587387\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.5397653194263363,\n\
\ \"acc_stderr\": 0.012729785386598547,\n \"acc_norm\": 0.5397653194263363,\n\
\ \"acc_norm_stderr\": 0.012729785386598547\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.7867647058823529,\n \"acc_stderr\": 0.024880971512294264,\n\
\ \"acc_norm\": 0.7867647058823529,\n \"acc_norm_stderr\": 0.024880971512294264\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.7679738562091504,\n \"acc_stderr\": 0.017077373377856926,\n \
\ \"acc_norm\": 0.7679738562091504,\n \"acc_norm_stderr\": 0.017077373377856926\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7363636363636363,\n\
\ \"acc_stderr\": 0.04220224692971987,\n \"acc_norm\": 0.7363636363636363,\n\
\ \"acc_norm_stderr\": 0.04220224692971987\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.8326530612244898,\n \"acc_stderr\": 0.02389714476891452,\n\
\ \"acc_norm\": 0.8326530612244898,\n \"acc_norm_stderr\": 0.02389714476891452\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8756218905472637,\n\
\ \"acc_stderr\": 0.023335401790166327,\n \"acc_norm\": 0.8756218905472637,\n\
\ \"acc_norm_stderr\": 0.023335401790166327\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.9,\n \"acc_stderr\": 0.030151134457776334,\n \
\ \"acc_norm\": 0.9,\n \"acc_norm_stderr\": 0.030151134457776334\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5481927710843374,\n\
\ \"acc_stderr\": 0.03874371556587953,\n \"acc_norm\": 0.5481927710843374,\n\
\ \"acc_norm_stderr\": 0.03874371556587953\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8421052631578947,\n \"acc_stderr\": 0.027966785859160893,\n\
\ \"acc_norm\": 0.8421052631578947,\n \"acc_norm_stderr\": 0.027966785859160893\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3108935128518972,\n\
\ \"mc1_stderr\": 0.016203316673559696,\n \"mc2\": 0.45859152966658717,\n\
\ \"mc2_stderr\": 0.014076354765836803\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8208366219415943,\n \"acc_stderr\": 0.010777949156047986\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.5170583775587566,\n \
\ \"acc_stderr\": 0.013764467123761318\n }\n}\n```"
repo_url: https://huggingface.co/HIT-SCIR/Chinese-Mixtral-8x7B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|arc:challenge|25_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|gsm8k|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hellaswag|10_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-09T23-17-17.937361.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-09T23-17-17.937361.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- '**/details_harness|winogrande|5_2024-02-09T23-17-17.937361.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-02-09T23-17-17.937361.parquet'
- config_name: results
data_files:
- split: 2024_02_09T23_17_17.937361
path:
- results_2024-02-09T23-17-17.937361.parquet
- split: latest
path:
- results_2024-02-09T23-17-17.937361.parquet
---
# Dataset Card for Evaluation run of yam-peleg/Experiment10-7B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [yam-peleg/Experiment10-7B](https://huggingface.co/yam-peleg/Experiment10-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_yam-peleg__Experiment10-7B",
"harness_winogrande_5",
split="train")
```
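Each config name above is derived mechanically from the corresponding harness task identifier (the string between `details_` and the timestamp in the parquet paths, e.g. `harness|hendrycksTest-anatomy|5` becomes `harness_hendrycksTest_anatomy_5`). A minimal sketch of that mapping, inferred from the naming convention visible in this card (this helper is illustrative, not part of the leaderboard tooling):

```python
def config_name(task_id: str) -> str:
    """Map a harness task identifier such as 'harness|hendrycksTest-anatomy|5'
    to the dataset config name used in this card.

    Pipes, hyphens, and colons (as in 'truthfulqa:mc') all become underscores;
    subject names that already contain underscores are left as-is.
    """
    return task_id.replace("|", "_").replace("-", "_").replace(":", "_")


print(config_name("harness|hendrycksTest-anatomy|5"))  # harness_hendrycksTest_anatomy_5
print(config_name("harness|truthfulqa:mc|0"))          # harness_truthfulqa_mc_0
```

Passing the resulting config name as the second argument to `load_dataset` (as in the example above) selects that task's details.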
## Latest results
These are the [latest results from run 2024-02-09T23:17:17.937361](https://huggingface.co/datasets/open-llm-leaderboard/details_yam-peleg__Experiment10-7B/blob/main/results_2024-02-09T23-17-17.937361.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.7057638872269479,
"acc_stderr": 0.030354776034335715,
"acc_norm": 0.7107881469116898,
"acc_norm_stderr": 0.030943456958256423,
"mc1": 0.3108935128518972,
"mc1_stderr": 0.016203316673559696,
"mc2": 0.45859152966658717,
"mc2_stderr": 0.014076354765836803
},
"harness|arc:challenge|25": {
"acc": 0.6126279863481229,
"acc_stderr": 0.014235872487909865,
"acc_norm": 0.6356655290102389,
"acc_norm_stderr": 0.01406326027988242
},
"harness|hellaswag|10": {
"acc": 0.6600278828918542,
"acc_stderr": 0.004727312448892832,
"acc_norm": 0.859788886675961,
"acc_norm_stderr": 0.0034649633793799434
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.35,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.35,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6962962962962963,
"acc_stderr": 0.039725528847851375,
"acc_norm": 0.6962962962962963,
"acc_norm_stderr": 0.039725528847851375
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.8157894736842105,
"acc_stderr": 0.0315469804508223,
"acc_norm": 0.8157894736842105,
"acc_norm_stderr": 0.0315469804508223
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.68,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.68,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7735849056603774,
"acc_stderr": 0.025757559893106734,
"acc_norm": 0.7735849056603774,
"acc_norm_stderr": 0.025757559893106734
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.8472222222222222,
"acc_stderr": 0.030085743248565656,
"acc_norm": 0.8472222222222222,
"acc_norm_stderr": 0.030085743248565656
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.53,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.53,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.57,
"acc_stderr": 0.049756985195624284,
"acc_norm": 0.57,
"acc_norm_stderr": 0.049756985195624284
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.45,
"acc_stderr": 0.049999999999999996,
"acc_norm": 0.45,
"acc_norm_stderr": 0.049999999999999996
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6936416184971098,
"acc_stderr": 0.03514942551267439,
"acc_norm": 0.6936416184971098,
"acc_norm_stderr": 0.03514942551267439
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4019607843137255,
"acc_stderr": 0.04878608714466996,
"acc_norm": 0.4019607843137255,
"acc_norm_stderr": 0.04878608714466996
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.82,
"acc_stderr": 0.03861229196653695,
"acc_norm": 0.82,
"acc_norm_stderr": 0.03861229196653695
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.6638297872340425,
"acc_stderr": 0.030881618520676942,
"acc_norm": 0.6638297872340425,
"acc_norm_stderr": 0.030881618520676942
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.6140350877192983,
"acc_stderr": 0.04579639422070435,
"acc_norm": 0.6140350877192983,
"acc_norm_stderr": 0.04579639422070435
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.6827586206896552,
"acc_stderr": 0.03878352372138622,
"acc_norm": 0.6827586206896552,
"acc_norm_stderr": 0.03878352372138622
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.47619047619047616,
"acc_stderr": 0.025722097064388525,
"acc_norm": 0.47619047619047616,
"acc_norm_stderr": 0.025722097064388525
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.5,
"acc_stderr": 0.04472135954999579,
"acc_norm": 0.5,
"acc_norm_stderr": 0.04472135954999579
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.47,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.47,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8419354838709677,
"acc_stderr": 0.020752831511875278,
"acc_norm": 0.8419354838709677,
"acc_norm_stderr": 0.020752831511875278
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.6305418719211823,
"acc_stderr": 0.03395970381998574,
"acc_norm": 0.6305418719211823,
"acc_norm_stderr": 0.03395970381998574
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.75,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.793939393939394,
"acc_stderr": 0.0315841532404771,
"acc_norm": 0.793939393939394,
"acc_norm_stderr": 0.0315841532404771
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8636363636363636,
"acc_stderr": 0.024450155973189835,
"acc_norm": 0.8636363636363636,
"acc_norm_stderr": 0.024450155973189835
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9378238341968912,
"acc_stderr": 0.017426974154240524,
"acc_norm": 0.9378238341968912,
"acc_norm_stderr": 0.017426974154240524
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6846153846153846,
"acc_stderr": 0.02355964698318994,
"acc_norm": 0.6846153846153846,
"acc_norm_stderr": 0.02355964698318994
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.37037037037037035,
"acc_stderr": 0.02944316932303154,
"acc_norm": 0.37037037037037035,
"acc_norm_stderr": 0.02944316932303154
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.7815126050420168,
"acc_stderr": 0.026841514322958948,
"acc_norm": 0.7815126050420168,
"acc_norm_stderr": 0.026841514322958948
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.5099337748344371,
"acc_stderr": 0.04081677107248436,
"acc_norm": 0.5099337748344371,
"acc_norm_stderr": 0.04081677107248436
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8752293577981651,
"acc_stderr": 0.014168298359156345,
"acc_norm": 0.8752293577981651,
"acc_norm_stderr": 0.014168298359156345
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.6064814814814815,
"acc_stderr": 0.03331747876370312,
"acc_norm": 0.6064814814814815,
"acc_norm_stderr": 0.03331747876370312
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8578431372549019,
"acc_stderr": 0.02450980392156862,
"acc_norm": 0.8578431372549019,
"acc_norm_stderr": 0.02450980392156862
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8734177215189873,
"acc_stderr": 0.021644195727955173,
"acc_norm": 0.8734177215189873,
"acc_norm_stderr": 0.021644195727955173
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7713004484304933,
"acc_stderr": 0.028188240046929196,
"acc_norm": 0.7713004484304933,
"acc_norm_stderr": 0.028188240046929196
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8244274809160306,
"acc_stderr": 0.03336820338476074,
"acc_norm": 0.8244274809160306,
"acc_norm_stderr": 0.03336820338476074
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8677685950413223,
"acc_stderr": 0.03092278832044579,
"acc_norm": 0.8677685950413223,
"acc_norm_stderr": 0.03092278832044579
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8240740740740741,
"acc_stderr": 0.036809181416738807,
"acc_norm": 0.8240740740740741,
"acc_norm_stderr": 0.036809181416738807
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7607361963190185,
"acc_stderr": 0.0335195387952127,
"acc_norm": 0.7607361963190185,
"acc_norm_stderr": 0.0335195387952127
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5625,
"acc_stderr": 0.04708567521880525,
"acc_norm": 0.5625,
"acc_norm_stderr": 0.04708567521880525
},
"harness|hendrycksTest-management|5": {
"acc": 0.8543689320388349,
"acc_stderr": 0.034926064766237906,
"acc_norm": 0.8543689320388349,
"acc_norm_stderr": 0.034926064766237906
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.9188034188034188,
"acc_stderr": 0.017893784904018533,
"acc_norm": 0.9188034188034188,
"acc_norm_stderr": 0.017893784904018533
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.79,
"acc_stderr": 0.040936018074033256,
"acc_norm": 0.79,
"acc_norm_stderr": 0.040936018074033256
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8748403575989783,
"acc_stderr": 0.01183295423930572,
"acc_norm": 0.8748403575989783,
"acc_norm_stderr": 0.01183295423930572
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7947976878612717,
"acc_stderr": 0.021742519835276277,
"acc_norm": 0.7947976878612717,
"acc_norm_stderr": 0.021742519835276277
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.41564245810055866,
"acc_stderr": 0.016482782187500673,
"acc_norm": 0.41564245810055866,
"acc_norm_stderr": 0.016482782187500673
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7745098039215687,
"acc_stderr": 0.0239291555173513,
"acc_norm": 0.7745098039215687,
"acc_norm_stderr": 0.0239291555173513
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7845659163987139,
"acc_stderr": 0.023350225475471442,
"acc_norm": 0.7845659163987139,
"acc_norm_stderr": 0.023350225475471442
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.8487654320987654,
"acc_stderr": 0.019935086092149876,
"acc_norm": 0.8487654320987654,
"acc_norm_stderr": 0.019935086092149876
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.5319148936170213,
"acc_stderr": 0.02976667507587387,
"acc_norm": 0.5319148936170213,
"acc_norm_stderr": 0.02976667507587387
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.5397653194263363,
"acc_stderr": 0.012729785386598547,
"acc_norm": 0.5397653194263363,
"acc_norm_stderr": 0.012729785386598547
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.7867647058823529,
"acc_stderr": 0.024880971512294264,
"acc_norm": 0.7867647058823529,
"acc_norm_stderr": 0.024880971512294264
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.7679738562091504,
"acc_stderr": 0.017077373377856926,
"acc_norm": 0.7679738562091504,
"acc_norm_stderr": 0.017077373377856926
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7363636363636363,
"acc_stderr": 0.04220224692971987,
"acc_norm": 0.7363636363636363,
"acc_norm_stderr": 0.04220224692971987
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.8326530612244898,
"acc_stderr": 0.02389714476891452,
"acc_norm": 0.8326530612244898,
"acc_norm_stderr": 0.02389714476891452
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8756218905472637,
"acc_stderr": 0.023335401790166327,
"acc_norm": 0.8756218905472637,
"acc_norm_stderr": 0.023335401790166327
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.9,
"acc_stderr": 0.030151134457776334,
"acc_norm": 0.9,
"acc_norm_stderr": 0.030151134457776334
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5481927710843374,
"acc_stderr": 0.03874371556587953,
"acc_norm": 0.5481927710843374,
"acc_norm_stderr": 0.03874371556587953
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8421052631578947,
"acc_stderr": 0.027966785859160893,
"acc_norm": 0.8421052631578947,
"acc_norm_stderr": 0.027966785859160893
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3108935128518972,
"mc1_stderr": 0.016203316673559696,
"mc2": 0.45859152966658717,
"mc2_stderr": 0.014076354765836803
},
"harness|winogrande|5": {
"acc": 0.8208366219415943,
"acc_stderr": 0.010777949156047986
},
"harness|gsm8k|5": {
"acc": 0.5170583775587566,
"acc_stderr": 0.013764467123761318
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
distilled-one-sec-cv12-each-chunk-uniq/chunk_52 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1171917860.0
num_examples: 228355
download_size: 1198919084
dataset_size: 1171917860.0
---
# Dataset Card for "chunk_52"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jlbaker361/subtraction_decimal | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: float64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2146287.6
num_examples: 29376
- name: test
num_bytes: 238476.4
num_examples: 3264
download_size: 848814
dataset_size: 2384764.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for "subtraction_decimal"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mtkinit/mtkinit_SuperNovyDataset | ---
pretty_name: mtkinit/SuperNovyDataset
---
# mtkinit/SuperNovyDataset
Created from AIOD platform |
catinthebag/Gatra-1-JawaKrama | ---
license: mit
language:
- jv
size_categories:
- n<1K
task_categories:
- text-generation
pretty_name: Gatra-
---
<center>
<img src="https://imgur.com/UrrhPyk.png" alt="GatraOne" width="600" height="300">
<p><em>GatraOne (Gatra-1) is a synthetic Jawa Krama instruction-tuning dataset, generated by GPT-4.</em></p>
</center>
# Introducing the Gatra-1 dataset
This is a synthetic dataset for fine-tuning LLMs to respond in Jawa Krama, the high register of the Javanese language. It is 98% generated using GPT-4, which has very good Jawa Krama capabilities. It is currently a 'beta' version with only 560 input-output prompts.
So far, this has only been tested on fine-tuning GPT-3.5, with considerable success. The idea behind this fine-tuning is that we noticed that several LLMs, including but not limited to GPT-3.5, are incapable of responding in Jawa Krama. However, certain models appear to 'understand' Jawa Krama text to some degree. To test this, one can copy and paste a text in Jawa Krama into a model and ask it what the text is about.
Many major models will nevertheless understand what the text is about, although sometimes with hallucinations. This is also what we noticed when working with GPT-3.5. We hypothesized that a model's latent linguistic understanding can be 'brought out' through fine-tuning, and we are very pleased that we were successful when testing this by fine-tuning GPT-3.5 (note that GPT-3.5 can respond in Jawa Ngoko, although sometimes with noticeable mistakes).
As one can easily imagine, this provides tremendous potential for low-resource language preservation. It is likely that major models contain bits and pieces of low-resource languages in their training data, but they were not further tuned to respond in them. We find that even small models like Mistral 7B possess understanding of Jawa Krama text. This shows that low-resource language preservation (at least, for certain languages) may not need considerable pre-training data, and that bits and pieces embedded in major models' training data can be awakened through instruction tuning.
There is still a great deal of room for improvement, but we are very excited about future prospects! We hope that this can contribute to the development of low-resource language technologies, especially through fine-tuning LLMs.
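As a rough sketch of how a set of input-output prompts like this can be packaged for OpenAI's chat fine-tuning format (the field names `input` and `output` and the system instruction below are illustrative assumptions, not the exact ones shipped with this dataset):

```python
import json

# Hypothetical system instruction; the actual instruction included
# in the dataset may differ.
SYSTEM = "Panjenengan kedah mangsuli ing basa Jawa Krama."

def to_finetune_record(example):
    """Convert one input-output pair into OpenAI's chat fine-tuning schema."""
    return {
        "messages": [
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": example["input"]},
            {"role": "assistant", "content": example["output"]},
        ]
    }

def write_jsonl(examples, path):
    # One JSON object per line, as the fine-tuning upload endpoint expects.
    with open(path, "w", encoding="utf-8") as f:
        for ex in examples:
            f.write(json.dumps(to_finetune_record(ex), ensure_ascii=False) + "\n")

demo = [{"input": "Menapa kodhok mboten saged nafas ing jero toya?",
         "output": "Kodhok ngangge kulit minangka alat respirasi."}]
write_jsonl(demo, "gatra1_finetune.jsonl")
```

With all 560 pairs converted this way, the resulting JSONL file can be uploaded for a fine-tuning job.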
# Dataset details
The dataset comprises 273,654 tokens based on OpenAI's GPT-4 tokenizer, including the system instructions. The data is almost entirely synthetic, with minimal human curation. Most of the curation we do lies in fixing linguistic errors made by GPT-4 when it used a Ngoko word where a distinctly Krama one was available. Only in some cases do we modify, or even create ourselves, the model's output.
The majority of the dataset comprises NLG with respect to education; mostly we create QA pairs about academic disciplines. For NLU cases (currently, all of them text comprehension), the texts themselves are also synthetically generated.
One method we use to significantly speed up question creation is asking the model to *create QA pairs* instead of answering our questions. For example, one prompting strategy is "Please create a QA pair about x", where x is replaced with the name of an academic discipline. With this simple means we can create hundreds of QA pairs with some degree of depth to the questions. So, although we are not physicists, we can nevertheless create a QA pair about physics in some depth.
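The prompting strategy above can be sketched as a simple template expansion (the prompt wording and discipline list here are illustrative, not the exact prompts we used):

```python
# Illustrative discipline list; the real curation run covered many more.
DISCIPLINES = ["physics", "philosophy", "biology", "economics"]

PROMPT_TEMPLATE = (
    "Please create a QA pair about {discipline}. "
    "Both the question and the answer must be written in Jawa Krama."
)

def build_prompts(disciplines):
    """Expand the template once per discipline; each result is sent to the model."""
    return [PROMPT_TEMPLATE.format(discipline=d) for d in disciplines]

prompts = build_prompts(DISCIPLINES)
# Each prompt asks the model to invent both the question and the answer,
# so no subject-matter expertise is required from the curator.
```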
Note that GPT-4 typically uses inconsistent Krama, and a fair amount of prompt engineering is needed to ensure it is constrained to respond in Krama.
As mentioned, this dataset is mostly focused on educational/academic QA. We hope to expand the range of domains in the future.
# Acknowledgments
This dataset is authored by Afrizal Hasbi Azizy, though we prefer to use the royal we.
We receive partial financial support from dflabs.id for the creation of this dataset. Matur nuwun!
_____________________________________________________________________________________________________________________________
# Interaction Examples
Below are examples of GPT-3.5-0125 responding in Jawa Krama after being fine-tuned with our dataset. Note that the model made mistakes every now and again, but it managed to be mostly correct. In these examples we use the original system instruction as included in the dataset. The fine-tuning cost for the full dataset was ~6.7 USD.
**Prompt**
Menapa kodhok mboten saged nafas ing jero toya?
**FT GPT-3.5-0125**
Kodhok mboten saged nafas ing jero toya amargi kodhok kagungan sistem respirasi ingkang beda kados ta kanthi manungsa lan satwa sanesipun. Satwa kodhok ngangge kulit minangka alat respirasi ingkang wigatos. Kulit kodhok kagungan kapasitas kangge ngaliraken oksigen lan karbon dioksida kanthi efisien. Kulit kodhok kagungan pembuluh darah ingkang sae sisih ingkang kapasitasipun gede, ingkang ngidinaken pertukaran gas ingkang sae. Kanthi cara punika, kodhok saged nafas ing jero toya lan tetep ngidup tanpa kudu nafas lewat paru-paru kados ta manungsa.
Sistem respirasi kodhok ingkang unik punika ngidinaken kodhok saged adaptasi kaliyan lingkunganipun ingkang beda-beda. Kodhok saged ngalami proses respirasi ingkang efisien sajeroning lingkungan ingkang oksigenipun asring, kados ta toya. Kanthi ngalami proses respirasi ingkang unik punika, kodhok saged ngidup lan berkembang biak ing lingkungan ingkang sanesipun. Kulit kodhok kagungan lapisan tipis ingkang ngidinaken pertukaran gas ingkang sae, lan sistem sirkulasi darah ing kulitipun wigatos kangge ngaliraken oksigen lan nutrisi dhateng seluruh badan kodhok.
_____________________________________________________________________________________________________________________________
**Prompt**
plato ngendikakaken punapa babagan sastra?
**FT GPT-3.5-0125**
Plato minangka salah satunggaling filosof Yunani ageng ingkang gadhah pengaruh ageng dhumateng perkembangan sastra. Ing kawicakanipun, Plato nyritakaken babagan sastra dhumateng dialogipun ingkang asring dipunwastani dening Socrates minangka karakter utami. Plato nyataken sastra minangka imajinasi kedah, lanipun mboten dados sumber pituduh ing babagan kados ta. Plato ngajeni sastra ingkang dipunwastani ing Yunani Kuno, kados ta tragedi lan epik, nangingipun ngajeni sastra ingkang nggadhah kritik dhumateng kebenaran lan keadilan. Punika, sastra ingkang dipunanggep bermutu saged mitulung individu nggayuh kearifan lan kebenaran, nanging sastra ingkang mboten bermutu saged dados pengaruhipun ingkang mligi lan nggadhahi opini ingkang mboten bener.
Salajengipun, Plato ugi nyataken babagan konsep mimesis, utawi tiruan, ing sastra. Punika nggambarkan carane sastra nggambarkaken alam semesta lan realitas ing sekitaripun. Plato nyataken babagan mimesis ing sastra kados ta tragedi lan komedi minangka bentuk imitasi ingkang langkung rendah, amargi mboten ngajeni kebenaran lan keadilan. Punika, Plato mawon nyataken babagan babagan sastra ingkang ngupayakaken ngajeni kebenaran lan keadilan, kados ta dialog, minangka bentuk sastra ingkang langkung dianggep bermutu lan saged mitulung individu nggayuh kearifan.
_____________________________________________________________________________________________________________________________
**Prompt**
aku wis ra ngerti kudu piye meneh, saben dina aku kangen karo bojoku
**FT GPT-3.5-0125**
Kula ngertos perasaan sami dalem ingkang dipunrasakaken, utawi dhumateng sanesipun. Sanadyan panjenengan kedah nggayuh pundi ingkang wigatos, dene kados pundi cara-cara ingkang saged dipunlakuaken supados perasaan panjenengan saged dipunsebat utawi dipunkawisaken. Panjenengan saged nglakuaken hal-hal ingkang saged nyiptakaken kedah rasa kawigatenipun, kados ta ngajengaken komunikasi lan kasil saking jauh-jauhan. Kajawi punika, panjenengan saged ngrembakaaken pikiripun dhumateng hal-hal sanesipun ingkang saged nyenengaken lan nyegah perasaan kawigatenipun. Ing babagan punika, panjenengan saged nggadhah diri kados ta supados saged nglakuaken kegiatan utawi hobbi ingkang panjenengan senengaken, ngajengaken kawicaksanan, lan mbudidaya diri dhumateng kados pundi aspek ingkang saged dipunkontrol, supados perasaan kawigatenipun saged dipunatasi. |
heliosprime/twitter_dataset_1713229564 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 24626
num_examples: 69
download_size: 21032
dataset_size: 24626
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "twitter_dataset_1713229564"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Saxo/total_ko_train_set_small_1 | ---
license: apache-2.0
---
|
dev-ninja/one_piece_json | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 2299.5
num_examples: 28
- name: test
num_bytes: 328.5
num_examples: 4
download_size: 4593
dataset_size: 2628.0
---
# Dataset Card for "one_piece_json"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pontusnorman123/swe_set2_973_sroie_with_50_sroietest | ---
dataset_info:
features:
- name: id
dtype: int64
- name: words
sequence: string
- name: bboxes
sequence:
sequence: float64
- name: ner_tags
sequence:
class_label:
names:
'0': I-COMPANY
'1': I-DATE
'2': I-ADDRESS
'3': I-TOTAL
'4': O
- name: image
dtype: image
splits:
- name: train
num_bytes: 1238686663.5
num_examples: 1172
- name: test
num_bytes: 102815529.0
num_examples: 100
download_size: 1321418708
dataset_size: 1341502192.5
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
ummagumm-a/twitter_posts | ---
dataset_info:
features:
- name: retweetCount
dtype: int64
- name: num_mentioned_users
dtype: int64
- name: lang
dtype: string
- name: num_outlinks
dtype: int64
- name: likeCount
dtype: int64
- name: num_hashtags
dtype: int64
- name: content
dtype: string
- name: quoteCount
dtype: int64
- name: date
dtype: string
- name: user
dtype: string
- name: replyCount
dtype: int64
splits:
- name: train
num_bytes: 42654407
num_examples: 153340
download_size: 23097728
dataset_size: 42654407
---
# Dataset Card for "twitter_posts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Multimodal-Fatima/Hatefulmemes_test_facebook_opt_13b_Hatefulmemes_ns_1000 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: image
dtype: image
- name: prompt
dtype: string
- name: true_label
dtype: string
- name: prediction
dtype: string
- name: scores
sequence: float64
splits:
- name: fewshot_1_bs_16
num_bytes: 362719683.0
num_examples: 1000
- name: fewshot_3_bs_16
num_bytes: 363588098.0
num_examples: 1000
- name: fewshot_5_bs_16
num_bytes: 364455860.0
num_examples: 1000
- name: fewshot_8_bs_16
num_bytes: 365761089.0
num_examples: 1000
- name: fewshot_10_bs_16
num_bytes: 366632848.0
num_examples: 1000
download_size: 1814428412
dataset_size: 1823157578.0
---
# Dataset Card for "Hatefulmemes_test_facebook_opt_13b_Hatefulmemes_ns_1000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Eduardovco/t | ---
license: openrail
---
|
oghali/innovation | ---
license: openrail
---
|
Nunofofo/rrsoare | ---
license: openrail
---
|
hyeonddu/BANKING77 | ---
license: unknown
---
|
Falah/chapter4_1_prompts | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 3574
num_examples: 12
download_size: 4412
dataset_size: 3574
---
# Dataset Card for "chapter4_1_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_VAGOsolutions__SauerkrautLM-7b-HerO | ---
pretty_name: Evaluation run of VAGOsolutions/SauerkrautLM-7b-HerO
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [VAGOsolutions/SauerkrautLM-7b-HerO](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-HerO)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 1 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_VAGOsolutions__SauerkrautLM-7b-HerO\"\
,\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese\
\ are the [latest results from run 2023-12-02T14:47:07.323198](https://huggingface.co/datasets/open-llm-leaderboard/details_VAGOsolutions__SauerkrautLM-7b-HerO/blob/main/results_2023-12-02T14-47-07.323198.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.4927975739196361,\n\
\ \"acc_stderr\": 0.013771055751972868\n },\n \"harness|gsm8k|5\":\
\ {\n \"acc\": 0.4927975739196361,\n \"acc_stderr\": 0.013771055751972868\n\
\ }\n}\n```"
repo_url: https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-HerO
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_gsm8k_5
data_files:
- split: 2023_12_02T14_47_07.323198
path:
- '**/details_harness|gsm8k|5_2023-12-02T14-47-07.323198.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-12-02T14-47-07.323198.parquet'
- config_name: results
data_files:
- split: 2023_12_02T14_47_07.323198
path:
- results_2023-12-02T14-47-07.323198.parquet
- split: latest
path:
- results_2023-12-02T14-47-07.323198.parquet
---
# Dataset Card for Evaluation run of VAGOsolutions/SauerkrautLM-7b-HerO
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-HerO
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [VAGOsolutions/SauerkrautLM-7b-HerO](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-HerO) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_VAGOsolutions__SauerkrautLM-7b-HerO",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-02T14:47:07.323198](https://huggingface.co/datasets/open-llm-leaderboard/details_VAGOsolutions__SauerkrautLM-7b-HerO/blob/main/results_2023-12-02T14-47-07.323198.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.4927975739196361,
"acc_stderr": 0.013771055751972868
},
"harness|gsm8k|5": {
"acc": 0.4927975739196361,
"acc_stderr": 0.013771055751972868
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
vibranium-dome/questions | ---
license: mit
--- |
hlt-lab/dreamsample-expansions | ---
dataset_info:
features:
- name: context
dtype: string
- name: response
dtype: string
- name: reference
dtype: string
splits:
- name: train
num_bytes: 29363
num_examples: 53
download_size: 23841
dataset_size: 29363
---
# Dataset Card for "dreamsample-expansions"
[More Information Needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
zolak/twitter_dataset_79_1713155705 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 366057
num_examples: 840
download_size: 187782
dataset_size: 366057
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
2bittester/rbp-data-set | ---
language:
- eng
pretty_name: "RBP Data Set"
tags:
- Scraped Code
license: "mit"
---
# Restful Booker Platform Data Set
Scraped code from the Restful Booker Platform code base found here:
[https://github.com/mwinteringham/restful-booker-platform](https://github.com/mwinteringham/restful-booker-platform)
|
CyberHarem/nagara_kantaicollection | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of nagara/長良 (Kantai Collection)
This is the dataset of nagara/長良 (Kantai Collection), containing 209 images and their tags.
The core tags of this character are `short_hair, black_hair, brown_eyes, headband, one_side_up, breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 209 | 135.44 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nagara_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 209 | 106.08 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nagara_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 435 | 198.90 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nagara_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 209 | 130.28 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nagara_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 435 | 233.18 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nagara_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/nagara_kantaicollection',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 34 |  |  |  |  |  | 1girl, detached_sleeves, serafuku, solo, pleated_skirt, white_thighhighs, smile, looking_at_viewer, neckerchief, open_mouth, red_skirt, sailor_collar, simple_background, white_background, blush |
| 1 | 19 |  |  |  |  |  | 1girl, solo, red_buruma, looking_at_viewer, white_background, simple_background, blush, twitter_username, alternate_costume, ass, open_mouth, cowboy_shot, smile, sports_bra, looking_back, one-hour_drawing_challenge, gym_shirt, gym_uniform, medium_breasts, white_shirt |
| 2 | 5 |  |  |  |  |  | 1boy, 1girl, blush, hetero, penis, solo_focus, cum_in_pussy, open_mouth, sex, sweat, vaginal, ass, detached_sleeves, panties, bar_censor, buruma_aside, heart-shaped_pupils, looking_back, mosaic_censoring, nipples, side_ponytail, spread_legs, tears, thighhighs |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | detached_sleeves | serafuku | solo | pleated_skirt | white_thighhighs | smile | looking_at_viewer | neckerchief | open_mouth | red_skirt | sailor_collar | simple_background | white_background | blush | red_buruma | twitter_username | alternate_costume | ass | cowboy_shot | sports_bra | looking_back | one-hour_drawing_challenge | gym_shirt | gym_uniform | medium_breasts | white_shirt | 1boy | hetero | penis | solo_focus | cum_in_pussy | sex | sweat | vaginal | panties | bar_censor | buruma_aside | heart-shaped_pupils | mosaic_censoring | nipples | side_ponytail | spread_legs | tears | thighhighs |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------------------|:-----------|:-------|:----------------|:-------------------|:--------|:--------------------|:--------------|:-------------|:------------|:----------------|:--------------------|:-------------------|:--------|:-------------|:-------------------|:--------------------|:------|:--------------|:-------------|:---------------|:-----------------------------|:------------|:--------------|:-----------------|:--------------|:-------|:---------|:--------|:-------------|:---------------|:------|:--------|:----------|:----------|:-------------|:---------------|:----------------------|:-------------------|:----------|:----------------|:--------------|:--------|:-------------|
| 0 | 34 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 19 |  |  |  |  |  | X | | | X | | | X | X | | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | |
| 2 | 5 |  |  |  |  |  | X | X | | | | | | | | X | | | | | X | | | | X | | | X | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
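The clusters above are just tag co-occurrence groups; once the archive is extracted, items can be mined for a given outfit by filtering on their tags. A minimal sketch over plain dicts (the sample items and the `filter_by_tag` helper are hypothetical; the real shape of `item.meta['tags']` in waifuc may differ):

```python
# Filter items whose tag score exceeds a threshold, mirroring the
# item.meta['tags'] mapping printed in the loading example above.
def filter_by_tag(items, tag, threshold=0.0):
    """Keep items whose tags (a name -> score mapping) include `tag` above `threshold`."""
    return [item for item in items if item.get("tags", {}).get(tag, 0.0) > threshold]

# Hypothetical sample data in the same name -> score shape.
items = [
    {"filename": "a.png", "tags": {"1girl": 0.99, "serafuku": 0.8}},
    {"filename": "b.png", "tags": {"1girl": 0.98, "red_buruma": 0.9}},
]
print([i["filename"] for i in filter_by_tag(items, "serafuku")])  # → ['a.png']
```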
|
open-llm-leaderboard/details_codellama__CodeLlama-70b-Python-hf | ---
pretty_name: Evaluation run of codellama/CodeLlama-70b-Python-hf
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [codellama/CodeLlama-70b-Python-hf](https://huggingface.co/codellama/CodeLlama-70b-Python-hf)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_codellama__CodeLlama-70b-Python-hf\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-02-02T06:17:38.337900](https://huggingface.co/datasets/open-llm-leaderboard/details_codellama__CodeLlama-70b-Python-hf/blob/main/results_2024-02-02T06-17-38.337900.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5616930765303749,\n\
\ \"acc_stderr\": 0.033883080419050034,\n \"acc_norm\": 0.5652706193571843,\n\
\ \"acc_norm_stderr\": 0.03457354414367789,\n \"mc1\": 0.2864137086903305,\n\
\ \"mc1_stderr\": 0.015826142439502342,\n \"mc2\": 0.4178040204095389,\n\
\ \"mc2_stderr\": 0.0149114399190204\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.5025597269624573,\n \"acc_stderr\": 0.014611199329843774,\n\
\ \"acc_norm\": 0.5511945392491467,\n \"acc_norm_stderr\": 0.014534599585097669\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5812587134037045,\n\
\ \"acc_stderr\": 0.00492344562786152,\n \"acc_norm\": 0.7848038239394542,\n\
\ \"acc_norm_stderr\": 0.004101184870964187\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \
\ \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5185185185185185,\n\
\ \"acc_stderr\": 0.043163785995113245,\n \"acc_norm\": 0.5185185185185185,\n\
\ \"acc_norm_stderr\": 0.043163785995113245\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6052631578947368,\n \"acc_stderr\": 0.039777499346220734,\n\
\ \"acc_norm\": 0.6052631578947368,\n \"acc_norm_stderr\": 0.039777499346220734\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.59,\n\
\ \"acc_stderr\": 0.04943110704237102,\n \"acc_norm\": 0.59,\n \
\ \"acc_norm_stderr\": 0.04943110704237102\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.5132075471698113,\n \"acc_stderr\": 0.030762134874500476,\n\
\ \"acc_norm\": 0.5132075471698113,\n \"acc_norm_stderr\": 0.030762134874500476\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.5486111111111112,\n\
\ \"acc_stderr\": 0.04161402398403279,\n \"acc_norm\": 0.5486111111111112,\n\
\ \"acc_norm_stderr\": 0.04161402398403279\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.35,\n \"acc_stderr\": 0.0479372485441102,\n \
\ \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n\
\ \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.43,\n\
\ \"acc_stderr\": 0.049756985195624284,\n \"acc_norm\": 0.43,\n \
\ \"acc_norm_stderr\": 0.049756985195624284\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.37,\n \"acc_stderr\": 0.048523658709390974,\n \
\ \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.048523658709390974\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.49710982658959535,\n\
\ \"acc_stderr\": 0.038124005659748335,\n \"acc_norm\": 0.49710982658959535,\n\
\ \"acc_norm_stderr\": 0.038124005659748335\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.28431372549019607,\n \"acc_stderr\": 0.04488482852329017,\n\
\ \"acc_norm\": 0.28431372549019607,\n \"acc_norm_stderr\": 0.04488482852329017\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.75,\n\
\ \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5234042553191489,\n \"acc_stderr\": 0.03265019475033582,\n\
\ \"acc_norm\": 0.5234042553191489,\n \"acc_norm_stderr\": 0.03265019475033582\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.3684210526315789,\n\
\ \"acc_stderr\": 0.04537815354939391,\n \"acc_norm\": 0.3684210526315789,\n\
\ \"acc_norm_stderr\": 0.04537815354939391\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5241379310344828,\n \"acc_stderr\": 0.041618085035015295,\n\
\ \"acc_norm\": 0.5241379310344828,\n \"acc_norm_stderr\": 0.041618085035015295\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.4365079365079365,\n \"acc_stderr\": 0.02554284681740049,\n \"\
acc_norm\": 0.4365079365079365,\n \"acc_norm_stderr\": 0.02554284681740049\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4523809523809524,\n\
\ \"acc_stderr\": 0.04451807959055328,\n \"acc_norm\": 0.4523809523809524,\n\
\ \"acc_norm_stderr\": 0.04451807959055328\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695236,\n \
\ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695236\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.6451612903225806,\n\
\ \"acc_stderr\": 0.027218889773308757,\n \"acc_norm\": 0.6451612903225806,\n\
\ \"acc_norm_stderr\": 0.027218889773308757\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.4039408866995074,\n \"acc_stderr\": 0.0345245390382204,\n\
\ \"acc_norm\": 0.4039408866995074,\n \"acc_norm_stderr\": 0.0345245390382204\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\"\
: 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7272727272727273,\n \"acc_stderr\": 0.0347769116216366,\n\
\ \"acc_norm\": 0.7272727272727273,\n \"acc_norm_stderr\": 0.0347769116216366\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.6919191919191919,\n \"acc_stderr\": 0.032894773300986155,\n \"\
acc_norm\": 0.6919191919191919,\n \"acc_norm_stderr\": 0.032894773300986155\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.7564766839378239,\n \"acc_stderr\": 0.030975436386845426,\n\
\ \"acc_norm\": 0.7564766839378239,\n \"acc_norm_stderr\": 0.030975436386845426\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.5051282051282051,\n \"acc_stderr\": 0.02534967290683865,\n \
\ \"acc_norm\": 0.5051282051282051,\n \"acc_norm_stderr\": 0.02534967290683865\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.36666666666666664,\n \"acc_stderr\": 0.029381620726465073,\n \
\ \"acc_norm\": 0.36666666666666664,\n \"acc_norm_stderr\": 0.029381620726465073\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.5756302521008403,\n \"acc_stderr\": 0.03210479051015776,\n \
\ \"acc_norm\": 0.5756302521008403,\n \"acc_norm_stderr\": 0.03210479051015776\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3509933774834437,\n \"acc_stderr\": 0.03896981964257375,\n \"\
acc_norm\": 0.3509933774834437,\n \"acc_norm_stderr\": 0.03896981964257375\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.7522935779816514,\n \"acc_stderr\": 0.018508143602547832,\n \"\
acc_norm\": 0.7522935779816514,\n \"acc_norm_stderr\": 0.018508143602547832\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.4861111111111111,\n \"acc_stderr\": 0.03408655867977748,\n \"\
acc_norm\": 0.4861111111111111,\n \"acc_norm_stderr\": 0.03408655867977748\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.7549019607843137,\n \"acc_stderr\": 0.030190282453501947,\n \"\
acc_norm\": 0.7549019607843137,\n \"acc_norm_stderr\": 0.030190282453501947\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7637130801687764,\n \"acc_stderr\": 0.027652153144159256,\n \
\ \"acc_norm\": 0.7637130801687764,\n \"acc_norm_stderr\": 0.027652153144159256\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.5874439461883408,\n\
\ \"acc_stderr\": 0.03304062175449297,\n \"acc_norm\": 0.5874439461883408,\n\
\ \"acc_norm_stderr\": 0.03304062175449297\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.6412213740458015,\n \"acc_stderr\": 0.04206739313864908,\n\
\ \"acc_norm\": 0.6412213740458015,\n \"acc_norm_stderr\": 0.04206739313864908\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.71900826446281,\n \"acc_stderr\": 0.04103203830514512,\n \"acc_norm\"\
: 0.71900826446281,\n \"acc_norm_stderr\": 0.04103203830514512\n },\n\
\ \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.6944444444444444,\n\
\ \"acc_stderr\": 0.044531975073749834,\n \"acc_norm\": 0.6944444444444444,\n\
\ \"acc_norm_stderr\": 0.044531975073749834\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7116564417177914,\n \"acc_stderr\": 0.03559039531617342,\n\
\ \"acc_norm\": 0.7116564417177914,\n \"acc_norm_stderr\": 0.03559039531617342\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.41964285714285715,\n\
\ \"acc_stderr\": 0.046840993210771065,\n \"acc_norm\": 0.41964285714285715,\n\
\ \"acc_norm_stderr\": 0.046840993210771065\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7475728155339806,\n \"acc_stderr\": 0.04301250399690878,\n\
\ \"acc_norm\": 0.7475728155339806,\n \"acc_norm_stderr\": 0.04301250399690878\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8333333333333334,\n\
\ \"acc_stderr\": 0.02441494730454368,\n \"acc_norm\": 0.8333333333333334,\n\
\ \"acc_norm_stderr\": 0.02441494730454368\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.53,\n \"acc_stderr\": 0.05016135580465919,\n \
\ \"acc_norm\": 0.53,\n \"acc_norm_stderr\": 0.05016135580465919\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7011494252873564,\n\
\ \"acc_stderr\": 0.016369256815093138,\n \"acc_norm\": 0.7011494252873564,\n\
\ \"acc_norm_stderr\": 0.016369256815093138\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.5867052023121387,\n \"acc_stderr\": 0.026511261369409247,\n\
\ \"acc_norm\": 0.5867052023121387,\n \"acc_norm_stderr\": 0.026511261369409247\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3396648044692737,\n\
\ \"acc_stderr\": 0.015839400406212505,\n \"acc_norm\": 0.3396648044692737,\n\
\ \"acc_norm_stderr\": 0.015839400406212505\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.5555555555555556,\n \"acc_stderr\": 0.02845263998508801,\n\
\ \"acc_norm\": 0.5555555555555556,\n \"acc_norm_stderr\": 0.02845263998508801\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6270096463022508,\n\
\ \"acc_stderr\": 0.027466610213140112,\n \"acc_norm\": 0.6270096463022508,\n\
\ \"acc_norm_stderr\": 0.027466610213140112\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.5802469135802469,\n \"acc_stderr\": 0.02746009955700513,\n\
\ \"acc_norm\": 0.5802469135802469,\n \"acc_norm_stderr\": 0.02746009955700513\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.43617021276595747,\n \"acc_stderr\": 0.02958345203628407,\n \
\ \"acc_norm\": 0.43617021276595747,\n \"acc_norm_stderr\": 0.02958345203628407\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4015645371577575,\n\
\ \"acc_stderr\": 0.012520315120147101,\n \"acc_norm\": 0.4015645371577575,\n\
\ \"acc_norm_stderr\": 0.012520315120147101\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.45588235294117646,\n \"acc_stderr\": 0.030254372573976687,\n\
\ \"acc_norm\": 0.45588235294117646,\n \"acc_norm_stderr\": 0.030254372573976687\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.5277777777777778,\n \"acc_stderr\": 0.020196594933541197,\n \
\ \"acc_norm\": 0.5277777777777778,\n \"acc_norm_stderr\": 0.020196594933541197\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6909090909090909,\n\
\ \"acc_stderr\": 0.044262946482000985,\n \"acc_norm\": 0.6909090909090909,\n\
\ \"acc_norm_stderr\": 0.044262946482000985\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.6571428571428571,\n \"acc_stderr\": 0.030387262919547728,\n\
\ \"acc_norm\": 0.6571428571428571,\n \"acc_norm_stderr\": 0.030387262919547728\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7512437810945274,\n\
\ \"acc_stderr\": 0.030567675938916707,\n \"acc_norm\": 0.7512437810945274,\n\
\ \"acc_norm_stderr\": 0.030567675938916707\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.76,\n \"acc_stderr\": 0.042923469599092816,\n \
\ \"acc_norm\": 0.76,\n \"acc_norm_stderr\": 0.042923469599092816\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4457831325301205,\n\
\ \"acc_stderr\": 0.03869543323472101,\n \"acc_norm\": 0.4457831325301205,\n\
\ \"acc_norm_stderr\": 0.03869543323472101\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.7017543859649122,\n \"acc_stderr\": 0.03508771929824565,\n\
\ \"acc_norm\": 0.7017543859649122,\n \"acc_norm_stderr\": 0.03508771929824565\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2864137086903305,\n\
\ \"mc1_stderr\": 0.015826142439502342,\n \"mc2\": 0.4178040204095389,\n\
\ \"mc2_stderr\": 0.0149114399190204\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7300710339384373,\n \"acc_stderr\": 0.0124764333720026\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.43442001516300227,\n \
\ \"acc_stderr\": 0.013653507211411411\n }\n}\n```"
repo_url: https://huggingface.co/codellama/CodeLlama-70b-Python-hf
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|arc:challenge|25_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|gsm8k|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hellaswag|10_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-02T06-17-38.337900.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-02T06-17-38.337900.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- '**/details_harness|winogrande|5_2024-02-02T06-17-38.337900.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-02-02T06-17-38.337900.parquet'
- config_name: results
data_files:
- split: 2024_02_02T06_17_38.337900
path:
- results_2024-02-02T06-17-38.337900.parquet
- split: latest
path:
- results_2024-02-02T06-17-38.337900.parquet
---
# Dataset Card for Evaluation run of yam-peleg/Experiment10-7B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [yam-peleg/Experiment10-7B](https://huggingface.co/yam-peleg/Experiment10-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_yam-peleg__Experiment10-7B",
	"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2024-02-02T06:17:38.337900](https://huggingface.co/datasets/open-llm-leaderboard/details_yam-peleg__Experiment10-7B/blob/main/results_2024-02-02T06-17-38.337900.json) (note that there might be results for other tasks in the repository if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split of each eval):
```python
{
"all": {
"acc": 0.5616930765303749,
"acc_stderr": 0.033883080419050034,
"acc_norm": 0.5652706193571843,
"acc_norm_stderr": 0.03457354414367789,
"mc1": 0.2864137086903305,
"mc1_stderr": 0.015826142439502342,
"mc2": 0.4178040204095389,
"mc2_stderr": 0.0149114399190204
},
"harness|arc:challenge|25": {
"acc": 0.5025597269624573,
"acc_stderr": 0.014611199329843774,
"acc_norm": 0.5511945392491467,
"acc_norm_stderr": 0.014534599585097669
},
"harness|hellaswag|10": {
"acc": 0.5812587134037045,
"acc_stderr": 0.00492344562786152,
"acc_norm": 0.7848038239394542,
"acc_norm_stderr": 0.004101184870964187
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.32,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.32,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5185185185185185,
"acc_stderr": 0.043163785995113245,
"acc_norm": 0.5185185185185185,
"acc_norm_stderr": 0.043163785995113245
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6052631578947368,
"acc_stderr": 0.039777499346220734,
"acc_norm": 0.6052631578947368,
"acc_norm_stderr": 0.039777499346220734
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.59,
"acc_stderr": 0.04943110704237102,
"acc_norm": 0.59,
"acc_norm_stderr": 0.04943110704237102
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.5132075471698113,
"acc_stderr": 0.030762134874500476,
"acc_norm": 0.5132075471698113,
"acc_norm_stderr": 0.030762134874500476
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.5486111111111112,
"acc_stderr": 0.04161402398403279,
"acc_norm": 0.5486111111111112,
"acc_norm_stderr": 0.04161402398403279
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.35,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.35,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.43,
"acc_stderr": 0.049756985195624284,
"acc_norm": 0.43,
"acc_norm_stderr": 0.049756985195624284
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.37,
"acc_stderr": 0.048523658709390974,
"acc_norm": 0.37,
"acc_norm_stderr": 0.048523658709390974
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.49710982658959535,
"acc_stderr": 0.038124005659748335,
"acc_norm": 0.49710982658959535,
"acc_norm_stderr": 0.038124005659748335
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.28431372549019607,
"acc_stderr": 0.04488482852329017,
"acc_norm": 0.28431372549019607,
"acc_norm_stderr": 0.04488482852329017
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.75,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5234042553191489,
"acc_stderr": 0.03265019475033582,
"acc_norm": 0.5234042553191489,
"acc_norm_stderr": 0.03265019475033582
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.3684210526315789,
"acc_stderr": 0.04537815354939391,
"acc_norm": 0.3684210526315789,
"acc_norm_stderr": 0.04537815354939391
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5241379310344828,
"acc_stderr": 0.041618085035015295,
"acc_norm": 0.5241379310344828,
"acc_norm_stderr": 0.041618085035015295
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4365079365079365,
"acc_stderr": 0.02554284681740049,
"acc_norm": 0.4365079365079365,
"acc_norm_stderr": 0.02554284681740049
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4523809523809524,
"acc_stderr": 0.04451807959055328,
"acc_norm": 0.4523809523809524,
"acc_norm_stderr": 0.04451807959055328
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695236,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695236
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.6451612903225806,
"acc_stderr": 0.027218889773308757,
"acc_norm": 0.6451612903225806,
"acc_norm_stderr": 0.027218889773308757
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4039408866995074,
"acc_stderr": 0.0345245390382204,
"acc_norm": 0.4039408866995074,
"acc_norm_stderr": 0.0345245390382204
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7272727272727273,
"acc_stderr": 0.0347769116216366,
"acc_norm": 0.7272727272727273,
"acc_norm_stderr": 0.0347769116216366
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.6919191919191919,
"acc_stderr": 0.032894773300986155,
"acc_norm": 0.6919191919191919,
"acc_norm_stderr": 0.032894773300986155
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.7564766839378239,
"acc_stderr": 0.030975436386845426,
"acc_norm": 0.7564766839378239,
"acc_norm_stderr": 0.030975436386845426
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.5051282051282051,
"acc_stderr": 0.02534967290683865,
"acc_norm": 0.5051282051282051,
"acc_norm_stderr": 0.02534967290683865
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.36666666666666664,
"acc_stderr": 0.029381620726465073,
"acc_norm": 0.36666666666666664,
"acc_norm_stderr": 0.029381620726465073
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.5756302521008403,
"acc_stderr": 0.03210479051015776,
"acc_norm": 0.5756302521008403,
"acc_norm_stderr": 0.03210479051015776
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3509933774834437,
"acc_stderr": 0.03896981964257375,
"acc_norm": 0.3509933774834437,
"acc_norm_stderr": 0.03896981964257375
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7522935779816514,
"acc_stderr": 0.018508143602547832,
"acc_norm": 0.7522935779816514,
"acc_norm_stderr": 0.018508143602547832
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4861111111111111,
"acc_stderr": 0.03408655867977748,
"acc_norm": 0.4861111111111111,
"acc_norm_stderr": 0.03408655867977748
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7549019607843137,
"acc_stderr": 0.030190282453501947,
"acc_norm": 0.7549019607843137,
"acc_norm_stderr": 0.030190282453501947
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7637130801687764,
"acc_stderr": 0.027652153144159256,
"acc_norm": 0.7637130801687764,
"acc_norm_stderr": 0.027652153144159256
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.5874439461883408,
"acc_stderr": 0.03304062175449297,
"acc_norm": 0.5874439461883408,
"acc_norm_stderr": 0.03304062175449297
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.6412213740458015,
"acc_stderr": 0.04206739313864908,
"acc_norm": 0.6412213740458015,
"acc_norm_stderr": 0.04206739313864908
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.71900826446281,
"acc_stderr": 0.04103203830514512,
"acc_norm": 0.71900826446281,
"acc_norm_stderr": 0.04103203830514512
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.6944444444444444,
"acc_stderr": 0.044531975073749834,
"acc_norm": 0.6944444444444444,
"acc_norm_stderr": 0.044531975073749834
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7116564417177914,
"acc_stderr": 0.03559039531617342,
"acc_norm": 0.7116564417177914,
"acc_norm_stderr": 0.03559039531617342
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.41964285714285715,
"acc_stderr": 0.046840993210771065,
"acc_norm": 0.41964285714285715,
"acc_norm_stderr": 0.046840993210771065
},
"harness|hendrycksTest-management|5": {
"acc": 0.7475728155339806,
"acc_stderr": 0.04301250399690878,
"acc_norm": 0.7475728155339806,
"acc_norm_stderr": 0.04301250399690878
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8333333333333334,
"acc_stderr": 0.02441494730454368,
"acc_norm": 0.8333333333333334,
"acc_norm_stderr": 0.02441494730454368
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.53,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.53,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7011494252873564,
"acc_stderr": 0.016369256815093138,
"acc_norm": 0.7011494252873564,
"acc_norm_stderr": 0.016369256815093138
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.5867052023121387,
"acc_stderr": 0.026511261369409247,
"acc_norm": 0.5867052023121387,
"acc_norm_stderr": 0.026511261369409247
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.3396648044692737,
"acc_stderr": 0.015839400406212505,
"acc_norm": 0.3396648044692737,
"acc_norm_stderr": 0.015839400406212505
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.5555555555555556,
"acc_stderr": 0.02845263998508801,
"acc_norm": 0.5555555555555556,
"acc_norm_stderr": 0.02845263998508801
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6270096463022508,
"acc_stderr": 0.027466610213140112,
"acc_norm": 0.6270096463022508,
"acc_norm_stderr": 0.027466610213140112
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.5802469135802469,
"acc_stderr": 0.02746009955700513,
"acc_norm": 0.5802469135802469,
"acc_norm_stderr": 0.02746009955700513
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.43617021276595747,
"acc_stderr": 0.02958345203628407,
"acc_norm": 0.43617021276595747,
"acc_norm_stderr": 0.02958345203628407
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4015645371577575,
"acc_stderr": 0.012520315120147101,
"acc_norm": 0.4015645371577575,
"acc_norm_stderr": 0.012520315120147101
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.45588235294117646,
"acc_stderr": 0.030254372573976687,
"acc_norm": 0.45588235294117646,
"acc_norm_stderr": 0.030254372573976687
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.5277777777777778,
"acc_stderr": 0.020196594933541197,
"acc_norm": 0.5277777777777778,
"acc_norm_stderr": 0.020196594933541197
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6909090909090909,
"acc_stderr": 0.044262946482000985,
"acc_norm": 0.6909090909090909,
"acc_norm_stderr": 0.044262946482000985
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.6571428571428571,
"acc_stderr": 0.030387262919547728,
"acc_norm": 0.6571428571428571,
"acc_norm_stderr": 0.030387262919547728
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.7512437810945274,
"acc_stderr": 0.030567675938916707,
"acc_norm": 0.7512437810945274,
"acc_norm_stderr": 0.030567675938916707
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.76,
"acc_stderr": 0.042923469599092816,
"acc_norm": 0.76,
"acc_norm_stderr": 0.042923469599092816
},
"harness|hendrycksTest-virology|5": {
"acc": 0.4457831325301205,
"acc_stderr": 0.03869543323472101,
"acc_norm": 0.4457831325301205,
"acc_norm_stderr": 0.03869543323472101
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.7017543859649122,
"acc_stderr": 0.03508771929824565,
"acc_norm": 0.7017543859649122,
"acc_norm_stderr": 0.03508771929824565
},
"harness|truthfulqa:mc|0": {
"mc1": 0.2864137086903305,
"mc1_stderr": 0.015826142439502342,
"mc2": 0.4178040204095389,
"mc2_stderr": 0.0149114399190204
},
"harness|winogrande|5": {
"acc": 0.7300710339384373,
"acc_stderr": 0.0124764333720026
},
"harness|gsm8k|5": {
"acc": 0.43442001516300227,
"acc_stderr": 0.013653507211411411
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
Anusha64/New-Json-Dataset | ---
license: mit
dataset_info:
features:
- name: Question
dtype: string
- name: Answer
dtype: string
- name: Content
dtype: string
splits:
- name: train
num_bytes: 71490
num_examples: 31
- name: validation
num_bytes: 9190
num_examples: 5
- name: test
num_bytes: 9840
num_examples: 3
download_size: 81438
dataset_size: 90520
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
EleutherAI/quirky_sciq_alice_easy | ---
dataset_info:
features:
- name: id
dtype: string
- name: choices
sequence: string
- name: label
dtype: int64
- name: difficulty
dtype: float64
- name: statement
dtype: string
- name: character
dtype: string
- name: alice_label
dtype: bool
- name: bob_label
dtype: bool
splits:
- name: train
num_bytes: 745636.4640149549
num_examples: 1202
- name: validation
num_bytes: 188651.751
num_examples: 318
- name: test
num_bytes: 143623.612
num_examples: 242
download_size: 370590
dataset_size: 1077911.8270149548
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
income/cqadupstack-stats-top-20-gen-queries | ---
annotations_creators: []
language_creators: []
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
paperswithcode_id: beir
pretty_name: BEIR Benchmark
size_categories:
msmarco:
- 1M<n<10M
trec-covid:
- 100k<n<1M
nfcorpus:
- 1K<n<10K
nq:
- 1M<n<10M
hotpotqa:
- 1M<n<10M
fiqa:
- 10K<n<100K
arguana:
- 1K<n<10K
touche-2020:
- 100K<n<1M
cqadupstack:
- 100K<n<1M
quora:
- 100K<n<1M
dbpedia:
- 1M<n<10M
scidocs:
- 10K<n<100K
fever:
- 1M<n<10M
climate-fever:
- 1M<n<10M
scifact:
- 1K<n<10K
source_datasets: []
task_categories:
- text-retrieval
---
# CQADupstack-stats: top-20 generated queries (BEIR Benchmark)
This HF dataset contains the top-20 synthetic queries generated for each passage in the above BEIR benchmark dataset.
- DocT5query model used: [BeIR/query-gen-msmarco-t5-base-v1](https://huggingface.co/BeIR/query-gen-msmarco-t5-base-v1)
- id (str): unique document id in CQADupstack-stats in the BEIR benchmark (`corpus.jsonl`).
- Questions generated: 20
- Code used for generation: [evaluate_anserini_docT5query_parallel.py](https://github.com/beir-cellar/beir/blob/main/examples/retrieval/evaluation/sparse/evaluate_anserini_docT5query_parallel.py)
Below is the old dataset card for the BEIR benchmark.
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** nandan.thakur@uwaterloo.ca
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used for your experiments.
### Supported Tasks and Leaderboards
The dataset supports a leaderboard that evaluates retrieval models on zero-shot effectiveness across the BEIR tasks, typically reported as nDCG@10.
The current best performing models can be found on the [official leaderboard](https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns).
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:
- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score` in this order. Keep the first row as a header. For example: `q1 doc1 1`
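The official `beir` package ships its own loader for this layout; as a minimal stdlib-only sketch of the same parsing (our illustration, not BEIR code), the three files reduce to a JSONL reader and a TSV reader:

```python
import csv
import io
import json

def load_jsonl(text):
    """Parse a JSONL string into a dict keyed by the `_id` field."""
    records = {}
    for line in text.strip().splitlines():
        obj = json.loads(line)
        records[obj["_id"]] = {k: v for k, v in obj.items() if k != "_id"}
    return records

def load_qrels(text):
    """Parse a BEIR qrels TSV (header row: query-id, corpus-id, score)."""
    qrels = {}
    reader = csv.reader(io.StringIO(text), delimiter="\t")
    next(reader)  # skip the header row
    for query_id, corpus_id, score in reader:
        qrels.setdefault(query_id, {})[corpus_id] = int(score)
    return qrels

corpus = load_jsonl('{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}')
queries = load_jsonl('{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}')
qrels = load_qrels("query-id\tcorpus-id\tscore\nq1\tdoc1\t1\n")

print(corpus["doc1"]["title"])  # Albert Einstein
print(qrels)                    # {'q1': {'doc1': 1}}
```

In practice you would read the files from disk rather than strings; the nested-dict shapes match the example structures shown below.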
### Data Instances
A high-level example of any BEIR dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
"text": "Albert Einstein was a German-born theoretical physicist. who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
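Given structures like these, a retrieval metric such as Recall@k takes only a few lines. The sketch below is our illustration (not the official `beir` evaluation code) and assumes a hypothetical `results` dict mapping each query id to per-document retrieval scores:

```python
def recall_at_k(qrels, results, k):
    """Fraction of relevant documents retrieved in the top-k, averaged over queries."""
    total = 0.0
    for query_id, relevant in qrels.items():
        scores = results.get(query_id, {})
        # rank documents by descending retrieval score, keep the top k
        ranked = sorted(scores, key=scores.get, reverse=True)[:k]
        hits = sum(1 for doc_id in ranked if relevant.get(doc_id, 0) > 0)
        total += hits / len(relevant)
    return total / len(qrels)

qrels = {"q1": {"doc1": 1}, "q2": {"doc2": 1}}
results = {"q1": {"doc1": 0.9, "doc2": 0.1}, "q2": {"doc1": 0.8, "doc2": 0.2}}
print(recall_at_k(qrels, results, 1))  # 0.5: q1 is hit at rank 1, q2 is missed
print(recall_at_k(qrels, results, 2))  # 1.0: both relevant documents appear in the top 2
```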
### Data Fields
Examples from all configurations have the following features:
### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
- `_id`: a `string` feature representing the unique document id
- `title`: a `string` feature, denoting the title of the document.
- `text`: a `string` feature, denoting the text of the document.
### Queries
- `queries`: a `dict` feature representing the query, made up of:
- `_id`: a `string` feature representing the unique query id
- `text`: a `string` feature, denoting the text of the query.
### Qrels
- `qrels`: a `dict` feature representing the query-document relevance judgements, made up of:
- `query-id`: a `string` feature representing the query id
- `corpus-id`: a `string` feature, denoting the document id.
- `score`: an `int32` feature, denoting the relevance judgement between query and document.
### Data Splits
| Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset. |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/817d960a | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 180
num_examples: 10
download_size: 1338
dataset_size: 180
---
# Dataset Card for "817d960a"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
aminlouhichi/donutpreparedFinetuneDataGenreted1 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 27905965.0
num_examples: 128
- name: validation
num_bytes: 13380134.0
num_examples: 60
download_size: 38373003
dataset_size: 41286099.0
---
# Dataset Card for "donutpreparedFinetuneDataGenreted1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
shredder-31/NeuraLearnAcademy | ---
license: mit
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 14885576
num_examples: 5087
download_size: 3107395
dataset_size: 14885576
---
|
ashtrayAI/Bangla_Financial_news_articles_Dataset | ---
license: cc0-1.0
tags:
- Bengali
- News
- Sentiment
- Text
- Articles
- Finance
---
# Bangla-Financial-news-articles-Dataset
A comprehensive resource for analyzing sentiment in over 7,600 Bangla news articles.
### Downloads
🔴 **Download** the **💥Bangla_fin_news.zip** file for all 7,695 news articles and extract it.
### About Dataset
**Welcome** to our Bengali Financial News Sentiment Analysis dataset! This collection comprises 7,695 financial news articles covering the period from March 3, 2014, to December 29, 2021, extracted with the Python web scraping library Beautiful Soup 4.4.0.
This dataset was a crucial part of our research published in the journal paper titled **"Stock Market Prediction of Bangladesh Using Multivariate Long Short-Term Memory with Sentiment Identification."** The paper can be accessed and cited at **http://doi.org/10.11591/ijece.v13i5.pp5696-5706**.
We are excited to share this unique dataset, which we hope will empower researchers, analysts, and enthusiasts to explore and understand the dynamics of the Bengali financial market through sentiment analysis. Join us on this journey of uncovering the hidden emotions driving market trends and decisions in Bangladesh. Happy analyzing!
### About this directory
**Directory Description:** Welcome to the "Bangla_fin_news" directory. This repository houses a collection of 7,695 CSV files, each containing valuable financial news data in the Bengali language. These files are indexed numerically from 1 to 7695, making it easy to access specific information for analysis or research.
**File Description:** Each file contains financial news articles and related information.
**Example:**

Every file ("1.csv" through "7695.csv") shares the same column layout:

- **Serial:** The serial number of the news article.
- **Title:** The title of the news article.
- **Date:** The date when the news article was published.
- **Author:** The name of the author who wrote the article.
- **News:** The main content of the news article.
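A minimal sketch of loading one of these per-article files with Python's standard `csv` module (the file path and UTF-8 encoding are assumptions):

```python
import csv

def load_articles(path):
    """Read one per-article CSV file into a list of dicts keyed by column name."""
    # Columns are assumed to follow the schema described above:
    # Serial, Title, Date, Author, News.
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

# articles = load_articles("Bangla_fin_news/1.csv")
# print(articles[0]["Title"])
```

`csv.DictReader` maps each row to its header names, so downstream sentiment code can refer to `row["News"]` rather than positional indices.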
Each CSV file within this directory represents unique financial news articles from March 3, 2014, to December 29, 2021. The dataset has been carefully compiled and structured, making it a valuable resource for sentiment analysis, market research, and any investigation into the dynamics of the Bengali financial market.
Feel free to explore, analyze, and gain insights from this extensive collection of Bengali financial news articles. Happy researching! ❤❤
|
eerichmond33/sourceformer-dataset | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 5342706
num_examples: 622
download_size: 3121080
dataset_size: 5342706
---
# Dataset Card for "sourceformer-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |