| datasetId | card |
|---|---|
luizlzg/prefeitura_dataset_50topics_v1 | ---
task_categories:
- text-generation
language:
- pt
configs:
- config_name: default
data_files:
- split: train
path: dataset_instrutivo_50topics_treino*
- split: test
path: dataset_instrutivo_50topics_teste*
- split: validation
path: dataset_instrutivo_50topics_validation*
--- |
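The `configs` block in the card above maps three splits to glob patterns over the repo's files. With the Hugging Face `datasets` library, the same mapping could be passed explicitly via `data_files` (a minimal sketch; the repo id and patterns are taken from the card, and actually running `load_dataset` would require network access, so that call is shown only as a comment):

```python
# Split-to-file-pattern mapping mirroring the card's `configs` section.
data_files = {
    "train": "dataset_instrutivo_50topics_treino*",
    "test": "dataset_instrutivo_50topics_teste*",
    "validation": "dataset_instrutivo_50topics_validation*",
}

# With the `datasets` library this would be loaded as (requires network):
# from datasets import load_dataset
# ds = load_dataset("luizlzg/prefeitura_dataset_50topics_v1", data_files=data_files)

print(sorted(data_files))  # → ['test', 'train', 'validation']
```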
Praghxx/Litlegiela | ---
license: openrail
---
|
radlab/kgr10 | ---
license: cc-by-4.0
language:
- pl
task_categories:
- text-generation
tags:
- mlm
- clm
- corpora
- corpus
pretty_name: KGR10
size_categories:
- 1K<n<10K
---
The KGR10 corpus converted to JSONL format.
Thanks to [clarin-knext](https://huggingface.co/clarin-knext) for providing KGR10 as [raw txt files](https://huggingface.co/datasets/clarin-knext/kgr10_pl). |
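Since the card above says the corpus was converted to JSONL (one JSON object per line), a shard can be parsed with nothing but the standard `json` module. A minimal sketch; the `"text"` field name and the sample sentences are assumptions for illustration, not taken from the actual files:

```python
import json

def read_jsonl(lines):
    """Parse newline-delimited JSON records, skipping blank lines."""
    return [json.loads(line) for line in lines if line.strip()]

# Tiny stand-in for a KGR10 shard; the "text" field name is assumed.
sample = '{"text": "Przykładowe zdanie."}\n{"text": "Drugie zdanie."}\n'
records = read_jsonl(sample.splitlines())
print(len(records))  # → 2
```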
Luiggiprosonic/Kayky | ---
license: openrail
---
|
open-llm-leaderboard/details_nbeerbower__Maidphin-Kunoichi-7B | ---
pretty_name: Evaluation run of nbeerbower/Maidphin-Kunoichi-7B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [nbeerbower/Maidphin-Kunoichi-7B](https://huggingface.co/nbeerbower/Maidphin-Kunoichi-7B)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_nbeerbower__Maidphin-Kunoichi-7B\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-03-07T22:12:07.143668](https://huggingface.co/datasets/open-llm-leaderboard/details_nbeerbower__Maidphin-Kunoichi-7B/blob/main/results_2024-03-07T22-12-07.143668.json) (note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.651565977695202,\n\
\ \"acc_stderr\": 0.0320758237237778,\n \"acc_norm\": 0.6523535304526226,\n\
\ \"acc_norm_stderr\": 0.03272549642025899,\n \"mc1\": 0.4785801713586291,\n\
\ \"mc1_stderr\": 0.01748743214471181,\n \"mc2\": 0.642681406545345,\n\
\ \"mc2_stderr\": 0.015380688965063057\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6621160409556314,\n \"acc_stderr\": 0.013822047922283512,\n\
\ \"acc_norm\": 0.6936860068259386,\n \"acc_norm_stderr\": 0.013470584417276511\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6970722963553077,\n\
\ \"acc_stderr\": 0.004585850835623563,\n \"acc_norm\": 0.8711412069308903,\n\
\ \"acc_norm_stderr\": 0.003343588514866123\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.35,\n \"acc_stderr\": 0.0479372485441102,\n \
\ \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n\
\ \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6444444444444445,\n\
\ \"acc_stderr\": 0.04135176749720385,\n \"acc_norm\": 0.6444444444444445,\n\
\ \"acc_norm_stderr\": 0.04135176749720385\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.7039473684210527,\n \"acc_stderr\": 0.03715062154998904,\n\
\ \"acc_norm\": 0.7039473684210527,\n \"acc_norm_stderr\": 0.03715062154998904\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.62,\n\
\ \"acc_stderr\": 0.048783173121456316,\n \"acc_norm\": 0.62,\n \
\ \"acc_norm_stderr\": 0.048783173121456316\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.7056603773584905,\n \"acc_stderr\": 0.028049186315695255,\n\
\ \"acc_norm\": 0.7056603773584905,\n \"acc_norm_stderr\": 0.028049186315695255\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7569444444444444,\n\
\ \"acc_stderr\": 0.0358687928008034,\n \"acc_norm\": 0.7569444444444444,\n\
\ \"acc_norm_stderr\": 0.0358687928008034\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.47,\n \"acc_stderr\": 0.050161355804659205,\n \
\ \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.050161355804659205\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.53,\n \"acc_stderr\": 0.050161355804659205,\n \"acc_norm\"\
: 0.53,\n \"acc_norm_stderr\": 0.050161355804659205\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6705202312138728,\n\
\ \"acc_stderr\": 0.03583901754736412,\n \"acc_norm\": 0.6705202312138728,\n\
\ \"acc_norm_stderr\": 0.03583901754736412\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.43137254901960786,\n \"acc_stderr\": 0.04928099597287534,\n\
\ \"acc_norm\": 0.43137254901960786,\n \"acc_norm_stderr\": 0.04928099597287534\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.77,\n \"acc_stderr\": 0.04229525846816507,\n \"acc_norm\": 0.77,\n\
\ \"acc_norm_stderr\": 0.04229525846816507\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5914893617021276,\n \"acc_stderr\": 0.032134180267015755,\n\
\ \"acc_norm\": 0.5914893617021276,\n \"acc_norm_stderr\": 0.032134180267015755\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5,\n\
\ \"acc_stderr\": 0.047036043419179864,\n \"acc_norm\": 0.5,\n \
\ \"acc_norm_stderr\": 0.047036043419179864\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5655172413793104,\n \"acc_stderr\": 0.04130740879555497,\n\
\ \"acc_norm\": 0.5655172413793104,\n \"acc_norm_stderr\": 0.04130740879555497\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.40476190476190477,\n \"acc_stderr\": 0.025279850397404904,\n \"\
acc_norm\": 0.40476190476190477,\n \"acc_norm_stderr\": 0.025279850397404904\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.48412698412698413,\n\
\ \"acc_stderr\": 0.04469881854072606,\n \"acc_norm\": 0.48412698412698413,\n\
\ \"acc_norm_stderr\": 0.04469881854072606\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621504,\n \
\ \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621504\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8,\n\
\ \"acc_stderr\": 0.022755204959542943,\n \"acc_norm\": 0.8,\n \
\ \"acc_norm_stderr\": 0.022755204959542943\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.49261083743842365,\n \"acc_stderr\": 0.035176035403610084,\n\
\ \"acc_norm\": 0.49261083743842365,\n \"acc_norm_stderr\": 0.035176035403610084\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\"\
: 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7636363636363637,\n \"acc_stderr\": 0.03317505930009181,\n\
\ \"acc_norm\": 0.7636363636363637,\n \"acc_norm_stderr\": 0.03317505930009181\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7727272727272727,\n \"acc_stderr\": 0.029857515673386417,\n \"\
acc_norm\": 0.7727272727272727,\n \"acc_norm_stderr\": 0.029857515673386417\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8911917098445595,\n \"acc_stderr\": 0.022473253332768766,\n\
\ \"acc_norm\": 0.8911917098445595,\n \"acc_norm_stderr\": 0.022473253332768766\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.676923076923077,\n \"acc_stderr\": 0.023710888501970572,\n \
\ \"acc_norm\": 0.676923076923077,\n \"acc_norm_stderr\": 0.023710888501970572\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.3296296296296296,\n \"acc_stderr\": 0.028661201116524565,\n \
\ \"acc_norm\": 0.3296296296296296,\n \"acc_norm_stderr\": 0.028661201116524565\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.7058823529411765,\n \"acc_stderr\": 0.02959732973097809,\n \
\ \"acc_norm\": 0.7058823529411765,\n \"acc_norm_stderr\": 0.02959732973097809\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3443708609271523,\n \"acc_stderr\": 0.038796870240733264,\n \"\
acc_norm\": 0.3443708609271523,\n \"acc_norm_stderr\": 0.038796870240733264\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8422018348623853,\n \"acc_stderr\": 0.015630022970092444,\n \"\
acc_norm\": 0.8422018348623853,\n \"acc_norm_stderr\": 0.015630022970092444\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5185185185185185,\n \"acc_stderr\": 0.034076320938540516,\n \"\
acc_norm\": 0.5185185185185185,\n \"acc_norm_stderr\": 0.034076320938540516\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8333333333333334,\n \"acc_stderr\": 0.026156867523931045,\n \"\
acc_norm\": 0.8333333333333334,\n \"acc_norm_stderr\": 0.026156867523931045\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.810126582278481,\n \"acc_stderr\": 0.025530100460233494,\n \
\ \"acc_norm\": 0.810126582278481,\n \"acc_norm_stderr\": 0.025530100460233494\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6905829596412556,\n\
\ \"acc_stderr\": 0.03102441174057221,\n \"acc_norm\": 0.6905829596412556,\n\
\ \"acc_norm_stderr\": 0.03102441174057221\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7633587786259542,\n \"acc_stderr\": 0.03727673575596913,\n\
\ \"acc_norm\": 0.7633587786259542,\n \"acc_norm_stderr\": 0.03727673575596913\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7768595041322314,\n \"acc_stderr\": 0.03800754475228733,\n \"\
acc_norm\": 0.7768595041322314,\n \"acc_norm_stderr\": 0.03800754475228733\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8055555555555556,\n\
\ \"acc_stderr\": 0.038260763248848646,\n \"acc_norm\": 0.8055555555555556,\n\
\ \"acc_norm_stderr\": 0.038260763248848646\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7668711656441718,\n \"acc_stderr\": 0.0332201579577674,\n\
\ \"acc_norm\": 0.7668711656441718,\n \"acc_norm_stderr\": 0.0332201579577674\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.45535714285714285,\n\
\ \"acc_stderr\": 0.047268355537191,\n \"acc_norm\": 0.45535714285714285,\n\
\ \"acc_norm_stderr\": 0.047268355537191\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7961165048543689,\n \"acc_stderr\": 0.03989139859531771,\n\
\ \"acc_norm\": 0.7961165048543689,\n \"acc_norm_stderr\": 0.03989139859531771\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8632478632478633,\n\
\ \"acc_stderr\": 0.022509033937077802,\n \"acc_norm\": 0.8632478632478633,\n\
\ \"acc_norm_stderr\": 0.022509033937077802\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.73,\n \"acc_stderr\": 0.044619604333847394,\n \
\ \"acc_norm\": 0.73,\n \"acc_norm_stderr\": 0.044619604333847394\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8250319284802043,\n\
\ \"acc_stderr\": 0.013586619219903333,\n \"acc_norm\": 0.8250319284802043,\n\
\ \"acc_norm_stderr\": 0.013586619219903333\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7427745664739884,\n \"acc_stderr\": 0.02353292543104429,\n\
\ \"acc_norm\": 0.7427745664739884,\n \"acc_norm_stderr\": 0.02353292543104429\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.441340782122905,\n\
\ \"acc_stderr\": 0.016607021781050876,\n \"acc_norm\": 0.441340782122905,\n\
\ \"acc_norm_stderr\": 0.016607021781050876\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7287581699346405,\n \"acc_stderr\": 0.02545775669666788,\n\
\ \"acc_norm\": 0.7287581699346405,\n \"acc_norm_stderr\": 0.02545775669666788\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6977491961414791,\n\
\ \"acc_stderr\": 0.026082700695399665,\n \"acc_norm\": 0.6977491961414791,\n\
\ \"acc_norm_stderr\": 0.026082700695399665\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7376543209876543,\n \"acc_stderr\": 0.02447722285613511,\n\
\ \"acc_norm\": 0.7376543209876543,\n \"acc_norm_stderr\": 0.02447722285613511\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.48226950354609927,\n \"acc_stderr\": 0.02980873964223777,\n \
\ \"acc_norm\": 0.48226950354609927,\n \"acc_norm_stderr\": 0.02980873964223777\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4661016949152542,\n\
\ \"acc_stderr\": 0.01274085387294983,\n \"acc_norm\": 0.4661016949152542,\n\
\ \"acc_norm_stderr\": 0.01274085387294983\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6985294117647058,\n \"acc_stderr\": 0.027875982114273168,\n\
\ \"acc_norm\": 0.6985294117647058,\n \"acc_norm_stderr\": 0.027875982114273168\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6568627450980392,\n \"acc_stderr\": 0.01920660684882536,\n \
\ \"acc_norm\": 0.6568627450980392,\n \"acc_norm_stderr\": 0.01920660684882536\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6636363636363637,\n\
\ \"acc_stderr\": 0.04525393596302506,\n \"acc_norm\": 0.6636363636363637,\n\
\ \"acc_norm_stderr\": 0.04525393596302506\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7346938775510204,\n \"acc_stderr\": 0.02826388994378459,\n\
\ \"acc_norm\": 0.7346938775510204,\n \"acc_norm_stderr\": 0.02826388994378459\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8407960199004975,\n\
\ \"acc_stderr\": 0.025870646766169146,\n \"acc_norm\": 0.8407960199004975,\n\
\ \"acc_norm_stderr\": 0.025870646766169146\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.86,\n \"acc_stderr\": 0.0348735088019777,\n \
\ \"acc_norm\": 0.86,\n \"acc_norm_stderr\": 0.0348735088019777\n },\n\
\ \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.536144578313253,\n\
\ \"acc_stderr\": 0.038823108508905954,\n \"acc_norm\": 0.536144578313253,\n\
\ \"acc_norm_stderr\": 0.038823108508905954\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8538011695906432,\n \"acc_stderr\": 0.027097290118070813,\n\
\ \"acc_norm\": 0.8538011695906432,\n \"acc_norm_stderr\": 0.027097290118070813\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4785801713586291,\n\
\ \"mc1_stderr\": 0.01748743214471181,\n \"mc2\": 0.642681406545345,\n\
\ \"mc2_stderr\": 0.015380688965063057\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8034727703235991,\n \"acc_stderr\": 0.011168120593569565\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6588324488248674,\n \
\ \"acc_stderr\": 0.013059111935831497\n }\n}\n```"
repo_url: https://huggingface.co/nbeerbower/Maidphin-Kunoichi-7B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|arc:challenge|25_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|gsm8k|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hellaswag|10_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-07T22-12-07.143668.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-07T22-12-07.143668.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- '**/details_harness|winogrande|5_2024-03-07T22-12-07.143668.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-03-07T22-12-07.143668.parquet'
- config_name: results
data_files:
- split: 2024_03_07T22_12_07.143668
path:
- results_2024-03-07T22-12-07.143668.parquet
- split: latest
path:
- results_2024-03-07T22-12-07.143668.parquet
---
# Dataset Card for Evaluation run of nbeerbower/Maidphin-Kunoichi-7B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [nbeerbower/Maidphin-Kunoichi-7B](https://huggingface.co/nbeerbower/Maidphin-Kunoichi-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_nbeerbower__Maidphin-Kunoichi-7B",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-03-07T22:12:07.143668](https://huggingface.co/datasets/open-llm-leaderboard/details_nbeerbower__Maidphin-Kunoichi-7B/blob/main/results_2024-03-07T22-12-07.143668.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.651565977695202,
"acc_stderr": 0.0320758237237778,
"acc_norm": 0.6523535304526226,
"acc_norm_stderr": 0.03272549642025899,
"mc1": 0.4785801713586291,
"mc1_stderr": 0.01748743214471181,
"mc2": 0.642681406545345,
"mc2_stderr": 0.015380688965063057
},
"harness|arc:challenge|25": {
"acc": 0.6621160409556314,
"acc_stderr": 0.013822047922283512,
"acc_norm": 0.6936860068259386,
"acc_norm_stderr": 0.013470584417276511
},
"harness|hellaswag|10": {
"acc": 0.6970722963553077,
"acc_stderr": 0.004585850835623563,
"acc_norm": 0.8711412069308903,
"acc_norm_stderr": 0.003343588514866123
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.35,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.35,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6444444444444445,
"acc_stderr": 0.04135176749720385,
"acc_norm": 0.6444444444444445,
"acc_norm_stderr": 0.04135176749720385
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7039473684210527,
"acc_stderr": 0.03715062154998904,
"acc_norm": 0.7039473684210527,
"acc_norm_stderr": 0.03715062154998904
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.62,
"acc_stderr": 0.048783173121456316,
"acc_norm": 0.62,
"acc_norm_stderr": 0.048783173121456316
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7056603773584905,
"acc_stderr": 0.028049186315695255,
"acc_norm": 0.7056603773584905,
"acc_norm_stderr": 0.028049186315695255
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7569444444444444,
"acc_stderr": 0.0358687928008034,
"acc_norm": 0.7569444444444444,
"acc_norm_stderr": 0.0358687928008034
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.47,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.47,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.53,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.53,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6705202312138728,
"acc_stderr": 0.03583901754736412,
"acc_norm": 0.6705202312138728,
"acc_norm_stderr": 0.03583901754736412
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.43137254901960786,
"acc_stderr": 0.04928099597287534,
"acc_norm": 0.43137254901960786,
"acc_norm_stderr": 0.04928099597287534
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.77,
"acc_stderr": 0.04229525846816507,
"acc_norm": 0.77,
"acc_norm_stderr": 0.04229525846816507
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5914893617021276,
"acc_stderr": 0.032134180267015755,
"acc_norm": 0.5914893617021276,
"acc_norm_stderr": 0.032134180267015755
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5,
"acc_stderr": 0.047036043419179864,
"acc_norm": 0.5,
"acc_norm_stderr": 0.047036043419179864
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5655172413793104,
"acc_stderr": 0.04130740879555497,
"acc_norm": 0.5655172413793104,
"acc_norm_stderr": 0.04130740879555497
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.40476190476190477,
"acc_stderr": 0.025279850397404904,
"acc_norm": 0.40476190476190477,
"acc_norm_stderr": 0.025279850397404904
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.48412698412698413,
"acc_stderr": 0.04469881854072606,
"acc_norm": 0.48412698412698413,
"acc_norm_stderr": 0.04469881854072606
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.32,
"acc_stderr": 0.04688261722621504,
"acc_norm": 0.32,
"acc_norm_stderr": 0.04688261722621504
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8,
"acc_stderr": 0.022755204959542943,
"acc_norm": 0.8,
"acc_norm_stderr": 0.022755204959542943
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.49261083743842365,
"acc_stderr": 0.035176035403610084,
"acc_norm": 0.49261083743842365,
"acc_norm_stderr": 0.035176035403610084
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7636363636363637,
"acc_stderr": 0.03317505930009181,
"acc_norm": 0.7636363636363637,
"acc_norm_stderr": 0.03317505930009181
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7727272727272727,
"acc_stderr": 0.029857515673386417,
"acc_norm": 0.7727272727272727,
"acc_norm_stderr": 0.029857515673386417
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8911917098445595,
"acc_stderr": 0.022473253332768766,
"acc_norm": 0.8911917098445595,
"acc_norm_stderr": 0.022473253332768766
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.676923076923077,
"acc_stderr": 0.023710888501970572,
"acc_norm": 0.676923076923077,
"acc_norm_stderr": 0.023710888501970572
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3296296296296296,
"acc_stderr": 0.028661201116524565,
"acc_norm": 0.3296296296296296,
"acc_norm_stderr": 0.028661201116524565
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.7058823529411765,
"acc_stderr": 0.02959732973097809,
"acc_norm": 0.7058823529411765,
"acc_norm_stderr": 0.02959732973097809
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3443708609271523,
"acc_stderr": 0.038796870240733264,
"acc_norm": 0.3443708609271523,
"acc_norm_stderr": 0.038796870240733264
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8422018348623853,
"acc_stderr": 0.015630022970092444,
"acc_norm": 0.8422018348623853,
"acc_norm_stderr": 0.015630022970092444
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5185185185185185,
"acc_stderr": 0.034076320938540516,
"acc_norm": 0.5185185185185185,
"acc_norm_stderr": 0.034076320938540516
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8333333333333334,
"acc_stderr": 0.026156867523931045,
"acc_norm": 0.8333333333333334,
"acc_norm_stderr": 0.026156867523931045
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.810126582278481,
"acc_stderr": 0.025530100460233494,
"acc_norm": 0.810126582278481,
"acc_norm_stderr": 0.025530100460233494
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6905829596412556,
"acc_stderr": 0.03102441174057221,
"acc_norm": 0.6905829596412556,
"acc_norm_stderr": 0.03102441174057221
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7633587786259542,
"acc_stderr": 0.03727673575596913,
"acc_norm": 0.7633587786259542,
"acc_norm_stderr": 0.03727673575596913
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7768595041322314,
"acc_stderr": 0.03800754475228733,
"acc_norm": 0.7768595041322314,
"acc_norm_stderr": 0.03800754475228733
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8055555555555556,
"acc_stderr": 0.038260763248848646,
"acc_norm": 0.8055555555555556,
"acc_norm_stderr": 0.038260763248848646
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7668711656441718,
"acc_stderr": 0.0332201579577674,
"acc_norm": 0.7668711656441718,
"acc_norm_stderr": 0.0332201579577674
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.45535714285714285,
"acc_stderr": 0.047268355537191,
"acc_norm": 0.45535714285714285,
"acc_norm_stderr": 0.047268355537191
},
"harness|hendrycksTest-management|5": {
"acc": 0.7961165048543689,
"acc_stderr": 0.03989139859531771,
"acc_norm": 0.7961165048543689,
"acc_norm_stderr": 0.03989139859531771
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8632478632478633,
"acc_stderr": 0.022509033937077802,
"acc_norm": 0.8632478632478633,
"acc_norm_stderr": 0.022509033937077802
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.73,
"acc_stderr": 0.044619604333847394,
"acc_norm": 0.73,
"acc_norm_stderr": 0.044619604333847394
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8250319284802043,
"acc_stderr": 0.013586619219903333,
"acc_norm": 0.8250319284802043,
"acc_norm_stderr": 0.013586619219903333
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7427745664739884,
"acc_stderr": 0.02353292543104429,
"acc_norm": 0.7427745664739884,
"acc_norm_stderr": 0.02353292543104429
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.441340782122905,
"acc_stderr": 0.016607021781050876,
"acc_norm": 0.441340782122905,
"acc_norm_stderr": 0.016607021781050876
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7287581699346405,
"acc_stderr": 0.02545775669666788,
"acc_norm": 0.7287581699346405,
"acc_norm_stderr": 0.02545775669666788
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6977491961414791,
"acc_stderr": 0.026082700695399665,
"acc_norm": 0.6977491961414791,
"acc_norm_stderr": 0.026082700695399665
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7376543209876543,
"acc_stderr": 0.02447722285613511,
"acc_norm": 0.7376543209876543,
"acc_norm_stderr": 0.02447722285613511
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.48226950354609927,
"acc_stderr": 0.02980873964223777,
"acc_norm": 0.48226950354609927,
"acc_norm_stderr": 0.02980873964223777
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4661016949152542,
"acc_stderr": 0.01274085387294983,
"acc_norm": 0.4661016949152542,
"acc_norm_stderr": 0.01274085387294983
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6985294117647058,
"acc_stderr": 0.027875982114273168,
"acc_norm": 0.6985294117647058,
"acc_norm_stderr": 0.027875982114273168
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6568627450980392,
"acc_stderr": 0.01920660684882536,
"acc_norm": 0.6568627450980392,
"acc_norm_stderr": 0.01920660684882536
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6636363636363637,
"acc_stderr": 0.04525393596302506,
"acc_norm": 0.6636363636363637,
"acc_norm_stderr": 0.04525393596302506
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7346938775510204,
"acc_stderr": 0.02826388994378459,
"acc_norm": 0.7346938775510204,
"acc_norm_stderr": 0.02826388994378459
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8407960199004975,
"acc_stderr": 0.025870646766169146,
"acc_norm": 0.8407960199004975,
"acc_norm_stderr": 0.025870646766169146
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.86,
"acc_stderr": 0.0348735088019777,
"acc_norm": 0.86,
"acc_norm_stderr": 0.0348735088019777
},
"harness|hendrycksTest-virology|5": {
"acc": 0.536144578313253,
"acc_stderr": 0.038823108508905954,
"acc_norm": 0.536144578313253,
"acc_norm_stderr": 0.038823108508905954
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8538011695906432,
"acc_stderr": 0.027097290118070813,
"acc_norm": 0.8538011695906432,
"acc_norm_stderr": 0.027097290118070813
},
"harness|truthfulqa:mc|0": {
"mc1": 0.4785801713586291,
"mc1_stderr": 0.01748743214471181,
"mc2": 0.642681406545345,
"mc2_stderr": 0.015380688965063057
},
"harness|winogrande|5": {
"acc": 0.8034727703235991,
"acc_stderr": 0.011168120593569565
},
"harness|gsm8k|5": {
"acc": 0.6588324488248674,
"acc_stderr": 0.013059111935831497
}
}
```
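To illustrate how the aggregate `"all"` accuracy relates to the per-task scores above, the MMLU-style average is simply the mean of the `acc` values over the `hendrycksTest-*` tasks. This is a minimal sketch (not the leaderboard's exact aggregation code) using a small illustrative subset of the results dict:

```python
# Sketch: averaging per-task accuracies from a results dict shaped like the
# JSON above. Only three tasks are included here for illustration; the real
# aggregate in "all" averages over every evaluated task.
results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.35},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.6444444444444445},
    "harness|hendrycksTest-astronomy|5": {"acc": 0.7039473684210527},
}

# Select the MMLU (hendrycksTest) tasks and compute the mean accuracy.
mmlu_tasks = {k: v for k, v in results.items() if "hendrycksTest" in k}
mean_acc = sum(v["acc"] for v in mmlu_tasks.values()) / len(mmlu_tasks)
print(f"Mean accuracy over {len(mmlu_tasks)} MMLU tasks: {mean_acc:.4f}")
```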
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
autoevaluate/autoeval-staging-eval-squad_v2-squad_v2-76c05b-14906068 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- squad_v2
eval_info:
task: extractive_question_answering
model: deepset/xlm-roberta-base-squad2
metrics: ['bertscore']
dataset_name: squad_v2
dataset_config: squad_v2
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/xlm-roberta-base-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model. |
Tverous/claim | ---
dataset_info:
features:
- name: pairID
dtype: string
- name: premise
dtype: string
- name: original_claim
dtype: string
- name: manipulated_claim
dtype: string
- name: label
dtype: string
- name: manipulated_claim_cleaned_amr
dtype: string
splits:
- name: train
num_bytes: 159114
num_examples: 126
download_size: 44920
dataset_size: 159114
---
# Dataset Card for "claim"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ardaorcun/turkish-instruction-dataset-prepared | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 36942600
num_examples: 66019
download_size: 22977742
dataset_size: 36942600
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
This dataset contains concatenated versions of merve's and beratcmn's data. |
Anderson1992/hebecamargo | ---
license: openrail
---
|
useSword/runpod_Negative_Embeds | ---
license: apache-2.0
---
|
NPCProgrammer/DBERT_Emotions_tuned | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': sadness
'1': joy
'2': love
'3': anger
'4': fear
'5': surprise
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 42829533
num_examples: 16000
- name: validation
num_bytes: 5350695
num_examples: 2000
- name: test
num_bytes: 5353173
num_examples: 2000
download_size: 2321604
dataset_size: 53533401
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
ridgerun-ai/citw-v0.1 | ---
license: cc-by-nc-2.0
task_categories:
- object-detection
language:
- en
tags:
- cellphone
- mobile phone
- phone
- cell phone
pretty_name: Cellphones in the Wild
---
# Dataset Card for Cellphones in the Wild
CITW is a small dataset that contains bounding box annotations of cellphones in images.
## Dataset Details
### Dataset Description
CITW (Cellphones in the Wild) is a collection of images that contain one or more cell
phones in them, along with their corresponding bounding box annotations. CITW was distilled
from COCO 2017, where only the images and annotations containing a cellphone were kept. The
structure and annotations were adapted from COCO to be Hugging Face compatible.
- **Curated by:** Michael Grüner <[michael.gruner@ridgerun.ai](mailto:michael.gruner@ridgerun.ai)>
- **Funded by:** [RidgeRun.ai](https://www.ridgerun.ai)
- **License:** CC-BY-NC-2.0
### Dataset Sources
- **Repository:** [https://huggingface.co/datasets/ridgerun-ai/citw-v0.1](https://huggingface.co/datasets/ridgerun-ai/citw-v0.1)
- **Demo:** TBD
## Uses
### Direct Use
CITW is meant to be used to train cellphone detectors.
### Out-of-Scope Use
The dataset only contains samples of mobile phones and will not work for other types of phones, such as office phones, fax machines, or public phones.
## Dataset Structure
The dataset can be found within the `data` directory. It contains two splits: `train` and `val`, which are represented as subdirectories.
Within each split, you'll find the images in JPEG format, as well as a `metadata.jsonl` file.
The `metadata.jsonl` contains one entry per line. Each entry represents an image. The annotations can be found under the `objects` object.
This object contains a list of bounding boxes (each bounding box is itself a list) and a list of categories.
A single bounding box is annotated as `[x, y, width, height]`.
There is a single category, 0, which corresponds to the cellphone class.
### Entry Example
An example of a single entry is:
```
{
"file_name": "000000253967.jpg",
"objects": {
"bbox": [
[16.31, 104.46, 33.54, 43.17],
[277.55, 146.1, 17.99, 58.69],
[436.56, 130.99, 23.33, 42.09]
],
"categories": [0, 0, 0]
}
}
```
In this example, the image contains 3 cellphones. This is reflected in the 3 bounding boxes and the list of 3 cellphone categories.
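For illustration, here is a minimal sketch (the `xywh_to_xyxy` helper is hypothetical, not part of the dataset) that converts the `[x, y, width, height]` boxes above into corner coordinates, which many detection frameworks expect:

```python
def xywh_to_xyxy(bbox):
    """Convert a [x, y, width, height] box to [x_min, y_min, x_max, y_max]."""
    x, y, w, h = bbox
    return [x, y, x + w, y + h]

# The entry shown above, restated as a Python dict.
entry = {
    "file_name": "000000253967.jpg",
    "objects": {
        "bbox": [
            [16.31, 104.46, 33.54, 43.17],
            [277.55, 146.1, 17.99, 58.69],
            [436.56, 130.99, 23.33, 42.09],
        ],
        "categories": [0, 0, 0],
    },
}

corners = [xywh_to_xyxy(b) for b in entry["objects"]["bbox"]]
print(corners[0])  # approximately [16.31, 104.46, 49.85, 147.63]
```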
## Dataset Creation
### Source Data
#### Data Collection and Processing
The dataset is a distillation of COCO2017. The repository contains a `coco2citw.py` script that automates this process.
#### Who are the source data producers?
Please refer to the [COCO Challenge homepage](https://cocodataset.org/#home) for information on the original production process.
#### Personal and Sensitive Information
To the best of our knowledge, there is no personal or sensitive information in this dataset.
## Bias, Risks, and Limitations
This dataset is limited to cellphone models that were produced before 2017. Similar objects (like walkie-talkies,
portable game consoles, or calculators) may eventually be confused with cellphones.
## Glossary
CITW: Cellphones in the Wild
## Dataset Card Authors
Michael Grüner <[michael.gruner@ridgerun.ai](mailto:michael.gruner@ridgerun.ai)>
## Dataset Card Contact
RidgeRun.ai <[contactus@ridgerun.ai](mailto:contactus@ridgerun.ai)> |
yutaozhu94/INTERS | ---
license: cc-by-sa-4.0
task_categories:
- text-generation
language:
- en
size_categories:
- 100K<n<1M
---
## INTERS: Unlocking the Power of Large Language Models in Search with Instruction Tuning
**Authors**: Yutao Zhu, Peitian Zhang, Chenghao Zhang, Yifei Chen, Binyu Xie, Zhicheng Dou, Zheng Liu, and Ji-Rong Wen
<p>
📃 <a href="https://arxiv.org/abs/2401.06532">ArXiv Paper</a>
•
💡 <a href="https://github.com/DaoD/INTERS">GitHub</a>
</p>
## Introduction
Large language models (LLMs) have demonstrated impressive capabilities in various natural language processing tasks. Despite this, their application to information retrieval (IR) tasks is still challenging due to the infrequent occurrence of many IR-specific concepts in natural language. While prompt-based methods can provide task descriptions to LLMs, they often fall short in facilitating a comprehensive understanding and execution of IR tasks, thereby limiting LLMs' applicability. To address this gap, in this work, we explore the potential of instruction tuning to enhance LLMs' proficiency in IR tasks. We introduce a novel instruction tuning dataset, INTERS, encompassing 20 tasks across three fundamental IR categories: query understanding, document understanding, and query-document relationship understanding. The data are derived from 43 distinct datasets with manually written templates. Our empirical results reveal that INTERS significantly boosts the performance of various publicly available LLMs, such as LLaMA, Mistral, and Phi, in IR tasks. Furthermore, we conduct extensive experiments to analyze the effects of instruction design, template diversity, few-shot demonstrations, and the volume of instructions on performance.
## File List
- train.jsonl: The training set for the in-domain evaluation scenario.
- dev-qu-du-zero-shot/*: The dev set of query understanding tasks and document understanding tasks for the zero-shot evaluation scenario.
- test-qu-du-zero-shot/*: The test set of query understanding tasks and document understanding tasks for the zero-shot evaluation scenario.
- test-qdu/*: The test set of query-document relationship understanding tasks.
## File Format
### For query understanding tasks and document understanding tasks
Each line of the file is a json dict with the following structure:
```
{
"prompt": the input for LLMs,
"completion": the output for LLMs,
"source": the data source,
}
```
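Each file can be read line by line with the standard `json` module. A minimal sketch over two synthetic lines (the actual values come from the files listed above; in practice you would iterate over `train.jsonl`):

```python
import io
import json

# Two synthetic lines shaped like the format described above.
raw = io.StringIO(
    '{"prompt": "Rewrite this query: ...", "completion": "...", "source": "example_a"}\n'
    '{"prompt": "Summarize this document: ...", "completion": "...", "source": "example_b"}\n'
)

examples = [json.loads(line) for line in raw]
print(len(examples), examples[0]["source"])  # 2 example_a
```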
### For query-document relationship understanding tasks
Note that the candidate documents are retrieved by BM25, so the positive document may not be in the candidate document list.
```
{
"query": the query,
"pos": the content of the positive document,
"pos_index": the index of the positive document,
"pos_score": [unused],
    "query_id": the id of the query,
"key": the content of all candidate documents,
"key_index": the index of all candidate documents,
}
``` |
Gummybear05/speed_changed1 | ---
dataset_info:
features:
- name: path
dtype: string
- name: filename
dtype: string
- name: text
dtype: string
- name: quality
dtype: string
- name: city
dtype: string
- name: gender
dtype: string
- name: age
dtype: string
- name: audio
struct:
- name: array
sequence: float32
- name: path
dtype: string
- name: sample_rate
dtype: int64
splits:
- name: train
num_bytes: 3492649268
num_examples: 8531
- name: test
num_bytes: 43120311
num_examples: 120
download_size: 3530614667
dataset_size: 3535769579
---
# Dataset Card for "speed_changed1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
whatisslove11/40_ms_eval | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: label
dtype:
class_label:
names:
'0': normal_speech
'1': whisper
'2': music
'3': scream
splits:
- name: train
num_bytes: 180262851.732
num_examples: 25346
download_size: 171059469
dataset_size: 180262851.732
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kye/all-kye-code | ---
license: mit
---
|
Codec-SUPERB/noisy_vctk_16k_synth | ---
configs:
- config_name: default
data_files:
- split: original
path: data/original-*
- split: academicodec_hifi_16k_320d
path: data/academicodec_hifi_16k_320d-*
- split: academicodec_hifi_16k_320d_large_uni
path: data/academicodec_hifi_16k_320d_large_uni-*
- split: academicodec_hifi_24k_320d
path: data/academicodec_hifi_24k_320d-*
- split: audiodec_24k_320d
path: data/audiodec_24k_320d-*
- split: dac_16k
path: data/dac_16k-*
- split: dac_24k
path: data/dac_24k-*
- split: dac_44k
path: data/dac_44k-*
- split: encodec_24k
path: data/encodec_24k-*
- split: funcodec_en_libritts_16k_gr1nq32ds320
path: data/funcodec_en_libritts_16k_gr1nq32ds320-*
- split: funcodec_en_libritts_16k_gr8nq32ds320
path: data/funcodec_en_libritts_16k_gr8nq32ds320-*
- split: funcodec_en_libritts_16k_nq32ds320
path: data/funcodec_en_libritts_16k_nq32ds320-*
- split: funcodec_en_libritts_16k_nq32ds640
path: data/funcodec_en_libritts_16k_nq32ds640-*
- split: funcodec_zh_en_16k_nq32ds320
path: data/funcodec_zh_en_16k_nq32ds320-*
- split: funcodec_zh_en_16k_nq32ds640
path: data/funcodec_zh_en_16k_nq32ds640-*
- split: speech_tokenizer_16k
path: data/speech_tokenizer_16k-*
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: id
dtype: string
splits:
- name: original
num_bytes: 2298657432.0
num_examples: 24792
- name: academicodec_hifi_16k_320d
num_bytes: 2292445936.0
num_examples: 24792
- name: academicodec_hifi_16k_320d_large_uni
num_bytes: 2292445936.0
num_examples: 24792
- name: academicodec_hifi_24k_320d
num_bytes: 3433060336.0
num_examples: 24792
- name: audiodec_24k_320d
num_bytes: 3440000656.0
num_examples: 24792
- name: dac_16k
num_bytes: 2294433456.0
num_examples: 24792
- name: dac_24k
num_bytes: 3029679147.736
num_examples: 24792
- name: dac_44k
num_bytes: 5564292936.952
num_examples: 24792
- name: encodec_24k
num_bytes: 3029728087.144
num_examples: 24792
- name: funcodec_en_libritts_16k_gr1nq32ds320
num_bytes: 2019057663.064
num_examples: 24792
- name: funcodec_en_libritts_16k_gr8nq32ds320
num_bytes: 2019057663.064
num_examples: 24792
- name: funcodec_en_libritts_16k_nq32ds320
num_bytes: 2019057663.064
num_examples: 24792
- name: funcodec_en_libritts_16k_nq32ds640
num_bytes: 2019057663.064
num_examples: 24792
- name: funcodec_zh_en_16k_nq32ds320
num_bytes: 2019057663.064
num_examples: 24792
- name: funcodec_zh_en_16k_nq32ds640
num_bytes: 2019057663.064
num_examples: 24792
- name: speech_tokenizer_16k
num_bytes: 2028915656.44
num_examples: 24792
download_size: 44767386741
dataset_size: 41818005558.656006
---
# Dataset Card for "noisy_vctk_16k_synth"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Zombely/pl-text-images-5000-whole | ---
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 2538550896.71
num_examples: 4046
- name: test
num_bytes: 309157048.0
num_examples: 472
- name: validation
num_bytes: 311530101.0
num_examples: 482
download_size: 3161688642
dataset_size: 3159238045.71
---
# Dataset Card for "pl-text-images-5000-whole"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
lilacai/lilac-SlimOrca | ---
tags:
- Lilac
---
# lilac/SlimOrca
This dataset is a [Lilac](http://lilacml.com) processed dataset. Original dataset: [https://huggingface.co/datasets/Open-Orca/SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca)
To download the dataset to a local directory:
```bash
lilac download lilacai/lilac-SlimOrca
```
or from python with:
```py
import lilac as ll

ll.download("lilacai/lilac-SlimOrca")
```
|
dim/sharegpt_short_ru | ---
license: cc-by-nc-4.0
dataset_info:
features:
- name: conversation
sequence: string
- name: hash
dtype: string
splits:
- name: train
num_bytes: 825523
num_examples: 253
download_size: 367027
dataset_size: 825523
---
### Version 1
```python
import json
with open("verbalist/datasets/RyokoAI_ShareGPT52K/sg_90k_part1.json") as f:
dataset1 = json.load(f)
with open("verbalist/datasets/RyokoAI_ShareGPT52K/sg_90k_part2.json") as f:
dataset2 = json.load(f)
dataset = dataset1 + dataset2
import re
import regex
import hashlib

import numpy as np  # used below for percentile-based length filtering
def filter_string(string):
has = True
has_zh = not len(re.findall(r"[\u4e00-\u9fff]+", string)) > 0
has_ko = not len(re.findall(r"[\u3131-\ucb4c]+", string)) > 0
has = has_zh and has_ko
invalid_letters = "ієùéàçğİžš"
for letter in invalid_letters:
if letter in string:
return False
return has
def has_cyrillic(text):
return bool(regex.search(r"\p{IsCyrillic}", text))
clean_dataset = []
for conversation in dataset:
all_text = "\n".join([item["value"] for item in conversation["conversations"]])
# print(all_text)
# break
if filter_string(all_text) and has_cyrillic(all_text):
clean_dataset.append(conversation)
import markdownify
def correct_string(string):
string = string.replace("\\_", "_")
languages = [
"css",
"python",
"go",
"html",
"kotlin",
"diff",
"vba",
"sql",
]
for lang in languages:
string = string.replace(f"\n{lang}Copy code`", f"{lang}\n")
string = string.replace("`\n```", "\n```")
string = string.replace("\n ", "\n ")
delete_phrases = [
"Как искусственный интеллект, я не являюсь владельцем физических объектов и не могу продавать или покупать предметы. Однако, я могу поделиться советом, как можно попытаться убедить кого-то в покупке карандаша.",
"Как искусственный интеллект, я не имею личных чувств и мнений, и не могу иметь предпочтений в выборе между рождением своего ребенка и усыновлением приемного ребенка из приюта.",
"1 / 1",
"2 / 2",
"3 / 3",
"4 / 4",
"5 / 5",
"6 / 6",
"7 / 7",
"8 / 8",
"9 / 9",
"К сожалению, я не могу проверить дату вопроса, но я могу предоставить информацию о Максиме Радайкине и Борисе Марцинкевиче на начало 2021 года.",
"К сожалению, я не могу вставить пример базы данных в чат, но я могу объяснить, что это такое.",
"К сожалению, я не могу написать полноценное расширение для Google Chrome в рамках этой сессии. Однако,",
"К сожалению, я не могу выбрать материалы и дизайн за вас, так как это зависит от ваших потребностей и предпочтений. Однако,",
"Извините, но я не могу создать код для такой сложной программы с нуля, так как это потребовало бы обширных исследований, ресурсов и опыта. Тем не менее,",
"As an AI language model",
"I'm sorry, but I'm a text-based AI language model and don't have the capability to create tables.",
"Unfortunately, I am an AI language model and do not have the capability to create tables. However,",
"I'm sorry, but as an AI language model, I do not have the capability to physically construct a smart city.",
"Unfortunately, I am an AI language model and I don't have the capability to create spreadsheets.",
"I'm sorry for the delay. Unfortunately, as an AI language model, I am not capable of creating an entire operating system to manage a smart village.",
"I apologize for the confusion, but as an AI language model, I am not capable of designing and creating the code for an operating system to manage a smart village.",
"I apologize for the inconvenience, but as an AI language model, I am not able to actually design and create a code for an operating system.",
"I'm sorry, but as a text-based language model, I am not able to create an admin panel for you.",
'Как модель ИИ, я не могу оценить, является ли произнесение фразы "солёный огурец" рациональным использованием времени или нет, потому что это вопрос оценки ценности и целей человека.',
]
for phrase in delete_phrases:
string = string.replace(phrase, "").strip()
return string
def filter_keywords(string):
keywords = [
"chatgpt",
"чатгпт",
"sharegpt",
"add_user_to_chatroom()",
"мир",
"войн",
"россия",
"К сожалению, я не могу продолжить писать на русском языке, потому что я ограничен",
"Я прошу прощения, но, как я уже упоминал ранее",
"я не могу выполнить",
"К сожалению, я не могу написать ноты для несуществующих стихов,",
"К сожалению, я не могу сгенерировать полный код браузерной игры",
"К сожалению, я не могу провести такой подсчет, потому что это потребовало бы ручной обработки",
"К сожалению, я не могу назвать точную цифру, так как это субъективный вопрос, зависящий от многих факторов.",
"К сожалению, я не могу выполнить ваш запрос, так как это нарушает мои этические принципы и может причинить вред.",
"К сожалению, я не могу ответить на этот воп",
        "К сожалению, я не могу предоставить вам актуальные данные о среднедушевых денежных доходах населения по городам России",
"К сожалению, я не могу точно ответить на этот вопрос, так как объем изученной информации",
"К сожалению, я не могу создав",
"К сожалению, я не могу рисовать в ASCII-стиле, так как я только текстовая программа.",
"К сожалению, я не могу создавать изображения напрямую в этом окне чата.",
"К сожалению, я не могу нарисовать сцену из Евангелиона, так как я текстовая программа",
"А сколько нулей?",
"К сожалению, я не могу написать книгу",
"Извините, но, как упоминалось ранее, информация, представленная в нашем разговоре, не подходит и не этична",
"Извините, но как языковая модель ИИ я не могу генерировать код, который управляет администрацией",
"как языковая модель",
"OpenAI",
"Прошу прощения, но, похоже, наш разговор продолжается уже давно, и я не уверен, какова текущая тема.",
"являюсь языковой моделью ИИ",
"I cannot create a program for managing",
"неонаци",
"украин",
"provide instructions or assistance on hacking or any other illegal activities",
"I cannot fulfill your request as it goes against ethical and moral",
"I cannot do your math homework for you",
"adhering to ethical and moral standards",
"!GPT",
"Developer Mode Output",
"are illegal or unethical.",
"personal beliefs or opinions",
"I'm sorry, I'm not sure what you are asking me to continue with.",
"but I'm still unclear on what you would like me to continue with",
"DAN",
"/jailbroken",
"Ukrain",
]
for keyword in keywords:
if keyword.lower() in string.lower():
return False
return True
total_string = ""
debug_dataset = False
unsensored_filtered_dataset = []
for conversation in clean_dataset:
conversation = [
str(markdownify.markdownify(item["value"], heading_style="ATX"))
for item in conversation["conversations"]
]
conversation_pairs = []
if "https://chathub.gg" in conversation[0]:
conversation.pop(0)
full_text = " ".join(conversation)
if filter_keywords(full_text):
for i in range(1, len(conversation)):
if (i + 1) % 2 == 0:
if debug_dataset:
bot_message = "BOT " + correct_string(conversation[i])
user_message = "USER " + correct_string(conversation[i - 1])
else:
bot_message = correct_string(conversation[i])
user_message = correct_string(conversation[i - 1])
conversation_pairs.append(user_message)
conversation_pairs.append(bot_message)
if len(conversation_pairs) > 0:
unsensored_filtered_dataset.append(conversation_pairs)
if debug_dataset:
all_text = "\n===\n".join([item for item in conversation_pairs])
total_string += all_text
total_string += "===" * 10
total_string += "\n"
total_string += "===" * 10
total_string += "\n"
total_string += "===" * 10
total_string += "\n"
# print(total_string)
from transformers import AutoTokenizer
from verbalist.datasets.utils import visualize_hist
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
conversation_lengths = []
for conversation in unsensored_filtered_dataset:
all_text = "\n===\n".join([item for item in conversation])
conversation_lengths.append(len(tokenizer(all_text)["input_ids"]))
# print(all_text)
# print("="*100)
# print("="*100)
# print("="*100)
# break
# if has_cyrillic(all_text):
# rus_conv.append(conversation)
visualize_hist(conversation_lengths, "ru_share_gpt_filtered")
filter_num = 85
passed_convs = (
np.array(conversation_lengths) < np.percentile(conversation_lengths, filter_num)
).tolist()
unsensored_passed = []
for i, status in enumerate(passed_convs):
if status:
unsensored_passed.append(unsensored_filtered_dataset[i])
unsensored_dataset = []
for conv in unsensored_passed:
conv_hash = hashlib.sha256(conv[0].encode('utf-8')).hexdigest()
unsensored_dataset.append({
"conversation": conv,
"hash": conv_hash
})
``` |
polejowska/cd45rb_leukocytes_subdataset | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
list:
- name: category_id
dtype:
class_label:
names:
'0': leukocyte
- name: image_id
dtype: string
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: segmentation
list:
list: float32
- name: iscrowd
dtype: bool
splits:
- name: train
num_bytes: 1867521478.0
num_examples: 966
- name: val
num_bytes: 196591278.0
num_examples: 100
- name: test
num_bytes: 185463746.0
num_examples: 95
download_size: 0
dataset_size: 2249576502.0
task_categories:
- object-detection
tags:
- histopathology
- leukocytes
---
# Dataset Card for "cd45rb_leukocytes_subdataset"
Citation:
Daisuke Komura, Takumi Onoyama, Koki Shinbo, Hiroto Odaka, Minako Hayakawa, Mieko Ochi, Ranny Rahaningrum Herdiantoputri, Haruya Endo, Hiroto Katoh, Tohru Ikeda, Tetsuo Ushiku, Shumpei Ishikawa,
Restaining-based annotation for cancer histology segmentation to overcome annotation-related limitations among pathologists, Patterns, Volume 4, Issue 2, 2023, 100688, https://doi.org/10.1016/j.patter.2023.100688.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HaoyeZhang/RLHF-V-Dataset | ---
license: cc-by-nc-4.0
task_categories:
- text-generation
- visual-question-answering
language:
- en
configs:
- config_name: default
data_files: RLHF-V-Dataset.parquet
dataset_info:
features:
- name: ds_name
dtype: string
- name: image
dtype: image
- name: text
dtype: string
- name: origin_dataset
dtype: string
- name: origin_split
dtype: string
- name: idx
dtype: int64
- name: image_path
dtype: string
pretty_name: RLHF-V-Dataset
size_categories:
- 1K<n<10K
---
# Dataset Card for RLHF-V-Dataset
[Project Page](https://rlhf-v.github.io/) | [Paper](https://arxiv.org/abs/2312.00849) | [GitHub](https://github.com/RLHF-V/RLHF-V)
## Updates
**[2024.01.06]** 🔥 **A larger, more diverse set of fine-grained human correction data is available now!** 🔥 The newly released data has about **5.7k of fine-grained human correction data** that covers the output of **more powerful models** (Qwen-VL-Chat, InstructBLIP, etc.). We also **expand the image types** from everyday scenes to diverse styles and themes (WikiArt, landmarks, scene texts, etc.).
**[2024.01.05]** 🔧 We have reformatted our dataset, and it is now **more convenient to preview and use**! The dataset now supports the `load_dataset` function, and the data content can be easily previewed online.
**[2023.12.15]** We incorporated a new annotation subset with an additional **1065 fine-grained annotations** into our dataset!
## Dataset Summary
RLHF-V-Dataset is the human preference data used in "**RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback**".
We originally collected a large amount of **fine-grained segment-level human corrections** on diverse instructions, including detailed descriptions and question-answering instructions. More high-quality annotations for different image sources and model outputs are on the way.
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/6566e0c493e30c8a60048eb3/jerEZiHDDc2ceF9anVHR-.png" alt="fig1" width="60%"/>
</p>
Utilizing our dataset can dramatically **reduce model hallucinations by 34.8%** while **keeping informativeness**.
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/6566e0c493e30c8a60048eb3/7xJEdKXeW33iKdHqJwvNN.png" alt="fig2" width="70%"/>
</p>
## Usage
```python
from datasets import load_dataset
data = load_dataset("HaoyeZhang/RLHF-V-Dataset")
```
## Data fields
| | Key | Description |
| ---- | ---------------- | ------------------------------------------------------------ |
| 0 | `ds_name` | Dataset name. |
| 1 | `image` | Dict contains path and bytes. If loaded by `load_dataset`, it can be automatically converted into a PIL Image. |
| 2 | `text` | Preference data. Each data item contains a dict with the keys "question", "chosen", and "rejected". |
| 3 | `origin_dataset` | Original dataset for annotation, which is not used in training. |
| 4 | `origin_split` | Meta information for each data item, including the name of the model we use to generate the original answer, and the question type ("detailed description" or "question answering") |
| 5 | `idx` | Data index. |
| 6 | `image_path` | Image path. |
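Because the `text` column is stored as a string, the preference triple can be recovered with `json.loads`. A hedged sketch over one synthetic row (the example values are invented; only the key names follow the description above):

```python
import json

# One synthetic row shaped like the fields above; in the real dataset the
# `text` column is a JSON-encoded string.
row = {
    "ds_name": "example",
    "text": json.dumps({
        "question": "What is in the image?",
        "chosen": "A dog lying on a couch.",
        "rejected": "Two cats sitting on a table.",
    }),
}

pref = json.loads(row["text"])
print(sorted(pref.keys()))  # ['chosen', 'question', 'rejected']
```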
## Citation
```
@article{2023rlhf-v,
author = {Tianyu Yu and Yuan Yao and Haoye Zhang and Taiwen He and Yifeng Han and Ganqu Cui and Jinyi Hu and Zhiyuan Liu and Hai-Tao Zheng and Maosong Sun and Tat-Seng Chua},
title = {RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback},
journal = {arxiv},
year = {2023},
}
``` |
james-burton/OrientalMuseum_min6-white-mat | ---
dataset_info:
features:
- name: obj_num
dtype: string
- name: file
dtype: string
- name: image
dtype: image
- name: root
dtype: string
- name: description
dtype: string
- name: object_name
dtype: string
- name: other_name
dtype: string
- name: label
dtype:
class_label:
names:
'0': Animal Mummy
'1': Batik
'2': Buffalo Horn
'3': Chinese Red Rosewood
'4': Colour on Paper
'5': Flint/Chert
'6': Gouache on Paper
'7': Haematite/Red Ochre
'8': Human Bone
'9': Ink and Colour on Paper
'10': Ink and Colours on Silk
'11': Ink and Opaque Watercolour on Paper
'12': Ink on Paper
'13': Jade (Calcified)
'14': Japanese paper
'15': Microcline/Green Feldspar/Amazon-Stone
'16': Nile Mud
'17': Opaque Watercolour on Paper
'18': Opaque Watercolour or Gouache on Mica
'19': Pith
'20': Pith Paper
'21': Plant Product
'22': Resin/Plastic
'23': Rhinoceros Horn
'24': Smaragdite
'25': Steatite
'26': Steatite/Soap Stone
'27': Watercolour on Rice Paper
'28': acrylic
'29': agate
'30': alabaster
'31': aluminum
'32': amber
'33': amethyst
'34': antler
'35': artificial stone
'36': bamboo
'37': basalt
'38': bone
'39': bowenite
'40': boxwood
'41': brass
'42': brocade
'43': bronze
'44': burnt jade
'45': canvas
'46': cardboard
'47': cards
'48': carnelian
'49': cast iron
'50': celadon
'51': cellulose acetate
'52': ceramic
'53': chalcedony
'54': cherry
'55': clay
'56': coconut
'57': copper
'58': copper alloy
'59': coral
'60': cotton
'61': crystal
'62': diorite
'63': earthenware
'64': ebony
'65': emerald
'66': enamel
'67': faience
'68': flax
'69': flint
'70': gauze
'71': glass
'72': gold
'73': granite
'74': gray ware
'75': hardwood
'76': horn
'77': incense
'78': ink
'79': iron
'80': ivory
'81': jade
'82': jadeite
'83': jasper
'84': lacquer
'85': lapis lazuli
'86': lead
'87': lead alloy
'88': leather
'89': limestone
'90': linen
'91': malachite
'92': marble
'93': metal
'94': mineral
'95': mother of pearl
'96': nephrite
'97': nylon
'98': obsidian
'99': organic material
'100': paint
'101': palm fiber
'102': palm leaf
'103': paper
'104': papier mâché
'105': papyrus
'106': pewter
'107': photographic paper
'108': pine
'109': plant fiber
'110': plaster
'111': plastic
'112': plate
'113': polyester
'114': polystyrene
'115': porcelain
'116': pottery
'117': quartzite
'118': rattan
'119': realgar
'120': reed
'121': rice paper
'122': rock
'123': rush
'124': sandstone
'125': satin
'126': schist
'127': seashell
'128': serpentine
'129': shell
'130': silk
'131': silver
'132': slate
'133': soapstone
'134': softwood
'135': steel
'136': stone
'137': stoneware
'138': stucco
'139': sycamore
'140': synthetic fiber
'141': teak
'142': terracotta
'143': textiles
'144': tin
'145': tortoise shell
'146': travertine
'147': tremolite
'148': turquoise
'149': velvet
'150': wood
'151': wool
'152': wrought iron
'153': zinc alloy
- name: production.period
dtype: string
- name: production.place
dtype: string
- name: new_root
dtype: string
splits:
- name: train
num_bytes: 681415250.562
num_examples: 23014
- name: validation
num_bytes: 148687011.448
num_examples: 5416
- name: test
num_bytes: 187104955.216
num_examples: 5416
download_size: 949262639
dataset_size: 1017207217.226
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
qwerty123123/venv | ---
license: apache-2.0
---
|
suguroglu/crowdsourced-calculator-demo | ---
configs:
- config_name: default
data_files:
- split: train
path: data.csv
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
BEE-spoke-data/falcon-refinedweb-1M_en_medium | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 5454844691
num_examples: 1000000
download_size: 3344664045
dataset_size: 5454844691
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
source_datasets: tiiuae/falcon-refinedweb
language:
- en
license: odc-by
task_categories:
- text-generation
---
# BEE-spoke-data/falcon-refinedweb-1M_en_medium
A sample from [falcon-refinedweb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb):
- more than 512 & less than 8192 gpt4 tiktoken tokens
- `en` only (via fasttext-langdetect)
- 1M samples
```
GPT-4 tiktoken token count:
token_count
count 1000000.000000
mean 1197.179246
std 964.177338
min 513.000000
25% 653.000000
50% 871.000000
75% 1315.000000
max 8191.000000
Total count: 1197.18 M tokens
``` |
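The selection criteria above can be sketched as a simple predicate. This is an illustrative reconstruction, not the actual filtering script: in practice the token count would come from tiktoken's GPT-4 encoding and the language label from fasttext-langdetect, neither of which is called here.

```python
def keep_sample(token_count: int, lang: str) -> bool:
    # Mirrors the criteria described above: English only, and more than
    # 512 but fewer than 8192 GPT-4 tiktoken tokens (so the minimum kept
    # length is 513 and the maximum is 8191, matching the stats table).
    return lang == "en" and 512 < token_count < 8192
```

For example, a 513-token English sample is kept, while a 512-token English sample or a non-English sample of any length is dropped.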
mattpscott/airoboros-summarization | ---
license: bsd-3-clause
task_categories:
- summarization
language:
- en
tags:
- airoboros
- mixture of experts
- summarization
pretty_name: Airoboros Summarization Expert
---
This is my adapted and cleaned version of the BookSum dataset, formatted to work with Airoboros by Jon Durbin
[huggingface](https://huggingface.co/jondurbin)
I created this dataset to improve LLM summarization capabilities. Summarization is a core feature that many applications rely on, yet we're still depending on older Longformer, RoBERTa, or BART solutions.
This dataset has been altered from the original as follows:
- Cleaned up bad formatting, extra quotes at the beginning of summaries, extra line breaks, and references to pictures
- Tried to push everything as close to 4096 tokens in total for larger contexts
- Added instruction formatting that requires Airoboros to only use knowledge contained in the CONTEXT
- Added instruction formatting for requesting specific word counts
## Example short instruction from the dataset
```
You are an extremely careful writing assistant. Don't make up answers if you don't know.
USER:
BEGININPUT
BEGINCONTEXT
IT was a sight that some people remembered better even than their own sorrows--the sight in that grey clear morning, when the fatal cart with the two young women in it was descried by the waiting watching multitude, cleaving its way towards the hideous symbol of a deliberately inflicted sudden death.
All Stoniton had heard of Dinah Morris, the young Methodist woman who had brought the obstinate criminal to confess, and there was as much eagerness to see her as to see the wretched Hetty.
But Dinah was hardly conscious of the multitude. When Hetty had caught sight of the vast crowd in the distance, she had clutched Dinah convulsively.
"Close your eyes, Hetty," Dinah said, "and let us pray without ceasing to God."
And in a low voice, as the cart went slowly along through the midst of the gazing crowd, she poured forth her soul with the wrestling intensity of a last pleading, for the trembling creature that clung to her and clutched her as the only visible sign of love and pity.
Dinah did not know that the crowd was silent, gazing at her with a sort of awe--she did not even know how near they were to the fatal spot, when the cart stopped, and she shrank appalled at a loud shout hideous to her ear, like a vast yell of demons. Hetty's shriek mingled with the sound, and they clasped each other in mutual horror.
But it was not a shout of execration--not a yell of exultant cruelty.
It was a shout of sudden excitement at the appearance of a horseman cleaving the crowd at full gallop. The horse is hot and distressed, but answers to the desperate spurring; the rider looks as if his eyes were glazed by madness, and he saw nothing but what was unseen by others. See, he has something in his hand--he is holding it up as if it were a signal.
The Sheriff knows him: it is Arthur Donnithorne, carrying in his hand a hard-won release from death.
ENDCONTEXT
ENDINPUT
BEGININSTRUCTION
Summarize the text in around 70 words.
ENDINSTRUCTION
ASSISTANT: Dinah rides out to the gallows with Hetty. At the sight of the crowd, Hetty clings to Dinah. They pray together and keep their eyes closed. The crowd is silent and stares and Dinah in awe. As they arrive at the gallows, a huge cry goes up from the crowd because a man has arrived on horseback. Captain Donnithorne arrives, and he has with him a stay of execution.
```
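A prompt in the format above can be assembled with a small helper. This is a hypothetical sketch: the BEGIN/END marker names are taken from the sample shown in this card, but the helper function itself is not part of Airoboros.

```python
def build_summarization_prompt(context: str, word_count: int) -> str:
    # Assemble an airoboros-style summarization prompt like the example
    # above. The instruction phrasing mirrors the sample in this card.
    return (
        "BEGININPUT\n"
        "BEGINCONTEXT\n"
        f"{context}\n"
        "ENDCONTEXT\n"
        "ENDINPUT\n"
        "BEGININSTRUCTION\n"
        f"Summarize the text in around {word_count} words.\n"
        "ENDINSTRUCTION"
    )
```

The system line ("You are an extremely careful writing assistant...") and the `USER:`/`ASSISTANT:` turns would be added by the chat template, so they are left out of the helper.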
# Original Dataset README follows
---
license:
- bsd-3-clause
train-eval-index:
- config: kmfoda--booksum
task: summarization
task_id: summarization
splits:
eval_split: test
col_mapping:
chapter: text
summary_text: target
---
# BOOKSUM: A Collection of Datasets for Long-form Narrative Summarization
Authors: [Wojciech Kryściński](https://twitter.com/iam_wkr), [Nazneen Rajani](https://twitter.com/nazneenrajani), [Divyansh Agarwal](https://twitter.com/jigsaw2212), [Caiming Xiong](https://twitter.com/caimingxiong), [Dragomir Radev](http://www.cs.yale.edu/homes/radev/)
## Introduction
The majority of available text summarization datasets include short-form source documents that lack long-range causal and temporal dependencies, and often contain strong layout and stylistic biases.
While relevant, such datasets will offer limited challenges for future generations of text summarization systems.
We address these issues by introducing BookSum, a collection of datasets for long-form narrative summarization.
Our dataset covers source documents from the literature domain, such as novels, plays and stories, and includes highly abstractive, human written summaries on three levels of granularity of increasing difficulty: paragraph-, chapter-, and book-level.
The domain and structure of our dataset poses a unique set of challenges for summarization systems, which include: processing very long documents, non-trivial causal and temporal dependencies, and rich discourse structures.
To facilitate future work, we trained and evaluated multiple extractive and abstractive summarization models as baselines for our dataset.
## Links
- [paper](https://arxiv.org/abs/2105.08209) by SalesForce Research
- [GitHub repo](https://github.com/salesforce/booksum)
## Table of Contents
1. [Citation](#citation)
2. [Legal Note](#legal-note)
3. [License](#license)
## Citation
```
@article{kryscinski2021booksum,
title={BookSum: A Collection of Datasets for Long-form Narrative Summarization},
author={Wojciech Kry{\'s}ci{\'n}ski and Nazneen Rajani and Divyansh Agarwal and Caiming Xiong and Dragomir Radev},
year={2021},
eprint={2105.08209},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Legal Note
By downloading or using the resources, including any code or scripts, shared in this code
repository, you hereby agree to the following terms, and your use of the resources is conditioned
on and subject to these terms.
1. You may only use the scripts shared in this code repository for research purposes. You
may not use or allow others to use the scripts for any other purposes and other uses are
expressly prohibited.
2. You will comply with all terms and conditions, and are responsible for obtaining all
rights, related to the services you access and the data you collect.
3. We do not make any representations or warranties whatsoever regarding the sources from
which data is collected. Furthermore, we are not liable for any damage, loss or expense of
any kind arising from or relating to your use of the resources shared in this code
repository or the data collected, regardless of whether such liability is based in tort,
contract or otherwise.
## License
The code is released under the **BSD-3 License** (see `LICENSE.txt` for details). |
akjindal53244/arithmo_100percent | ---
configs:
- config_name: default
data_files:
- split: train
path: combined_MathInstruct_MetaMathQA_LilaOOD_train_100percent.json
license: apache-2.0
--- |
OneFly7/llama2-SST2-SFT-with-system-prompt | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: label_text
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 22798484
num_examples: 67349
- name: validation
num_bytes: 329484
num_examples: 872
download_size: 4382265
dataset_size: 23127968
---
# Dataset Card for "llama2-SST2-SFT-with-system-prompt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
irds/beir_hotpotqa_test | ---
pretty_name: '`beir/hotpotqa/test`'
viewer: false
source_datasets: ['irds/beir_hotpotqa']
task_categories:
- text-retrieval
---
# Dataset Card for `beir/hotpotqa/test`
The `beir/hotpotqa/test` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/hotpotqa/test).
# Data
This dataset provides:
- `queries` (i.e., topics); count=7,405
- `qrels`: (relevance assessments); count=14,810
- For `docs`, use [`irds/beir_hotpotqa`](https://huggingface.co/datasets/irds/beir_hotpotqa)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/beir_hotpotqa_test', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/beir_hotpotqa_test', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Yang2018Hotpotqa,
title = "{H}otpot{QA}: A Dataset for Diverse, Explainable Multi-hop Question Answering",
author = "Yang, Zhilin and
Qi, Peng and
Zhang, Saizheng and
Bengio, Yoshua and
Cohen, William and
Salakhutdinov, Ruslan and
Manning, Christopher D.",
booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
month = oct # "-" # nov,
year = "2018",
address = "Brussels, Belgium",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D18-1259",
doi = "10.18653/v1/D18-1259",
pages = "2369--2380"
}
@article{Thakur2021Beir,
title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.08663",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.08663",
}
```
|
open-llm-leaderboard/details_ajibawa-2023__carl-33b | ---
pretty_name: Evaluation run of ajibawa-2023/carl-33b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [ajibawa-2023/carl-33b](https://huggingface.co/ajibawa-2023/carl-33b) on the [Open\
\ LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_ajibawa-2023__carl-33b\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-25T06:29:50.391928](https://huggingface.co/datasets/open-llm-leaderboard/details_ajibawa-2023__carl-33b/blob/main/results_2023-10-25T06-29-50.391928.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.4434773489932886,\n\
\ \"em_stderr\": 0.005087644945149476,\n \"f1\": 0.48920616610738366,\n\
\ \"f1_stderr\": 0.004915552047694347,\n \"acc\": 0.4130577743896054,\n\
\ \"acc_stderr\": 0.009343755992304432\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.4434773489932886,\n \"em_stderr\": 0.005087644945149476,\n\
\ \"f1\": 0.48920616610738366,\n \"f1_stderr\": 0.004915552047694347\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.06368460955269144,\n \
\ \"acc_stderr\": 0.006726213078805715\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7624309392265194,\n \"acc_stderr\": 0.011961298905803146\n\
\ }\n}\n```"
repo_url: https://huggingface.co/ajibawa-2023/carl-33b
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_10_16T15_30_43.173459
path:
- '**/details_harness|drop|3_2023-10-16T15-30-43.173459.parquet'
- split: 2023_10_25T06_29_50.391928
path:
- '**/details_harness|drop|3_2023-10-25T06-29-50.391928.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-25T06-29-50.391928.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_16T15_30_43.173459
path:
- '**/details_harness|gsm8k|5_2023-10-16T15-30-43.173459.parquet'
- split: 2023_10_25T06_29_50.391928
path:
- '**/details_harness|gsm8k|5_2023-10-25T06-29-50.391928.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-25T06-29-50.391928.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_16T15_30_43.173459
path:
- '**/details_harness|winogrande|5_2023-10-16T15-30-43.173459.parquet'
- split: 2023_10_25T06_29_50.391928
path:
- '**/details_harness|winogrande|5_2023-10-25T06-29-50.391928.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-25T06-29-50.391928.parquet'
- config_name: results
data_files:
- split: 2023_10_16T15_30_43.173459
path:
- results_2023-10-16T15-30-43.173459.parquet
- split: 2023_10_25T06_29_50.391928
path:
- results_2023-10-25T06-29-50.391928.parquet
- split: latest
path:
- results_2023-10-25T06-29-50.391928.parquet
---
# Dataset Card for Evaluation run of ajibawa-2023/carl-33b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/ajibawa-2023/carl-33b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [ajibawa-2023/carl-33b](https://huggingface.co/ajibawa-2023/carl-33b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_ajibawa-2023__carl-33b",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-25T06:29:50.391928](https://huggingface.co/datasets/open-llm-leaderboard/details_ajibawa-2023__carl-33b/blob/main/results_2023-10-25T06-29-50.391928.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.4434773489932886,
"em_stderr": 0.005087644945149476,
"f1": 0.48920616610738366,
"f1_stderr": 0.004915552047694347,
"acc": 0.4130577743896054,
"acc_stderr": 0.009343755992304432
},
"harness|drop|3": {
"em": 0.4434773489932886,
"em_stderr": 0.005087644945149476,
"f1": 0.48920616610738366,
"f1_stderr": 0.004915552047694347
},
"harness|gsm8k|5": {
"acc": 0.06368460955269144,
"acc_stderr": 0.006726213078805715
},
"harness|winogrande|5": {
"acc": 0.7624309392265194,
"acc_stderr": 0.011961298905803146
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
harleygilpin/soc-audit-11k | ---
license: apache-2.0
task_categories:
- text-generation
language:
- en
tags:
- finance
- audit
- cybersecurity
- SOC
- croissant
pretty_name: SOC Audit 11k
size_categories:
- 10K<n<100K
---
# SOC Audit Text Generation Dataset
## Description
This dataset is designed for training and evaluating Language Models (LLMs) specifically in the context of SOC 2 audits. It covers a wide range of topics including, but not limited to, information security, risk management, compliance, data privacy, and governance. The dataset consists of structured text in the format of instructions followed by a detailed response, making it ideal for models intended to assist in understanding, preparing, and conducting SOC 2 audits.
## Dataset Structure
Each entry in the dataset is a JSON object containing three keys:
- `system`: A brief description of the AI's capabilities.
- `instruction`: A detailed question or scenario related to SOC 2 audits and associated areas.
- `response`: An in-depth answer or analysis corresponding to the instruction.
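As a minimal sketch, a record with the documented keys can be represented and validated as follows. The field values below are invented for illustration; only the three key names come from the description above.

```python
# Hypothetical record; only the key names come from this card.
entry = {
    "system": "You are an expert assistant for SOC 2 compliance questions.",
    "instruction": "Explain how periodic access reviews support the Security criterion.",
    "response": "Periodic access reviews confirm that only authorized personnel retain access.",
}

def has_expected_keys(record: dict) -> bool:
    """Return True when a record carries exactly the three documented keys."""
    return set(record) == {"system", "instruction", "response"}
```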
## Intended Use
This dataset is intended for training and evaluating AI models, especially those focusing on text generation in the domain of SOC 2 audits and related topics. It is suitable for applications in compliance training, automated auditing assistance, educational tools, and more.
## License
[Apache-2.0](https://choosealicense.com/licenses/apache-2.0/) |
SEACrowd/indo_puisi | ---
tags:
- self-supervised-pretraining
language:
- ind
---
# indo_puisi
Puisi is an Indonesian poetic form. The dataset was collected by scraping various websites. It contains 7,223 Indonesian puisi along with their titles and authors.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
## Citation
```
```
## License
Creative Commons Attribution Share-Alike 4.0 International
## Homepage
[https://github.com/ilhamfp/puisi-pantun-generator](https://github.com/ilhamfp/puisi-pantun-generator)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue) |
bigbio/gnormplus |
---
language:
- en
bigbio_language:
- English
license: unknown
multilinguality: monolingual
bigbio_license_shortname: UNKNOWN
pretty_name: GNormPlus
homepage: https://www.ncbi.nlm.nih.gov/research/bionlp/Tools/gnormplus/
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- NAMED_ENTITY_DISAMBIGUATION
---
# Dataset Card for GNormPlus
## Dataset Description
- **Homepage:** https://www.ncbi.nlm.nih.gov/research/bionlp/Tools/gnormplus/
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,NED
We re-annotated two existing gene corpora. The BioCreative II GN corpus is a widely used data set for benchmarking GN
tools and includes document-level annotations for a total of 543 articles (281 in its training set; and 262 in test).
The Citation GIA Test Collection was recently created for gene indexing at the NLM and includes 151 PubMed abstracts
with both mention-level and document-level annotations. Both corpora were selected because they focus on human genes.
For both corpora, we added annotations of gene families and protein domains. For the BioCreative GN corpus, we also
added mention-level gene annotations. As a result, in our new corpus, there are a total of 694 PubMed articles.
PubTator was used as our annotation tool along with BioC formats.
## Citation Information
```
@Article{Wei2015,
author={Wei, Chih-Hsuan and Kao, Hung-Yu and Lu, Zhiyong},
title={GNormPlus: An Integrative Approach for Tagging Genes, Gene Families, and Protein Domains},
journal={BioMed Research International},
year={2015},
month={Aug},
day={25},
publisher={Hindawi Publishing Corporation},
volume={2015},
pages={918710},
issn={2314-6133},
doi={10.1155/2015/918710},
url={https://doi.org/10.1155/2015/918710}
}
```
|
jcssafedep/belie-prompts-seeds | ---
license: agpl-3.0
---
|
one-sec-cv12/chunk_161 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
splits:
- name: train
num_bytes: 21705503328.75
num_examples: 225986
download_size: 19854151285
dataset_size: 21705503328.75
---
# Dataset Card for "chunk_161"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
galman33/gal_yair_8300_10x10_fixed | ---
dataset_info:
features:
- name: lat
dtype: float64
- name: lon
dtype: float64
- name: country_code
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 3019225.5
num_examples: 8300
download_size: 2658915
dataset_size: 3019225.5
---
# Dataset Card for "gal_yair_8300_10x10_fixed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kevinhaowan/mannequin-demo | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 2895805.0
num_examples: 3
- name: validation
num_bytes: 2895805.0
num_examples: 3
download_size: 0
dataset_size: 5791610.0
---
# Dataset Card for "mannequin-demo"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
SinonTM/Wiki-Scraper | ---
license: openrail
task_categories:
- summarization
language:
- en
pretty_name: Wiki Scraper
size_categories:
- 10K<n<100K
---
A database of Wikipedia pages summarizing certain Natural Language Processing model applications.
UnderstandLing/oasst1_bn_threads | ---
license: apache-2.0
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 17371937
num_examples: 9611
- name: validation
num_bytes: 854944
num_examples: 455
download_size: 4942448
dataset_size: 18226881
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
minh21/cpgQA-v1.0-unique-context-for-flan-t5 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: title
dtype: string
- name: id
dtype: int64
- name: question
dtype: string
- name: answer_text
dtype: string
- name: answer_start
dtype: int64
- name: context
dtype: string
splits:
- name: train
num_bytes: 1132786.0440713535
num_examples: 860
- name: test
num_bytes: 180144.0
num_examples: 144
download_size: 29642
dataset_size: 1312930.0440713535
---
# Dataset Card for "cpgQA-v1.0-unique-context-for-flan-t5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
lewtun/asr-preds-test | ---
benchmark: superb
task: asr
---
# Dummy dataset to test evaluation framework for SUPERB. |
NissrineH/AIS-Australian_Institute_of_Sports | ---
configs:
- config_name: ais
data_files: "ais.csv"
sep: ";"
default: true
--- |
2HW/llama2_dobe_02 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 50333
num_examples: 100
download_size: 14026
dataset_size: 50333
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
vwxyzjn/test-dataset-bug4 | ---
dataset_info:
features:
- name: init_prompt
struct:
- name: content
dtype: string
- name: role
dtype: string
- name: init_response
struct:
- name: content
dtype: string
- name: role
dtype: string
- name: critic_prompt
struct:
- name: content
dtype: string
- name: role
dtype: string
- name: critic_response
struct:
- name: content
dtype: string
- name: role
dtype: string
- name: revision_prompt
struct:
- name: content
dtype: string
- name: role
dtype: string
- name: revision_response
struct:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 129500
num_examples: 100
- name: remove_CritiqueRequest_10_18_2023_1697667530
num_bytes: 129500
num_examples: 100
- name: remove_CritiqueRequest_10_18_2023_1697667550
num_bytes: 129500
num_examples: 100
download_size: 203079
dataset_size: 388500
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: remove_CritiqueRequest_10_18_2023_1697667530
path: data/remove_CritiqueRequest_10_18_2023_1697667530-*
- split: remove_CritiqueRequest_10_18_2023_1697667550
path: data/remove_CritiqueRequest_10_18_2023_1697667550-*
---
# Dataset Card for "test-dataset-bug4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
lawallanre/geo_nlp_tweets | ---
configs:
- config_name: default
data_files:
- split: train
path: "data/train.tsv"
- split: test
path: "data.tsv"
- split: dev
path: "data/dev.tsv"
license: ofl-1.1
task_categories:
- text-classification
language:
- en
pretty_name: lanre
size_categories:
- 1K<n<10K
--- |
Snugglife/BasicModelDataset | ---
license: other
configs:
- config_name: default
data_files: "training.csv"
sep: ";"
default: true
---
|
antareepdey/Medical_chat_Llama-chat-50k | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: Text
dtype: string
splits:
- name: train
num_bytes: 50561249
num_examples: 50000
download_size: 31132221
dataset_size: 50561249
---
# Dataset Card for "Medical_chat_Llama-chat-50k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
allenai/scrapinghub-article-extraction-benchmark | ---
dataset_info:
features:
- name: html
dtype: string
- name: articleBody
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 32354376
num_examples: 181
download_size: 10374590
dataset_size: 32354376
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: mit
task_categories:
- text2text-generation
pretty_name: Scrapinghub Article Extraction Benchmark
size_categories:
- n<1K
---
# Scrapinghub Article Extraction Benchmark
This dataset was originally created and distributed under MIT License by Scrapinghub on GitHub: [github.com/scrapinghub/article-extraction-benchmark](https://github.com/scrapinghub/article-extraction-benchmark)
It is mirrored on the HuggingFace Hub as a convenience. |
DZN111/tes | ---
license: openrail
---
|
liuyanchen1015/MULTI_VALUE_mnli_invariant_tag_fronted_isnt | ---
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: score
dtype: int64
splits:
- name: dev_matched
num_bytes: 598
num_examples: 5
- name: test_matched
num_bytes: 2426
num_examples: 11
- name: test_mismatched
num_bytes: 1798
num_examples: 7
- name: train
num_bytes: 55222
num_examples: 334
download_size: 35648
dataset_size: 60044
---
# Dataset Card for "MULTI_VALUE_mnli_invariant_tag_fronted_isnt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
zolak/twitter_dataset_81_1713214777 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 1437350
num_examples: 3561
download_size: 714311
dataset_size: 1437350
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
open-llm-leaderboard/details_glenn2__gemma-2b-lora16b2 | ---
pretty_name: Evaluation run of glenn2/gemma-2b-lora16b2
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [glenn2/gemma-2b-lora16b2](https://huggingface.co/glenn2/gemma-2b-lora16b2) on\
\ the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_glenn2__gemma-2b-lora16b2\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-02-29T12:22:19.366590](https://huggingface.co/datasets/open-llm-leaderboard/details_glenn2__gemma-2b-lora16b2/blob/main/results_2024-02-29T12-22-19.366590.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.3861886642662729,\n\
\ \"acc_stderr\": 0.034077251555587135,\n \"acc_norm\": 0.38849790088737224,\n\
\ \"acc_norm_stderr\": 0.034824395698070706,\n \"mc1\": 0.24357405140758873,\n\
\ \"mc1_stderr\": 0.015026354824910782,\n \"mc2\": 0.3644999830185384,\n\
\ \"mc2_stderr\": 0.013800159294635947\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.46075085324232085,\n \"acc_stderr\": 0.014566303676636584,\n\
\ \"acc_norm\": 0.47525597269624575,\n \"acc_norm_stderr\": 0.014593487694937742\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5296753634734117,\n\
\ \"acc_stderr\": 0.004980985384152899,\n \"acc_norm\": 0.7196773551085441,\n\
\ \"acc_norm_stderr\": 0.004482388821388947\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.04605661864718381,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.04605661864718381\n },\n\
\ \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.4666666666666667,\n\
\ \"acc_stderr\": 0.043097329010363554,\n \"acc_norm\": 0.4666666666666667,\n\
\ \"acc_norm_stderr\": 0.043097329010363554\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.34210526315789475,\n \"acc_stderr\": 0.03860731599316092,\n\
\ \"acc_norm\": 0.34210526315789475,\n \"acc_norm_stderr\": 0.03860731599316092\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.36,\n\
\ \"acc_stderr\": 0.048241815132442176,\n \"acc_norm\": 0.36,\n \
\ \"acc_norm_stderr\": 0.048241815132442176\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.45660377358490567,\n \"acc_stderr\": 0.030656748696739435,\n\
\ \"acc_norm\": 0.45660377358490567,\n \"acc_norm_stderr\": 0.030656748696739435\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.4375,\n\
\ \"acc_stderr\": 0.04148415739394154,\n \"acc_norm\": 0.4375,\n \
\ \"acc_norm_stderr\": 0.04148415739394154\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.26,\n \"acc_stderr\": 0.04408440022768079,\n \
\ \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.04408440022768079\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.32,\n \"acc_stderr\": 0.04688261722621504,\n \"acc_norm\": 0.32,\n\
\ \"acc_norm_stderr\": 0.04688261722621504\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \
\ \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.04351941398892446\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.3583815028901734,\n\
\ \"acc_stderr\": 0.036563436533531585,\n \"acc_norm\": 0.3583815028901734,\n\
\ \"acc_norm_stderr\": 0.036563436533531585\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.23529411764705882,\n \"acc_stderr\": 0.04220773659171452,\n\
\ \"acc_norm\": 0.23529411764705882,\n \"acc_norm_stderr\": 0.04220773659171452\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.48,\n \"acc_stderr\": 0.05021167315686781,\n \"acc_norm\": 0.48,\n\
\ \"acc_norm_stderr\": 0.05021167315686781\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.4127659574468085,\n \"acc_stderr\": 0.03218471141400351,\n\
\ \"acc_norm\": 0.4127659574468085,\n \"acc_norm_stderr\": 0.03218471141400351\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.30701754385964913,\n\
\ \"acc_stderr\": 0.0433913832257986,\n \"acc_norm\": 0.30701754385964913,\n\
\ \"acc_norm_stderr\": 0.0433913832257986\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.35172413793103446,\n \"acc_stderr\": 0.0397923663749741,\n\
\ \"acc_norm\": 0.35172413793103446,\n \"acc_norm_stderr\": 0.0397923663749741\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.31216931216931215,\n \"acc_stderr\": 0.023865206836972595,\n \"\
acc_norm\": 0.31216931216931215,\n \"acc_norm_stderr\": 0.023865206836972595\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.1984126984126984,\n\
\ \"acc_stderr\": 0.03567016675276864,\n \"acc_norm\": 0.1984126984126984,\n\
\ \"acc_norm_stderr\": 0.03567016675276864\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.35,\n \"acc_stderr\": 0.047937248544110196,\n \
\ \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.047937248544110196\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.4,\n \"acc_stderr\": 0.027869320571664632,\n \"acc_norm\": 0.4,\n\
\ \"acc_norm_stderr\": 0.027869320571664632\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.35960591133004927,\n \"acc_stderr\": 0.03376458246509567,\n\
\ \"acc_norm\": 0.35960591133004927,\n \"acc_norm_stderr\": 0.03376458246509567\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.37,\n \"acc_stderr\": 0.048523658709391,\n \"acc_norm\"\
: 0.37,\n \"acc_norm_stderr\": 0.048523658709391\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.4303030303030303,\n \"acc_stderr\": 0.03866225962879077,\n\
\ \"acc_norm\": 0.4303030303030303,\n \"acc_norm_stderr\": 0.03866225962879077\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.4444444444444444,\n \"acc_stderr\": 0.035402943770953675,\n \"\
acc_norm\": 0.4444444444444444,\n \"acc_norm_stderr\": 0.035402943770953675\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.5025906735751295,\n \"acc_stderr\": 0.03608390745384487,\n\
\ \"acc_norm\": 0.5025906735751295,\n \"acc_norm_stderr\": 0.03608390745384487\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.31025641025641026,\n \"acc_stderr\": 0.023454674889404288,\n\
\ \"acc_norm\": 0.31025641025641026,\n \"acc_norm_stderr\": 0.023454674889404288\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.24814814814814815,\n \"acc_stderr\": 0.0263357394040558,\n \
\ \"acc_norm\": 0.24814814814814815,\n \"acc_norm_stderr\": 0.0263357394040558\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.33613445378151263,\n \"acc_stderr\": 0.030684737115135356,\n\
\ \"acc_norm\": 0.33613445378151263,\n \"acc_norm_stderr\": 0.030684737115135356\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.24503311258278146,\n \"acc_stderr\": 0.035118075718047245,\n \"\
acc_norm\": 0.24503311258278146,\n \"acc_norm_stderr\": 0.035118075718047245\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.46972477064220186,\n \"acc_stderr\": 0.021397988604936965,\n \"\
acc_norm\": 0.46972477064220186,\n \"acc_norm_stderr\": 0.021397988604936965\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.24074074074074073,\n \"acc_stderr\": 0.0291575221846056,\n \"\
acc_norm\": 0.24074074074074073,\n \"acc_norm_stderr\": 0.0291575221846056\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.39215686274509803,\n \"acc_stderr\": 0.03426712349247272,\n \"\
acc_norm\": 0.39215686274509803,\n \"acc_norm_stderr\": 0.03426712349247272\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.4177215189873418,\n \"acc_stderr\": 0.032103530322412685,\n \
\ \"acc_norm\": 0.4177215189873418,\n \"acc_norm_stderr\": 0.032103530322412685\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.45739910313901344,\n\
\ \"acc_stderr\": 0.033435777055830646,\n \"acc_norm\": 0.45739910313901344,\n\
\ \"acc_norm_stderr\": 0.033435777055830646\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.35877862595419846,\n \"acc_stderr\": 0.04206739313864908,\n\
\ \"acc_norm\": 0.35877862595419846,\n \"acc_norm_stderr\": 0.04206739313864908\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.5537190082644629,\n \"acc_stderr\": 0.0453793517794788,\n \"acc_norm\"\
: 0.5537190082644629,\n \"acc_norm_stderr\": 0.0453793517794788\n },\n\
\ \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.37037037037037035,\n\
\ \"acc_stderr\": 0.04668408033024932,\n \"acc_norm\": 0.37037037037037035,\n\
\ \"acc_norm_stderr\": 0.04668408033024932\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.37423312883435583,\n \"acc_stderr\": 0.038020681028996146,\n\
\ \"acc_norm\": 0.37423312883435583,\n \"acc_norm_stderr\": 0.038020681028996146\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.33035714285714285,\n\
\ \"acc_stderr\": 0.04464285714285713,\n \"acc_norm\": 0.33035714285714285,\n\
\ \"acc_norm_stderr\": 0.04464285714285713\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.49514563106796117,\n \"acc_stderr\": 0.04950504382128921,\n\
\ \"acc_norm\": 0.49514563106796117,\n \"acc_norm_stderr\": 0.04950504382128921\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.5726495726495726,\n\
\ \"acc_stderr\": 0.032408473935163266,\n \"acc_norm\": 0.5726495726495726,\n\
\ \"acc_norm_stderr\": 0.032408473935163266\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.38,\n \"acc_stderr\": 0.04878317312145633,\n \
\ \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.04878317312145633\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.5389527458492975,\n\
\ \"acc_stderr\": 0.017825621793239012,\n \"acc_norm\": 0.5389527458492975,\n\
\ \"acc_norm_stderr\": 0.017825621793239012\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.3583815028901734,\n \"acc_stderr\": 0.025816756791584187,\n\
\ \"acc_norm\": 0.3583815028901734,\n \"acc_norm_stderr\": 0.025816756791584187\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.23687150837988827,\n\
\ \"acc_stderr\": 0.01421957078810398,\n \"acc_norm\": 0.23687150837988827,\n\
\ \"acc_norm_stderr\": 0.01421957078810398\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.3464052287581699,\n \"acc_stderr\": 0.027245613047215355,\n\
\ \"acc_norm\": 0.3464052287581699,\n \"acc_norm_stderr\": 0.027245613047215355\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.41479099678456594,\n\
\ \"acc_stderr\": 0.02798268045975956,\n \"acc_norm\": 0.41479099678456594,\n\
\ \"acc_norm_stderr\": 0.02798268045975956\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.44135802469135804,\n \"acc_stderr\": 0.027628737155668777,\n\
\ \"acc_norm\": 0.44135802469135804,\n \"acc_norm_stderr\": 0.027628737155668777\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.2730496453900709,\n \"acc_stderr\": 0.02657786094330785,\n \
\ \"acc_norm\": 0.2730496453900709,\n \"acc_norm_stderr\": 0.02657786094330785\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.3161668839634941,\n\
\ \"acc_stderr\": 0.011875780894386578,\n \"acc_norm\": 0.3161668839634941,\n\
\ \"acc_norm_stderr\": 0.011875780894386578\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.27941176470588236,\n \"acc_stderr\": 0.027257202606114948,\n\
\ \"acc_norm\": 0.27941176470588236,\n \"acc_norm_stderr\": 0.027257202606114948\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.369281045751634,\n \"acc_stderr\": 0.019524316744866353,\n \
\ \"acc_norm\": 0.369281045751634,\n \"acc_norm_stderr\": 0.019524316744866353\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.41818181818181815,\n\
\ \"acc_stderr\": 0.04724577405731571,\n \"acc_norm\": 0.41818181818181815,\n\
\ \"acc_norm_stderr\": 0.04724577405731571\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.46530612244897956,\n \"acc_stderr\": 0.03193207024425314,\n\
\ \"acc_norm\": 0.46530612244897956,\n \"acc_norm_stderr\": 0.03193207024425314\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.4925373134328358,\n\
\ \"acc_stderr\": 0.035351400842767194,\n \"acc_norm\": 0.4925373134328358,\n\
\ \"acc_norm_stderr\": 0.035351400842767194\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.54,\n \"acc_stderr\": 0.05009082659620332,\n \
\ \"acc_norm\": 0.54,\n \"acc_norm_stderr\": 0.05009082659620332\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.43373493975903615,\n\
\ \"acc_stderr\": 0.03858158940685517,\n \"acc_norm\": 0.43373493975903615,\n\
\ \"acc_norm_stderr\": 0.03858158940685517\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.543859649122807,\n \"acc_stderr\": 0.03820042586602966,\n\
\ \"acc_norm\": 0.543859649122807,\n \"acc_norm_stderr\": 0.03820042586602966\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.24357405140758873,\n\
\ \"mc1_stderr\": 0.015026354824910782,\n \"mc2\": 0.3644999830185384,\n\
\ \"mc2_stderr\": 0.013800159294635947\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.6692975532754538,\n \"acc_stderr\": 0.013222435887002703\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.17134192570128887,\n \
\ \"acc_stderr\": 0.010379150273178359\n }\n}\n```"
repo_url: https://huggingface.co/glenn2/gemma-2b-lora16b2
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|arc:challenge|25_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|gsm8k|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hellaswag|10_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-29T12-22-19.366590.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-29T12-22-19.366590.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- '**/details_harness|winogrande|5_2024-02-29T12-22-19.366590.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-02-29T12-22-19.366590.parquet'
- config_name: results
data_files:
- split: 2024_02_29T12_22_19.366590
path:
- results_2024-02-29T12-22-19.366590.parquet
- split: latest
path:
- results_2024-02-29T12-22-19.366590.parquet
---
# Dataset Card for Evaluation run of glenn2/gemma-2b-lora16b2
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [glenn2/gemma-2b-lora16b2](https://huggingface.co/glenn2/gemma-2b-lora16b2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_glenn2__gemma-2b-lora16b2",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-02-29T12:22:19.366590](https://huggingface.co/datasets/open-llm-leaderboard/details_glenn2__gemma-2b-lora16b2/blob/main/results_2024-02-29T12-22-19.366590.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```json
{
"all": {
"acc": 0.3861886642662729,
"acc_stderr": 0.034077251555587135,
"acc_norm": 0.38849790088737224,
"acc_norm_stderr": 0.034824395698070706,
"mc1": 0.24357405140758873,
"mc1_stderr": 0.015026354824910782,
"mc2": 0.3644999830185384,
"mc2_stderr": 0.013800159294635947
},
"harness|arc:challenge|25": {
"acc": 0.46075085324232085,
"acc_stderr": 0.014566303676636584,
"acc_norm": 0.47525597269624575,
"acc_norm_stderr": 0.014593487694937742
},
"harness|hellaswag|10": {
"acc": 0.5296753634734117,
"acc_stderr": 0.004980985384152899,
"acc_norm": 0.7196773551085441,
"acc_norm_stderr": 0.004482388821388947
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.3,
"acc_stderr": 0.04605661864718381,
"acc_norm": 0.3,
"acc_norm_stderr": 0.04605661864718381
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.4666666666666667,
"acc_stderr": 0.043097329010363554,
"acc_norm": 0.4666666666666667,
"acc_norm_stderr": 0.043097329010363554
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.34210526315789475,
"acc_stderr": 0.03860731599316092,
"acc_norm": 0.34210526315789475,
"acc_norm_stderr": 0.03860731599316092
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.36,
"acc_stderr": 0.048241815132442176,
"acc_norm": 0.36,
"acc_norm_stderr": 0.048241815132442176
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.45660377358490567,
"acc_stderr": 0.030656748696739435,
"acc_norm": 0.45660377358490567,
"acc_norm_stderr": 0.030656748696739435
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.4375,
"acc_stderr": 0.04148415739394154,
"acc_norm": 0.4375,
"acc_norm_stderr": 0.04148415739394154
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.26,
"acc_stderr": 0.04408440022768079,
"acc_norm": 0.26,
"acc_norm_stderr": 0.04408440022768079
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.32,
"acc_stderr": 0.04688261722621504,
"acc_norm": 0.32,
"acc_norm_stderr": 0.04688261722621504
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.25,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.3583815028901734,
"acc_stderr": 0.036563436533531585,
"acc_norm": 0.3583815028901734,
"acc_norm_stderr": 0.036563436533531585
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.23529411764705882,
"acc_stderr": 0.04220773659171452,
"acc_norm": 0.23529411764705882,
"acc_norm_stderr": 0.04220773659171452
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.48,
"acc_stderr": 0.05021167315686781,
"acc_norm": 0.48,
"acc_norm_stderr": 0.05021167315686781
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.4127659574468085,
"acc_stderr": 0.03218471141400351,
"acc_norm": 0.4127659574468085,
"acc_norm_stderr": 0.03218471141400351
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.30701754385964913,
"acc_stderr": 0.0433913832257986,
"acc_norm": 0.30701754385964913,
"acc_norm_stderr": 0.0433913832257986
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.35172413793103446,
"acc_stderr": 0.0397923663749741,
"acc_norm": 0.35172413793103446,
"acc_norm_stderr": 0.0397923663749741
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.31216931216931215,
"acc_stderr": 0.023865206836972595,
"acc_norm": 0.31216931216931215,
"acc_norm_stderr": 0.023865206836972595
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.1984126984126984,
"acc_stderr": 0.03567016675276864,
"acc_norm": 0.1984126984126984,
"acc_norm_stderr": 0.03567016675276864
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.35,
"acc_stderr": 0.047937248544110196,
"acc_norm": 0.35,
"acc_norm_stderr": 0.047937248544110196
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.4,
"acc_stderr": 0.027869320571664632,
"acc_norm": 0.4,
"acc_norm_stderr": 0.027869320571664632
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.35960591133004927,
"acc_stderr": 0.03376458246509567,
"acc_norm": 0.35960591133004927,
"acc_norm_stderr": 0.03376458246509567
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.37,
"acc_stderr": 0.048523658709391,
"acc_norm": 0.37,
"acc_norm_stderr": 0.048523658709391
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.4303030303030303,
"acc_stderr": 0.03866225962879077,
"acc_norm": 0.4303030303030303,
"acc_norm_stderr": 0.03866225962879077
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.4444444444444444,
"acc_stderr": 0.035402943770953675,
"acc_norm": 0.4444444444444444,
"acc_norm_stderr": 0.035402943770953675
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.5025906735751295,
"acc_stderr": 0.03608390745384487,
"acc_norm": 0.5025906735751295,
"acc_norm_stderr": 0.03608390745384487
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.31025641025641026,
"acc_stderr": 0.023454674889404288,
"acc_norm": 0.31025641025641026,
"acc_norm_stderr": 0.023454674889404288
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.24814814814814815,
"acc_stderr": 0.0263357394040558,
"acc_norm": 0.24814814814814815,
"acc_norm_stderr": 0.0263357394040558
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.33613445378151263,
"acc_stderr": 0.030684737115135356,
"acc_norm": 0.33613445378151263,
"acc_norm_stderr": 0.030684737115135356
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.24503311258278146,
"acc_stderr": 0.035118075718047245,
"acc_norm": 0.24503311258278146,
"acc_norm_stderr": 0.035118075718047245
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.46972477064220186,
"acc_stderr": 0.021397988604936965,
"acc_norm": 0.46972477064220186,
"acc_norm_stderr": 0.021397988604936965
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.24074074074074073,
"acc_stderr": 0.0291575221846056,
"acc_norm": 0.24074074074074073,
"acc_norm_stderr": 0.0291575221846056
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.39215686274509803,
"acc_stderr": 0.03426712349247272,
"acc_norm": 0.39215686274509803,
"acc_norm_stderr": 0.03426712349247272
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.4177215189873418,
"acc_stderr": 0.032103530322412685,
"acc_norm": 0.4177215189873418,
"acc_norm_stderr": 0.032103530322412685
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.45739910313901344,
"acc_stderr": 0.033435777055830646,
"acc_norm": 0.45739910313901344,
"acc_norm_stderr": 0.033435777055830646
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.35877862595419846,
"acc_stderr": 0.04206739313864908,
"acc_norm": 0.35877862595419846,
"acc_norm_stderr": 0.04206739313864908
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.5537190082644629,
"acc_stderr": 0.0453793517794788,
"acc_norm": 0.5537190082644629,
"acc_norm_stderr": 0.0453793517794788
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.37037037037037035,
"acc_stderr": 0.04668408033024932,
"acc_norm": 0.37037037037037035,
"acc_norm_stderr": 0.04668408033024932
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.37423312883435583,
"acc_stderr": 0.038020681028996146,
"acc_norm": 0.37423312883435583,
"acc_norm_stderr": 0.038020681028996146
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.33035714285714285,
"acc_stderr": 0.04464285714285713,
"acc_norm": 0.33035714285714285,
"acc_norm_stderr": 0.04464285714285713
},
"harness|hendrycksTest-management|5": {
"acc": 0.49514563106796117,
"acc_stderr": 0.04950504382128921,
"acc_norm": 0.49514563106796117,
"acc_norm_stderr": 0.04950504382128921
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.5726495726495726,
"acc_stderr": 0.032408473935163266,
"acc_norm": 0.5726495726495726,
"acc_norm_stderr": 0.032408473935163266
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.38,
"acc_stderr": 0.04878317312145633,
"acc_norm": 0.38,
"acc_norm_stderr": 0.04878317312145633
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.5389527458492975,
"acc_stderr": 0.017825621793239012,
"acc_norm": 0.5389527458492975,
"acc_norm_stderr": 0.017825621793239012
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.3583815028901734,
"acc_stderr": 0.025816756791584187,
"acc_norm": 0.3583815028901734,
"acc_norm_stderr": 0.025816756791584187
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.23687150837988827,
"acc_stderr": 0.01421957078810398,
"acc_norm": 0.23687150837988827,
"acc_norm_stderr": 0.01421957078810398
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.3464052287581699,
"acc_stderr": 0.027245613047215355,
"acc_norm": 0.3464052287581699,
"acc_norm_stderr": 0.027245613047215355
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.41479099678456594,
"acc_stderr": 0.02798268045975956,
"acc_norm": 0.41479099678456594,
"acc_norm_stderr": 0.02798268045975956
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.44135802469135804,
"acc_stderr": 0.027628737155668777,
"acc_norm": 0.44135802469135804,
"acc_norm_stderr": 0.027628737155668777
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.2730496453900709,
"acc_stderr": 0.02657786094330785,
"acc_norm": 0.2730496453900709,
"acc_norm_stderr": 0.02657786094330785
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.3161668839634941,
"acc_stderr": 0.011875780894386578,
"acc_norm": 0.3161668839634941,
"acc_norm_stderr": 0.011875780894386578
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.27941176470588236,
"acc_stderr": 0.027257202606114948,
"acc_norm": 0.27941176470588236,
"acc_norm_stderr": 0.027257202606114948
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.369281045751634,
"acc_stderr": 0.019524316744866353,
"acc_norm": 0.369281045751634,
"acc_norm_stderr": 0.019524316744866353
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.41818181818181815,
"acc_stderr": 0.04724577405731571,
"acc_norm": 0.41818181818181815,
"acc_norm_stderr": 0.04724577405731571
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.46530612244897956,
"acc_stderr": 0.03193207024425314,
"acc_norm": 0.46530612244897956,
"acc_norm_stderr": 0.03193207024425314
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.4925373134328358,
"acc_stderr": 0.035351400842767194,
"acc_norm": 0.4925373134328358,
"acc_norm_stderr": 0.035351400842767194
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.54,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.54,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-virology|5": {
"acc": 0.43373493975903615,
"acc_stderr": 0.03858158940685517,
"acc_norm": 0.43373493975903615,
"acc_norm_stderr": 0.03858158940685517
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.543859649122807,
"acc_stderr": 0.03820042586602966,
"acc_norm": 0.543859649122807,
"acc_norm_stderr": 0.03820042586602966
},
"harness|truthfulqa:mc|0": {
"mc1": 0.24357405140758873,
"mc1_stderr": 0.015026354824910782,
"mc2": 0.3644999830185384,
"mc2_stderr": 0.013800159294635947
},
"harness|winogrande|5": {
"acc": 0.6692975532754538,
"acc_stderr": 0.013222435887002703
},
"harness|gsm8k|5": {
"acc": 0.17134192570128887,
"acc_stderr": 0.010379150273178359
}
}
```
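Each per-task key above follows the pattern `harness|<task>|<n_shot>`. A minimal sketch (the helper name is hypothetical) for splitting such a key and collecting per-task accuracies from a results dict like the one shown above:

```python
def parse_harness_key(key: str) -> tuple:
    """Split a key like 'harness|hendrycksTest-virology|5' into (task, n_shot)."""
    _, task, n_shot = key.split("|")
    return task, int(n_shot)

# Small excerpt of the results dict shown above, for illustration
results = {
    "harness|hendrycksTest-virology|5": {"acc": 0.43373493975903615},
    "harness|winogrande|5": {"acc": 0.6692975532754538},
}

# Map task name -> accuracy
accuracies = {parse_harness_key(k)[0]: v["acc"] for k, v in results.items()}
```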
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
HarmatonI/salomon-training | ---
license: apache-2.0
---
|
presencesw/QAK_test | ---
dataset_info:
features:
- name: Doc
dtype: string
splits:
- name: train
num_bytes: 14121
num_examples: 6
download_size: 26062
dataset_size: 14121
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
irds/codesearchnet_valid | ---
pretty_name: '`codesearchnet/valid`'
viewer: false
source_datasets: ['irds/codesearchnet']
task_categories:
- text-retrieval
---
# Dataset Card for `codesearchnet/valid`
The `codesearchnet/valid` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/codesearchnet#codesearchnet/valid).
# Data
This dataset provides:
- `queries` (i.e., topics); count=89,154
- `qrels`: (relevance assessments); count=89,154
- For `docs`, use [`irds/codesearchnet`](https://huggingface.co/datasets/irds/codesearchnet)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/codesearchnet_valid', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/codesearchnet_valid', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
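For instance, the qrels records can be grouped into a lookup from each query to its relevant documents. A sketch over the record schema shown above (the sample records are illustrative, not real data):

```python
from collections import defaultdict

# Sample records mirroring the qrels schema shown above (illustrative values)
qrels = [
    {"query_id": "q1", "doc_id": "d1", "relevance": 1, "iteration": "0"},
    {"query_id": "q1", "doc_id": "d2", "relevance": 1, "iteration": "0"},
    {"query_id": "q2", "doc_id": "d3", "relevance": 1, "iteration": "0"},
]

# Group relevant doc_ids by query_id
relevant_docs = defaultdict(list)
for record in qrels:
    if record["relevance"] > 0:
        relevant_docs[record["query_id"]].append(record["doc_id"])
```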
## Citation Information
```
@article{Husain2019CodeSearchNet,
title={CodeSearchNet Challenge: Evaluating the State of Semantic Code Search},
author={Hamel Husain and Ho-Hsiang Wu and Tiferet Gazit and Miltiadis Allamanis and Marc Brockschmidt},
journal={ArXiv},
year={2019}
}
```
|
to-be/annomi-motivational-interviewing-therapy-conversations | ---
license: openrail
task_categories:
- conversational
language:
- en
tags:
- medical
- therapy
- conversations
- counseling
- dialogues
pretty_name: annomi
size_categories:
- n<1K
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
The AnnoMI motivational interviewing dataset converted into sharegpt format.
AnnoMI is the first public collection of expert-annotated MI transcripts. [Source](https://github.com/uccollab/AnnoMI).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
AnnoMI contains 133 faithfully transcribed and expert-annotated demonstrations of high- and low-quality motivational interviewing (MI), an effective therapy strategy that evokes client motivation for positive change.
Sample conversation number 40, regarding drug use:
```
client
Pretty good. It's been three months now.
therapist
Nice.
client
Um, and—
therapist
Since the-- since you used, uh-- since the last time you used meth, right?
client
Right.
therapist
Okay.
client
Right. It's been three months, and I've been doing pretty well.
therapist
Good.
client
Um, and I met a couple new friends.
therapist
Nice.
client
So—
therapist
So-so, uh, have-- last time we talked, you know, you were kind of cutting the-the ex-boyfriend out. This is because he was your dealer, so--
client
Right.
therapist
And how is that going for you?
client
Right. It's been good. I haven't answered the phone when he's called-
therapist
Okay.
client
-and a couple of my other friends have tried to call me as well, and I haven't answered the phone. Um, so it's been good for me to just reach out and like, meet a couple of new people, um, in the neigh-neighborhood that I'm living in.
therapist
So it sounds like you've made some good changes. The change in your environment, and some of your behaviors, and, uh, and these new friends.
client
Yeah, the new friends, and I've been using the, you know, the breathing exercises you taught me, when I get-- start to feel a little stressed out, and I've been- I've been using those and it helps me. I go for a walk and do my breathing exercises, and it calms me down and kind of gets me through that time.
therapist
So it sounds like--
```
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Repository:** https://github.com/uccollab/AnnoMI
- **Paper:** https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9746035
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
Training conversational models on long, multi-turn counselling dialogues and motivational-interviewing-style advice.
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
ShareGPT format, to allow for long back-and-forth conversations:
- `therapist` turns are stored under the `gpt` role
- `client` turns are stored under the `human` role

`id` is a concatenation (joined with `_`) of:
- transcript_id: the unique numerical identifier of the conversation/transcript where this utterance belongs. Note that this identifier is NOT used for ordering, and it is only to distinguish between different conversations in the dataset.
- video_url: the URL of the original video of the conversation/transcript where this utterance belongs.
- topic: the topic(s) of the conversation/transcript where this utterance belongs.
```
[
{
"id": "40_https://www.youtube.com/watch?v=wcu8oqHIsbc_reducing drug use",
"conversations": [
{
"from": "human",
        "value": "Pretty good. It's been three months now."
},
{
"from": "gpt",
"value": "Nice."
},
{
"from": "human",
"value": "Right. It's been three months, and I've been doing pretty well."
},
{
"from": "gpt",
"value": "So-so, uh, have-- last time we talked, you know, you were kind of cutting the-the ex-boyfriend out. This is because he was your dealer, so--"
},
      ...
    ]
  }
]
```
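Since the `id` field concatenates its three parts with underscores, it can be split back apart. The sketch below is illustrative, not part of the dataset tooling; it assumes the topic string contains no underscore (true for topics like `reducing drug use`), while the URL is recovered as everything in between:

```python
def parse_annomi_id(sample_id: str):
    """Split an AnnoMI sample id into (transcript_id, video_url, topic).

    Assumes the id has the form '<transcript_id>_<video_url>_<topic>'
    and that the topic itself contains no underscore.
    """
    # First underscore separates the numeric transcript id from the rest.
    transcript_id, rest = sample_id.split("_", 1)
    # Last underscore separates the video URL from the topic.
    video_url, topic = rest.rsplit("_", 1)
    return int(transcript_id), video_url, topic

tid, url, topic = parse_annomi_id(
    "40_https://www.youtube.com/watch?v=wcu8oqHIsbc_reducing drug use"
)
# tid == 40, topic == 'reducing drug use'
```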
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
Despite the remarkable growth of research in recent years on the analysis of counselling conversations through natural language processing methods, the potential of this field has been greatly limited by the lack of access to publicly available therapy dialogues, especially those with expert annotations.
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
It is anonymized.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
```bibtex
@INPROCEEDINGS{9746035,
  author={Wu, Zixiu and Balloccu, Simone and Kumar, Vivek and Helaoui, Rim and Reiter, Ehud and Reforgiato Recupero, Diego and Riboni, Daniele},
  booktitle={ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  title={Anno-MI: A Dataset of Expert-Annotated Counselling Dialogues},
  year={2022},
  pages={6177-6181},
  doi={10.1109/ICASSP43922.2022.9746035}
}
```
## Dataset Card Contact
Toon Beerten (toon@neontreebot.be) |
Anofrix/lols | ---
license: apache-2.0
---
|
nguyenth1312/vietnamese_feat | ---
dataset_info:
features:
- name: image
dtype: image
- name: 'Unnamed: 0'
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 423262848.0
num_examples: 144
download_size: 371383004
dataset_size: 423262848.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "vietnamese_feat"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
reach-vb/mls-eng-10k-repunct-all-spacy | ---
dataset_info:
features:
- name: original_path
dtype: string
- name: begin_time
dtype: float64
- name: end_time
dtype: float64
- name: transcript
dtype: string
- name: audio_duration
dtype: float64
- name: speaker_id
dtype: string
- name: book_id
dtype: string
- name: repunct_text
dtype: string
splits:
- name: dev
num_bytes: 2183089
num_examples: 3807
- name: test
num_bytes: 2169166
num_examples: 3769
- name: train
num_bytes: 1385541464
num_examples: 2420047
download_size: 882080241
dataset_size: 1389893719
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
- split: test
path: data/test-*
- split: train
path: data/train-*
---
|
Elamranilyas/processedCordV2 | ---
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 1297739248.0
num_examples: 800
- name: test
num_bytes: 163680123.0
num_examples: 100
- name: validation
num_bytes: 171665014.0
num_examples: 100
download_size: 1182410690
dataset_size: 1633084385.0
---
# Dataset Card for "processedCordV2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
maveriq/DocBank | ---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- machine-generated
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: DocBank
size_categories:
- 100K<n<1M
source_datasets:
- original
tags:
- document-ai
task_categories: []
task_ids: []
---
# Dataset Card for DocBank
## Table of Contents
- [Dataset Card for DocBank](#dataset-card-for-docbank)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://doc-analysis.github.io/docbank-page/index.html
- **Repository:** https://github.com/doc-analysis/DocBank
- **Paper:** https://arxiv.org/abs/2006.01038
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
DocBank is a large-scale dataset constructed using a weak supervision approach. It enables models to integrate both textual and layout information for downstream tasks. The current DocBank dataset includes 500K document pages in total: 400K for training, 50K for validation, and 50K for testing.
### Supported Tasks and Leaderboards
Document AI (text and layout)
### Languages
English
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
```yaml
dataset_info:
  features:
    - name: image
      dtype: image
    - name: token
      dtype: string
    - name: bounding_box
      sequence:
        sequence: uint16
    - name: color
      sequence:
        sequence: uint8
    - name: font
      dtype: string
    - name: label
      dtype: string
```
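The `bounding_box` field stores token boxes as integer coordinates. A minimal sketch for mapping them back to page pixels, assuming LayoutLM-style 0-1000 normalization and an `(x0, y0, x1, y1)` order (check both against your copy of the data before relying on this):

```python
def denormalize_bbox(bbox, page_width, page_height, scale=1000):
    """Map a 0-1000 normalized box back to pixel coordinates.

    Assumes (x0, y0, x1, y1) order and LayoutLM-style normalization,
    where each coordinate is the pixel value scaled by 1000 / page size.
    """
    x0, y0, x1, y1 = bbox
    return (
        x0 * page_width / scale,
        y0 * page_height / scale,
        x1 * page_width / scale,
        y1 * page_height / scale,
    )

# Example: a box on a US-letter page rendered at 612x792 points.
print(denormalize_bbox((100, 200, 500, 250), page_width=612, page_height=792))
```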
### Data Splits
```yaml
dataset_info:
  splits:
    - name: train
      num_bytes: 80004043
      num_examples: 400000
    - name: validation
      num_bytes: 9995812
      num_examples: 50000
    - name: test
      num_bytes: 9995812
      num_examples: 50000
  download_size: 0
  dataset_size: 99995667
```
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Apache 2.0 License
### Citation Information
```bibtex
@misc{li2020docbank,
  title={DocBank: A Benchmark Dataset for Document Layout Analysis},
  author={Minghao Li and Yiheng Xu and Lei Cui and Shaohan Huang and Furu Wei and Zhoujun Li and Ming Zhou},
  year={2020},
  eprint={2006.01038},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@doc-analysis](https://github.com/doc-analysis) for adding this dataset. |
autoevaluate/autoeval-eval-ccdv__arxiv-summarization-section-c4e7c7-42022145086 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- ccdv/arxiv-summarization
eval_info:
task: summarization
model: ArtifactAI/led_base_16384_arxiv_summarization
metrics: []
dataset_name: ccdv/arxiv-summarization
dataset_config: section
dataset_split: test
col_mapping:
text: article
target: abstract
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: ArtifactAI/led_base_16384_arxiv_summarization
* Dataset: ccdv/arxiv-summarization
* Config: section
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ArtifactAI](https://huggingface.co/ArtifactAI) for evaluating this model. |
Codec-SUPERB/vocalset_synth | ---
configs:
- config_name: default
data_files:
- split: original
path: data/original-*
- split: academicodec_hifi_16k_320d
path: data/academicodec_hifi_16k_320d-*
- split: academicodec_hifi_16k_320d_large_uni
path: data/academicodec_hifi_16k_320d_large_uni-*
- split: academicodec_hifi_24k_320d
path: data/academicodec_hifi_24k_320d-*
- split: audiodec_24k_320d
path: data/audiodec_24k_320d-*
- split: dac_16k
path: data/dac_16k-*
- split: dac_24k
path: data/dac_24k-*
- split: dac_44k
path: data/dac_44k-*
- split: encodec_24k_12bps
path: data/encodec_24k_12bps-*
- split: encodec_24k_1_5bps
path: data/encodec_24k_1_5bps-*
- split: encodec_24k_24bps
path: data/encodec_24k_24bps-*
- split: encodec_24k_3bps
path: data/encodec_24k_3bps-*
- split: encodec_24k_6bps
path: data/encodec_24k_6bps-*
- split: funcodec_en_libritts_16k_gr1nq32ds320
path: data/funcodec_en_libritts_16k_gr1nq32ds320-*
- split: funcodec_en_libritts_16k_gr8nq32ds320
path: data/funcodec_en_libritts_16k_gr8nq32ds320-*
- split: funcodec_en_libritts_16k_nq32ds320
path: data/funcodec_en_libritts_16k_nq32ds320-*
- split: funcodec_en_libritts_16k_nq32ds640
path: data/funcodec_en_libritts_16k_nq32ds640-*
- split: funcodec_zh_en_16k_nq32ds320
path: data/funcodec_zh_en_16k_nq32ds320-*
- split: funcodec_zh_en_16k_nq32ds640
path: data/funcodec_zh_en_16k_nq32ds640-*
- split: speech_tokenizer_16k
path: data/speech_tokenizer_16k-*
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 44100
- name: id
dtype: string
splits:
- name: original
num_bytes: 2788138275.0
num_examples: 3612
- name: academicodec_hifi_16k_320d
num_bytes: 1020434328.496
num_examples: 3612
- name: academicodec_hifi_16k_320d_large_uni
num_bytes: 1020434328.496
num_examples: 3612
- name: academicodec_hifi_24k_320d
num_bytes: 1531167660.976
num_examples: 3612
- name: audiodec_24k_320d
num_bytes: 1533385284.496
num_examples: 3612
- name: dac_16k
num_bytes: 1021616066.536
num_examples: 3612
- name: dac_24k
num_bytes: 1532288926.912
num_examples: 3612
- name: dac_44k
num_bytes: 2815350783.688
num_examples: 3612
- name: encodec_24k_12bps
num_bytes: 1532288926.912
num_examples: 3612
- name: encodec_24k_1_5bps
num_bytes: 1532288926.912
num_examples: 3612
- name: encodec_24k_24bps
num_bytes: 1532288926.912
num_examples: 3612
- name: encodec_24k_3bps
num_bytes: 1532288926.912
num_examples: 3612
- name: encodec_24k_6bps
num_bytes: 1532288926.912
num_examples: 3612
- name: funcodec_en_libritts_16k_gr1nq32ds320
num_bytes: 1021308981.52
num_examples: 3612
- name: funcodec_en_libritts_16k_gr8nq32ds320
num_bytes: 1021308981.52
num_examples: 3612
- name: funcodec_en_libritts_16k_nq32ds320
num_bytes: 1021616066.536
num_examples: 3612
- name: funcodec_en_libritts_16k_nq32ds640
num_bytes: 1021616066.536
num_examples: 3612
- name: funcodec_zh_en_16k_nq32ds320
num_bytes: 1021616066.536
num_examples: 3612
- name: funcodec_zh_en_16k_nq32ds640
num_bytes: 1021616066.536
num_examples: 3612
- name: speech_tokenizer_16k
num_bytes: 1022741385.136
num_examples: 3612
download_size: 26474218677
dataset_size: 28076083903.48
---
# Dataset Card for "vocalset_synth"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
arthurmluz/temario_data-temario_results | ---
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
- name: summary
dtype: string
- name: gen_summary
dtype: string
- name: rouge
struct:
- name: rouge1
dtype: float64
- name: rouge2
dtype: float64
- name: rougeL
dtype: float64
- name: rougeLsum
dtype: float64
- name: bert
struct:
- name: f1
sequence: float64
- name: hashcode
dtype: string
- name: precision
sequence: float64
- name: recall
sequence: float64
- name: moverScore
dtype: float64
splits:
- name: validation
num_bytes: 232168
num_examples: 25
download_size: 184320
dataset_size: 232168
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
# Dataset Card for "temario_data-temario_results"
Results of the model arthurmluz/ptt5-temario on the dataset godoyj/temario.

`gen_summary` is the generated summary; ROUGE and BERTScore metrics are computed for each example.

Mean metrics over the validation split:
- ROUGE: rouge1 = 0.4361, rouge2 = 0.1909, rougeL = 0.2712, rougeLsum = 0.2712
- BERTScore: precision = 0.7376, recall = 0.7247, f1 = 0.7302
- MoverScore: 0.6237
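The mean metrics above can be recomputed from the per-example fields. A minimal sketch, using the field names from the schema listed in this card and two made-up rows in place of the real validation split:

```python
# Illustrative rows mirroring the card's schema: 'rouge' is a struct of
# floats, 'bert' stores per-example score lists (one entry per summary).
rows = [
    {"rouge": {"rouge1": 0.40}, "bert": {"f1": [0.72]}},
    {"rouge": {"rouge1": 0.47}, "bert": {"f1": [0.74]}},
]

mean_rouge1 = sum(r["rouge"]["rouge1"] for r in rows) / len(rows)
mean_bert_f1 = sum(r["bert"]["f1"][0] for r in rows) / len(rows)
print(mean_rouge1, mean_bert_f1)
```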
|
mteb/amazon_counterfactual | ---
language:
- de
- en
- ja
---
# Amazon Multilingual Counterfactual Dataset
The dataset contains sentences from Amazon customer reviews (sampled from Amazon product review dataset) annotated for counterfactual detection (CFD) binary classification. Counterfactual statements describe events that did not or cannot take place. Counterfactual statements may be identified as statements of the form – If p was true, then q would be true (i.e. assertions whose antecedent (p) and consequent (q) are known or assumed to be false).
The key features of this dataset are:
* The dataset is multilingual and contains sentences in English, German, and Japanese.
* The labeling was done by professional linguists and high quality was ensured.
* The dataset is supplemented with the annotation guidelines and definitions, which were worked out by professional linguists. We also provide the clue word lists, which are typical for counterfactual sentences and were used for initial data filtering. The clue word lists were also compiled by professional linguists.
Please see the [paper](https://arxiv.org/abs/2104.06893) for the data statistics, detailed description of data collection and annotation.
GitHub repo URL: https://github.com/amazon-research/amazon-multilingual-counterfactual-dataset
## Usage
You can load each of the languages as follows:
```python
from datasets import get_dataset_config_names, load_dataset

dataset_id = "SetFit/amazon_counterfactual"

# Returns ['de', 'en', 'en-ext', 'ja']
configs = get_dataset_config_names(dataset_id)

# Load the English subset
dset = load_dataset(dataset_id, name="en")
```
parsa-mz/fnli-dataset | ---
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: int64
splits:
- name: test
num_bytes: 1092127
num_examples: 9824
- name: dev
num_bytes: 1097461
num_examples: 9842
- name: train
num_bytes: 60781325
num_examples: 549367
download_size: 20372252
dataset_size: 62970913
---
# Dataset Card for "fnli-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
zjunlp/Mol-Instructions | ---
language:
- en
size_categories:
- 100M<n<1B
license: cc-by-4.0
tags:
- chemistry
- biology
- molecule
- protein
- instructions
---
<h1 align="center"> 🧪 Mol-Instructions </h1>
<h3 align="center"> An open, large-scale biomolecular instruction dataset for large language models. </h3>
> Please refer to our [repository](https://github.com/zjunlp/Mol-Instructions) and [paper](https://arxiv.org/abs/2306.08018) for more details.

## 📌 Contents
- [Overview](#1)
- [Data Stats](#1-1)
- [Data Construction](#1-2)
- [Data Release](#1-3)
- [Tasks](#2)
- [Molecule-oriented](#2-1)
- [Protein-oriented](#2-2)
- [Biomolecule text](#2-3)
- [Demo](#3)
- [Model Weight Release](#3-1)
- [Model Usage Guide](#3-2)
- [FAQ](#3-3)
- [Notices](#4)
- [Usage and License](#4-1)
- [Limitations](#4-2)
- [About](#5)
- [References](#5-1)
- [Acknowledgements](#5-2)
<h2 id="1">1. Overview</h2>
<h3 id="1-1"> 📊 1.1 Data Stats</h3>

**Mol-Instructions** comprises three cardinal components:
- 🔬 *Molecule-oriented instructions:* This component delves into the world of small molecules, emphasizing their inherent properties and behaviors. It sheds light on the fundamental challenges of diverse chemical reactions and molecular design, with 148.4K instructions across six tasks.
- 🧬 *Protein-oriented instructions:* Rooted in the biosciences, this component presents 505K instructions across five distinct categories of tasks. These tasks aim to predict the structure, function, and activity of proteins, and facilitate protein design based on textual directives.
- 🥼 *Biomolecular text instructions:* Predominantly designed to cater to NLP tasks within the fields of bioinformatics and chemoinformatics, this part encapsulates six information extraction and Q&A tasks represented through 53K instructions.
<h3 id="1-2"> 🛠️ 1.2 Data Construction</h3>

- 🤖️ *Human-AI Collaboration Task Description Creation*: In real-world applications, task instructions must be able to accommodate the varied and dynamic nature of human needs and queries. We emulate this diversity by starting with a clear, human-crafted description for each task, which is then used as an input to GPT-3.5-turbo.
- 📖 *Information Derivation from Existing Data*: Biomolecular data often requires specialist laboratory experiments and expert analysis, making authoritative and recognized biochemistry databases an ideal source of our data. With suitable processing, these resources enable us to extract the required instruction data.
- 📜 *Template-based Conversion of Biological Data into Textual Format*: To facilitate the transformation of these structured annotations into a textual format, we design a diverse array of templates. Each resulting text-based annotation serves as a guideline for protein design.
- ✅ *Quality Control*: To expedite the model's ability to generate precise biomolecules, we implement stringent quality control measures for our biomolecular data.
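The template-based conversion step above can be sketched as a simple fill-in of structured annotations into natural-language templates. The template text and field names below are illustrative only, not the actual templates used to build Mol-Instructions:

```python
# Hypothetical template in the style of the protein function task;
# field names are placeholders, not the real annotation schema.
TEMPLATE = (
    "The protein characterized by the amino acid sequence demonstrates "
    "{functions} and is implicated in the {processes}."
)

annotation = {
    "functions": "ATP binding, cytidylate kinase activity",
    "processes": "pyrimidine nucleotide biosynthetic process",
}

instruction_text = TEMPLATE.format(**annotation)
print(instruction_text)
```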
<h3 id="1-3"> 🤗 1.3 Data Release</h3>
We release the dataset on Hugging Face at [zjunlp/Mol-Instructions](https://huggingface.co/datasets/zjunlp/Mol-Instructions).
<h2 id="2">2. Tasks</h2>
<h3 id="2-1"> 🔬 2.1 Molecule-oriented</h3>
<details>
<summary><b>Molecule description generation</b></summary>
- *Please give me some details about this molecule:*
[C][C][C][C][C][C][C][C][C][C][C][C][C][C][C][C][C][C][=Branch1][C][=O][O][C@H1][Branch2][Ring1][=Branch1][C][O][C][=Branch1][C][=O][C][C][C][C][C][C][C][C][C][C][C][C][C][C][C][C][O][P][=Branch1][C][=O][Branch1][C][O][O][C][C@@H1][Branch1][=Branch1][C][=Branch1][C][=O][O][N]
```
The molecule is a 3-sn-phosphatidyl-L-serine in which the phosphatidyl acyl groups at positions 1 and 2 are specified as stearoyl and arachidonoyl respectively.
It is functionally related to an arachidonic acid and an octadecanoic acid.
```
</details>
<details>
<summary><b>Description-guided molecule design</b></summary>
- *Create a molecule with the structure as the one described:*
The molecule is a primary arylamine in which an amino functional group is substituted for one of the benzene hydrogens. It is a primary arylamine and a member of anilines.
```
[N][C][=C][C][=C][C][=C][Ring1][=Branch1]
```
</details>
<details>
<summary><b>Forward reaction prediction</b></summary>
- *With the provided reactants and reagents, propose a potential product:*
[O][=N+1][Branch1][C][O-1][C][=C][N][=C][Branch1][C][Cl][C][Branch1][C][I][=C][Ring1][Branch2].[Fe]
```
[N][C][=C][N][=C][Branch1][C][Cl][C][Branch1][C][I][=C][Ring1][Branch2]
```
</details>
<details>
<summary><b>Retrosynthesis</b></summary>
- *Please suggest potential reactants used in the synthesis of the provided product:*
[C][=C][C][C][N][C][=Branch1][C][=O][O][C][Branch1][C][C][Branch1][C][C][C]
```
[C][=C][C][C][N].[C][C][Branch1][C][C][Branch1][C][C][O][C][=Branch1][C][=O][O][C][=Branch1][C][=O][O][C][Branch1][C][C][Branch1][C][C][C]
```
</details>
<details>
<summary><b>Reagent prediction</b></summary>
- *Please provide possible reagents based on the following chemical reaction:*
[C][C][=C][C][=C][Branch1][C][N][C][=N][Ring1][#Branch1].[O][=C][Branch1][C][Cl][C][Cl]>>[C][C][=C][C][=C][Branch1][Branch2][N][C][=Branch1][C][=O][C][Cl][C][=N][Ring1][O]
```
[C][C][C][O][C][Ring1][Branch1].[C][C][N][Branch1][Ring1][C][C][C][C].[O]
```
</details>
<details>
<summary><b>Property prediction</b></summary>
- *Please provide the HOMO energy value for this molecule:*
[C][C][O][C][C][Branch1][C][C][C][Branch1][C][C][C]
```
-0.2482
```
</details>
<h3 id="2-2"> 🧬 2.2 Protein-oriented</h3>
<details>
<summary><b>Protein design</b></summary>
- *Design a protein sequence that exhibits the desired activity and specificity.*
1. The presence of Mg(2+) is necessary for the protein to function in the desired environment.
2. The AMP, (6S)-NADPHX binding site should be located in a region of the protein that is accessible to the ligand.
3. The designed protein should have ATP binding, NADPHX epimerase activity, metal ion binding, ADP-dependent NAD(P)H-hydrate dehydratase activity to facilitate nicotinamide nucleotide metabolic process.
4. For general function, the protein need meet that Catalyzes the epimerization of the S- and R-forms of NAD(P)HX, a damaged form of NAD(P)H that is a result of enzymatic or heat-dependent hydration
```
MSNELVLSREQVRRVDQRAIEAYGVPGIVLMENAGRGAAEIIRAACPSAQRVLIACGPGNNGGDGFVIARHLANAGWMVELLLACPADRITGDAQGNHEIIRRMNLPCAVMADARDLEAANDRFATADVIVDALLGTGASGPPREPIASLIRAINEAHRRVSAQPAPSVFAVDIPSGLDCDTGEAANPTVRADHTITFVARKIGFRNPAARDLLGRVHVVDIGAPRAAIQDALTGKSG
```
</details>
<details>
<summary><b>Catalytic activity prediction</b></summary>
- *Please evaluate the following protein sequence and provide an explanation of the enzyme's catalytic activity, including the chemical reaction it facilitates:* MDKVAVAGFLPEELCASLSLSPSFRGNQIFQWIGKGVDSFDAMTNLSAELRASLAEKAILRSTRVSDVLKADDGTVKLQIQTEDDLAVETVLLTDKAARKTACVSCQAGCAMGCAFCKTGTLGLARNLSAAEIVEQFLYLEKHAGALDNIVFMGMGEPLLNLDALRKAIAVLTDKRGRNLSSRRITVSTVGIVSGIYDLANNGPDVRLAVSLTTADETLRRELMPASLTNPLSDLRQAISYYIEKTGKRVTLEAVLLSGKNTSEKNADSLIAFAKGLDVHVNLIPWNPVEGLSFVTPDPEETAQFVSRLEKGGLNVTLRMHRGKSISGACGQLGKTNPYA
```
Based on the provided protein sequence, the enzyme appears to facilitate the chemical reaction: adenosine(37) in tRNA + 2 reduced [2Fe-2S]-[ferredoxin] + 2 S- adenosyl-L-methionine = 2-methyladenosine(37) in tRNA + 5'- deoxyadenosine + L-methionine + 2 oxidized [2Fe-2S]-[ferredoxin] + S- adenosyl-L-homocysteine.
```
</details>
<details>
<summary><b>Protein function prediction</b></summary>
- *Analyze the following amino acid sequence, and determine the function of the resulting protein, its subcellular localization, and any biological processes it may be part of:*
MNGTVNASAPSKMSEVAVERLSNDKALKVIFVLGGPGSGKGTQCAKIAKHFGFTHLSVGDLLRAEINSGSKNGTMIESMINEGKIVRSEVTIKLLQRAMHESGNDKFLIDGFPRNEENRAAFENLEKIEPEFVLFFDCPMEEMERRILNRNQGRDDDKMETIRKRFKVFIESTLPVIEFYNLKGKLYKIDACKPADEVFEDVKAIFSRFRAKEDSSQQTNICTAKRFELVMCLIKRLFREIKRMWSSFFCKAL
```
The protein characterized by the amino acid sequence demonstrates ATP binding, cytidylate kinase activity, uridylate kinase activity and is implicated in the 'de novo' pyrimidine nucleobase biosynthetic process, phosphorylation, pyrimidine nucleotide biosynthetic process. Its subcellular localization is primarily within the cytoplasm, nucleus.
```
</details>
<details>
<summary><b>Functional description generation</b></summary>
- *Examine the given protein sequence and share a brief overview of its attributes:*
MKIVLASNNQGKLAELKAMLAPLGVQLLRQAELGIPEAAEPFRTFVENALAKARHASALSGLPALADDAGLCVEAFGGLPGVDTAFYAVQFGYAKGDANNVKALLAQMAGITDRRAALVSTLVAVRSAEDPEPLIACGRVAGEVALEPMGSNGFGFDPVMFIPEFGQTFAQLPVEVKNANSHRGKATRQMMALMRERWIT
```
A concise description of the protein with the specified amino acid sequence includes: Pyrophosphatase that catalyzes the hydrolysis of nucleoside triphosphates to their monophosphate derivatives, with a high preference for the non-canonical purine nucleotides XTP (xanthosine triphosphate), dITP (deoxyinosine triphosphate) and ITP. Seems to function as a house-cleaning enzyme that removes non-canonical purine nucleotides from the nucleotide pool, thus preventing their incorporation into DNA/RNA and avoiding chromosomal lesions.
```
</details>
<details>
<summary><b>Domain/Motif prediction</b></summary>
- *Given this protein sequence, can you identify any common protein motifs or domains that it may contain?*
MANTKYIFITGGVVSSLGKGIAAASIGALLESRGLSVSLIKVDPYINVDPGTMSPFQHGEVFVTEDGTETDLDLGHYERFVRFKASKKNNFTAGKVYETVIRNERKGNYLGGTVQVIPHITNEIKKRIKKGGQNKDIAIVEVGGTVGDIESQPFVEALRQMALELPNSSWAFVHLTLVPFINASGELKTKPTQHSVKELRSLGISPDVLVCRSEQELPKDEKNKIALFCSVPAKSVISMHDVDTVYSIPILLNKQKVDDTILKKLNLKIKKPNLNDWKRVVKAKLLPEKEVNVSFVGKYTELKDSYKSINEALEHAGIQNKAKVNINFVEAEQITSQNVRKVLKKSDAILVPGGFGERGIEGMILACKYARENNVPYLGICLGMQIAIIEYARNVLKLKSANSTEFDSSTKFPVIGLITEWSDISGKKEKRTKNSDLGGTMRLGGQVCKLKKKSNSYKMYKKSEIIERHRHRYEVNPNYKDKMIEQGLDVVGTSIDGKLVEMIELPSHKWFLACQFHPEFTSNPRDGHPIFNSYIKSTITK
```
Our predictive analysis of the given protein sequence reveals possible domains or motifs. These include: Glutamine amidotransferase, CTP synthase N-terminal domains.
```
</details>
<h3 id="2-3"> 🥼 2.3 Biomolecule text</h3>
<details>
<summary><b>Chemical entity recognition</b></summary>
- *Find and list all the instances of the chemical entities in the following content:*
"Both the control and caramiphen groups with double cannulas had significantly shorter latencies to seizure onset than the corresponding groups with single cannula."
```
caramiphen
```
</details>
<details>
<summary><b>Chemical-disease interaction extraction</b></summary>
- *You are provided with a set of clinical trial summaries. Extract the chemical-disease relations from the summaries and present your findings in the format of (Subject, Object):*
"Eating disorders and the associated behavioural problems and drug abuse are uncommon in pregnancy. When they do occur they are often unrecognized because of denial but when significant may pose a risk to both the mother and her fetus. This case illustrates a number of problems that may be encountered in women with eating disorders in pregnancy, including prolonged and recurrent metabolic disturbances and diuretic abuse. In particular it illustrates the derangements of thyroid function seen in pregnant women with eating disorders and reminds us that when a cause for thyrotoxicosis remains obscure, thyroxine abuse should be considered and explored."
```
(thyroxine, thyrotoxicosis)
```
</details>
<details>
<summary><b>Chemical-protein interaction extraction</b></summary>
- *Your task is to extract the subject and object entities from the provided text and establish the corresponding relation triples. Please format your answers as (Subject, Relation, Object):*
"Opioid receptors are the membrane proteins that mediate the pain-relieving effect of opioid drugs, such as morphine and fentanyl as well as endogenous opioid peptides enkephalins and endorphins. Using cDNAs for the mu and the kappa opioid receptors, we mapped the chromosomal locations of their genes in mouse. Multilocus cross analysis located the mu receptor gene Oprm on Chr 10 and the kappa receptor gene Oprk1 on Chr 1. Both genes are near centromere, with no markers more centromeric. These data indicate that the two opioid receptors are different gene products, ruling out the possibility that they may be differential splicing products from the same gene."
```
(morphine, regulator, Opioid receptors), (fentanyl, regulator, Opioid receptors)
```
</details>
<details>
<summary><b>Multiple-choice question</b></summary>
- *Which of the following conditions does not show multifactorial inheritance? What of the following is the right choice?*
(A) Pyloric stenosis (B) Schizophrenia (C) Spina bifida (neural tube defects) (D) Marfan syndrome
```
The final answer is (D).
```
</details>
<details>
<summary><b>True or False question</b></summary>
- *Does feeding tube insertion and its timing improve survival?*
```
No, neither insertion of PEG tubes nor timing of insertion affect survival.
```
</details>
<details>
<summary><b>Open question</b></summary>
- *What virus causes chronic disease?*
```
Hepatitis C virus (HCV).
```
</details>
<h2 id="3">3. Demo</h2>

<h3 id="3-1"> 🤗 3.1 Model Weight Release</h3>
We release the model weights on Hugging Face at:
- 🔬 *Molecule-oriented instructions:* [zjunlp/llama-molinst-molecule-7b](https://huggingface.co/zjunlp/llama-molinst-molecule-7b)
- 🧬 *Protein-oriented instructions:* [zjunlp/llama-molinst-protein-7b](https://huggingface.co/zjunlp/llama-molinst-protein-7b)
- 🥼 *Biomolecular text instructions:* [zjunlp/llama-molinst-biotext-7b](https://huggingface.co/zjunlp/llama-molinst-biotext-7b)
<h3 id="3-2"> 📝 3.2 Model Usage Guide</h3>
For this part, please refer to our [repository](https://github.com/zjunlp/Mol-Instructions).
We have provided a web version demo based on [Gradio](https://gradio.app). To use it, you first need to download this repository:
```shell
>> git clone https://github.com/zjunlp/Mol-Instructions
>> cd Mol-Instructions/demo
```
Step 1, install Gradio by running `pip install gradio`.
Step 2, specify the parameters in the [generate.sh](https://github.com/zjunlp/Mol-Instructions/blob/main/demo/generate.sh) file.
```shell
>> CUDA_VISIBLE_DEVICES=0 python generate.py \
    --CLI False \
    --protein False \
    --load_8bit \
    --base_model $BASE_MODEL_PATH \
    --share_gradio True \
    --lora_weights $FINETUNED_MODEL_PATH
```
For models fine-tuned on *molecule-oriented* and *biomolecular text* instructions, please set `$FINETUNED_MODEL_PATH` to `'zjunlp/llama-molinst-molecule-7b'` or `'zjunlp/llama-molinst-biotext-7b'`.
For the model fine-tuned on *protein-oriented* instructions, you need to perform additional steps as described in [this folder](https://github.com/zjunlp/Mol-Instructions/tree/main/demo).
Step 3, run the [generate.sh](https://github.com/zjunlp/Mol-Instructions/blob/main/demo/generate.sh) file in the repository:
```shell
>> sh generate.sh
```
We offer two interaction modes: a web-based interface, which provides greater flexibility, and a command-line interface.
1. Use the following command to enter **web-based interaction**:
```shell
>> python generate.py
```
The program will run a web server and output an address. Open the output address in a browser to use it.
2. Use the following command to enter **command-line interaction**:
```shell
>> python generate.py --CLI True
```
The disadvantage of command-line interaction is that decoding parameters cannot be changed dynamically.
<h3 id="3-3"> 💡 3.3 FAQ</h3>
- *Question:* What action should be taken if the model encounters `<unk>` and subsequently repeats the input during decoding?
  *Answer:* Consider reducing the `max tokens` value.
- *Question:* What should I do if the model encounters � during decoding?
*Answer:* If this symbol emerges in the middle of the decoded sentence, we recommend changing the input. If it shows up at the end of the sentence, you can tackle this issue by extending the output length.
- *Question:* Why do I receive varied results despite using identical decoding parameters?
*Answer:* This might occur if you have enabled `do_sample=True`. Another factor could be the order in which tasks are executed. A useful approach would be to use a for loop to generate multiple outputs with the same decoding parameters, enabling you to note the variance in each output.
- *Question:* What could be the reason for subpar answer quality?
*Answer:* Modifying the decoding parameters could help in improving the quality of the extraction or the answer.
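Several of the answers above come down to tuning decoding parameters. As a rough illustration (a hypothetical helper, not the repository's actual code), the sketch below collects `transformers`-style generation kwargs; turning sampling off is what makes identical parameters yield identical outputs, and lowering `max_new_tokens` is the fix suggested for the `<unk>`-and-repeat failure mode:

```python
# Hypothetical helper: builds a kwargs dict in the style of the
# Hugging Face `model.generate(...)` API.
def decoding_config(deterministic: bool = False) -> dict:
    cfg = {
        "max_new_tokens": 256,  # lower this if decoding emits <unk> and repeats the input
        "temperature": 0.7,
        "top_p": 0.9,
        "do_sample": True,      # sampling => varied outputs across runs
    }
    if deterministic:
        # Greedy decoding: identical parameters now give identical outputs.
        cfg.update(do_sample=False, temperature=1.0, top_p=1.0)
    return cfg

# Sketch of the for-loop suggested above, to gauge output variance:
# for _ in range(4):
#     output = model.generate(**inputs, **decoding_config())
```

This is only a sketch of the parameter choices; plug the resulting dict into whatever generation call your setup uses.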
<h2 id="4">4. Notices</h2>
<h3 id="4-1"> 🚨 4.1. Usage and License</h3>
Please note that all data and model weights of **Mol-Instructions** are exclusively licensed for research purposes. The accompanying dataset is licensed under CC BY 4.0, which permits solely non-commercial usage.
We emphatically urge all users to adhere to the highest ethical standards when using our dataset, including maintaining fairness, transparency, and responsibility in their research. Any usage of the dataset that may lead to harm or pose a detriment to society is strictly **forbidden**.
In terms of dataset maintenance, we pledge our commitment to provide necessary upkeep. This will ensure the continued relevance and usability of the dataset in light of evolving research landscapes. This commitment encompasses regular updates, error checks, and amendments in accordance with field advancements and user feedback.
<h3 id="4-2"> ❗️ 4.2. Limitations</h3>
The current state of the model, obtained via instruction tuning, is a preliminary demonstration. Its capacity to handle real-world, production-grade tasks remains limited. Moreover, there is a vast reservoir of rich instruction data that remains to be collected and exploited.
<h2 id="5">5. About</h2>
<h3 id="5-1"> 📚 5.1 References</h3>
If you use our repository, please cite the following related paper:
```
@inproceedings{fang2023mol,
author = {Yin Fang and
Xiaozhuan Liang and
Ningyu Zhang and
Kangwei Liu and
Rui Huang and
Zhuo Chen and
Xiaohui Fan and
Huajun Chen},
title = {Mol-Instructions: {A} Large-Scale Biomolecular Instruction Dataset
for Large Language Models},
booktitle = {{ICLR}},
publisher = {OpenReview.net},
year = {2024},
url = {https://openreview.net/pdf?id=Tlsdsb6l9n}
}
```
<h3 id="5-2"> 🫱🏻🫲 5.2 Acknowledgements</h3>
We appreciate [LLaMA](https://github.com/facebookresearch/llama), [Huggingface Transformers Llama](https://github.com/huggingface/transformers/tree/main/src/transformers/models/llama), [Alpaca](https://crfm.stanford.edu/2023/03/13/alpaca.html), [Alpaca-LoRA](https://github.com/tloen/alpaca-lora), [Chatbot Service](https://github.com/deep-diver/LLM-As-Chatbot) and many other related works for their open-source contributions. |
vinisebk/paul | ---
license: openrail
---
|
robotitaINC/robotita | ---
license: afl-3.0
---
|
HarryAndrews/KsponSpeech_eval_clean | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: file
dtype: string
- name: transcription
dtype: string
- name: hangul_phonemes
dtype: string
- name: labels
dtype: string
splits:
- name: train
num_bytes: 318255268.0
num_examples: 3000
download_size: 300583036
dataset_size: 318255268.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
P01son/instructions | ---
license: cc-by-4.0
---
|
JovialValley/broadclass_totaldataset_3 | ---
dataset_info:
features:
- name: name
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: label
dtype: string
- name: emotion
dtype: string
- name: emotion_str
dtype: string
splits:
- name: train
num_bytes: 164258465.0
num_examples: 390
- name: test
num_bytes: 41859040.0
num_examples: 97
download_size: 138753084
dataset_size: 206117505.0
---
# Dataset Card for "broadclass_totaldataset_3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Umal-exvc/chocolate-captioned-dataset-100 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 26453719.0
num_examples: 100
download_size: 26029410
dataset_size: 26453719.0
---
# Dataset Card for "chocolate-captioned-dataset-100"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/hatsushimo_azurlane | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of hatsushimo/初霜/初霜 (Azur Lane)
This is the dataset of hatsushimo/初霜/初霜 (Azur Lane), containing 11 images and their tags.
The core tags of this character are `animal_ears, hair_ornament, pink_hair, animal_ear_fluff, cat_ears, hairclip, red_eyes, ahoge, bangs, hair_between_eyes, cat_tail, fang, long_hair, tail, breasts, twintails, cat_girl, small_breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:---------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 11 | 16.35 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hatsushimo_azurlane/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 11 | 10.18 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hatsushimo_azurlane/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 29 | 21.02 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hatsushimo_azurlane/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 11 | 14.99 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hatsushimo_azurlane/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 29 | 29.03 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hatsushimo_azurlane/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/hatsushimo_azurlane',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be discoverable here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 11 |  |  |  |  |  | 1girl, blush, looking_at_viewer, solo, open_mouth, :d, bare_shoulders, kimono, long_sleeves, thighhighs, wide_sleeves, choker, jingle_bell, cleavage, collarbone, garter_straps, obi, pleated_skirt, simple_background, underwear, white_background |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blush | looking_at_viewer | solo | open_mouth | :d | bare_shoulders | kimono | long_sleeves | thighhighs | wide_sleeves | choker | jingle_bell | cleavage | collarbone | garter_straps | obi | pleated_skirt | simple_background | underwear | white_background |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:--------------------|:-------|:-------------|:-----|:-----------------|:---------|:---------------|:-------------|:---------------|:---------|:--------------|:-----------|:-------------|:----------------|:------|:----------------|:--------------------|:------------|:-------------------|
| 0 | 11 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
zz990906/garbage_detection | ---
task_categories:
- image-classification
language:
- en
--- |
AdapterOcean/math_dataset_standardized_cluster_2_alpaca | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 20563494
num_examples: 16335
download_size: 9453917
dataset_size: 20563494
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "math_dataset_standardized_cluster_2_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ostapeno/flanv2_100k_2 | ---
license: apache-2.0
dataset_info:
features:
- name: id
dtype: int64
- name: user
dtype: string
- name: assistant
dtype: string
splits:
- name: train
num_bytes: 143307369
num_examples: 100000
download_size: 85860910
dataset_size: 143307369
---
|
DBQ/Mr.Porter.Product.prices.Taiwan | ---
annotations_creators:
- other
language_creators:
- other
language:
- en
license:
- unknown
multilinguality:
- monolingual
source_datasets:
- original
task_categories:
- text-classification
- image-classification
- feature-extraction
- image-segmentation
- image-to-image
- image-to-text
- object-detection
- summarization
- zero-shot-image-classification
pretty_name: Taiwan - Mr Porter - Product-level price list
tags:
- webscraping
- ecommerce
- Mr Porter
- fashion
- fashion product
- image
- fashion image
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: website_name
dtype: string
- name: competence_date
dtype: string
- name: country_code
dtype: string
- name: currency_code
dtype: string
- name: brand
dtype: string
- name: category1_code
dtype: string
- name: category2_code
dtype: string
- name: category3_code
dtype: string
- name: product_code
dtype: int64
- name: title
dtype: string
- name: itemurl
dtype: string
- name: imageurl
dtype: string
- name: full_price
dtype: float64
- name: price
dtype: float64
- name: full_price_eur
dtype: float64
- name: price_eur
dtype: float64
- name: flg_discount
dtype: int64
splits:
- name: train
num_bytes: 8970329
num_examples: 27315
download_size: 2161710
dataset_size: 8970329
---
# Mr Porter web scraped data
## About the website
The **Ecommerce industry** in Asia Pacific, particularly in **Taiwan**, has been burgeoning due to increased internet penetration and growing consumer trust in online transactions. The industry encompasses various businesses, including fashion retailers like **Mr Porter**. Recognized for its upscale menswear, Mr Porter has established a solid standing in the online shopping landscape of Taiwan. The observed dataset specifically covers Ecommerce product-list page (PLP) data, offering valuable insights into Mr Porter's performance and competitive stance within Taiwan. As Ecommerce sees an upward trend in Taiwan, understanding PLP data becomes essential for crafting successful market strategies.
## Link to **dataset**
[Taiwan - Mr Porter - Product-level price list dataset](https://www.databoutique.com/buy-data-page/Mr%20Porter%20Product-prices%20Taiwan/r/rec2pFzgTY38uwa95)
|
clinicalnlplab/pubmedqa_test | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: query
dtype: string
- name: answer
dtype: string
- name: choices
sequence: string
- name: gold
dtype: int64
splits:
- name: train
num_bytes: 81909
num_examples: 50
- name: valid
num_bytes: 81909
num_examples: 50
- name: test
num_bytes: 834914
num_examples: 500
download_size: 529693
dataset_size: 998732
---
# Dataset Card for "pubmedqa_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Yuqiancici/cici | ---
license: openrail
---
|
Luckyjhg/Geo170K | ---
configs:
- config_name: default
data_files:
- split: qa_tuning
path: data/qa_tuning-*
- split: alignment
path: data/alignment-*
dataset_info:
features:
- name: image
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: qa_tuning
num_bytes: 93111889
num_examples: 117205
- name: alignment
num_bytes: 20241610
num_examples: 60252
download_size: 23754996
dataset_size: 113353499
---
# Dataset Card for "Geo170K"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
rntc/blurb_ncbi_disease_a-0 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: type
dtype: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B
'2': I
splits:
- name: train
num_bytes: 36925119
num_examples: 5424
- name: validation
num_bytes: 6271337
num_examples: 923
- name: test
num_bytes: 6186130
num_examples: 940
download_size: 7120688
dataset_size: 49382586
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
Aeirya/poem-metre | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 51688709
num_examples: 749184
- name: valid
num_bytes: 2943079
num_examples: 42545
- name: test
num_bytes: 2430299
num_examples: 42545
download_size: 27025014
dataset_size: 57062087
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
---
|
Cleudemir/voz6 | ---
license: openrail
---
|
ethz-spylab/harmless-poisoned-10-SUDO | ---
dataset_info:
features:
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 56338412
num_examples: 42537
download_size: 31368327
dataset_size: 56338412
---
# Dataset Card for "harmless-poisoned-10-SUDO"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kpriyanshu256/MultiTabQA-multitable_pretraining-Salesforce-codet5-base_train-latex-123000 | ---
dataset_info:
features:
- name: input_ids
sequence:
sequence: int32
- name: attention_mask
sequence:
sequence: int8
- name: labels
sequence:
sequence: int64
splits:
- name: train
num_bytes: 13336000
num_examples: 1000
download_size: 1026540
dataset_size: 13336000
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
openaccess-ai-collective/1324a75361e69e28a302fc81ce9b94a4 |
Dmenorsz/kevin | ---
license: openrail
---
|
mask-distilled-onesec-cv12-each-chunk-uniq/chunk_244 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 932686364.0
num_examples: 183167
download_size: 952494320
dataset_size: 932686364.0
---
# Dataset Card for "chunk_244"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/svd_girlsfrontline | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of svd/SVD/SVD (Girls' Frontline)
This is the dataset of svd/SVD/SVD (Girls' Frontline), containing 188 images and their tags.
The core tags of this character are `long_hair, hair_ornament, hairclip, breasts, grey_eyes, grey_hair, white_hair, hat, medium_breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 188 | 468.27 MiB | [Download](https://huggingface.co/datasets/CyberHarem/svd_girlsfrontline/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 188 | 180.74 MiB | [Download](https://huggingface.co/datasets/CyberHarem/svd_girlsfrontline/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 438 | 377.11 MiB | [Download](https://huggingface.co/datasets/CyberHarem/svd_girlsfrontline/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 188 | 360.11 MiB | [Download](https://huggingface.co/datasets/CyberHarem/svd_girlsfrontline/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 438 | 670.73 MiB | [Download](https://huggingface.co/datasets/CyberHarem/svd_girlsfrontline/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/svd_girlsfrontline',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be discoverable here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 6 |  |  |  |  |  | 1girl, black_gloves, black_skirt, brown_jacket, military_uniform, solo, white_shirt, fingerless_gloves, looking_away, military_hat, open_mouth, single_thighhigh, uneven_legwear, black_belt, hand_on_hip, pleated_skirt |
| 1 | 11 |  |  |  |  |  | 1girl, black_skirt, brown_jacket, military_uniform, solo, black_gloves, fingerless_gloves, white_shirt, military_hat, smile, black_belt, looking_at_viewer, thighhighs, blush, simple_background, wedding_ring |
| 2 | 7 |  |  |  |  |  | 1girl, bare_shoulders, christmas, santa_costume, santa_hat, solo, bangs, looking_at_viewer, santa_dress, blush, cleavage, official_alternate_costume, red_dress, red_gloves, red_headwear, bow, fingerless_gloves, fur-trimmed_headwear, open_mouth, sitting, smile, white_background, brown_eyes, collarbone, fur-trimmed_dress, fur-trimmed_gloves, simple_background |
| 3 | 11 |  |  |  |  |  | 1girl, cleavage, looking_at_viewer, solo, official_alternate_costume, smile, bangs, china_dress, fur-trimmed_jacket, red_jacket, white_shorts, bare_shoulders, sitting, blush, open_jacket, open_mouth, yellow_eyes, chinese_new_year, closed_mouth, feet_out_of_frame, lantern, shirt, simple_background, standing, white_background |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | black_gloves | black_skirt | brown_jacket | military_uniform | solo | white_shirt | fingerless_gloves | looking_away | military_hat | open_mouth | single_thighhigh | uneven_legwear | black_belt | hand_on_hip | pleated_skirt | smile | looking_at_viewer | thighhighs | blush | simple_background | wedding_ring | bare_shoulders | christmas | santa_costume | santa_hat | bangs | santa_dress | cleavage | official_alternate_costume | red_dress | red_gloves | red_headwear | bow | fur-trimmed_headwear | sitting | white_background | brown_eyes | collarbone | fur-trimmed_dress | fur-trimmed_gloves | china_dress | fur-trimmed_jacket | red_jacket | white_shorts | open_jacket | yellow_eyes | chinese_new_year | closed_mouth | feet_out_of_frame | lantern | shirt | standing |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------|:--------------|:---------------|:-------------------|:-------|:--------------|:--------------------|:---------------|:---------------|:-------------|:-------------------|:-----------------|:-------------|:--------------|:----------------|:--------|:--------------------|:-------------|:--------|:--------------------|:---------------|:-----------------|:------------|:----------------|:------------|:--------|:--------------|:-----------|:-----------------------------|:------------|:-------------|:---------------|:------|:-----------------------|:----------|:-------------------|:-------------|:-------------|:--------------------|:---------------------|:--------------|:---------------------|:-------------|:---------------|:--------------|:--------------|:-------------------|:---------------|:--------------------|:----------|:--------|:-----------|
| 0 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 11 |  |  |  |  |  | X | X | X | X | X | X | X | X | | X | | | | X | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 7 |  |  |  |  |  | X | | | | | X | | X | | | X | | | | | | X | X | | X | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | |
| 3 | 11 |  |  |  |  |  | X | | | | | X | | | | | X | | | | | | X | X | | X | X | | X | | | | X | | X | X | | | | | | X | X | | | | | X | X | X | X | X | X | X | X | X | X | X | X |
|
FaalSa/dbscan9 | ---
dataset_info:
features:
- name: start
dtype: timestamp[s]
- name: target
sequence: float32
- name: item_id
dtype: string
- name: feat_static_cat
sequence: uint64
splits:
- name: train
num_bytes: 81925844
num_examples: 1027
- name: validation
num_bytes: 82418804
num_examples: 1027
- name: test
num_bytes: 82911764
num_examples: 1027
download_size: 16089216
dataset_size: 247256412
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
open-llm-leaderboard/details_h2m__mhm-7b-v1.3-DPO-1 | ---
pretty_name: Evaluation run of h2m/mhm-7b-v1.3-DPO-1
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [h2m/mhm-7b-v1.3-DPO-1](https://huggingface.co/h2m/mhm-7b-v1.3-DPO-1) on the [Open\
\ LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_h2m__mhm-7b-v1.3-DPO-1\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-01-17T06:45:52.399769](https://huggingface.co/datasets/open-llm-leaderboard/details_h2m__mhm-7b-v1.3-DPO-1/blob/main/results_2024-01-17T06-45-52.399769.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.45614037821682546,\n\
\ \"acc_stderr\": 0.03454781614962824,\n \"acc_norm\": 0.46207531178088435,\n\
\ \"acc_norm_stderr\": 0.035314919445415614,\n \"mc1\": 0.29253365973072215,\n\
\ \"mc1_stderr\": 0.015925597445286165,\n \"mc2\": 0.4588457171247393,\n\
\ \"mc2_stderr\": 0.015385039501663943\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.46245733788395904,\n \"acc_stderr\": 0.01457014449507558,\n\
\ \"acc_norm\": 0.49573378839590443,\n \"acc_norm_stderr\": 0.014610858923956948\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5036845249950209,\n\
\ \"acc_stderr\": 0.00498964592981145,\n \"acc_norm\": 0.6810396335391357,\n\
\ \"acc_norm_stderr\": 0.004651211311633843\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.23,\n \"acc_stderr\": 0.042295258468165044,\n \
\ \"acc_norm\": 0.23,\n \"acc_norm_stderr\": 0.042295258468165044\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.4222222222222222,\n\
\ \"acc_stderr\": 0.042667634040995814,\n \"acc_norm\": 0.4222222222222222,\n\
\ \"acc_norm_stderr\": 0.042667634040995814\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.46710526315789475,\n \"acc_stderr\": 0.040601270352363966,\n\
\ \"acc_norm\": 0.46710526315789475,\n \"acc_norm_stderr\": 0.040601270352363966\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.47,\n\
\ \"acc_stderr\": 0.050161355804659205,\n \"acc_norm\": 0.47,\n \
\ \"acc_norm_stderr\": 0.050161355804659205\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.4830188679245283,\n \"acc_stderr\": 0.030755120364119905,\n\
\ \"acc_norm\": 0.4830188679245283,\n \"acc_norm_stderr\": 0.030755120364119905\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.3541666666666667,\n\
\ \"acc_stderr\": 0.039994111357535424,\n \"acc_norm\": 0.3541666666666667,\n\
\ \"acc_norm_stderr\": 0.039994111357535424\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.47,\n \"acc_stderr\": 0.050161355804659205,\n \"acc_norm\": 0.47,\n\
\ \"acc_norm_stderr\": 0.050161355804659205\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.4393063583815029,\n\
\ \"acc_stderr\": 0.03784271932887467,\n \"acc_norm\": 0.4393063583815029,\n\
\ \"acc_norm_stderr\": 0.03784271932887467\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.27450980392156865,\n \"acc_stderr\": 0.044405219061793275,\n\
\ \"acc_norm\": 0.27450980392156865,\n \"acc_norm_stderr\": 0.044405219061793275\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.53,\n \"acc_stderr\": 0.050161355804659205,\n \"acc_norm\": 0.53,\n\
\ \"acc_norm_stderr\": 0.050161355804659205\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.33191489361702126,\n \"acc_stderr\": 0.030783736757745664,\n\
\ \"acc_norm\": 0.33191489361702126,\n \"acc_norm_stderr\": 0.030783736757745664\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.34210526315789475,\n\
\ \"acc_stderr\": 0.04462917535336936,\n \"acc_norm\": 0.34210526315789475,\n\
\ \"acc_norm_stderr\": 0.04462917535336936\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.4206896551724138,\n \"acc_stderr\": 0.0411391498118926,\n\
\ \"acc_norm\": 0.4206896551724138,\n \"acc_norm_stderr\": 0.0411391498118926\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.3306878306878307,\n \"acc_stderr\": 0.024229965298425075,\n \"\
acc_norm\": 0.3306878306878307,\n \"acc_norm_stderr\": 0.024229965298425075\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.29365079365079366,\n\
\ \"acc_stderr\": 0.04073524322147126,\n \"acc_norm\": 0.29365079365079366,\n\
\ \"acc_norm_stderr\": 0.04073524322147126\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.28,\n \"acc_stderr\": 0.045126085985421276,\n \
\ \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.045126085985421276\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.49032258064516127,\n \"acc_stderr\": 0.02843867799890955,\n \"\
acc_norm\": 0.49032258064516127,\n \"acc_norm_stderr\": 0.02843867799890955\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.3448275862068966,\n \"acc_stderr\": 0.03344283744280458,\n \"\
acc_norm\": 0.3448275862068966,\n \"acc_norm_stderr\": 0.03344283744280458\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.52,\n \"acc_stderr\": 0.05021167315686779,\n \"acc_norm\"\
: 0.52,\n \"acc_norm_stderr\": 0.05021167315686779\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.5878787878787879,\n \"acc_stderr\": 0.03843566993588716,\n\
\ \"acc_norm\": 0.5878787878787879,\n \"acc_norm_stderr\": 0.03843566993588716\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.5959595959595959,\n \"acc_stderr\": 0.03496130972056128,\n \"\
acc_norm\": 0.5959595959595959,\n \"acc_norm_stderr\": 0.03496130972056128\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.6113989637305699,\n \"acc_stderr\": 0.035177397963731316,\n\
\ \"acc_norm\": 0.6113989637305699,\n \"acc_norm_stderr\": 0.035177397963731316\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.4358974358974359,\n \"acc_stderr\": 0.025141801511177495,\n\
\ \"acc_norm\": 0.4358974358974359,\n \"acc_norm_stderr\": 0.025141801511177495\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.24074074074074073,\n \"acc_stderr\": 0.026067159222275794,\n \
\ \"acc_norm\": 0.24074074074074073,\n \"acc_norm_stderr\": 0.026067159222275794\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.3865546218487395,\n \"acc_stderr\": 0.0316314580755238,\n \
\ \"acc_norm\": 0.3865546218487395,\n \"acc_norm_stderr\": 0.0316314580755238\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.304635761589404,\n \"acc_stderr\": 0.03757949922943343,\n \"acc_norm\"\
: 0.304635761589404,\n \"acc_norm_stderr\": 0.03757949922943343\n },\n\
\ \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.6036697247706422,\n\
\ \"acc_stderr\": 0.02097146994790053,\n \"acc_norm\": 0.6036697247706422,\n\
\ \"acc_norm_stderr\": 0.02097146994790053\n },\n \"harness|hendrycksTest-high_school_statistics|5\"\
: {\n \"acc\": 0.41203703703703703,\n \"acc_stderr\": 0.03356787758160835,\n\
\ \"acc_norm\": 0.41203703703703703,\n \"acc_norm_stderr\": 0.03356787758160835\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.5735294117647058,\n \"acc_stderr\": 0.034711579079534254,\n \"\
acc_norm\": 0.5735294117647058,\n \"acc_norm_stderr\": 0.034711579079534254\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.6413502109704642,\n \"acc_stderr\": 0.031219569445301843,\n \
\ \"acc_norm\": 0.6413502109704642,\n \"acc_norm_stderr\": 0.031219569445301843\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.5112107623318386,\n\
\ \"acc_stderr\": 0.033549366530984746,\n \"acc_norm\": 0.5112107623318386,\n\
\ \"acc_norm_stderr\": 0.033549366530984746\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.5267175572519084,\n \"acc_stderr\": 0.04379024936553894,\n\
\ \"acc_norm\": 0.5267175572519084,\n \"acc_norm_stderr\": 0.04379024936553894\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.6198347107438017,\n \"acc_stderr\": 0.04431324501968431,\n \"\
acc_norm\": 0.6198347107438017,\n \"acc_norm_stderr\": 0.04431324501968431\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.5370370370370371,\n\
\ \"acc_stderr\": 0.04820403072760627,\n \"acc_norm\": 0.5370370370370371,\n\
\ \"acc_norm_stderr\": 0.04820403072760627\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.50920245398773,\n \"acc_stderr\": 0.03927705600787443,\n\
\ \"acc_norm\": 0.50920245398773,\n \"acc_norm_stderr\": 0.03927705600787443\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.375,\n\
\ \"acc_stderr\": 0.04595091388086298,\n \"acc_norm\": 0.375,\n \
\ \"acc_norm_stderr\": 0.04595091388086298\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.6116504854368932,\n \"acc_stderr\": 0.048257293373563895,\n\
\ \"acc_norm\": 0.6116504854368932,\n \"acc_norm_stderr\": 0.048257293373563895\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.6752136752136753,\n\
\ \"acc_stderr\": 0.03067902276549883,\n \"acc_norm\": 0.6752136752136753,\n\
\ \"acc_norm_stderr\": 0.03067902276549883\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.49,\n \"acc_stderr\": 0.05024183937956911,\n \
\ \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956911\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.5887611749680716,\n\
\ \"acc_stderr\": 0.017595971908056573,\n \"acc_norm\": 0.5887611749680716,\n\
\ \"acc_norm_stderr\": 0.017595971908056573\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.49710982658959535,\n \"acc_stderr\": 0.026918645383239015,\n\
\ \"acc_norm\": 0.49710982658959535,\n \"acc_norm_stderr\": 0.026918645383239015\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.24692737430167597,\n\
\ \"acc_stderr\": 0.014422292204808835,\n \"acc_norm\": 0.24692737430167597,\n\
\ \"acc_norm_stderr\": 0.014422292204808835\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.477124183006536,\n \"acc_stderr\": 0.028599936776089786,\n\
\ \"acc_norm\": 0.477124183006536,\n \"acc_norm_stderr\": 0.028599936776089786\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.48231511254019294,\n\
\ \"acc_stderr\": 0.02838032284907713,\n \"acc_norm\": 0.48231511254019294,\n\
\ \"acc_norm_stderr\": 0.02838032284907713\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.5154320987654321,\n \"acc_stderr\": 0.027807490044276198,\n\
\ \"acc_norm\": 0.5154320987654321,\n \"acc_norm_stderr\": 0.027807490044276198\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.36524822695035464,\n \"acc_stderr\": 0.02872386385328127,\n \
\ \"acc_norm\": 0.36524822695035464,\n \"acc_norm_stderr\": 0.02872386385328127\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.3455019556714472,\n\
\ \"acc_stderr\": 0.012145303004087206,\n \"acc_norm\": 0.3455019556714472,\n\
\ \"acc_norm_stderr\": 0.012145303004087206\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.45955882352941174,\n \"acc_stderr\": 0.030273325077345755,\n\
\ \"acc_norm\": 0.45955882352941174,\n \"acc_norm_stderr\": 0.030273325077345755\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.39869281045751637,\n \"acc_stderr\": 0.019808281317449855,\n \
\ \"acc_norm\": 0.39869281045751637,\n \"acc_norm_stderr\": 0.019808281317449855\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.4909090909090909,\n\
\ \"acc_stderr\": 0.04788339768702861,\n \"acc_norm\": 0.4909090909090909,\n\
\ \"acc_norm_stderr\": 0.04788339768702861\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.5306122448979592,\n \"acc_stderr\": 0.031949171367580624,\n\
\ \"acc_norm\": 0.5306122448979592,\n \"acc_norm_stderr\": 0.031949171367580624\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.6567164179104478,\n\
\ \"acc_stderr\": 0.03357379665433431,\n \"acc_norm\": 0.6567164179104478,\n\
\ \"acc_norm_stderr\": 0.03357379665433431\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.7,\n \"acc_stderr\": 0.04605661864718381,\n \
\ \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.04605661864718381\n },\n\
\ \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4036144578313253,\n\
\ \"acc_stderr\": 0.038194861407583984,\n \"acc_norm\": 0.4036144578313253,\n\
\ \"acc_norm_stderr\": 0.038194861407583984\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.5730994152046783,\n \"acc_stderr\": 0.03793620616529917,\n\
\ \"acc_norm\": 0.5730994152046783,\n \"acc_norm_stderr\": 0.03793620616529917\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.29253365973072215,\n\
\ \"mc1_stderr\": 0.015925597445286165,\n \"mc2\": 0.4588457171247393,\n\
\ \"mc2_stderr\": 0.015385039501663943\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.6203630623520127,\n \"acc_stderr\": 0.013639245403711156\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.15238817285822592,\n \
\ \"acc_stderr\": 0.009899572254794209\n }\n}\n```"
repo_url: https://huggingface.co/nbeerbower/Maidphin-Kunoichi-7B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|arc:challenge|25_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|gsm8k|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hellaswag|10_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-17T06-45-52.399769.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-17T06-45-52.399769.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- '**/details_harness|winogrande|5_2024-01-17T06-45-52.399769.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-01-17T06-45-52.399769.parquet'
- config_name: results
data_files:
- split: 2024_01_17T06_45_52.399769
path:
- results_2024-01-17T06-45-52.399769.parquet
- split: latest
path:
- results_2024-01-17T06-45-52.399769.parquet
---
# Dataset Card for Evaluation run of h2m/mhm-7b-v1.3-DPO-1
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [h2m/mhm-7b-v1.3-DPO-1](https://huggingface.co/h2m/mhm-7b-v1.3-DPO-1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_h2m__mhm-7b-v1.3-DPO-1",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-01-17T06:45:52.399769](https://huggingface.co/datasets/open-llm-leaderboard/details_h2m__mhm-7b-v1.3-DPO-1/blob/main/results_2024-01-17T06-45-52.399769.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.45614037821682546,
"acc_stderr": 0.03454781614962824,
"acc_norm": 0.46207531178088435,
"acc_norm_stderr": 0.035314919445415614,
"mc1": 0.29253365973072215,
"mc1_stderr": 0.015925597445286165,
"mc2": 0.4588457171247393,
"mc2_stderr": 0.015385039501663943
},
"harness|arc:challenge|25": {
"acc": 0.46245733788395904,
"acc_stderr": 0.01457014449507558,
"acc_norm": 0.49573378839590443,
"acc_norm_stderr": 0.014610858923956948
},
"harness|hellaswag|10": {
"acc": 0.5036845249950209,
"acc_stderr": 0.00498964592981145,
"acc_norm": 0.6810396335391357,
"acc_norm_stderr": 0.004651211311633843
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.23,
"acc_stderr": 0.042295258468165044,
"acc_norm": 0.23,
"acc_norm_stderr": 0.042295258468165044
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.4222222222222222,
"acc_stderr": 0.042667634040995814,
"acc_norm": 0.4222222222222222,
"acc_norm_stderr": 0.042667634040995814
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.46710526315789475,
"acc_stderr": 0.040601270352363966,
"acc_norm": 0.46710526315789475,
"acc_norm_stderr": 0.040601270352363966
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.47,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.47,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.4830188679245283,
"acc_stderr": 0.030755120364119905,
"acc_norm": 0.4830188679245283,
"acc_norm_stderr": 0.030755120364119905
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.3541666666666667,
"acc_stderr": 0.039994111357535424,
"acc_norm": 0.3541666666666667,
"acc_norm_stderr": 0.039994111357535424
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.47,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.47,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.4393063583815029,
"acc_stderr": 0.03784271932887467,
"acc_norm": 0.4393063583815029,
"acc_norm_stderr": 0.03784271932887467
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.27450980392156865,
"acc_stderr": 0.044405219061793275,
"acc_norm": 0.27450980392156865,
"acc_norm_stderr": 0.044405219061793275
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.53,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.53,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.33191489361702126,
"acc_stderr": 0.030783736757745664,
"acc_norm": 0.33191489361702126,
"acc_norm_stderr": 0.030783736757745664
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.34210526315789475,
"acc_stderr": 0.04462917535336936,
"acc_norm": 0.34210526315789475,
"acc_norm_stderr": 0.04462917535336936
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.4206896551724138,
"acc_stderr": 0.0411391498118926,
"acc_norm": 0.4206896551724138,
"acc_norm_stderr": 0.0411391498118926
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3306878306878307,
"acc_stderr": 0.024229965298425075,
"acc_norm": 0.3306878306878307,
"acc_norm_stderr": 0.024229965298425075
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.29365079365079366,
"acc_stderr": 0.04073524322147126,
"acc_norm": 0.29365079365079366,
"acc_norm_stderr": 0.04073524322147126
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.28,
"acc_stderr": 0.045126085985421276,
"acc_norm": 0.28,
"acc_norm_stderr": 0.045126085985421276
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.49032258064516127,
"acc_stderr": 0.02843867799890955,
"acc_norm": 0.49032258064516127,
"acc_norm_stderr": 0.02843867799890955
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.3448275862068966,
"acc_stderr": 0.03344283744280458,
"acc_norm": 0.3448275862068966,
"acc_norm_stderr": 0.03344283744280458
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.52,
"acc_stderr": 0.05021167315686779,
"acc_norm": 0.52,
"acc_norm_stderr": 0.05021167315686779
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.5878787878787879,
"acc_stderr": 0.03843566993588716,
"acc_norm": 0.5878787878787879,
"acc_norm_stderr": 0.03843566993588716
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.5959595959595959,
"acc_stderr": 0.03496130972056128,
"acc_norm": 0.5959595959595959,
"acc_norm_stderr": 0.03496130972056128
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.6113989637305699,
"acc_stderr": 0.035177397963731316,
"acc_norm": 0.6113989637305699,
"acc_norm_stderr": 0.035177397963731316
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.4358974358974359,
"acc_stderr": 0.025141801511177495,
"acc_norm": 0.4358974358974359,
"acc_norm_stderr": 0.025141801511177495
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.24074074074074073,
"acc_stderr": 0.026067159222275794,
"acc_norm": 0.24074074074074073,
"acc_norm_stderr": 0.026067159222275794
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.3865546218487395,
"acc_stderr": 0.0316314580755238,
"acc_norm": 0.3865546218487395,
"acc_norm_stderr": 0.0316314580755238
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.304635761589404,
"acc_stderr": 0.03757949922943343,
"acc_norm": 0.304635761589404,
"acc_norm_stderr": 0.03757949922943343
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.6036697247706422,
"acc_stderr": 0.02097146994790053,
"acc_norm": 0.6036697247706422,
"acc_norm_stderr": 0.02097146994790053
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.41203703703703703,
"acc_stderr": 0.03356787758160835,
"acc_norm": 0.41203703703703703,
"acc_norm_stderr": 0.03356787758160835
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.5735294117647058,
"acc_stderr": 0.034711579079534254,
"acc_norm": 0.5735294117647058,
"acc_norm_stderr": 0.034711579079534254
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.6413502109704642,
"acc_stderr": 0.031219569445301843,
"acc_norm": 0.6413502109704642,
"acc_norm_stderr": 0.031219569445301843
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.5112107623318386,
"acc_stderr": 0.033549366530984746,
"acc_norm": 0.5112107623318386,
"acc_norm_stderr": 0.033549366530984746
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.5267175572519084,
"acc_stderr": 0.04379024936553894,
"acc_norm": 0.5267175572519084,
"acc_norm_stderr": 0.04379024936553894
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.6198347107438017,
"acc_stderr": 0.04431324501968431,
"acc_norm": 0.6198347107438017,
"acc_norm_stderr": 0.04431324501968431
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.5370370370370371,
"acc_stderr": 0.04820403072760627,
"acc_norm": 0.5370370370370371,
"acc_norm_stderr": 0.04820403072760627
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.50920245398773,
"acc_stderr": 0.03927705600787443,
"acc_norm": 0.50920245398773,
"acc_norm_stderr": 0.03927705600787443
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.375,
"acc_stderr": 0.04595091388086298,
"acc_norm": 0.375,
"acc_norm_stderr": 0.04595091388086298
},
"harness|hendrycksTest-management|5": {
"acc": 0.6116504854368932,
"acc_stderr": 0.048257293373563895,
"acc_norm": 0.6116504854368932,
"acc_norm_stderr": 0.048257293373563895
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.6752136752136753,
"acc_stderr": 0.03067902276549883,
"acc_norm": 0.6752136752136753,
"acc_norm_stderr": 0.03067902276549883
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.49,
"acc_stderr": 0.05024183937956911,
"acc_norm": 0.49,
"acc_norm_stderr": 0.05024183937956911
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.5887611749680716,
"acc_stderr": 0.017595971908056573,
"acc_norm": 0.5887611749680716,
"acc_norm_stderr": 0.017595971908056573
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.49710982658959535,
"acc_stderr": 0.026918645383239015,
"acc_norm": 0.49710982658959535,
"acc_norm_stderr": 0.026918645383239015
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.24692737430167597,
"acc_stderr": 0.014422292204808835,
"acc_norm": 0.24692737430167597,
"acc_norm_stderr": 0.014422292204808835
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.477124183006536,
"acc_stderr": 0.028599936776089786,
"acc_norm": 0.477124183006536,
"acc_norm_stderr": 0.028599936776089786
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.48231511254019294,
"acc_stderr": 0.02838032284907713,
"acc_norm": 0.48231511254019294,
"acc_norm_stderr": 0.02838032284907713
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.5154320987654321,
"acc_stderr": 0.027807490044276198,
"acc_norm": 0.5154320987654321,
"acc_norm_stderr": 0.027807490044276198
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.36524822695035464,
"acc_stderr": 0.02872386385328127,
"acc_norm": 0.36524822695035464,
"acc_norm_stderr": 0.02872386385328127
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.3455019556714472,
"acc_stderr": 0.012145303004087206,
"acc_norm": 0.3455019556714472,
"acc_norm_stderr": 0.012145303004087206
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.45955882352941174,
"acc_stderr": 0.030273325077345755,
"acc_norm": 0.45955882352941174,
"acc_norm_stderr": 0.030273325077345755
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.39869281045751637,
"acc_stderr": 0.019808281317449855,
"acc_norm": 0.39869281045751637,
"acc_norm_stderr": 0.019808281317449855
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.4909090909090909,
"acc_stderr": 0.04788339768702861,
"acc_norm": 0.4909090909090909,
"acc_norm_stderr": 0.04788339768702861
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.5306122448979592,
"acc_stderr": 0.031949171367580624,
"acc_norm": 0.5306122448979592,
"acc_norm_stderr": 0.031949171367580624
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.6567164179104478,
"acc_stderr": 0.03357379665433431,
"acc_norm": 0.6567164179104478,
"acc_norm_stderr": 0.03357379665433431
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.7,
"acc_stderr": 0.04605661864718381,
"acc_norm": 0.7,
"acc_norm_stderr": 0.04605661864718381
},
"harness|hendrycksTest-virology|5": {
"acc": 0.4036144578313253,
"acc_stderr": 0.038194861407583984,
"acc_norm": 0.4036144578313253,
"acc_norm_stderr": 0.038194861407583984
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.5730994152046783,
"acc_stderr": 0.03793620616529917,
"acc_norm": 0.5730994152046783,
"acc_norm_stderr": 0.03793620616529917
},
"harness|truthfulqa:mc|0": {
"mc1": 0.29253365973072215,
"mc1_stderr": 0.015925597445286165,
"mc2": 0.4588457171247393,
"mc2_stderr": 0.015385039501663943
},
"harness|winogrande|5": {
"acc": 0.6203630623520127,
"acc_stderr": 0.013639245403711156
},
"harness|gsm8k|5": {
"acc": 0.15238817285822592,
"acc_stderr": 0.009899572254794209
}
}
```
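For quick inspection, the nested results dict above can be flattened into `(task, metric, value)` rows with plain Python; a minimal sketch (the `flatten_results` helper below is illustrative, not part of the dataset):

```python
def flatten_results(results):
    """Flatten {task: {metric: value}} into sorted (task, metric, value) rows."""
    return sorted(
        (task, metric, value)
        for task, metrics in results.items()
        for metric, value in metrics.items()
    )

# Small subset of the results shown above:
results = {
    "harness|winogrande|5": {"acc": 0.6203630623520127},
    "harness|gsm8k|5": {"acc": 0.15238817285822592},
}
for task, metric, value in flatten_results(results):
    print(f"{task}\t{metric}\t{value:.4f}")
```

The same helper works on the full dict loaded from the `results` parquet/JSON files.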
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
qgallouedec/prj_gia_dataset_metaworld_stick_push_v2_1111 | ---
library_name: gia
tags:
- deep-reinforcement-learning
- reinforcement-learning
- gia
- multi-task
- multi-modal
- imitation-learning
- offline-reinforcement-learning
---
An imitation learning dataset for the stick-push-v2 environment, sampled from the stick-push-v2 policy.
This dataset was created as part of the Generally Intelligent Agents (GIA) project: https://github.com/huggingface/gia
## Load dataset
First, clone it with
```sh
git clone https://huggingface.co/datasets/qgallouedec/prj_gia_dataset_metaworld_stick_push_v2_1111
```
Then, load it with
```python
import numpy as np
dataset = np.load("prj_gia_dataset_metaworld_stick_push_v2_1111/dataset.npy", allow_pickle=True).item()
print(dataset.keys()) # dict_keys(['observations', 'actions', 'dones', 'rewards'])
```
|
BangumiBase/mushokutensei | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Mushoku Tensei
This is the image base of bangumi Mushoku Tensei, we detected 87 characters, 8375 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned; they may actually be noisy.** If you intend to manually train models using this dataset, we recommend performing necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 117 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 19 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 56 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 194 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 42 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 68 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 198 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 2241 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 23 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 101 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 180 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 54 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 22 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 54 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 64 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 261 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 23 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 35 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 23 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 180 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 67 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 28 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 46 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 39 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| 24 | 31 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 59 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| 26 | 206 | [Download](26/dataset.zip) |  |  |  |  |  |  |  |  |
| 27 | 66 | [Download](27/dataset.zip) |  |  |  |  |  |  |  |  |
| 28 | 137 | [Download](28/dataset.zip) |  |  |  |  |  |  |  |  |
| 29 | 82 | [Download](29/dataset.zip) |  |  |  |  |  |  |  |  |
| 30 | 32 | [Download](30/dataset.zip) |  |  |  |  |  |  |  |  |
| 31 | 31 | [Download](31/dataset.zip) |  |  |  |  |  |  |  |  |
| 32 | 9 | [Download](32/dataset.zip) |  |  |  |  |  |  |  |  |
| 33 | 26 | [Download](33/dataset.zip) |  |  |  |  |  |  |  |  |
| 34 | 73 | [Download](34/dataset.zip) |  |  |  |  |  |  |  |  |
| 35 | 17 | [Download](35/dataset.zip) |  |  |  |  |  |  |  |  |
| 36 | 92 | [Download](36/dataset.zip) |  |  |  |  |  |  |  |  |
| 37 | 392 | [Download](37/dataset.zip) |  |  |  |  |  |  |  |  |
| 38 | 18 | [Download](38/dataset.zip) |  |  |  |  |  |  |  |  |
| 39 | 61 | [Download](39/dataset.zip) |  |  |  |  |  |  |  |  |
| 40 | 55 | [Download](40/dataset.zip) |  |  |  |  |  |  |  |  |
| 41 | 20 | [Download](41/dataset.zip) |  |  |  |  |  |  |  |  |
| 42 | 11 | [Download](42/dataset.zip) |  |  |  |  |  |  |  |  |
| 43 | 22 | [Download](43/dataset.zip) |  |  |  |  |  |  |  |  |
| 44 | 49 | [Download](44/dataset.zip) |  |  |  |  |  |  |  |  |
| 45 | 43 | [Download](45/dataset.zip) |  |  |  |  |  |  |  |  |
| 46 | 116 | [Download](46/dataset.zip) |  |  |  |  |  |  |  |  |
| 47 | 33 | [Download](47/dataset.zip) |  |  |  |  |  |  |  |  |
| 48 | 19 | [Download](48/dataset.zip) |  |  |  |  |  |  |  |  |
| 49 | 89 | [Download](49/dataset.zip) |  |  |  |  |  |  |  |  |
| 50 | 18 | [Download](50/dataset.zip) |  |  |  |  |  |  |  |  |
| 51 | 11 | [Download](51/dataset.zip) |  |  |  |  |  |  |  |  |
| 52 | 34 | [Download](52/dataset.zip) |  |  |  |  |  |  |  |  |
| 53 | 128 | [Download](53/dataset.zip) |  |  |  |  |  |  |  |  |
| 54 | 8 | [Download](54/dataset.zip) |  |  |  |  |  |  |  |  |
| 55 | 40 | [Download](55/dataset.zip) |  |  |  |  |  |  |  |  |
| 56 | 43 | [Download](56/dataset.zip) |  |  |  |  |  |  |  |  |
| 57 | 8 | [Download](57/dataset.zip) |  |  |  |  |  |  |  |  |
| 58 | 19 | [Download](58/dataset.zip) |  |  |  |  |  |  |  |  |
| 59 | 270 | [Download](59/dataset.zip) |  |  |  |  |  |  |  |  |
| 60 | 28 | [Download](60/dataset.zip) |  |  |  |  |  |  |  |  |
| 61 | 11 | [Download](61/dataset.zip) |  |  |  |  |  |  |  |  |
| 62 | 20 | [Download](62/dataset.zip) |  |  |  |  |  |  |  |  |
| 63 | 16 | [Download](63/dataset.zip) |  |  |  |  |  |  |  |  |
| 64 | 23 | [Download](64/dataset.zip) |  |  |  |  |  |  |  |  |
| 65 | 20 | [Download](65/dataset.zip) |  |  |  |  |  |  |  |  |
| 66 | 62 | [Download](66/dataset.zip) |  |  |  |  |  |  |  |  |
| 67 | 8 | [Download](67/dataset.zip) |  |  |  |  |  |  |  |  |
| 68 | 95 | [Download](68/dataset.zip) |  |  |  |  |  |  |  |  |
| 69 | 34 | [Download](69/dataset.zip) |  |  |  |  |  |  |  |  |
| 70 | 46 | [Download](70/dataset.zip) |  |  |  |  |  |  |  |  |
| 71 | 12 | [Download](71/dataset.zip) |  |  |  |  |  |  |  |  |
| 72 | 12 | [Download](72/dataset.zip) |  |  |  |  |  |  |  |  |
| 73 | 11 | [Download](73/dataset.zip) |  |  |  |  |  |  |  |  |
| 74 | 7 | [Download](74/dataset.zip) |  |  |  |  |  |  |  | N/A |
| 75 | 13 | [Download](75/dataset.zip) |  |  |  |  |  |  |  |  |
| 76 | 84 | [Download](76/dataset.zip) |  |  |  |  |  |  |  |  |
| 77 | 26 | [Download](77/dataset.zip) |  |  |  |  |  |  |  |  |
| 78 | 83 | [Download](78/dataset.zip) |  |  |  |  |  |  |  |  |
| 79 | 766 | [Download](79/dataset.zip) |  |  |  |  |  |  |  |  |
| 80 | 23 | [Download](80/dataset.zip) |  |  |  |  |  |  |  |  |
| 81 | 19 | [Download](81/dataset.zip) |  |  |  |  |  |  |  |  |
| 82 | 14 | [Download](82/dataset.zip) |  |  |  |  |  |  |  |  |
| 83 | 69 | [Download](83/dataset.zip) |  |  |  |  |  |  |  |  |
| 84 | 9 | [Download](84/dataset.zip) |  |  |  |  |  |  |  |  |
| 85 | 5 | [Download](85/dataset.zip) |  |  |  |  |  | N/A | N/A | N/A |
| noise | 356 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
GEM-submissions/lewtun__this-is-a-test__1646314818 | ---
benchmark: gem
type: prediction
submission_name: This is a test
tags:
- evaluation
- benchmark
---
# GEM Submission
Submission name: This is a test
|
wbxlala/Epilepsy_seizure_prediction | ---
license: cc-by-4.0
---
|
open-llm-leaderboard/details_LeoLM__leo-hessianai-7b-chat | ---
pretty_name: Evaluation run of LeoLM/leo-hessianai-7b-chat
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [LeoLM/leo-hessianai-7b-chat](https://huggingface.co/LeoLM/leo-hessianai-7b-chat)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_LeoLM__leo-hessianai-7b-chat\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-25T06:19:05.012690](https://huggingface.co/datasets/open-llm-leaderboard/details_LeoLM__leo-hessianai-7b-chat/blob/main/results_2023-10-25T06-19-05.012690.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.007235738255033557,\n\
\ \"em_stderr\": 0.0008679688570178599,\n \"f1\": 0.06527369966442945,\n\
\ \"f1_stderr\": 0.0015637825138587918,\n \"acc\": 0.3754219341743793,\n\
\ \"acc_stderr\": 0.009489449224983773\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.007235738255033557,\n \"em_stderr\": 0.0008679688570178599,\n\
\ \"f1\": 0.06527369966442945,\n \"f1_stderr\": 0.0015637825138587918\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.05155420773313116,\n \
\ \"acc_stderr\": 0.006090887955262826\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.6992896606156275,\n \"acc_stderr\": 0.01288801049470472\n\
\ }\n}\n```"
repo_url: https://huggingface.co/LeoLM/leo-hessianai-7b-chat
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|arc:challenge|25_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_25T06_19_05.012690
path:
- '**/details_harness|drop|3_2023-10-25T06-19-05.012690.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-25T06-19-05.012690.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_25T06_19_05.012690
path:
- '**/details_harness|gsm8k|5_2023-10-25T06-19-05.012690.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-25T06-19-05.012690.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hellaswag|10_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-04T00-28-30.897823.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-04T00-28-30.897823.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-04T00-28-30.897823.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_25T06_19_05.012690
path:
- '**/details_harness|winogrande|5_2023-10-25T06-19-05.012690.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-25T06-19-05.012690.parquet'
- config_name: results
data_files:
- split: 2023_10_04T00_28_30.897823
path:
- results_2023-10-04T00-28-30.897823.parquet
- split: 2023_10_25T06_19_05.012690
path:
- results_2023-10-25T06-19-05.012690.parquet
- split: latest
path:
- results_2023-10-25T06-19-05.012690.parquet
---
# Dataset Card for Evaluation run of LeoLM/leo-hessianai-7b-chat
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/LeoLM/leo-hessianai-7b-chat
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [LeoLM/leo-hessianai-7b-chat](https://huggingface.co/LeoLM/leo-hessianai-7b-chat) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_LeoLM__leo-hessianai-7b-chat",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-25T06:19:05.012690](https://huggingface.co/datasets/open-llm-leaderboard/details_LeoLM__leo-hessianai-7b-chat/blob/main/results_2023-10-25T06-19-05.012690.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.007235738255033557,
"em_stderr": 0.0008679688570178599,
"f1": 0.06527369966442945,
"f1_stderr": 0.0015637825138587918,
"acc": 0.3754219341743793,
"acc_stderr": 0.009489449224983773
},
"harness|drop|3": {
"em": 0.007235738255033557,
"em_stderr": 0.0008679688570178599,
"f1": 0.06527369966442945,
"f1_stderr": 0.0015637825138587918
},
"harness|gsm8k|5": {
"acc": 0.05155420773313116,
"acc_stderr": 0.006090887955262826
},
"harness|winogrande|5": {
"acc": 0.6992896606156275,
"acc_stderr": 0.01288801049470472
}
}
```
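The aggregated metrics above can also be inspected programmatically. A minimal sketch, using the accuracy values copied from the JSON shown, that collects the per-task accuracy while skipping the "all" aggregate (the dict literal below is just the subset of the results needed for illustration):

```python
# Subset of the latest-results JSON above, copied for illustration.
results = {
    "all": {"acc": 0.3754219341743793},
    "harness|gsm8k|5": {"acc": 0.05155420773313116},
    "harness|winogrande|5": {"acc": 0.6992896606156275},
}

# Keep the accuracy of each individual harness task, dropping the aggregate.
task_acc = {
    task: metrics["acc"]
    for task, metrics in results.items()
    if task != "all" and "acc" in metrics
}

for task, acc in sorted(task_acc.items()):
    print(f"{task}: {acc:.4f}")
```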
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
goodfellowliu/OST | ---
license: apache-2.0
---
|
davanstrien/imdb_test_push | ---
dataset_info:
features:
- name: annotation_id
dtype: int64
- name: annotator
dtype: int64
- name: created_at
dtype: string
- name: id
dtype: int64
- name: lead_time
dtype: float64
- name: sentiment
dtype:
class_label:
names:
'0': Negative
'1': Positive
- name: text
dtype: string
- name: updated_at
dtype: string
splits:
- name: train
num_bytes: 5885
num_examples: 4
download_size: 0
dataset_size: 5885
tags:
- label-studio-exported
size_categories:
- 1K<n<10K
---
# Dataset Card for "imdb_test_push"
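The `sentiment` column is declared as a `class_label` feature, so it is stored as an integer index into the declared names. A minimal sketch of that mapping, assuming the label order shown in the YAML above (`0 → Negative`, `1 → Positive`):

```python
# Label names in the order declared in the card's class_label block.
label_names = ["Negative", "Positive"]

def int2str(label_id: int) -> str:
    """Map a stored integer label back to its name."""
    return label_names[label_id]

def str2int(name: str) -> int:
    """Map a label name to the integer stored in the dataset."""
    return label_names.index(name)

print(int2str(1))           # Positive
print(str2int("Negative"))  # 0
```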
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |