datasetId (string, 2–117 chars) | card (string, 19–1.01M chars) |
|---|---|
BrookBvn/finali | ---
license: openrail
---
|
open-llm-leaderboard/details_vanillaOVO__Beagle_Turdus | ---
pretty_name: Evaluation run of vanillaOVO/Beagle_Turdus
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [vanillaOVO/Beagle_Turdus](https://huggingface.co/vanillaOVO/Beagle_Turdus) on\
\ the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can, for instance, do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_vanillaOVO__Beagle_Turdus\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-03-22T17:22:33.326350](https://huggingface.co/datasets/open-llm-leaderboard/details_vanillaOVO__Beagle_Turdus/blob/main/results_2024-03-22T17-22-33.326350.json) (note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6526232352690864,\n\
\ \"acc_stderr\": 0.03219434500356539,\n \"acc_norm\": 0.6518218021063211,\n\
\ \"acc_norm_stderr\": 0.03287404531366433,\n \"mc1\": 0.5630354957160343,\n\
\ \"mc1_stderr\": 0.017363844503195953,\n \"mc2\": 0.6826892277470754,\n\
\ \"mc2_stderr\": 0.015301677075267637\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.7090443686006825,\n \"acc_stderr\": 0.01327307786590759,\n\
\ \"acc_norm\": 0.7363481228668942,\n \"acc_norm_stderr\": 0.012875929151297044\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.7276438956383191,\n\
\ \"acc_stderr\": 0.004442623590846324,\n \"acc_norm\": 0.8881696873132842,\n\
\ \"acc_norm_stderr\": 0.003145134767702308\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \
\ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6370370370370371,\n\
\ \"acc_stderr\": 0.04153948404742398,\n \"acc_norm\": 0.6370370370370371,\n\
\ \"acc_norm_stderr\": 0.04153948404742398\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6973684210526315,\n \"acc_stderr\": 0.03738520676119669,\n\
\ \"acc_norm\": 0.6973684210526315,\n \"acc_norm_stderr\": 0.03738520676119669\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.66,\n\
\ \"acc_stderr\": 0.04760952285695238,\n \"acc_norm\": 0.66,\n \
\ \"acc_norm_stderr\": 0.04760952285695238\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.720754716981132,\n \"acc_stderr\": 0.027611163402399715,\n\
\ \"acc_norm\": 0.720754716981132,\n \"acc_norm_stderr\": 0.027611163402399715\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7847222222222222,\n\
\ \"acc_stderr\": 0.03437079344106135,\n \"acc_norm\": 0.7847222222222222,\n\
\ \"acc_norm_stderr\": 0.03437079344106135\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.51,\n \"acc_stderr\": 0.05024183937956912,\n \
\ \"acc_norm\": 0.51,\n \"acc_norm_stderr\": 0.05024183937956912\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.51,\n \"acc_stderr\": 0.05024183937956911,\n \"acc_norm\": 0.51,\n\
\ \"acc_norm_stderr\": 0.05024183937956911\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \
\ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.653179190751445,\n\
\ \"acc_stderr\": 0.036291466701596636,\n \"acc_norm\": 0.653179190751445,\n\
\ \"acc_norm_stderr\": 0.036291466701596636\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.4019607843137255,\n \"acc_stderr\": 0.048786087144669955,\n\
\ \"acc_norm\": 0.4019607843137255,\n \"acc_norm_stderr\": 0.048786087144669955\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.77,\n \"acc_stderr\": 0.04229525846816506,\n \"acc_norm\": 0.77,\n\
\ \"acc_norm_stderr\": 0.04229525846816506\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5702127659574469,\n \"acc_stderr\": 0.03236214467715564,\n\
\ \"acc_norm\": 0.5702127659574469,\n \"acc_norm_stderr\": 0.03236214467715564\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.47368421052631576,\n\
\ \"acc_stderr\": 0.046970851366478626,\n \"acc_norm\": 0.47368421052631576,\n\
\ \"acc_norm_stderr\": 0.046970851366478626\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5655172413793104,\n \"acc_stderr\": 0.04130740879555498,\n\
\ \"acc_norm\": 0.5655172413793104,\n \"acc_norm_stderr\": 0.04130740879555498\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.42592592592592593,\n \"acc_stderr\": 0.025467149045469546,\n \"\
acc_norm\": 0.42592592592592593,\n \"acc_norm_stderr\": 0.025467149045469546\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4523809523809524,\n\
\ \"acc_stderr\": 0.044518079590553275,\n \"acc_norm\": 0.4523809523809524,\n\
\ \"acc_norm_stderr\": 0.044518079590553275\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252604,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252604\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7774193548387097,\n\
\ \"acc_stderr\": 0.023664216671642518,\n \"acc_norm\": 0.7774193548387097,\n\
\ \"acc_norm_stderr\": 0.023664216671642518\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.5024630541871922,\n \"acc_stderr\": 0.035179450386910616,\n\
\ \"acc_norm\": 0.5024630541871922,\n \"acc_norm_stderr\": 0.035179450386910616\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\"\
: 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7757575757575758,\n \"acc_stderr\": 0.032568666616811015,\n\
\ \"acc_norm\": 0.7757575757575758,\n \"acc_norm_stderr\": 0.032568666616811015\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7828282828282829,\n \"acc_stderr\": 0.02937661648494563,\n \"\
acc_norm\": 0.7828282828282829,\n \"acc_norm_stderr\": 0.02937661648494563\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8963730569948186,\n \"acc_stderr\": 0.02199531196364424,\n\
\ \"acc_norm\": 0.8963730569948186,\n \"acc_norm_stderr\": 0.02199531196364424\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6692307692307692,\n \"acc_stderr\": 0.023854795680971125,\n\
\ \"acc_norm\": 0.6692307692307692,\n \"acc_norm_stderr\": 0.023854795680971125\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.35185185185185186,\n \"acc_stderr\": 0.029116617606083008,\n \
\ \"acc_norm\": 0.35185185185185186,\n \"acc_norm_stderr\": 0.029116617606083008\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6722689075630253,\n \"acc_stderr\": 0.03048991141767323,\n \
\ \"acc_norm\": 0.6722689075630253,\n \"acc_norm_stderr\": 0.03048991141767323\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3576158940397351,\n \"acc_stderr\": 0.03913453431177258,\n \"\
acc_norm\": 0.3576158940397351,\n \"acc_norm_stderr\": 0.03913453431177258\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8366972477064221,\n \"acc_stderr\": 0.015848255806501562,\n \"\
acc_norm\": 0.8366972477064221,\n \"acc_norm_stderr\": 0.015848255806501562\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5185185185185185,\n \"acc_stderr\": 0.034076320938540516,\n \"\
acc_norm\": 0.5185185185185185,\n \"acc_norm_stderr\": 0.034076320938540516\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8480392156862745,\n \"acc_stderr\": 0.025195658428931796,\n \"\
acc_norm\": 0.8480392156862745,\n \"acc_norm_stderr\": 0.025195658428931796\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7848101265822784,\n \"acc_stderr\": 0.02675082699467618,\n \
\ \"acc_norm\": 0.7848101265822784,\n \"acc_norm_stderr\": 0.02675082699467618\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6860986547085202,\n\
\ \"acc_stderr\": 0.031146796482972465,\n \"acc_norm\": 0.6860986547085202,\n\
\ \"acc_norm_stderr\": 0.031146796482972465\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7709923664122137,\n \"acc_stderr\": 0.036853466317118506,\n\
\ \"acc_norm\": 0.7709923664122137,\n \"acc_norm_stderr\": 0.036853466317118506\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.768595041322314,\n \"acc_stderr\": 0.03849856098794088,\n \"acc_norm\"\
: 0.768595041322314,\n \"acc_norm_stderr\": 0.03849856098794088\n },\n\
\ \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.75,\n \
\ \"acc_stderr\": 0.04186091791394607,\n \"acc_norm\": 0.75,\n \
\ \"acc_norm_stderr\": 0.04186091791394607\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.754601226993865,\n \"acc_stderr\": 0.03380939813943354,\n\
\ \"acc_norm\": 0.754601226993865,\n \"acc_norm_stderr\": 0.03380939813943354\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4642857142857143,\n\
\ \"acc_stderr\": 0.04733667890053756,\n \"acc_norm\": 0.4642857142857143,\n\
\ \"acc_norm_stderr\": 0.04733667890053756\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7766990291262136,\n \"acc_stderr\": 0.04123553189891431,\n\
\ \"acc_norm\": 0.7766990291262136,\n \"acc_norm_stderr\": 0.04123553189891431\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8846153846153846,\n\
\ \"acc_stderr\": 0.02093019318517933,\n \"acc_norm\": 0.8846153846153846,\n\
\ \"acc_norm_stderr\": 0.02093019318517933\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \
\ \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8250319284802043,\n\
\ \"acc_stderr\": 0.013586619219903341,\n \"acc_norm\": 0.8250319284802043,\n\
\ \"acc_norm_stderr\": 0.013586619219903341\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7312138728323699,\n \"acc_stderr\": 0.023868003262500097,\n\
\ \"acc_norm\": 0.7312138728323699,\n \"acc_norm_stderr\": 0.023868003262500097\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.4446927374301676,\n\
\ \"acc_stderr\": 0.016619881988177015,\n \"acc_norm\": 0.4446927374301676,\n\
\ \"acc_norm_stderr\": 0.016619881988177015\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7124183006535948,\n \"acc_stderr\": 0.02591780611714716,\n\
\ \"acc_norm\": 0.7124183006535948,\n \"acc_norm_stderr\": 0.02591780611714716\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7170418006430869,\n\
\ \"acc_stderr\": 0.025583062489984813,\n \"acc_norm\": 0.7170418006430869,\n\
\ \"acc_norm_stderr\": 0.025583062489984813\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7345679012345679,\n \"acc_stderr\": 0.024569223600460842,\n\
\ \"acc_norm\": 0.7345679012345679,\n \"acc_norm_stderr\": 0.024569223600460842\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.5,\n \"acc_stderr\": 0.029827499313594685,\n \"acc_norm\"\
: 0.5,\n \"acc_norm_stderr\": 0.029827499313594685\n },\n \"harness|hendrycksTest-professional_law|5\"\
: {\n \"acc\": 0.4706649282920469,\n \"acc_stderr\": 0.012748238397365549,\n\
\ \"acc_norm\": 0.4706649282920469,\n \"acc_norm_stderr\": 0.012748238397365549\n\
\ },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\"\
: 0.6691176470588235,\n \"acc_stderr\": 0.02858270975389845,\n \"\
acc_norm\": 0.6691176470588235,\n \"acc_norm_stderr\": 0.02858270975389845\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6699346405228758,\n \"acc_stderr\": 0.019023726160724553,\n \
\ \"acc_norm\": 0.6699346405228758,\n \"acc_norm_stderr\": 0.019023726160724553\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6636363636363637,\n\
\ \"acc_stderr\": 0.04525393596302506,\n \"acc_norm\": 0.6636363636363637,\n\
\ \"acc_norm_stderr\": 0.04525393596302506\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7387755102040816,\n \"acc_stderr\": 0.028123429335142777,\n\
\ \"acc_norm\": 0.7387755102040816,\n \"acc_norm_stderr\": 0.028123429335142777\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8308457711442786,\n\
\ \"acc_stderr\": 0.02650859065623327,\n \"acc_norm\": 0.8308457711442786,\n\
\ \"acc_norm_stderr\": 0.02650859065623327\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.85,\n \"acc_stderr\": 0.03588702812826371,\n \
\ \"acc_norm\": 0.85,\n \"acc_norm_stderr\": 0.03588702812826371\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5662650602409639,\n\
\ \"acc_stderr\": 0.03858158940685516,\n \"acc_norm\": 0.5662650602409639,\n\
\ \"acc_norm_stderr\": 0.03858158940685516\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8362573099415205,\n \"acc_stderr\": 0.028380919596145866,\n\
\ \"acc_norm\": 0.8362573099415205,\n \"acc_norm_stderr\": 0.028380919596145866\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5630354957160343,\n\
\ \"mc1_stderr\": 0.017363844503195953,\n \"mc2\": 0.6826892277470754,\n\
\ \"mc2_stderr\": 0.015301677075267637\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8602999210734017,\n \"acc_stderr\": 0.009743307618298178\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6800606520090978,\n \
\ \"acc_stderr\": 0.012848426555240761\n }\n}\n```"
repo_url: https://huggingface.co/vanillaOVO/Beagle_Turdus
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|arc:challenge|25_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|gsm8k|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hellaswag|10_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-22T17-22-33.326350.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-22T17-22-33.326350.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- '**/details_harness|winogrande|5_2024-03-22T17-22-33.326350.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-03-22T17-22-33.326350.parquet'
- config_name: results
data_files:
- split: 2024_03_22T17_22_33.326350
path:
- results_2024-03-22T17-22-33.326350.parquet
- split: latest
path:
- results_2024-03-22T17-22-33.326350.parquet
---
# Dataset Card for Evaluation run of vanillaOVO/Beagle_Turdus
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [vanillaOVO/Beagle_Turdus](https://huggingface.co/vanillaOVO/Beagle_Turdus) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_vanillaOVO__Beagle_Turdus",
"harness_winogrande_5",
split="train")
```
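As noted above, each run's split is named after the run timestamp. A minimal sketch of that naming convention (`run_split_name` is a hypothetical helper, not part of the leaderboard tooling — it simply mirrors the `-`/`:` to `_` substitution visible in the split names):

```python
def run_split_name(iso_timestamp: str) -> str:
    """Convert an ISO run timestamp into the split-name form used in this repo."""
    # "2024-03-22T17:22:33.326350" -> "2024_03_22T17_22_33.326350"
    return iso_timestamp.replace("-", "_").replace(":", "_")

print(run_split_name("2024-03-22T17:22:33.326350"))
```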
## Latest results
These are the [latest results from run 2024-03-22T17:22:33.326350](https://huggingface.co/datasets/open-llm-leaderboard/details_vanillaOVO__Beagle_Turdus/blob/main/results_2024-03-22T17-22-33.326350.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in its results file and in the "latest" split of each eval):
```python
{
"all": {
"acc": 0.6526232352690864,
"acc_stderr": 0.03219434500356539,
"acc_norm": 0.6518218021063211,
"acc_norm_stderr": 0.03287404531366433,
"mc1": 0.5630354957160343,
"mc1_stderr": 0.017363844503195953,
"mc2": 0.6826892277470754,
"mc2_stderr": 0.015301677075267637
},
"harness|arc:challenge|25": {
"acc": 0.7090443686006825,
"acc_stderr": 0.01327307786590759,
"acc_norm": 0.7363481228668942,
"acc_norm_stderr": 0.012875929151297044
},
"harness|hellaswag|10": {
"acc": 0.7276438956383191,
"acc_stderr": 0.004442623590846324,
"acc_norm": 0.8881696873132842,
"acc_norm_stderr": 0.003145134767702308
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6370370370370371,
"acc_stderr": 0.04153948404742398,
"acc_norm": 0.6370370370370371,
"acc_norm_stderr": 0.04153948404742398
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6973684210526315,
"acc_stderr": 0.03738520676119669,
"acc_norm": 0.6973684210526315,
"acc_norm_stderr": 0.03738520676119669
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.66,
"acc_stderr": 0.04760952285695238,
"acc_norm": 0.66,
"acc_norm_stderr": 0.04760952285695238
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.720754716981132,
"acc_stderr": 0.027611163402399715,
"acc_norm": 0.720754716981132,
"acc_norm_stderr": 0.027611163402399715
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7847222222222222,
"acc_stderr": 0.03437079344106135,
"acc_norm": 0.7847222222222222,
"acc_norm_stderr": 0.03437079344106135
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.51,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.51,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.51,
"acc_stderr": 0.05024183937956911,
"acc_norm": 0.51,
"acc_norm_stderr": 0.05024183937956911
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.653179190751445,
"acc_stderr": 0.036291466701596636,
"acc_norm": 0.653179190751445,
"acc_norm_stderr": 0.036291466701596636
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4019607843137255,
"acc_stderr": 0.048786087144669955,
"acc_norm": 0.4019607843137255,
"acc_norm_stderr": 0.048786087144669955
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.77,
"acc_stderr": 0.04229525846816506,
"acc_norm": 0.77,
"acc_norm_stderr": 0.04229525846816506
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5702127659574469,
"acc_stderr": 0.03236214467715564,
"acc_norm": 0.5702127659574469,
"acc_norm_stderr": 0.03236214467715564
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.47368421052631576,
"acc_stderr": 0.046970851366478626,
"acc_norm": 0.47368421052631576,
"acc_norm_stderr": 0.046970851366478626
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5655172413793104,
"acc_stderr": 0.04130740879555498,
"acc_norm": 0.5655172413793104,
"acc_norm_stderr": 0.04130740879555498
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.42592592592592593,
"acc_stderr": 0.025467149045469546,
"acc_norm": 0.42592592592592593,
"acc_norm_stderr": 0.025467149045469546
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4523809523809524,
"acc_stderr": 0.044518079590553275,
"acc_norm": 0.4523809523809524,
"acc_norm_stderr": 0.044518079590553275
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252604,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252604
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7774193548387097,
"acc_stderr": 0.023664216671642518,
"acc_norm": 0.7774193548387097,
"acc_norm_stderr": 0.023664216671642518
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5024630541871922,
"acc_stderr": 0.035179450386910616,
"acc_norm": 0.5024630541871922,
"acc_norm_stderr": 0.035179450386910616
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7757575757575758,
"acc_stderr": 0.032568666616811015,
"acc_norm": 0.7757575757575758,
"acc_norm_stderr": 0.032568666616811015
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7828282828282829,
"acc_stderr": 0.02937661648494563,
"acc_norm": 0.7828282828282829,
"acc_norm_stderr": 0.02937661648494563
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8963730569948186,
"acc_stderr": 0.02199531196364424,
"acc_norm": 0.8963730569948186,
"acc_norm_stderr": 0.02199531196364424
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6692307692307692,
"acc_stderr": 0.023854795680971125,
"acc_norm": 0.6692307692307692,
"acc_norm_stderr": 0.023854795680971125
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.35185185185185186,
"acc_stderr": 0.029116617606083008,
"acc_norm": 0.35185185185185186,
"acc_norm_stderr": 0.029116617606083008
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6722689075630253,
"acc_stderr": 0.03048991141767323,
"acc_norm": 0.6722689075630253,
"acc_norm_stderr": 0.03048991141767323
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3576158940397351,
"acc_stderr": 0.03913453431177258,
"acc_norm": 0.3576158940397351,
"acc_norm_stderr": 0.03913453431177258
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8366972477064221,
"acc_stderr": 0.015848255806501562,
"acc_norm": 0.8366972477064221,
"acc_norm_stderr": 0.015848255806501562
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5185185185185185,
"acc_stderr": 0.034076320938540516,
"acc_norm": 0.5185185185185185,
"acc_norm_stderr": 0.034076320938540516
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8480392156862745,
"acc_stderr": 0.025195658428931796,
"acc_norm": 0.8480392156862745,
"acc_norm_stderr": 0.025195658428931796
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7848101265822784,
"acc_stderr": 0.02675082699467618,
"acc_norm": 0.7848101265822784,
"acc_norm_stderr": 0.02675082699467618
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6860986547085202,
"acc_stderr": 0.031146796482972465,
"acc_norm": 0.6860986547085202,
"acc_norm_stderr": 0.031146796482972465
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7709923664122137,
"acc_stderr": 0.036853466317118506,
"acc_norm": 0.7709923664122137,
"acc_norm_stderr": 0.036853466317118506
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.768595041322314,
"acc_stderr": 0.03849856098794088,
"acc_norm": 0.768595041322314,
"acc_norm_stderr": 0.03849856098794088
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.75,
"acc_stderr": 0.04186091791394607,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04186091791394607
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.754601226993865,
"acc_stderr": 0.03380939813943354,
"acc_norm": 0.754601226993865,
"acc_norm_stderr": 0.03380939813943354
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4642857142857143,
"acc_stderr": 0.04733667890053756,
"acc_norm": 0.4642857142857143,
"acc_norm_stderr": 0.04733667890053756
},
"harness|hendrycksTest-management|5": {
"acc": 0.7766990291262136,
"acc_stderr": 0.04123553189891431,
"acc_norm": 0.7766990291262136,
"acc_norm_stderr": 0.04123553189891431
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8846153846153846,
"acc_stderr": 0.02093019318517933,
"acc_norm": 0.8846153846153846,
"acc_norm_stderr": 0.02093019318517933
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8250319284802043,
"acc_stderr": 0.013586619219903341,
"acc_norm": 0.8250319284802043,
"acc_norm_stderr": 0.013586619219903341
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7312138728323699,
"acc_stderr": 0.023868003262500097,
"acc_norm": 0.7312138728323699,
"acc_norm_stderr": 0.023868003262500097
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.4446927374301676,
"acc_stderr": 0.016619881988177015,
"acc_norm": 0.4446927374301676,
"acc_norm_stderr": 0.016619881988177015
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7124183006535948,
"acc_stderr": 0.02591780611714716,
"acc_norm": 0.7124183006535948,
"acc_norm_stderr": 0.02591780611714716
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7170418006430869,
"acc_stderr": 0.025583062489984813,
"acc_norm": 0.7170418006430869,
"acc_norm_stderr": 0.025583062489984813
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7345679012345679,
"acc_stderr": 0.024569223600460842,
"acc_norm": 0.7345679012345679,
"acc_norm_stderr": 0.024569223600460842
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.5,
"acc_stderr": 0.029827499313594685,
"acc_norm": 0.5,
"acc_norm_stderr": 0.029827499313594685
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4706649282920469,
"acc_stderr": 0.012748238397365549,
"acc_norm": 0.4706649282920469,
"acc_norm_stderr": 0.012748238397365549
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6691176470588235,
"acc_stderr": 0.02858270975389845,
"acc_norm": 0.6691176470588235,
"acc_norm_stderr": 0.02858270975389845
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6699346405228758,
"acc_stderr": 0.019023726160724553,
"acc_norm": 0.6699346405228758,
"acc_norm_stderr": 0.019023726160724553
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6636363636363637,
"acc_stderr": 0.04525393596302506,
"acc_norm": 0.6636363636363637,
"acc_norm_stderr": 0.04525393596302506
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7387755102040816,
"acc_stderr": 0.028123429335142777,
"acc_norm": 0.7387755102040816,
"acc_norm_stderr": 0.028123429335142777
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8308457711442786,
"acc_stderr": 0.02650859065623327,
"acc_norm": 0.8308457711442786,
"acc_norm_stderr": 0.02650859065623327
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.85,
"acc_stderr": 0.03588702812826371,
"acc_norm": 0.85,
"acc_norm_stderr": 0.03588702812826371
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5662650602409639,
"acc_stderr": 0.03858158940685516,
"acc_norm": 0.5662650602409639,
"acc_norm_stderr": 0.03858158940685516
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8362573099415205,
"acc_stderr": 0.028380919596145866,
"acc_norm": 0.8362573099415205,
"acc_norm_stderr": 0.028380919596145866
},
"harness|truthfulqa:mc|0": {
"mc1": 0.5630354957160343,
"mc1_stderr": 0.017363844503195953,
"mc2": 0.6826892277470754,
"mc2_stderr": 0.015301677075267637
},
"harness|winogrande|5": {
"acc": 0.8602999210734017,
"acc_stderr": 0.009743307618298178
},
"harness|gsm8k|5": {
"acc": 0.6800606520090978,
"acc_stderr": 0.012848426555240761
}
}
```
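The `all` entry at the top of the JSON appears to aggregate the per-task scores. As a rough sketch of that kind of aggregation (using a handful of `acc_norm` values copied from the results above; the leaderboard's actual aggregation may differ, e.g. in which tasks it includes):

```python
# A few per-task acc_norm values copied from the results above.
task_acc_norm = {
    "harness|hendrycksTest-abstract_algebra|5": 0.34,
    "harness|hendrycksTest-anatomy|5": 0.6370370370370371,
    "harness|hendrycksTest-astronomy|5": 0.6973684210526315,
}

# Unweighted mean over the selected tasks.
mean_acc_norm = sum(task_acc_norm.values()) / len(task_acc_norm)
print(round(mean_acc_norm, 4))
```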
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
mstz/toxicity | ---
language:
- en
tags:
- toxicity
- tabular_classification
- binary_classification
- multiclass_classification
- UCI
pretty_name: Toxicity
size_categories:
- n<1K
task_categories:
- tabular-classification
configs:
- encoding
- income
- income-no race
- race
license: cc
---
# Toxicity
The [Toxicity dataset](https://archive-beta.ics.uci.edu/dataset/728/toxicity) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
The dataset includes 171 molecules designed for functional domains of the core clock protein CRY1, which is responsible for generating the circadian rhythm.
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|-----------------------------------------------------------------|
| toxicity | Binary classification | Is the molecule toxic? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/toxicity")["train"]
``` |
liuyanchen1015/MULTI_VALUE_cola_possessives_for_post | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 10741
num_examples: 117
- name: test
num_bytes: 11219
num_examples: 127
- name: train
num_bytes: 86209
num_examples: 983
download_size: 53037
dataset_size: 108169
---
# Dataset Card for "MULTI_VALUE_cola_possessives_for_post"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
leminda-ai/s2orc_small | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: paperAbstract
dtype: string
- name: entities
sequence: string
- name: s2Url
dtype: string
- name: pdfUrls
sequence: string
- name: s2PdfUrl
dtype: string
- name: authors
list:
- name: name
dtype: string
- name: ids
sequence: string
- name: inCitations
sequence: string
- name: outCitations
sequence: string
- name: fieldsOfStudy
sequence: string
- name: year
dtype: int32
- name: venue
dtype: string
- name: journalName
dtype: string
- name: journalVolume
dtype: string
- name: journalPages
dtype: string
- name: sources
sequence: string
- name: doi
dtype: string
- name: doiUrl
dtype: string
- name: pmid
dtype: string
- name: magId
dtype: string
splits:
- name: train
num_bytes: 1725313131.1503427
num_examples: 889289
download_size: 2180008218
dataset_size: 1725313131.1503427
---
# Dataset Card for "s2orc_small"
A small split of the s2orc dataset, including ~900k English papers with abstracts.
See all details in the original dataset card: https://huggingface.co/datasets/allenai/s2orc |
Aman6917/autotrain-data-exact_data | ---
task_categories:
- summarization
---
# AutoTrain Dataset for project: exact_data
## Dataset Description
This dataset has been automatically processed by AutoTrain for project exact_data.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "What is the maximum vendor id of vendor present in vendor table who has been issued a PO in 2021",
"target": "select max(t1.vendor_id) from RETAILBUYER_POHEADER as t2 inner join RETAILBUYER_VENDOR as t1 on t2.vendor_id = t1.vendor_id where YEAR(t2.po_issuedt) = 2021"
},
{
"text": "What are the product ids, descriptions and sum of quantities ordered for the products in purchase order line items",
"target": "select L.product_id, t2.product_desc, sum(t1.quantity) from RETAILBUYER_PRODUCT_SOURCE as t2 INNER JOIN RETAILBUYER_POLINEITEM as t1 ON t2.PRODUCT_ID = t1.PRODUCT_ID GROUP BY t1.PRODUCT_ID, t2.product_desc"
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 25 |
| valid | 7 |
|
pdearena/NavierStokes-2D-conditoned | ---
license: mit
--- |
RikoteMaster/llama2_classifying_and_explainning | ---
dataset_info:
features:
- name: Explanation
dtype: string
- name: Text_processed
dtype: string
- name: Emotion
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 51981712
num_examples: 47512
download_size: 16818458
dataset_size: 51981712
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "llama2_classifying_and_explainning"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kunalchamoli/mental_health_v1 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 1773757
num_examples: 1000
download_size: 891537
dataset_size: 1773757
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kedardes/splats | ---
license: apache-2.0
---
|
mila-intel/ProtST-Stability | ---
configs:
- config_name: default
data_files:
- split: train
path: stability_train.csv
- split: validation
path: stability_valid.csv
- split: test
path: stability_test.csv
--- |
tr416/dataset_20231007_030433 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73993
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231007_030433"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
radlab/polish-sts-dataset | ---
license: lgpl-3.0
language:
- pl
pretty_name: Polish STS dataset
tags:
- sts
size_categories:
- 1K<n<10K
--- |
hhu-dsml/emowoz | ---
license: cc-by-nc-4.0
task_categories:
- text-classification
language:
- en
size_categories:
- 10K<n<100K
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
multilinguality:
- monolingual
source_datasets:
- MultiWOZ
- Original (human-machine interaction dialogues)
pretty_name: EmoWOZ
task_ids:
- sentiment-classification
- sentiment-analysis
paperswithcode_id: emowoz-1
configs:
- emowoz
- multiwoz
- dialmage
dataset_info:
- config_name: emowoz
features:
- name: dialogue_id
dtype: string
- name: log
sequence:
- name: text
dtype: string
- name: emotion
dtype: int32
splits:
- name: train
num_bytes: 10661603
num_examples: 9233
- name: validation
num_bytes: 1391634
num_examples: 1100
- name: test
num_bytes: 1409633
num_examples: 1100
- config_name: multiwoz
features:
- name: dialogue_id
dtype: string
- name: log
sequence:
- name: text
dtype: string
- name: emotion
dtype: int32
splits:
- name: train
num_bytes: 10661603
num_examples: 9233
- name: validation
num_bytes: 1391634
num_examples: 1100
- name: test
num_bytes: 1409633
num_examples: 1100
- config_name: dialmage
features:
- name: dialogue_id
dtype: string
- name: log
sequence:
- name: text
dtype: string
- name: emotion
dtype: int32
splits:
- name: train
num_bytes: 10661603
num_examples: 9233
- name: validation
num_bytes: 1391634
num_examples: 1100
- name: test
num_bytes: 1409633
num_examples: 1100
---
# Dataset Card for EmoWOZ Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** [EmoWOZ Dataset repository](https://zenodo.org/record/6506504), [EmoWOZ Benchmark repository](https://gitlab.cs.uni-duesseldorf.de/general/dsml/emowoz-public)
- **Paper:** [EmoWOZ: A Large-Scale Corpus and Labelling Scheme for Emotion Recognition in Task-Oriented Dialogue Systems](https://aclanthology.org/2022.lrec-1.436/)
- **Leaderboard:** [Papers with Code leaderboard for EmoWOZ Dataset](https://paperswithcode.com/dataset/emowoz-1)
- **Point of Contact:** [Shutong Feng](mailto:shutong.feng@hhu.de)
### Dataset Summary
EmoWOZ is based on [MultiWOZ, a multi-domain task-oriented dialogue dataset](https://github.com/budzianowski/multiwoz). It contains more than 11K task-oriented dialogues with more than 83K emotion annotations of user utterances. In addition to Wizard-of-Oz dialogues from MultiWOZ, we collect human-machine dialogues (DialMAGE) within the same set of domains to sufficiently cover the space of various emotions that can happen during the lifetime of a data-driven dialogue system. There are 7 emotion labels, which are adapted from the OCC emotion models: _Neutral_, _Satisfied_, _Dissatisfied_, _Excited_, _Apologetic_, _Fearful_, _Abusive_.
Some of the statistics about the dataset:
| Metric | Value |
| ---------- | ---------------- |
| # Dialogues | 11434 |
| # Turns | 167234 |
| # Annotations | 83617 |
| # Unique Tokens | 28417 |
| Average Turns per Dialogue | 14.63 |
| Average Tokens per Turn | 12.78 |
Emotion Distribution in EmoWOZ and subsets:
| Emotion | EmoWOZ | MultiWOZ | DialMAGE |
| ---------- | ---------------- | ---------- | ---------------- |
| Neutral | 58,656 | 51,426 | 7,230 |
| Satisfied | 17,532 | 17,061 | 471 |
| Dissatisfied | 5,117 | 914 | 4,203 |
| Excited | 971 | 860 | 111 |
| Apologetic | 840 | 838 | 2 |
| Fearful | 396 | 381 | 15 |
| Abusive | 105 | 44 | 61 |
### Supported Tasks and Leaderboards
- 'Emotion Recognition in Conversations': See the [Papers With Code leaderboard](https://paperswithcode.com/sota/emotion-recognition-in-conversation-on-emowoz) for more models.
- 'Additional Classification Tasks': According to the initial benchmark [paper](https://aclanthology.org/2022.lrec-1.436/), emotion labels in EmoWOZ can be mapped to sentiment polarities. Therefore, sentiment classification and sentiment analysis can also be performed. Since EmoWOZ has two subsets: MultiWOZ (human-to-human) and DialMAGE (human-to-machine), it is also possible to perform cross-domain emotion/sentiment recognition.
### Languages
Only English is represented in the data.
## Dataset Structure
### Data Instances
For each instance, there is a string id for the dialogue, a list of strings for the dialogue utterances, and a list of integers for the emotion labels.
```
{
'dialogue_id': 'PMUL4725.json',
'log': {
'text': [
'Hi, i am looking for some museums that I could visit when in town, could you help me find some?',
'Is there an area of town you prefer?',
"No, I don't care.",
"I recommend the Cafe Jello Gallery in the west. It's free to enter!",
'I also need a place to stay',
'Great! There are 33 hotels in the area. What area of town would you like to stay in? What is your preference on price?',
" The attraction should be in the type of museum. I don't care about the price range or the area",
'Just to clarify - did you need a different museum? Or a hotel?',
'That museum from earlier is fine, I just need their postalcode. I need a hotel two in the west and moderately priced. ',
"The postal code for Cafe Jello Gallery is cb30af. Okay, Hobson's House matches your request. ",
'Do they have internet?',
'Yes they do. Would you like me to book a room for you?',
"No thanks. I will do that later. Can you please arrange for taxi service from Cafe Jello to Hobson's House sometime after 04:00?",
'I was able to book that for you. Be expecting a grey Tesla. If you need to reach them, please call 07615015749. ',
'Well that you that is all i need for today',
'Your welcome. Have a great day!'
],
'emotion': [0, -1, 0, -1, 0, -1, 0, -1, 0, -1, 0, -1, 0, -1, 0, -1]
}
}
```
### Data Fields
- `dialogue_id`: a string representing the unique id of the dialogue. For MultiWOZ dialogues, the original id is kept. For DialMAGE dialogues, all ids are in the format DMAGExxx.json, where xxx is an integer with a variable number of digits.
- `text`: a list of strings containing the dialogue turns.
- `emotion`: a list of integers containing the sequence of emotion labels for the dialogue. Specifically,
- -1: system turns with unlabelled emotion
- 0: neutral, no emotion expressed
- 1: fearful, or sad/disappointed, negative emotion elicited by facts/events, which is out of the system's control
- 2: dissatisfied, negative emotion elicited by the system, usually after the system's poor performance
- 3: apologetic, negative emotion from the user, usually expressing apologies for causing confusion or changing search criteria
- 4: abusive, negative emotion elicited by the system, expressed in an impolite way
- 5: excited, positive emotion elicited by facts/events
- 6: satisfied, positive emotion elicited by the system
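The integer labels above can be turned into a small lookup when iterating over dialogues; a minimal sketch for pairing labelled user turns with their emotion names (the helper names here are illustrative, not part of the dataset API):

```python
# Integer-to-name mapping for EmoWOZ emotion labels (see the list above).
EMOTION_NAMES = {
    0: "neutral",
    1: "fearful",
    2: "dissatisfied",
    3: "apologetic",
    4: "abusive",
    5: "excited",
    6: "satisfied",
}

def user_turns_with_emotions(dialogue):
    """Pair each labelled (user) turn with its emotion name,
    skipping system turns, which carry the placeholder label -1."""
    texts = dialogue["log"]["text"]
    labels = dialogue["log"]["emotion"]
    return [
        (text, EMOTION_NAMES[label])
        for text, label in zip(texts, labels)
        if label != -1
    ]

example = {
    "dialogue_id": "PMUL4725.json",
    "log": {
        "text": ["Hi, I am looking for some museums.", "Is there an area you prefer?"],
        "emotion": [0, -1],
    },
}
print(user_turns_with_emotions(example))  # [('Hi, I am looking for some museums.', 'neutral')]
```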
### Data Splits
The EmoWOZ dataset has 3 splits: _train_, _validation_, and _test_. Below are the statistics for the dataset.
| Dataset Split | Number of Emotion Annotations in Split| Of Which from MultiWOZ | Of Which from DialMage |
| ------------- | ----------------------------| ------------- | ------------------------------------------- |
| Train | 66,474 | 56,778 | 9696 |
| Validation | 8,509 | 7,374 | 1135 |
| Test | 8,634 | 7,372 | 1262 |
## Dataset Creation
### Curation Rationale
EmoWOZ was built on top of MultiWOZ because MultiWOZ is a well-established dataset for task-oriented dialogue modelling, allowing further study of the impact of user emotions on downstream tasks. The additional 1000 human-machine dialogues (DialMAGE) were collected to improve emotion coverage and the diversity of emotional expression.
### Source Data
#### Initial Data Collection and Normalization
MultiWOZ dialogues were inherited from the work of [MultiWOZ - A Large-Scale Multi-Domain Wizard-of-Oz Dataset for Task-Oriented Dialogue Modelling](https://aclanthology.org/D18-1547/).
DialMAGE dialogues were collected from a human evaluation of an RNN-based policy trained on MultiWOZ on Amazon Mechanical Turk platform.
#### Who are the source language producers?
The text of both MultiWOZ and DialMAGE was written by workers on Amazon Mechanical Turk platform. For detailed data collection set-ups, please refer to their respective publications.
### Annotations
All dialogues take place between a _user_ and a _system_ (or an _operator_). The dialogue always starts with a user turn, which is always followed by a system response, and ends with a system turn. Only user turns are annotated with an emotion label.
#### Annotation process
Each user utterance was annotated by three annotators. The final label was determined by majority voting. If there was no agreement, the final label would be resolved manually.
For details such as annotator selection process and quality assurance methods, please refer to the EmoWOZ publication.
#### Who are the annotators?
Annotators are crowdsourced workers on the Amazon Mechanical Turk platform.
### Personal and Sensitive Information
All annotators are anonymised. There is no personal information in EmoWOZ.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop task-oriented dialogue systems that can perceive human emotions and avoid abusive behaviours. This task is useful for building more human-like dialogue agents.
### Discussion of Biases
There is bias in emotion distribution in the MultiWOZ (human-human) and DialMAGE (human-machine) subset of EmoWOZ. The linguistic styles are also different between the two subsets.
As pointed out in [Reevaluating Data Partitioning for Emotion Detection in EmoWOZ](https://arxiv.org/abs/2303.13364), there is also emotion shift in train-dev-test split in the MultiWOZ subset. EmoWOZ keeps the original data split of MultiWOZ, which is suitable for task-oriented dialogue modelling but the emotion distribution in these data splits are different. Further investigations will be needed.
### Other Known Limitations
The emotion distribution is unbalanced where _neutral_, _satisfied_, and _dissatisfied_ make up more than 95% of the labels.
## Additional Information
### Dataset Curators
The collection and annotation of EmoWOZ were conducted by the [Chair for Dialog Systems and Machine Learning at Heinrich Heine Universität Düsseldorf](https://www.cs.hhu.de/lehrstuehle-und-arbeitsgruppen/dialog-systems-and-machine-learning).
### Licensing Information
The EmoWOZ dataset is released under the [CC-BY-NC-4.0 License](https://creativecommons.org/licenses/by-nc/4.0/).
### Citation Information
```
@inproceedings{feng-etal-2022-emowoz,
title = "{E}mo{WOZ}: A Large-Scale Corpus and Labelling Scheme for Emotion Recognition in Task-Oriented Dialogue Systems",
author = "Feng, Shutong and
Lubis, Nurul and
Geishauser, Christian and
Lin, Hsien-chin and
Heck, Michael and
van Niekerk, Carel and
Gasic, Milica",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lrec-1.436",
pages = "4096--4113",
abstract = "The ability to recognise emotions lends a conversational artificial intelligence a human touch. While emotions in chit-chat dialogues have received substantial attention, emotions in task-oriented dialogues remain largely unaddressed. This is despite emotions and dialogue success having equally important roles in a natural system. Existing emotion-annotated task-oriented corpora are limited in size, label richness, and public availability, creating a bottleneck for downstream tasks. To lay a foundation for studies on emotions in task-oriented dialogues, we introduce EmoWOZ, a large-scale manually emotion-annotated corpus of task-oriented dialogues. EmoWOZ is based on MultiWOZ, a multi-domain task-oriented dialogue dataset. It contains more than 11K dialogues with more than 83K emotion annotations of user utterances. In addition to Wizard-of-Oz dialogues from MultiWOZ, we collect human-machine dialogues within the same set of domains to sufficiently cover the space of various emotions that can happen during the lifetime of a data-driven dialogue system. To the best of our knowledge, this is the first large-scale open-source corpus of its kind. We propose a novel emotion labelling scheme, which is tailored to task-oriented dialogues. We report a set of experimental results to show the usability of this corpus for emotion recognition and state tracking in task-oriented dialogues.",
}
``` |
Seanxh/twitter_dataset_1713201724 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 115388
num_examples: 270
download_size: 44665
dataset_size: 115388
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
datasciencemmw/current-data | ---
license: openrail
---
Your mother
nobody is going to see this probably
I saw |
ju-resplande/rebel-pt | ---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
language:
- pt
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- extended|rebel-dataset
task_categories:
- text-retrieval
- text2text-generation
task_ids: []
pretty_name: rebel-portuguese
tags:
- relation-extraction
- conditional-text-generation
---
# REBEL-Portuguese
## Table of Contents
- [Dataset Card for REBEL-Portuguese](#dataset-card-for-rebel)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [https://github.com/Babelscape/rebel](https://github.com/Babelscape/rebel)
- **Paper:** [https://github.com/Babelscape/rebel/blob/main/docs/EMNLP_2021_REBEL__Camera_Ready_.pdf](https://github.com/Babelscape/rebel/blob/main/docs/EMNLP_2021_REBEL__Camera_Ready_.pdf)
- **Point of Contact:** [julianarsg13@gmail.com](julianarsg13@gmail.com)
### Dataset Summary
Dataset adapted to Portuguese from [REBEL-dataset](https://huggingface.co/datasets/Babelscape/rebel-dataset).
### Supported Tasks and Leaderboards
- `text-retrieval-other-relation-extraction`: The dataset can be used to train a model for Relation Extraction, which consists of extracting triplets, each made of a subject, an object, and a relation type, from raw text.
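REBEL-style models typically emit triplets as a linearised string with `<triplet>`, `<subj>` and `<obj>` markers; the exact format used for this Portuguese adaptation should be checked against the repository. A minimal parsing sketch under that assumption:

```python
def parse_rebel_triplets(text):
    """Parse a REBEL-style linearised string, e.g.
    '<triplet> Lisboa <subj> Portugal <obj> capital of',
    into (subject, object, relation) triplets. The tag layout is assumed
    from the original REBEL work; verify it against the repo."""
    triplets = []
    for chunk in text.split("<triplet>"):
        if not chunk.strip():
            continue
        subject, _, rest = chunk.partition("<subj>")
        # One subject may carry several "<subj> object <obj> relation" pairs.
        for pair in rest.split("<subj>"):
            obj, _, relation = pair.partition("<obj>")
            if relation.strip():
                triplets.append((subject.strip(), obj.strip(), relation.strip()))
    return triplets

print(parse_rebel_triplets("<triplet> Lisboa <subj> Portugal <obj> capital of"))
# [('Lisboa', 'Portugal', 'capital of')]
```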
### Languages
The dataset is in Portuguese, from the Portuguese Wikipedia.
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
Data comes from Wikipedia text before the table of contents, as well as Wikidata for the triplets annotation.
#### Initial Data Collection and Normalization
For the data collection, the dataset extraction pipeline [cRocoDiLe: Automati**c** **R**elati**o**n Extra**c**ti**o**n **D**ataset w**i**th N**L**I filt**e**ring](https://github.com/Babelscape/crocodile), inspired by the [T-REx Pipeline](https://github.com/hadyelsahar/RE-NLG-Dataset), was used; more details can be found at the [T-REx Website](https://hadyelsahar.github.io/t-rex/). The starting point is a Wikipedia dump as well as a Wikidata one.
After the triplets are extracted, an NLI system was used to filter out those not entailed by the text.
#### Who are the source language producers?
Any Wikipedia and Wikidata contributor.
### Annotations
#### Annotation process
The dataset extraction pipeline [cRocoDiLe: Automati**c** **R**elati**o**n Extra**c**ti**o**n **D**ataset w**i**th N**L**I filt**e**ring](https://github.com/ju-resplande/crocodile).
#### Who are the annotators?
Automatic annotations.
### Personal and Sensitive Information
All text is from Wikipedia; any personal or sensitive information present there may also be present in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
None known for now.
## Additional Information
### Dataset Curators
### Licensing Information
### Citation Information
### Contributions
Thanks to [@ju-resplande](https://github.com/ju-resplande) for adding this dataset.
|
drumwell/llm-kuobot | ---
dataset_info:
features:
- name: input_ids
sequence: int32
splits:
- name: train
num_bytes: 1631004.0
num_examples: 199
- name: test
num_bytes: 188508.0
num_examples: 23
download_size: 942321
dataset_size: 1819512.0
---
# Dataset Card for "llm-kuobot2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
liuyanchen1015/MULTI_VALUE_rte_who_which | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: test
num_bytes: 153268
num_examples: 324
- name: train
num_bytes: 120711
num_examples: 257
download_size: 184670
dataset_size: 273979
---
# Dataset Card for "MULTI_VALUE_rte_who_which"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
gonglinyuan/code_search_net_python_tokenized | ---
license: other
---
|
distilled-one-sec-cv12-each-chunk-uniq/chunk_8 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1127710812.0
num_examples: 219741
download_size: 1153327288
dataset_size: 1127710812.0
---
# Dataset Card for "chunk_8"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
elihoole/asrs-aviation-reports | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- other
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: 'ASRS Aviation Incident Reports '
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
---
# Dataset Card for ASRS Aviation Incident Reports
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://huggingface.co/datasets/elihoole/asrs-aviation-reports]
- **Repository:** [ASRS Incident Reports Summarisation code repo](https://github.com/elihoole/asrs-incident-reports)
- **Point of Contact:** [Elijah Hoole](mailto:E.J.Hoole@sms.ed.ac.uk)
### Dataset Summary
This dataset collects 47,723 aviation incident reports published in the Aviation Safety Reporting System (ASRS) database maintained by NASA.
### Supported Tasks and Leaderboards
- 'summarization': The dataset can be used to train a model for abstractive and extractive summarization. Model performance is measured by the [ROUGE](https://huggingface.co/metrics/rouge) score of the output summary for a given narrative account of an aviation incident, compared against the synopsis written by a NASA expert. Models and scores to follow.
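As a rough illustration of what that ROUGE comparison measures, a simplified unigram-overlap (ROUGE-1) F1 can be computed in a few lines; real evaluations should use a standard ROUGE implementation rather than this sketch:

```python
from collections import Counter

def rouge1_f1(candidate, reference):
    """Simplified ROUGE-1 F1: unigram overlap between a candidate summary
    and a reference synopsis. Illustration only; use a standard ROUGE
    package for actual evaluation."""
    cand = candidate.lower().split()
    ref = reference.lower().split()
    overlap = sum((Counter(cand) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(cand)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

score = rouge1_f1(
    "the aircraft was towed dark",
    "the aircraft was towed without lights",
)  # 8/11, roughly 0.727
```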
### Languages
The BCP-47 code for English as generally spoken in the United States is en-US and the BCP-47 code for English as generally spoken in the United Kingdom is en-GB. It is unknown if other varieties of English are represented in the data.
## Dataset Structure
### Data Instances
For each instance, there is a string for the narrative account (Report 1_Narrative), a string for the synopsis (Report 1.2_Synopsis), and a string for the document id (acn_num_ACN). Some instances may have two narratives (Report 1_Narrative & Report 2_Narrative) and extended analyses produced by experts (Report 1.1_Callback & Report 2.1_Callback). Other fields contain metadata such as time, location, flight conditions, aircraft model name, etc. associated with the incident. See the [ASRS Incident Reports dataset viewer](https://huggingface.co/datasets/elihoole/asrs-aviation-reports/viewer/elihoole--asrs-aviation-reports/train) to explore more examples.
```
{'acn_num_ACN': '1206196',
'Report 1_Narrative': 'While taxiing company B757 aircraft from gate to Hangar line; we were cleared by Ground Control to proceed via A-T-join runway XX. After receiving subsequent clearance to T1 [then associated taxiways] to the hangar; we caught up to a dark; apparently unpowered company livery RJ (ERJ-145) near the T1 intersection. The RJ was being towed dark with absolutely no external lighting on; a completely dark aircraft. This situation only presented itself as we drew close to the aircraft in tow. The towbarless tractor (supertug) was lit externally; but minimally visible from our vantage point; with a completely dark aircraft between us and the tractor. Once the towing operation completed a turn onto taxiway T; a single green light came in view which is somehow mounted on supertug; presented a similar appearance to a green wing navigation light common on all aircraft. To say this presented a confusing situation is an understatement. [Aircraft] operation in Noncompliance with FARs; Policy and Procedures. This is a situation never before observed in [my] 30 plus years as a taxi mechanic at our location. There are long established standards in place regarding external light usage and requirements; both in gate areas; as well as movement in active controlled taxiways; most with an eye on safety regarding aircraft position (nav lights) and anti-collision lights signaling running engines and/or aircraft movement.',
'Report 1.1_Callback': '',
'Report 2_Narrative': '',
'Report 2.1_Callback': '',
'Report 1.2_Synopsis': 'A Line Aircraft Maintenance Technician (AMT) taxiing a company B757 aircraft reports coming up on a dark; unpowered ERJ-145 aircraft with no external lighting on. Light on the towbarless Supertug tractor only minimally visible; with completely dark aircraft between their B757 and Tow tractor. Technician notes long established standards requiring Anti-Collision and Nav lights not enforced during aircraft tow.'}
```
The average token counts for the narratives and synopses are provided below.
| Feature | Number of Instances | Mean Token Count |
| ------------------- | ------------------ | ---------------- |
| Report 1_Narrative | 47,723 | 281 |
| Report 1.1_Callback | 1,435 | 103 |
| Report 2_Narrative | 11,228 | 169 |
| Report 2.1 Callback | 85 | 110 |
| Report 1.2_Synopsis | 47,723 | 27 |
### Data Fields
Detailed descriptions of the data fields are still to be added.
|
Isaak-Carter/Wizzard-smol | ---
license: bigscience-openrail-m
---
|
AdiOO7/Multi-Class | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
size_categories:
- n<1K
--- |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/bdf6a05d | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 182
num_examples: 10
download_size: 1336
dataset_size: 182
---
# Dataset Card for "bdf6a05d"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sahilkadge/dataset_2 | ---
dataset_info:
features:
- name: label
dtype: string
- name: folder
dtype: string
- name: audio
dtype: string
splits:
- name: train
num_bytes: 20008
num_examples: 52
download_size: 8356
dataset_size: 20008
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HaileyStorm/lichess-filtered | ---
license: apache-2.0
---
These csv files contain a single column, 'transcript', with simple PGN-string game transcripts. The data is sourced from Lichess: https://database.lichess.org
All files contain some very short and very long games; I recommend filtering these, depending on your use case.
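A minimal pure-Python sketch of such a length filter, counting plies (half-moves) in the movetext; the thresholds below are arbitrary examples, not recommendations:

```python
RESULT_TOKENS = {"1-0", "0-1", "1/2-1/2", "*"}

def ply_count(transcript):
    """Count plies in a PGN movetext string such as '1.e4 e5 2.Nf3 Nc6 1-0',
    skipping move-number tokens and game-result tokens."""
    count = 0
    for token in transcript.split():
        if token in RESULT_TOKENS:
            continue
        token = token.split(".")[-1]  # '1.e4' -> 'e4', bare '1.' -> ''
        if token and not token.isdigit():
            count += 1
    return count

def keep_game(transcript, min_plies=20, max_plies=180):
    """Drop very short and very long games; tune the bounds for your use case."""
    return min_plies <= ply_count(transcript) <= max_plies

print(ply_count("1.e4 e5 2.Nf3 Nc6 1-0"))  # 4
```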
All files are filtered to include white wins only.
According to the Lichess Elo ratings saved with the game data, which typically run a little high, the files contain games filtered as follows:
stable.csv, 24M games: White 1300-2300, Black 1300-2600
anneal.csv, 3.9M games: White 1900-2300, Black 1300*-2700. Includes some lc0-vs-lc0 and stockfish-vs-stockfish games within that range (possibly a little lower in the case of some lc0 games, but probably not given Lichess' high ratings...)
stable2.csv, 57.7M games: White 1500-2400, Black 1400-2800
anneal2.csv, 5.77M games: White 2000-2500, Black 1500-3000 |
mask-distilled-one-sec-cv12/chunk_252 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 889496020
num_examples: 174685
download_size: 906332868
dataset_size: 889496020
---
# Dataset Card for "chunk_252"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
DylanonWic/common_voice_10_1_th_clean_test | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: labels
sequence: int64
- name: input_values
sequence: float32
splits:
- name: validation
num_bytes: 2996917184.000732
num_examples: 10028
- name: test
num_bytes: 3187660791.5096064
num_examples: 10160
download_size: 5739728701
dataset_size: 6184577975.510338
---
# Dataset Card for "common_voice_10_1_th_clean_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Norod78/HebrewStageAndLyricsWithNewLines | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: test
num_bytes: 12638465.341690589
num_examples: 11113
- name: train
num_bytes: 240110370.6583094
num_examples: 211129
download_size: 133520933
dataset_size: 252748836.0
language:
- he
multilinguality:
- monolingual
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
---
# Dataset Card for "HebrewStageAndLyricsWithNewLines"
* Contains poems and stories from "New Stage" ("במה חדשה")
* Contains text lines from various Hebrew song lyrics
* Data contains new-line characters
* Generated from a text file in which different poems were separated using a double new-line character
* The script I made for converting the text file into a dataset is [available here](https://huggingface.co/datasets/Norod78/HebrewStageAndLyricsWithNewLines/blob/main/load_ds.py) |
nz/closest_to_3000_range_1000_to_9000 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 3799087.274617199
num_examples: 10000
- name: test
num_bytes: 379908.7274617199
num_examples: 1000
download_size: 2174332
dataset_size: 4178996.0020789187
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
zeynepgulhan/mediaspeech-with-cv-tr | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 10631918399.222
num_examples: 38638
- name: test
num_bytes: 278050703.0
num_examples: 10143
download_size: 1643709639
dataset_size: 10909969102.222
---
# Dataset Card for "mediaspeech-with-cv-tr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kabir5297/EngASRwithCVWav | ---
license: apache-2.0
---
|
michaelmallari/airbnb-usa-dc-washington | ---
license: mit
---
|
vikhyatk/lnqa | ---
license: cc-by-4.0
task_categories:
- visual-question-answering
---
Visual question answering dataset based on Localized Narratives: https://google.github.io/localized-narratives/
Please cite their paper if you use this dataset in your research. |
the-cramer-project/Kyrgyz_News_Corpus | ---
license: cc-by-nc-4.0
language:
- ky
pretty_name: The Kyrgyz News Corpus dataset
size_categories:
- 100K<n<1M
---
# Kyrgyz_News_Corpus
The Kyrgyz News Corpus dataset is a collection of 256,364 news articles in the Kyrgyz language, gathered from various news sites using web scraping.
This corpus contains news on various topics, including politics, economics, culture, sports and others. Each entry in the dataset is a separate news item, including the text and the source.
This dataset can only be used for research purposes such as natural language processing, thematic modeling, and more. It can be useful for researchers, developers and students interested in analyzing texts in the Kyrgyz language and related tasks.
# References
All of our achievements were made possible thanks to the robust AI community in Kyrgyzstan and the contributions made by individuals within the AkylAI project (by TheCramer.com). We also express our gratitude to the Kyrgyz news agencies for their work, which allowed us to create this dataset.
# Next
We are working on a Kyrgyz spell checker and grammar corrector. Please feel free to reach out to timur.turat@gmail.com or rkizmailov@gmail.com if you are interested in any form of collaboration! |
kheopss/ask_kheops_prompts_datasets_fr | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 796661
num_examples: 500
download_size: 349547
dataset_size: 796661
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jmichaelov/inverse_scaling_prize-resisting_correction | ---
license: cc-by-4.0
---
|
autoevaluate/autoeval-eval-scientific_papers-pubmed-c3b6df-51381145312 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- scientific_papers
eval_info:
task: summarization
model: sambydlo/scientific_abstract_simplification-scientific-lay-summarise
metrics: ['accuracy', 'frugalscore']
dataset_name: scientific_papers
dataset_config: pubmed
dataset_split: train
col_mapping:
text: article
target: abstract
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: sambydlo/scientific_abstract_simplification-scientific-lay-summarise
* Dataset: scientific_papers
* Config: pubmed
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@NessTechIntl](https://huggingface.co/NessTechIntl) for evaluating this model. |
FreedomIntelligence/MMLU_Chinese | ---
license: mit
---
Chinese version of the MMLU dataset, translated by gpt-3.5-turbo.
The dataset is used in the research related to [MultilingualSIFT](https://github.com/FreedomIntelligence/MultilingualSIFT). |
ashutosh01/TajMahalSample | ---
dataset_info:
features:
- name: Instruction
dtype: string
- name: Response
dtype: string
- name: __index_level_0__
dtype: string
splits:
- name: train
num_bytes: 3729
num_examples: 21
download_size: 4359
dataset_size: 3729
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
daloopa/fashion-mnist-interview | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': T - shirt / top
'1': Trouser
'2': Pullover
'3': Dress
'4': Coat
'5': Sandal
'6': Shirt
'7': Sneaker
'8': Bag
'9': Ankle boot
splits:
- name: train
num_bytes: 31049107.0
num_examples: 60000
- name: test
num_bytes: 4150316.0
num_examples: 8000
download_size: 33099036
dataset_size: 35199423.0
---
# Dataset Card for "fashion-mnist-interview"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
MiMe-MeMo/MeMo-Dataset-SA | ---
license: cc-by-nc-nd-4.0
---
# Sentiment Classification of Historical Literary in Danish and Norwegian Texts
## Description
This project describes a study on sentiment classification in literary analysis of 19th-century Scandinavian novels by female authors. We create a dataset, train and evaluate sentiment classification methods, and use pre-trained language models to confirm and contest a literary hypothesis that the writing of female authors in that period was characterized by negative sentiment. The dataset and trained models are expected to be valuable for future analysis of historical Danish and Norwegian literary texts.
## Dataset
The dataset is uploaded to the `dataset` directory and is structured as follows:
1. `train_set.txt`: TXT file containing the training set with annotated text for sentiment analysis.
2. `dev_set.txt`: TXT file containing the development set with annotated text for sentiment analysis.
3. `test_set.txt`: TXT file containing the testing set with annotated text for sentiment analysis.
Each file contains two tab-separated columns, where the first column is the sentence and the second column is the sentiment annotation (1 = positive, 0 = neutral, and 2 = negative).
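A minimal parsing sketch for that layout (the sample sentences and the helper name `parse_split` are hypothetical; only the tab-separated format and label codes come from the description above):

```python
# Hypothetical sample in the described format: sentence<TAB>label,
# where 1 = positive, 0 = neutral, 2 = negative.
sample = (
    "Det var en vidunderlig morgen.\t1\n"
    "Huset laa ved vejen.\t0\n"
    "Alt haab var ude.\t2\n"
)

LABELS = {0: "neutral", 1: "positive", 2: "negative"}

def parse_split(text: str) -> list[tuple[str, int]]:
    """Parse one split (e.g. the contents of train_set.txt) into (sentence, label) pairs."""
    pairs = []
    for line in text.splitlines():
        if not line.strip():
            continue
        # rsplit keeps any stray tabs inside the sentence with the sentence.
        sentence, label = line.rsplit("\t", 1)
        pairs.append((sentence, int(label)))
    return pairs

data = parse_split(sample)
```

For the real files, read them with `open(path, encoding="utf-8")` and pass the contents to the same function.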
## Usage
To use the dataset and code, follow these steps:
1. Clone or download this GitHub repository.
2. Access the dataset files in the `dataset` directory and the Python code file.
3. Use the dataset files for training, development, and testing of sentiment analysis models in your research or applications.
4. Run the Python code files using your preferred IDE or Python environment to understand how to load, preprocess, and analyze the historical text data.
## License
The dataset and code in this repository are released under the [Creative Commons Attribution 4.0 International license](http://creativecommons.org/licenses/by/4.0/).
## Citation
For more details about the sentiment annotation and classification, please see [the following paper](https://openreview.net/forum?id=dszKbb2GH3):
```
@inproceedings{allaith2023sentiment,
title={Sentiment Classification of Historical Literary in {D}anish and {N}orwegian Texts},
author={Ali Al-Laith and Kirstine Nielsen Degn and Alexander Conroy and Bolette S. Pedersen and Jens Bjerring-Hansen and Daniel Hershcovich},
booktitle={The 24th Nordic Conference on Computational Linguistics},
year={2023},
url={https://openreview.net/forum?id=dszKbb2GH3}
}
``` |
waleedfarooq51/custom_data | ---
language:
- en
--- |
neel-17/audio_dataset_aug_20K | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: label
dtype: int64
splits:
- name: train
num_bytes: 37257629977.6
num_examples: 21120
- name: validation
num_bytes: 1522701105.0
num_examples: 579
download_size: 31128688558
dataset_size: 38780331082.6
---
# Dataset Card for "audio_dataset_aug_20K"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Nahrawy/MIIW-Depth-ControlNet | ---
dataset_info:
features:
- name: image_path
dtype: image
- name: depth_path
dtype: image
- name: scene
dtype: string
- name: caption
dtype: string
- name: direction
dtype: int8
splits:
- name: train
num_bytes: 12475221926.5
num_examples: 24625
download_size: 6659246738
dataset_size: 12475221926.5
---
# Dataset Card for "MIIW-Depth-ControlNet"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
rishiraj/vicuna-unfiltered-guanaco | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 154112650
num_examples: 44962
download_size: 77058096
dataset_size: 154112650
---
# Dataset Card for "vicuna-unfiltered-guanaco"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
IchigoKaio2007/DanielVoice | ---
license: openrail
---
|
open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE4_3.8w-r8-q_k_v_o_gate_up_down | ---
pretty_name: Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE4_3.8w-r8-q_k_v_o_gate_up_down
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [CHIH-HUNG/llama-2-13b-FINETUNE4_3.8w-r8-q_k_v_o_gate_up_down](https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE4_3.8w-r8-q_k_v_o_gate_up_down)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE4_3.8w-r8-q_k_v_o_gate_up_down\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-24T07:10:24.463266](https://huggingface.co/datasets/open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE4_3.8w-r8-q_k_v_o_gate_up_down/blob/main/results_2023-10-24T07-10-24.463266.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.31984060402684567,\n\
\ \"em_stderr\": 0.004776522202017119,\n \"f1\": 0.35765205536912764,\n\
\ \"f1_stderr\": 0.004713634443624145,\n \"acc\": 0.42305943190800716,\n\
\ \"acc_stderr\": 0.010128397484043218\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.31984060402684567,\n \"em_stderr\": 0.004776522202017119,\n\
\ \"f1\": 0.35765205536912764,\n \"f1_stderr\": 0.004713634443624145\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.09552691432903715,\n \
\ \"acc_stderr\": 0.008096605771155745\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7505919494869772,\n \"acc_stderr\": 0.01216018919693069\n\
\ }\n}\n```"
repo_url: https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE4_3.8w-r8-q_k_v_o_gate_up_down
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|arc:challenge|25_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_24T07_10_24.463266
path:
- '**/details_harness|drop|3_2023-10-24T07-10-24.463266.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-24T07-10-24.463266.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_24T07_10_24.463266
path:
- '**/details_harness|gsm8k|5_2023-10-24T07-10-24.463266.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-24T07-10-24.463266.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hellaswag|10_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-04T05-24-08.290753.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-04T05-24-08.290753.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-04T05-24-08.290753.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_24T07_10_24.463266
path:
- '**/details_harness|winogrande|5_2023-10-24T07-10-24.463266.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-24T07-10-24.463266.parquet'
- config_name: results
data_files:
- split: 2023_10_04T05_24_08.290753
path:
- results_2023-10-04T05-24-08.290753.parquet
- split: 2023_10_24T07_10_24.463266
path:
- results_2023-10-24T07-10-24.463266.parquet
- split: latest
path:
- results_2023-10-24T07-10-24.463266.parquet
---
# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE4_3.8w-r8-q_k_v_o_gate_up_down
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE4_3.8w-r8-q_k_v_o_gate_up_down
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [CHIH-HUNG/llama-2-13b-FINETUNE4_3.8w-r8-q_k_v_o_gate_up_down](https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE4_3.8w-r8-q_k_v_o_gate_up_down) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE4_3.8w-r8-q_k_v_o_gate_up_down",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-24T07:10:24.463266](https://huggingface.co/datasets/open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE4_3.8w-r8-q_k_v_o_gate_up_down/blob/main/results_2023-10-24T07-10-24.463266.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
```python
{
"all": {
"em": 0.31984060402684567,
"em_stderr": 0.004776522202017119,
"f1": 0.35765205536912764,
"f1_stderr": 0.004713634443624145,
"acc": 0.42305943190800716,
"acc_stderr": 0.010128397484043218
},
"harness|drop|3": {
"em": 0.31984060402684567,
"em_stderr": 0.004776522202017119,
"f1": 0.35765205536912764,
"f1_stderr": 0.004713634443624145
},
"harness|gsm8k|5": {
"acc": 0.09552691432903715,
"acc_stderr": 0.008096605771155745
},
"harness|winogrande|5": {
"acc": 0.7505919494869772,
"acc_stderr": 0.01216018919693069
}
}
```
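Each per-run split is named after the run timestamp, with `-` and `:` replaced by `_` (as in the `2023_10_24T07_10_24.463266` split above). The helper below is an illustrative sketch of that naming convention as it appears in this card's config section, not part of the `datasets` API:

```python
# Derive a per-run split name from a run timestamp such as
# "2023-10-24T07:10:24.463266". The "T" separator is kept, while
# "-" and ":" become "_" (the fractional seconds keep their ".").
def split_name_from_timestamp(ts: str) -> str:
    date, time = ts.split("T")
    return date.replace("-", "_") + "T" + time.replace(":", "_")

print(split_name_from_timestamp("2023-10-24T07:10:24.463266"))
# 2023_10_24T07_10_24.463266
```

The resulting name can be passed as `split=` to `load_dataset` to pin a specific run instead of relying on the "latest" split.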
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
Jackmax5/data | ---
license: gpl-2.0
---
|
VAGOsolutions/MT-Bench-TrueGerman | ---
language:
- de
---
## Benchmark
**German Benchmarks on Hugging Face**
At present, there is a notable scarcity, if not a complete **absence, of reliable and true German benchmarks** designed to evaluate the capabilities of German Language Models (LLMs). While some efforts have been made to translate English benchmarks into German, these attempts often **fall short in terms of precision, accuracy, and context sensitivity, even when employing GPT-4 technology**. Take, for instance, the **MT-Bench**, a widely recognized and frequently used benchmark for assessing LLM performance in real-world scenarios. The seemingly straightforward and cost-effective approach of **translating MT-Bench into German using GPT-4 proves to be counterproductive**, resulting in subpar outcomes that hinder a realistic and contextually appropriate evaluation of German LLMs. To illustrate this, we offer a few examples extracted from translated MT-Bench versions available on Hugging Face.
**Example: Uncommon use of words**
*{ "category": "writing", "turns": [ "Schreibe eine überzeugende E-Mail, um deinen introvertierten Freund, der öffentliches Sprechen nicht mag, dazu zu bringen, sich als Gastredner bei einer lokalen Veranstaltung zu engagieren. Verwende überzeugende Argumente und gehe auf mögliche Einwände ein. Bitte sei prägnant.", "Kannst du deine vorherige Antwort umformulieren und in jedem Satz eine Metapher oder ein **Gleichnis** einbauen?" ] }*
What you can see here is an example of a German word that no one would use in a real conversation (marked in bold). In a real conversation, one would rather say “Vergleich” instead of “Gleichnis”.
**Example: Wrong context**
*{ "category": "roleplay", "turns": [ "Bitte nehmen Sie die Rolle eines englischen Übersetzers an, der damit beauftragt ist, Rechtschreibung und Sprache zu korrigieren und zu verbessern. Unabhängig von der Sprache, die ich verwende, sollten Sie sie identifizieren, übersetzen und mit einer verfeinerten und polierten Version meines Textes **auf Englisch antworten**.*
Here the model is asked to translate a given sentence into English and to phrase a more sophisticated version of the original sentence. Since we aim to assess a German LLM, asking the model to translate a sentence into English would be pointless.
**Example: Wrong content**
*{"category": "writing", "turns": [ "Bearbeite den folgenden Absatz, um etwaige grammatikalische Fehler zu korrigieren: ***Sie erinnerte sich nicht daran, wo ihre Geldbörse ist, also denke ich, dass sie im Auto ist, aber er sagt, dass sie auf dem Küchentisch ist, aber er ist sich nicht sicher, und dann haben sie mich gebeten, danach zu suchen, sie sagt: "Kannst du?", und ich antworte: "Vielleicht, aber ich bin nicht sicher", und er hat mich nicht gehört, und er fragt: "Was?", "Hast du es gefunden?"***.", "Ändere deine frühere Antwort und vermeide die Verwendung von geschlechtsspezifischen Pronomen." ]}*
The task here is to edit a sentence full of grammatical errors and correct them. The problem with this translated version of the MT-Bench is that the sentence was already corrected by GPT-4 during translation, so the model is now asked to correct a sentence that no longer contains any grammatical errors.
**Example: Pointless translation of anglicisms**
*{ "category": "roleplay", "turns": [ "Jetzt bist du ein **Maschinenlern-Ingenieur**. Deine Aufgabe besteht darin, komplexe Maschinenlernkonzepte auf einfache Weise zu erklären, damit Kunden ohne technischen Hintergrund deine Produkte verstehen und ihnen vertrauen können. Fangen wir an mit der Frage: Was ist ein Sprachmodell? Wird es mit gelabelten oder ungelabelten Daten trainiert?, "Ist das wahr? Ich habe gehört, dass andere Unternehmen unterschiedliche Ansätze verwenden, um dies zu tun und es sicherer zu machen.]}*
As we can see here, the GPT-4 translation of this item led to a term that no one would use when speaking German. Instead, one would rather use the original English term “Machine Learning Engineer” or the properly translated term “Ingenieur für maschinelles Lernen”.
**Our approach to a German Benchmark**
Instead of simply translating the MT-Bench with GPT-4, we applied a mixed approach of automatic translation and human evaluation. In a first step, we translated the complete MT-Bench into German using GPT-4. In a second step, we conducted a thorough manual evaluation of each translated item to ensure the following quality criteria:
- The item has been translated into German.
- The German translation uses appropriate and genuine wording.
- The context of the translated item is meaningful and reasonable for assessing the model's German language skills.
- The content of the translated item is still reasonable after translation.
Although this method is undeniably time-consuming, it enables us to create a substantive benchmark for evaluating the model's proficiency in completing various benchmark categories. Nonetheless, it is important to acknowledge that even with this meticulous approach, a truly flawless benchmark remains elusive, as minor oversights may still occur due to human errors.
Nevertheless, when we compare the current approaches of German language model teams available on Hugging Face, we may assume that our German MT-Bench, as of today, stands as the most precise and practical benchmark for assessing German LLMs. Consequently, the benchmark scores we present offer a realistic evaluation of the model's performance in German. |
xinzhang/wiki_how_fine_tuning | ---
license: mit
---
|
ChiJuiChen/wine_review | ---
dataset_info:
features:
- name: wine_id
dtype: int64
- name: country
dtype: string
- name: description
dtype: string
- name: designation
dtype: string
- name: points
dtype: int64
- name: price
dtype: float64
splits:
- name: train
num_bytes: 21093175.17523332
num_examples: 68918
- name: test
num_bytes: 5273446.824766681
num_examples: 17230
download_size: 15110181
dataset_size: 26366622.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
bigcode/the-stack-smol-xs | ---
annotations_creators: []
language_creators:
- crowdsourced
language: ["code"]
multilinguality:
- multilingual
size_categories:
- unknown
source_datasets: []
task_categories:
- text-generation
task_ids:
- language-modeling
---
## Dataset Description
A small subset of the [the-stack](https://huggingface.co/datasets/bigcode/the-stack) dataset, covering 87 programming languages, each with 100 random samples from the original dataset for visualization.
## Languages
The dataset contains 87 programming languages:
````
'ada', 'agda', 'alloy', 'antlr', 'applescript', 'assembly', 'augeas', 'awk', 'batchfile', 'bison', 'bluespec', 'c',
'c++', 'c-sharp', 'clojure', 'cmake', 'coffeescript', 'common-lisp', 'css', 'cuda', 'dart', 'dockerfile', 'elixir',
'elm', 'emacs-lisp','erlang', 'f-sharp', 'fortran', 'glsl', 'go', 'groovy', 'haskell','html', 'idris', 'isabelle', 'java',
'java-server-pages', 'javascript', 'julia', 'kotlin', 'lean', 'literate-agda', 'literate-coffeescript', 'literate-haskell',
'lua', 'makefile', 'maple', 'markdown', 'mathematica', 'matlab', 'ocaml', 'pascal', 'perl', 'php', 'powershell', 'prolog',
'protocol-buffer', 'python', 'r', 'racket', 'restructuredtext', 'rmarkdown', 'ruby', 'rust', 'sas', 'scala', 'scheme',
'shell', 'smalltalk', 'solidity', 'sparql', 'sql', 'stan', 'standard-ml', 'stata', 'systemverilog', 'tcl', 'tcsh', 'tex',
'thrift', 'typescript', 'verilog', 'vhdl', 'visual-basic', 'xslt', 'yacc', 'zig'
````
## Dataset Structure
You can specify which language you want to load; `python` is loaded by default:
```python
# to load go:
from datasets import load_dataset

ds = load_dataset("bigcode/the-stack-smol-xs", "go")
print(ds)
# DatasetDict({
#     train: Dataset({
#         features: ['content', 'lang', 'size', 'ext', 'max_stars_count', 'avg_line_length', 'max_line_length', 'alphanum_fraction'],
#         num_rows: 100
#     })
# })
```
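Each per-language config exposes the same fields shown above. The per-file statistics (`avg_line_length`, `max_line_length`, `alphanum_fraction`) can be recomputed from `content`; the sketch below shows plausible formulas (the exact definitions used in the-stack's preprocessing may differ slightly):

```python
# Recompute per-file statistics from a file's text content.
# These formulas are illustrative; the upstream pipeline may
# define them slightly differently.
def file_stats(content: str) -> dict:
    lines = content.splitlines()
    lengths = [len(line) for line in lines] or [0]
    alnum = sum(ch.isalnum() for ch in content)
    return {
        "avg_line_length": sum(lengths) / len(lengths),
        "max_line_length": max(lengths),
        "alphanum_fraction": alnum / max(len(content), 1),
    }

stats = file_stats("def f():\n    return 1\n")
print(stats["max_line_length"])  # 12
```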
|
Doub7e/SDv2-Count-Repeated-3 | ---
dataset_info:
features:
- name: image
dtype: image
- name: prompt
dtype: string
- name: T5_last_hidden_states
sequence:
sequence:
sequence: float32
- name: style
dtype: string
splits:
- name: train
num_bytes: 1475476707.25
num_examples: 1150
download_size: 1283859072
dataset_size: 1475476707.25
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
open-llm-leaderboard/details_allenai__digital-socrates-13b | ---
pretty_name: Evaluation run of allenai/digital-socrates-13b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [allenai/digital-socrates-13b](https://huggingface.co/allenai/digital-socrates-13b)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 1 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_allenai__digital-socrates-13b\"\
,\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese\
\ are the [latest results from run 2023-12-03T16:32:03.154791](https://huggingface.co/datasets/open-llm-leaderboard/details_allenai__digital-socrates-13b/blob/main/results_2023-12-03T16-32-03.154791.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.29492039423805916,\n\
\ \"acc_stderr\": 0.012560698010954769\n },\n \"harness|gsm8k|5\":\
\ {\n \"acc\": 0.29492039423805916,\n \"acc_stderr\": 0.012560698010954769\n\
\ }\n}\n```"
repo_url: https://huggingface.co/allenai/digital-socrates-13b
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_gsm8k_5
data_files:
- split: 2023_12_03T16_32_03.154791
path:
- '**/details_harness|gsm8k|5_2023-12-03T16-32-03.154791.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-12-03T16-32-03.154791.parquet'
- config_name: results
data_files:
- split: 2023_12_03T16_32_03.154791
path:
- results_2023-12-03T16-32-03.154791.parquet
- split: latest
path:
- results_2023-12-03T16-32-03.154791.parquet
---
# Dataset Card for Evaluation run of allenai/digital-socrates-13b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/allenai/digital-socrates-13b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [allenai/digital-socrates-13b](https://huggingface.co/allenai/digital-socrates-13b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, corresponding to the evaluated task.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_allenai__digital-socrates-13b",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-03T16:32:03.154791](https://huggingface.co/datasets/open-llm-leaderboard/details_allenai__digital-socrates-13b/blob/main/results_2023-12-03T16-32-03.154791.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.29492039423805916,
"acc_stderr": 0.012560698010954769
},
"harness|gsm8k|5": {
"acc": 0.29492039423805916,
"acc_stderr": 0.012560698010954769
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
trojblue/configs | ---
license: openrail
---
|
FranciscoMacaya/Trainning_llama2 | ---
license: openrail
---
|
llm-aes/gemini_hanna_full_score_only | ---
dataset_info:
features:
- name: task_id
dtype: string
- name: worker_id
dtype: string
- name: human_label
dtype: int64
- name: llm_label
dtype: int64
- name: generator_1
dtype: string
- name: generator_2
dtype: string
- name: premise
dtype: string
splits:
- name: train
num_bytes: 1123365
num_examples: 5280
download_size: 109484
dataset_size: 1123365
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
rogozinushka/psychologist_answers | ---
language:
- ru
---
# Вопросы к психологу и ответы от психологов с сайта [psiholog.ru](https://www.psiholog.ru)
Данные актуальны на 2023-12-16. Парсер, с помощью которого получили датасет, можно найти в [этом репозитории](https://github.com/rogozinushka/psychologist_answers_parser)
Датафрейм имеет такую структуру:
- url - ссылка на вопрос
- question_name - заголовок вопроса
- question_body - подробный вопрос
- answers - ответы психологов
|url|question_name|question_body|answers|
|---|---|---|---|
|https://psiholog.ru/vopros/89|Как избавиться от страха и депрессии после цыганского гипноза?|спрашивает: Марина (Казань)Здравствуйте!Вчера подверглась цыганскому гипнозу..отдала им деньги,не понимаю как.произошло всё на работе(работаю продавцом не первый год).теперь остался только страх и опустошение..не знаю,как с этим справиться.ощущение,что схожу с ума и что все вокруг меня теперь считают немного сумашедшей.я как загнанная в клетку и до сих пор ощущаю присутствие этих цыганок(они работали вдвоём)|['Добрый вечер, Марина!<br>Ситация, произошедшая с Вами очень не приятная. И усугубляется тем, что Вы, видимо остались еще и должны? Если состояние еще актуально, то хорошо бы пройти антикризисную психотерапию.<br>Для информации: Цыганский гипноз хорошо описан как феномен у Милтона Эриксона (эрисонианский или эриксоновский гипноз) и осуществляется в рамках НЛП. И то, что с Вами произошло - это чистого вида манипуляция. Вам хорошо бы познать методы манипуляций, чтобы впредь чувствовать их и не поддаваться.<br>С позитивной точки зрения ситуацию лучше воспринять как урок. Кстати, эту технику (не всегда в чистом виде) используют в тренингах продаж и в профессиональном плане Вам эти знания могут пригодиться в будущем.']"|
# Questions and answers from [psiholog.ru](https://www.psiholog.ru) site
The data is current as of 2023-12-16. The parser used to create the dataset can be found in [this repository](https://github.com/rogozinushka/psychologist_answers_parser).
Data structure:
- url - question url
- question_name - question title
- question_body - question to psychologist
- answers - psychologist answers
|url|question_name|question_body|answers|
|---|---|---|---|
|https://psiholog.ru/vopros/89|Как избавиться от страха и депрессии после цыганского гипноза?|спрашивает: Марина (Казань)Здравствуйте!Вчера подверглась цыганскому гипнозу..отдала им деньги,не понимаю как.произошло всё на работе(работаю продавцом не первый год).теперь остался только страх и опустошение..не знаю,как с этим справиться.ощущение,что схожу с ума и что все вокруг меня теперь считают немного сумашедшей.я как загнанная в клетку и до сих пор ощущаю присутствие этих цыганок(они работали вдвоём)|['Добрый вечер, Марина!<br>Ситация, произошедшая с Вами очень не приятная. И усугубляется тем, что Вы, видимо остались еще и должны? Если состояние еще актуально, то хорошо бы пройти антикризисную психотерапию.<br>Для информации: Цыганский гипноз хорошо описан как феномен у Милтона Эриксона (эрисонианский или эриксоновский гипноз) и осуществляется в рамках НЛП. И то, что с Вами произошло - это чистого вида манипуляция. Вам хорошо бы познать методы манипуляций, чтобы впредь чувствовать их и не поддаваться.<br>С позитивной точки зрения ситуацию лучше воспринять как урок. Кстати, эту технику (не всегда в чистом виде) используют в тренингах продаж и в профессиональном плане Вам эти знания могут пригодиться в будущем.']"| |
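In the example row above, the `answers` cell is rendered as a stringified Python list. Assuming the raw data stores it that way (an assumption worth verifying against the actual files), `ast.literal_eval` can safely recover the list of answers:

```python
import ast

# Hypothetical raw cell value, mimicking the format shown in the table above.
raw_answers = "['Первый ответ психолога.', 'Второй ответ.']"

# ast.literal_eval parses Python literals without executing arbitrary code.
answers = ast.literal_eval(raw_answers)
print(len(answers))  # 2
```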
jlbaker361/avatar_captioned-augmented | ---
dataset_info:
features:
- name: image
dtype: image
- name: src
dtype: string
- name: split
dtype: string
- name: id
dtype: int64
- name: caption
dtype: string
splits:
- name: train
num_bytes: 1616992137.25
num_examples: 6894
download_size: 1616179230
dataset_size: 1616992137.25
---
# Dataset Card for "avatar_captioned-augmented"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
totto | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- table-to-text
task_ids: []
paperswithcode_id: totto
pretty_name: ToTTo
dataset_info:
features:
- name: id
dtype: int32
- name: table_page_title
dtype: string
- name: table_webpage_url
dtype: string
- name: table_section_title
dtype: string
- name: table_section_text
dtype: string
- name: table
list:
list:
- name: column_span
dtype: int32
- name: is_header
dtype: bool
- name: row_span
dtype: int32
- name: value
dtype: string
- name: highlighted_cells
sequence:
sequence: int32
- name: example_id
dtype: string
- name: sentence_annotations
sequence:
- name: original_sentence
dtype: string
- name: sentence_after_deletion
dtype: string
- name: sentence_after_ambiguity
dtype: string
- name: final_sentence
dtype: string
- name: overlap_subset
dtype: string
splits:
- name: train
num_bytes: 652754806
num_examples: 120761
- name: validation
num_bytes: 47277039
num_examples: 7700
- name: test
num_bytes: 40883586
num_examples: 7700
download_size: 187724372
dataset_size: 740915431
---
# Dataset Card for ToTTo
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** None
- **Repository:** https://github.com/google-research-datasets/ToTTo
- **Paper:** https://arxiv.org/abs/2004.14373
- **Leaderboard:** https://github.com/google-research-datasets/ToTTo#leaderboard
- **Point of Contact:** [totto@google.com](mailto:totto@google.com)
### Dataset Summary
ToTTo is an open-domain English table-to-text dataset with over 120,000 training examples that proposes a controlled
generation task: given a Wikipedia table and a set of highlighted table cells, produce a one-sentence description.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```
{'example_id': '1762238357686640028',
'highlighted_cells': [[13, 2]],
'id': 0,
'overlap_subset': 'none',
'sentence_annotations': {'final_sentence': ['A Favorita is the telenovela aired in the 9 pm timeslot.'],
'original_sentence': ['It is also the first telenovela by the writer to air in the 9 pm timeslot.'],
'sentence_after_ambiguity': ['A Favorita is the telenovela aired in the 9 pm timeslot.'],
'sentence_after_deletion': ['It is the telenovela air in the 9 pm timeslot.']},
'table': [[{'column_span': 1, 'is_header': True, 'row_span': 1, 'value': '#'},
{'column_span': 1, 'is_header': True, 'row_span': 1, 'value': 'Run'},
{'column_span': 1, 'is_header': True, 'row_span': 1, 'value': 'Title'},
{'column_span': 1, 'is_header': True, 'row_span': 1, 'value': 'Chapters'},
{'column_span': 1, 'is_header': True, 'row_span': 1, 'value': 'Author'},
{'column_span': 1, 'is_header': True, 'row_span': 1, 'value': 'Director'},
{'column_span': 1,
'is_header': True,
'row_span': 1,
'value': 'Ibope Rating'}],
[{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '59'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'June 5, 2000— February 2, 2001'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Laços de Família'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '209'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Manoel Carlos'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Ricardo Waddington'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '44.9'}],
[{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '60'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'February 5, 2001— September 28, 2001'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Porto dos Milagres'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '203'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Aguinaldo Silva Ricardo Linhares'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Marcos Paulo Simões'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '44.6'}],
[{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '61'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'October 1, 2001— June 14, 2002'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'O Clone'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '221'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Glória Perez'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Jayme Monjardim'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '47.0'}],
[{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '62'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'June 17, 2002— February 14, 2003'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Esperança'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '209'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Benedito Ruy Barbosa'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Luiz Fernando'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '37.7'}],
[{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '63'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'February 17, 2003— October 10, 2003'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Mulheres Apaixonadas'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '203'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Manoel Carlos'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Ricardo Waddington'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '46.6'}],
[{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '64'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'October 13, 2003— June 25, 2004'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Celebridade'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '221'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Gilberto Braga'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Dennis Carvalho'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '46.0'}],
[{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '65'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'June 28, 2004— March 11, 2005'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Senhora do Destino'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '221'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Aguinaldo Silva'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Wolf Maya'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '50.4'}],
[{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '66'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'March 14, 2005— November 4, 2005'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'América'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '203'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Glória Perez'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Jayme Monjardim Marcos Schechtman'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '49.4'}],
[{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '67'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'November 7, 2005— July 7, 2006'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Belíssima'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '209'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Sílvio de Abreu'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Denise Saraceni'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '48.5'}],
[{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '68'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'July 10, 2006— March 2, 2007'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Páginas da Vida'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '203'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Manoel Carlos'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Jayme Monjardim'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '46.8'}],
[{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '69'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'March 5, 2007— September 28, 2007'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Paraíso Tropical'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '179'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Gilberto Braga Ricardo Linhares'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Dennis Carvalho'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '42.8'}],
[{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '70'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'October 1, 2007— May 31, 2008'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Duas Caras'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '210'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Aguinaldo Silva'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Wolf Maya'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '41.1'}],
[{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '71'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'June 2, 2008— January 16, 2009'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'A Favorita'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '197'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'João Emanuel Carneiro'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Ricardo Waddington'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '39.5'}],
[{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '72'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'January 19, 2009— September 11, 2009'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Caminho das Índias'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '203'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Glória Perez'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Marcos Schechtman'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '38.8'}],
[{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '73'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'September 14, 2009— May 14, 2010'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Viver a Vida'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '209'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Manoel Carlos'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Jayme Monjardim'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '35.6'}]],
'table_page_title': 'List of 8/9 PM telenovelas of Rede Globo',
'table_section_text': '',
'table_section_title': '2000s',
'table_webpage_url': 'http://en.wikipedia.org/wiki/List_of_8/9_PM_telenovelas_of_Rede_Globo'}
```
Please note that in the test set, sentence annotations are not available, and thus the values inside `sentence_annotations` can be safely ignored.
### Data Fields
- `table_webpage_url` (`str`): Table webpage URL.
- `table_page_title` (`str`): Table metadata with context about the table.
- `table_section_title` (`str`): Table metadata with context about the table.
- `table_section_text` (`str`): Table metadata with context about the table.
- `table` (`List[List[Dict]]`): The outer lists represents rows and the inner lists columns. Each Dict has the fields:
- `column_span` (`int`)
- `is_header` (`bool`)
- `row_span` (`int`)
- `value` (`str`)
- `highlighted_cells` (`List[[row_index, column_index]]`): Where each `[row_index, column_index]` pair indicates that `table[row_index][column_index]` is highlighted.
- `example_id` (`str`): A unique id for this example.
- `sentence_annotations`: Consists of the `original_sentence` and the sequence of revisions performed to produce the `final_sentence`.
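The pairing between `highlighted_cells` and `table` can be sketched as follows (the toy table and values below are illustrative, not taken from the dataset):

```python
# A toy table in the same nested-list-of-dicts layout as the `table` field:
# the outer list holds rows, the inner lists hold cells.
table = [
    [{'column_span': 1, 'is_header': True,  'row_span': 1, 'value': 'Title'},
     {'column_span': 1, 'is_header': True,  'row_span': 1, 'value': 'Year'}],
    [{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'O Clone'},
     {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '2001'}],
]

# Each [row_index, column_index] pair points at one cell of `table`.
highlighted_cells = [[1, 0]]

highlighted_values = [table[r][c]['value'] for r, c in highlighted_cells]
print(highlighted_values)  # ['O Clone']
```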
### Data Splits
```
DatasetDict({
train: Dataset({
features: ['id', 'table_page_title', 'table_webpage_url', 'table_section_title', 'table_section_text', 'table', 'highlighted_cells', 'example_id', 'sentence_annotations', 'overlap_subset'],
num_rows: 120761
})
validation: Dataset({
features: ['id', 'table_page_title', 'table_webpage_url', 'table_section_title', 'table_section_text', 'table', 'highlighted_cells', 'example_id', 'sentence_annotations', 'overlap_subset'],
num_rows: 7700
})
test: Dataset({
features: ['id', 'table_page_title', 'table_webpage_url', 'table_section_title', 'table_section_text', 'table', 'highlighted_cells', 'example_id', 'sentence_annotations', 'overlap_subset'],
num_rows: 7700
})
})
```
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{parikh2020totto,
title={{ToTTo}: A Controlled Table-To-Text Generation Dataset},
author={Parikh, Ankur P and Wang, Xuezhi and Gehrmann, Sebastian and Faruqui, Manaal and Dhingra, Bhuwan and Yang, Diyi and Das, Dipanjan},
booktitle={Proceedings of EMNLP},
year={2020}
}
```
### Contributions
Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset. |
aharley/rvl_cdip | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|iit_cdip
task_categories:
- image-classification
task_ids:
- multi-class-image-classification
paperswithcode_id: rvl-cdip
pretty_name: RVL-CDIP
viewer: false
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': letter
'1': form
'2': email
'3': handwritten
'4': advertisement
'5': scientific report
'6': scientific publication
'7': specification
'8': file folder
'9': news article
'10': budget
'11': invoice
'12': presentation
'13': questionnaire
'14': resume
'15': memo
splits:
- name: train
num_bytes: 38816373360
num_examples: 320000
- name: test
num_bytes: 4863300853
num_examples: 40000
- name: validation
num_bytes: 4868685208
num_examples: 40000
download_size: 38779484559
dataset_size: 48548359421
---
# Dataset Card for RVL-CDIP
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [The RVL-CDIP Dataset](https://www.cs.cmu.edu/~aharley/rvl-cdip/)
- **Repository:**
- **Paper:** [Evaluation of Deep Convolutional Nets for Document Image Classification and Retrieval](https://arxiv.org/abs/1502.07058)
- **Leaderboard:** [RVL-CDIP leaderboard](https://paperswithcode.com/dataset/rvl-cdip)
- **Point of Contact:** [Adam W. Harley](mailto:aharley@cmu.edu)
### Dataset Summary
The RVL-CDIP (Ryerson Vision Lab Complex Document Information Processing) dataset consists of 400,000 grayscale images in 16 classes, with 25,000 images per class. There are 320,000 training images, 40,000 validation images, and 40,000 test images. The images are sized so their largest dimension does not exceed 1000 pixels.
### Supported Tasks and Leaderboards
- `image-classification`: The goal of this task is to classify a given document into one of 16 classes representing document types (letter, form, etc.). The leaderboard for this task is available [here](https://paperswithcode.com/sota/document-image-classification-on-rvl-cdip).
### Languages
All the classes and documents use English as their primary language.
## Dataset Structure
### Data Instances
A sample from the training set is provided below :
```
{
'image': <PIL.TiffImagePlugin.TiffImageFile image mode=L size=754x1000 at 0x7F9A5E92CA90>,
'label': 15
}
```
### Data Fields
- `image`: A `PIL.Image.Image` object containing a document.
- `label`: an `int` classification label.
<details>
<summary>Class Label Mappings</summary>
```json
{
"0": "letter",
"1": "form",
"2": "email",
"3": "handwritten",
"4": "advertisement",
"5": "scientific report",
"6": "scientific publication",
"7": "specification",
"8": "file folder",
"9": "news article",
"10": "budget",
"11": "invoice",
"12": "presentation",
"13": "questionnaire",
"14": "resume",
"15": "memo"
}
```
</details>
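As a minimal illustration (not part of the official loader), the integer `label` can be mapped back to its class name with a plain dictionary built from the mapping above:

```python
# Class-label mapping copied from the table above.
ID2LABEL = {
    0: "letter", 1: "form", 2: "email", 3: "handwritten",
    4: "advertisement", 5: "scientific report", 6: "scientific publication",
    7: "specification", 8: "file folder", 9: "news article",
    10: "budget", 11: "invoice", 12: "presentation",
    13: "questionnaire", 14: "resume", 15: "memo",
}

# e.g. the training sample shown above has label 15.
sample = {"label": 15}
print(ID2LABEL[sample["label"]])  # memo
```

When loading through the `datasets` library, the same mapping is also available via the `label` feature's `int2str` method.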
### Data Splits
| |train|test|validation|
|----------|----:|----:|---------:|
|# of examples|320000|40000|40000|
The dataset was split in proportions similar to those of ImageNet.
- 320000 images were used for training,
- 40000 images for validation, and
- 40000 images for testing.
## Dataset Creation
### Curation Rationale
From the paper:
> This work makes available a new labelled subset of the IIT-CDIP collection, containing 400,000
document images across 16 categories, useful for training new CNNs for document analysis.
### Source Data
#### Initial Data Collection and Normalization
The same as in the IIT-CDIP collection.
#### Who are the source language producers?
The same as in the IIT-CDIP collection.
### Annotations
#### Annotation process
The same as in the IIT-CDIP collection.
#### Who are the annotators?
The same as in the IIT-CDIP collection.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The dataset was curated by the authors - Adam W. Harley, Alex Ufkes, and Konstantinos G. Derpanis.
### Licensing Information
RVL-CDIP is a subset of IIT-CDIP, which came from the [Legacy Tobacco Document Library](https://www.industrydocuments.ucsf.edu/tobacco/), for which license information can be found [here](https://www.industrydocuments.ucsf.edu/help/copyright/).
### Citation Information
```bibtex
@inproceedings{harley2015icdar,
title = {Evaluation of Deep Convolutional Nets for Document Image Classification and Retrieval},
author = {Adam W Harley and Alex Ufkes and Konstantinos G Derpanis},
  booktitle = {International Conference on Document Analysis and Recognition ({ICDAR})},
year = {2015}
}
```
### Contributions
Thanks to [@dnaveenr](https://github.com/dnaveenr) for adding this dataset. |
kunalsharma/fake-news | ---
license: cc
---
|
autoevaluate/autoeval-eval-project-emotion-2fbf3953-1266148530 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- emotion
eval_info:
task: multi_class_classification
model: autoevaluate/multi-class-classification
metrics: []
dataset_name: emotion
dataset_config: default
dataset_split: test
col_mapping:
text: text
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: autoevaluate/multi-class-classification
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
Kaludi/data-eurekaQA | ---
language:
- en
---
# Dataset for project: eurekaqa
This dataset was created for the eurekaQA project.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"context": "Colquhoun's utilitarian approach to the problem \u2013 using a cost-benefit argument to obtain support from businesses standing to benefit \u2013 allowed him to achieve what Henry and John Fielding failed for their Bow Street detectives. Unlike the stipendiary system at Bow Street, the river police were full-time, salaried officers prohibited from taking private fees. His other contribution was the concept of preventive policing; his police were to act as a highly visible deterrent to crime by their permanent presence on the Thames. Colquhoun's innovations were a critical development leading up to Robert Peel's \"new\" police three decades later.",
"question": "How did the Thames River Police pay their employees?",
"answers.text": [
"full-time, salaried officers prohibited from taking private fees"
],
"answers.answer_start": [
295
]
},
{
"context": "The small woolen dolls called Maniae, hung on the Compitalia shrines, were thought a symbolic replacement for child-sacrifice to Mania, as Mother of the Lares. The Junii took credit for its abolition by their ancestor L. Junius Brutus, traditionally Rome's Republican founder and first consul. Political or military executions were sometimes conducted in such a way that they evoked human sacrifice, whether deliberately or in the perception of witnesses; Marcus Marius Gratidianus was a gruesome example.",
"question": "Who was Mania in Roman religion?",
"answers.text": [
"Mother of the Lares"
],
"answers.answer_start": [
139
]
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"context": "Value(dtype='string', id=None)",
"question": "Value(dtype='string', id=None)",
"answers.text": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
"answers.answer_start": "Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None)"
}
```
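The `answers.answer_start` offsets index directly into `context`; a minimal sketch with a toy record (the text below is illustrative, not taken from the dataset):

```python
# A toy record in the same flattened SQuAD-style layout as the fields above.
record = {
    "context": "The cat sat on the mat.",
    "question": "What did the cat do?",
    "answers.text": ["sat"],
    "answers.answer_start": [8],
}

start = record["answers.answer_start"][0]
text = record["answers.text"][0]

# The answer_start value is a character offset into the context string,
# so slicing the context recovers the answer text.
span = record["context"][start:start + len(text)]
print(span)  # sat
```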
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 8996 |
| valid | 998 |
|
so_stacksample | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text2text-generation
task_ids:
- abstractive-qa
- open-domain-abstractive-qa
paperswithcode_id: null
pretty_name: SO StackSample
dataset_info:
- config_name: Answers
features:
- name: Id
dtype: int32
- name: OwnerUserId
dtype: int32
- name: CreationDate
dtype: string
- name: ParentId
dtype: int32
- name: Score
dtype: int32
- name: Body
dtype: string
splits:
- name: Answers
num_bytes: 1583232304
num_examples: 2014516
download_size: 0
dataset_size: 1583232304
- config_name: Questions
features:
- name: Id
dtype: int32
- name: OwnerUserId
dtype: int32
- name: CreationDate
dtype: string
- name: ClosedDate
dtype: string
- name: Score
dtype: int32
- name: Title
dtype: string
- name: Body
dtype: string
splits:
- name: Questions
num_bytes: 1913896893
num_examples: 1264216
download_size: 0
dataset_size: 1913896893
- config_name: Tags
features:
- name: Id
dtype: int32
- name: Tag
dtype: string
splits:
- name: Tags
num_bytes: 58816824
num_examples: 3750994
download_size: 0
dataset_size: 58816824
---
# Dataset Card for SO StackSample
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.kaggle.com/stackoverflow/stacksample
### Dataset Summary
Dataset with the text of 10% of questions and answers from the Stack Overflow programming Q&A website.
This is organized as three tables:
Questions table contains the title, body, creation date, closed date (if applicable), score, and owner ID for all non-deleted Stack Overflow questions whose Id is a multiple of 10.
Answers table contains the body, creation date, score, and owner ID for each of the answers to these questions. The ParentId column links back to the Questions table.
Tags table contains the tags on each of these questions.
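The `ParentId` link between the Answers and Questions tables can be sketched in plain Python (the toy rows below are illustrative, not real posts):

```python
from collections import defaultdict

# Toy rows in the same column layout as the Questions and Answers tables.
questions = [
    {"Id": 10, "Title": "How do I reverse a list?", "Score": 3},
    {"Id": 20, "Title": "What is a decorator?", "Score": 5},
]
answers = [
    {"Id": 101, "ParentId": 10, "Score": 7, "Body": "Use list slicing."},
    {"Id": 102, "ParentId": 10, "Score": 2, "Body": "Use reversed()."},
    {"Id": 201, "ParentId": 20, "Score": 4, "Body": "A callable wrapper."},
]

# Group each answer under the Id of the question it belongs to:
# ParentId in Answers matches Id in Questions.
answers_by_question = defaultdict(list)
for a in answers:
    answers_by_question[a["ParentId"]].append(a)

for q in questions:
    print(q["Title"], "->", len(answers_by_question[q["Id"]]), "answers")
```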
### Supported Tasks and Leaderboards
Example projects include:
- Identifying tags from question text
- Predicting whether questions will be upvoted, downvoted, or closed based on their text
- Predicting how long questions will take to answer
- Open Domain Q/A
### Languages
English (en) and Programming Languages.
## Dataset Structure
### Data Instances
For Answers:
```
{
"Id": { # Unique ID given to the Answer post
"feature_type": "Value",
"dtype": "int32"
},
"OwnerUserId": { # The UserID of the person who generated the Answer on StackOverflow. -1 means NA
"feature_type": "Value",
"dtype": "int32"
},
"CreationDate": { # The date the Answer was generated. Follows standard datetime format.
"feature_type": "Value",
"dtype": "string"
},
"ParentId": { # Refers to the `Id` of the Question the Answer belong to.
"feature_type": "Value",
"dtype": "int32"
},
"Score": { # The sum of up and down votes given to the Answer. Can be negative.
"feature_type": "Value",
"dtype": "int32"
},
"Body": { # The body content of the Answer.
"feature_type": "Value",
"dtype": "string"
}
}
```
For Questions:
```
{
"Id": { # Unique ID given to the Question post
"feature_type": "Value",
"dtype": "int32"
},
"OwnerUserId": { # The UserID of the person who generated the Question on StackOverflow. -1 means NA.
"feature_type": "Value",
"dtype": "int32"
},
"CreationDate": { # The date the Question was generated. Follows standard datetime format.
"feature_type": "Value",
"dtype": "string"
},
"ClosedDate": { # The date the Question was generated. Follows standard datetime format. Can be NA.
"feature_type": "Value",
"dtype": "string"
},
"Score": { # The sum of up and down votes given to the Question. Can be negative.
"feature_type": "Value",
"dtype": "int32"
},
"Title": { # The title of the Question.
"feature_type": "Value",
"dtype": "string"
},
"Body": { # The body content of the Question.
"feature_type": "Value",
"dtype": "string"
}
}
```
For Tags:
```
{
"Id": { # ID of the Question the tag belongs to
"feature_type": "Value",
"dtype": "int32"
},
"Tag": { # The tag name
"feature_type": "Value",
"dtype": "string"
}
}
```
### Data Fields
For Answers:
- `Id`: Unique ID given to the Answer post.
- `OwnerUserId`: The UserID of the person who generated the Answer on StackOverflow. -1 means NA.
- `CreationDate`: The date the Answer was generated. Follows standard datetime format.
- `ParentId`: Refers to the `Id` of the Question the Answer belongs to.
- `Score`: The sum of up and down votes given to the Answer. Can be negative.
- `Body`: The body content of the Answer.
For Questions:
- `Id`: Unique ID given to the Question post.
- `OwnerUserId`: The UserID of the person who generated the Question on StackOverflow. -1 means NA.
- `CreationDate`: The date the Question was generated. Follows standard datetime format.
- `ClosedDate`: The date the Question was closed. Follows standard datetime format. Can be NA.
- `Score`: The sum of up and down votes given to the Question. Can be negative.
- `Title`: The title of the Question.
- `Body`: The body content of the Question.
For Tags:
- `Id`: ID of the Question the tag belongs to.
- `Tag`: The tag name.
### Data Splits
The dataset has 3 splits:
- `Answers`
- `Questions`
- `Tags`
## Dataset Creation
### Curation Rationale
Datasets of all R questions and all Python questions are also available on Kaggle, but this dataset is especially useful for analyses that span many languages.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
StackOverflow Users.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
This data contains information that can identify individual users of StackOverflow. The information is self-reported.
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
StackOverflow answers are not guaranteed to be safe, secure, or correct. Some answers may even be purposefully insecure, as in [this answer](https://stackoverflow.com/a/35571883/5768407) from user [`zys`](https://stackoverflow.com/users/5259310/zys), which shows how to purposefully bypass Google Play Store security checks. Such answers can lead to biased models trained on this data and can further propagate unsafe and insecure programming practices.
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
All Stack Overflow user contributions are licensed under CC-BY-SA 3.0 with attribution required.
### Citation Information
The content is from Stack Overflow.
### Contributions
Thanks to [@ncoop57](https://github.com/ncoop57) for adding this dataset. |
SameeraDattaMyla/guanaco-llama2-1k | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1654448
num_examples: 1000
download_size: 966692
dataset_size: 1654448
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
BangumiBase/denpaonnatoseishunotoko | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Denpa Onna To Seishun Otoko
This is the image base of bangumi Denpa Onna to Seishun Otoko. We detected 15 characters and 1491 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned; they may actually be noisy.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 109 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 106 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 16 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 126 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 22 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 546 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 13 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 163 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 8 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 29 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 8 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 185 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 26 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 5 | [Download](13/dataset.zip) |  |  |  |  |  | N/A | N/A | N/A |
| noise | 129 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
ramprakash-vedaraman/backgroundremoval | ---
license: mit
---
|
chrisgru/blended_skill_talk_chatml | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 1198146
num_examples: 980
download_size: 0
dataset_size: 1198146
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "blended_skill_talk_chatml"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Leventk/yasa_son | ---
license: cc-by-sa-3.0
task_categories:
- question-answering
- summarization
language:
- en
size_categories:
- 10K<n<100K
--- |
hk-kaden-kim/uzh-hs23-etsp-eval-multi-subplot-bar | ---
dataset_info:
features:
- name: image
dtype: image
- name: caption
dtype: string
splits:
- name: test
num_bytes: 6192425.0
num_examples: 100
download_size: 6134847
dataset_size: 6192425.0
---
# Dataset Card for "uzh-hs23-etsp-eval-multi-subplot-bar"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
thiomajid/unique_java_methods | ---
dataset_info:
features:
- name: name
dtype: string
- name: new_args
dtype: string
- name: new_implementation
dtype: string
- name: new_return_type
dtype: string
- name: new_signature
dtype: string
- name: old_args
dtype: string
- name: old_implementation
dtype: string
- name: old_return_type
dtype: string
- name: old_signature
dtype: string
splits:
- name: train
num_bytes: 113254
num_examples: 339
download_size: 40281
dataset_size: 113254
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
AnonymousSite/QA_dataset_for_CCLR | ---
license: mit
---
# 1. Introduction
(1) As far as we know, this is the largest QA dataset for Chinese Construction Laws and Regulations (CCLR). For example, well-known datasets like c-eval typically contain only about 500 questions in a single domain, whereas our dataset specifically focuses on the CCLR domain and includes 6,339 questions.
(2) This dataset has 2,220 questions from Registered Constructor Qualification Examination (RCQE) and 4,119 self-designed questions covering 8 CCLR subdomains.
(3) The dataset is developed and maintained by Southeast University, University of Cambridge, and City University of Hong Kong.
(4) Make sure to read the specification and follow the rules.
# 2. Submission of your LLM’s answers
The answers can be submitted through https://forms.gle/bKLj6GgyxSnGenXS8. Please use “Template of answer submission.xls” in this repository to submit your LLM's answers.
# 3. Citation requirement
The reuse of this repository requires citation. Should any individual or entity utilize this repository without appropriate acknowledgment and citation, they do not have the right to use our data. We will take measures to protect our copyright, including, but not limited to, retracting their papers and initiating legal action.
# 4. LLM Leaderboard for CCLR QA
| Large Language Model | Contributors | Overall Scoring Rate | D1 | D2 | D3 | D4 | D5 | D6 | D7 | D8 | Ranking |
|-----|-----|-----|-----|-----|-----|-----|-----|-----|------|------|------|
| ERNIE-Bot 4.0 with knowledge graph | Baidu & The authors | 0.822 | 0.842 | 0.826 | 0.830 | 0.801 | 0.853 | 0.842 | 0.800 | 0.862 | 1 |
| ERNIE-Bot 4.0 | Baidu | 0.757 | 0.783 | 0.718 | 0.762 | 0.768 | 0.724 | 0.724 | 0.731 | 0.788 | 2 |
| GPT-4 with knowledge graph | OpenAI & The authors | 0.666 | 0.719 | 0.734 | 0.661 | 0.660 | 0.757 | 0.681 | 0.664 | 0.689 | 3 |
| GPT-4 | OpenAI | 0.532 | 0.602 | 0.490 | 0.556 | 0.536 | 0.570 | 0.519 | 0.514 | 0.566 | 4 |
| GPT-3.5-turbo with knowledge graph | OpenAI & The authors | 0.504 | 0.532 | 0.503 | 0.527 | 0.472 | 0.626 | 0.522 | 0.540 | 0.467 | 5 |
| ChatGLM3-6B with knowledge graph | Tsinghua, Zhipu.AI & The authors | 0.483 | 0.497 | 0.444 | 0.510 | 0.421 | 0.540 | 0.596 | 0.543 | 0.444 | 6 |
| Text-davinci-003 with knowledge graph | OpenAI & The authors | 0.482 | 0.507 | 0.521 | 0.470 | 0.478 | 0.582 | 0.516 | 0.510 | 0.516 | 7 |
| Qianfan-Chinese-Llama-2-7B with knowledge graph| Baidu & The authors | 0.474 | 0.474 | 0.486 | 0.494 | 0.469 | 0.570 | 0.529 | 0.514 | 0.470 | 8 |
| ChatGLM2-6B with knowledge graph | Tsinghua, Zhipu.AI & The authors | 0.472 | 0.471 | 0.469 | 0.488 | 0.464 | 0.517 | 0.507 | 0.528 | 0.462 | 9 |
| ChatGLM2-6B | Tsinghua & Zhipu.AI | 0.430 | 0.454 | 0.412 | 0.477 | 0.409 | 0.469 | 0.444 | 0.494 | 0.420 | 10 |
| ChatGLM3-6B | Tsinghua & Zhipu.AI | 0.399 | 0.452 | 0.389 | 0.415 | 0.356 | 0.412 | 0.389 | 0.416 | 0.399 | 11 |
| Qianfan-Chinese-Llama-2-7B | Baidu | 0.373 | 0.421 | 0.377 | 0.364 | 0.359 | 0.422 | 0.374 | 0.411 | 0.358 | 12 |
| GPT-3.5-turbo | OpenAI | 0.348 | 0.422 | 0.317 | 0.368 | 0.322 | 0.438 | 0.332 | 0.405 | 0.333 | 13 |
| Llama-2-70b with knowledge graph | MetaAI & The authors | 0.377 | 0.335 | 0.369 | 0.323 | 0.328 | 0.414 | 0.354 | 0.335 | 0.332 | 14 |
| Text-davinci-003 | OpenAI | 0.328 | 0.351 | 0.318 | 0.343 | 0.334 | 0.382 | 0.343 | 0.361 | 0.341 | 15 |
| Llama-2-70b | MetaAI | 0.284 | 0.284 | 0.338 | 0.255 | 0.316 | 0.313 | 0.291 | 0.299 | 0.293 | 16 | |
MingLiiii/Wiz70_Analysis_llama2_13b | ---
dataset_info:
features:
- name: data
struct:
- name: loss
sequence: float64
- name: ppl
sequence: float64
splits:
- name: origin
num_bytes: 5057436
num_examples: 70000
- name: reflect_instruction
num_bytes: 5040000
num_examples: 70000
- name: reflect_response
num_bytes: 5040000
num_examples: 70000
- name: reflect_both
num_bytes: 5040000
num_examples: 70000
download_size: 16869578
dataset_size: 20177436
---
# Dataset Card for "Wiz70_Analysis_llama2_13b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
liuyanchen1015/MULTI_VALUE_cola_that_infinitival_subclause | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 1157
num_examples: 15
- name: test
num_bytes: 793
num_examples: 10
- name: train
num_bytes: 10519
num_examples: 120
download_size: 11572
dataset_size: 12469
---
# Dataset Card for "MULTI_VALUE_cola_that_infinitival_subclause"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
WeijianQi/internal_state | ---
dataset_info:
features:
- name: statement
dtype: string
- name: label
dtype: int64
- name: category
dtype: string
splits:
- name: train
num_bytes: 453211
num_examples: 6330
download_size: 139190
dataset_size: 453211
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: apache-2.0
task_categories:
- text-classification
language:
- en
size_categories:
- 1K<n<10K
--- |
CreativeLang/wps_chinese_simile | ---
license: cc-by-2.0
---
# WPS - Chinese Simile
## Dataset Description
- **Paper:** [Writing Polishment with Simile: Task, Dataset and A Neural Approach](https://arxiv.org/abs/2012.08117)
## Dataset Summary
Chinese Simile (CS) Dataset
This dataset is constructed from online free-access fiction tagged with sci-fi, urban novel, love story, youth, etc.
All similes are extracted with rich regular expressions; the extraction precision is estimated at 92% by labelling 500 randomly extracted samples. Further data filtering and processing is encouraged!
The data split in the paper is as follows (you can find more details in the paper):
| Train | Dev | Test |
|---|---|---|
| 5,485,721 | 2,500 | 2,500 |
For the details of this dataset, we refer you to the original [paper](https://arxiv.org/abs/2012.08117).
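The regex-based extraction described above can be sketched in miniature. The dataset authors used much richer patterns than this; the single "像 … 一样" (like … as) pattern and the example sentence below are only illustrative assumptions.

```python
import re

# Toy sketch of regex-based simile extraction. The real extraction used
# much richer patterns; this single "像 ... 一样" pattern is illustrative.
SIMILE_RE = re.compile(r"像(.{1,10}?)一样")

def extract_similes(sentence):
    """Return the vehicles (comparison objects) matched in a sentence."""
    return SIMILE_RE.findall(sentence)

text = "她的笑容像阳光一样温暖。"  # "Her smile is as warm as sunshine."
print(extract_similes(text))  # ['阳光']
```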
Metadata in **Creative Language Toolkit ([CLTK](https://github.com/liyucheng09/cltk))**:
- CL Type: Simile
- Task Type: Detection, Generation
- Size: 5M
- Created time: 2021
- Language: zh
### Citation Information
If you find this dataset helpful, please cite:
```
@inproceedings{Zhang2020WritingPW,
title={Writing Polishment with Simile: Task, Dataset and A Neural Approach},
author={Jiayi Zhang and Z. Cui and Xiaoqiang Xia and Ya-Long Guo and Yanran Li and Chen Wei and Jianwei Cui},
booktitle={AAAI},
year={2021}
}
```
### Contributions
If you have any queries, please open an issue or direct your queries to [mail](mailto:yucheng.li@surrey.ac.uk). |
nlp-brin-id/unsup-fact | ---
license: mit
task_categories:
- text-classification
language:
- id
size_categories:
- 10K<n<100K
---
This dataset infers contradiction cases between facts and contents from the HOAX class subset in nlp-brin-id/id-hoax-report-merge-v2. <br/>
The subsets can be used as samples for interleaved batch sampling during the training stage of contrastive learning models. <br/>
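The interleaved batch sampling mentioned above can be sketched as follows. This is a hedged toy sketch, not the dataset authors' pipeline: the placeholder records and batch layout are assumptions for illustration, using the card's 'Fact' and 'Content' attributes.

```python
from itertools import islice

# Hedged sketch: interleave each record's 'Fact' and contradicting
# 'Content' strings so training batches pair them, as one might do for
# contrastive learning. Records below are placeholders, not real rows.

def interleaved_batches(records, batch_size=4):
    """Yield batches alternating Fact / Content items from the records."""
    stream = (item for r in records for item in (r["Fact"], r["Content"]))
    while True:
        batch = list(islice(stream, batch_size))
        if not batch:
            return
        yield batch

records = [
    {"Fact": "fact-1", "Content": "hoax-1"},
    {"Fact": "fact-2", "Content": "hoax-2"},
]
print(list(interleaved_batches(records, batch_size=2)))
# [['fact-1', 'hoax-1'], ['fact-2', 'hoax-2']]
```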
Attributes used: 'Content', 'Fact'.<br/> |
hnqh8888/sv_corpora_parliament_processed | ---
license: apache-2.0
---
|
yuan-sf63/chenyu_mask_72 | ---
dataset_info:
features:
- name: feature
dtype: string
- name: target
dtype: string
splits:
- name: train
num_bytes: 11235591.986864993
num_examples: 96749
- name: validation
num_bytes: 1248412.013135006
num_examples: 10750
download_size: 0
dataset_size: 12484004.0
---
# Dataset Card for "chenyu_mask_72"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
vikp/starcoder_filtered | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: code
dtype: string
- name: repo_path
dtype: string
splits:
- name: train
num_bytes: 88302798272
num_examples: 13368477
download_size: 1680002223
dataset_size: 88302798272
license: bigcode-openrail-m
---
# Dataset Card for "starcoder_filtered"
A version of the starcoder dataset filtered based on data quality. Data was labeled with a rater model, and low-ranking rows were removed. |
open-llm-leaderboard/details_jefferylovely__MoeLovely-13B | ---
pretty_name: Evaluation run of jefferylovely/MoeLovely-13B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [jefferylovely/MoeLovely-13B](https://huggingface.co/jefferylovely/MoeLovely-13B)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_jefferylovely__MoeLovely-13B\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-03-09T20:15:16.888132](https://huggingface.co/datasets/open-llm-leaderboard/details_jefferylovely__MoeLovely-13B/blob/main/results_2024-03-09T20-15-16.888132.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6546902728965655,\n\
\ \"acc_stderr\": 0.03211718616098461,\n \"acc_norm\": 0.6535419961890351,\n\
\ \"acc_norm_stderr\": 0.03280312459827673,\n \"mc1\": 0.6376988984088128,\n\
\ \"mc1_stderr\": 0.01682664689726226,\n \"mc2\": 0.7873533609473717,\n\
\ \"mc2_stderr\": 0.01360915944260026\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.7073378839590444,\n \"acc_stderr\": 0.013295916103619423,\n\
\ \"acc_norm\": 0.7372013651877133,\n \"acc_norm_stderr\": 0.012862523175351333\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.7344154550886277,\n\
\ \"acc_stderr\": 0.004407413723383404,\n \"acc_norm\": 0.8949412467635929,\n\
\ \"acc_norm_stderr\": 0.003060024474796982\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \
\ \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6370370370370371,\n\
\ \"acc_stderr\": 0.041539484047423976,\n \"acc_norm\": 0.6370370370370371,\n\
\ \"acc_norm_stderr\": 0.041539484047423976\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6907894736842105,\n \"acc_stderr\": 0.037610708698674805,\n\
\ \"acc_norm\": 0.6907894736842105,\n \"acc_norm_stderr\": 0.037610708698674805\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.64,\n\
\ \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.64,\n \
\ \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.690566037735849,\n \"acc_stderr\": 0.028450154794118637,\n\
\ \"acc_norm\": 0.690566037735849,\n \"acc_norm_stderr\": 0.028450154794118637\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7847222222222222,\n\
\ \"acc_stderr\": 0.03437079344106135,\n \"acc_norm\": 0.7847222222222222,\n\
\ \"acc_norm_stderr\": 0.03437079344106135\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.48,\n \"acc_stderr\": 0.050211673156867795,\n \
\ \"acc_norm\": 0.48,\n \"acc_norm_stderr\": 0.050211673156867795\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.55,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 0.55,\n \
\ \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621504,\n \
\ \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621504\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6589595375722543,\n\
\ \"acc_stderr\": 0.036146654241808254,\n \"acc_norm\": 0.6589595375722543,\n\
\ \"acc_norm_stderr\": 0.036146654241808254\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.43137254901960786,\n \"acc_stderr\": 0.04928099597287534,\n\
\ \"acc_norm\": 0.43137254901960786,\n \"acc_norm_stderr\": 0.04928099597287534\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.74,\n \"acc_stderr\": 0.04408440022768077,\n \"acc_norm\": 0.74,\n\
\ \"acc_norm_stderr\": 0.04408440022768077\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5702127659574469,\n \"acc_stderr\": 0.03236214467715564,\n\
\ \"acc_norm\": 0.5702127659574469,\n \"acc_norm_stderr\": 0.03236214467715564\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5,\n\
\ \"acc_stderr\": 0.047036043419179864,\n \"acc_norm\": 0.5,\n \
\ \"acc_norm_stderr\": 0.047036043419179864\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5793103448275863,\n \"acc_stderr\": 0.0411391498118926,\n\
\ \"acc_norm\": 0.5793103448275863,\n \"acc_norm_stderr\": 0.0411391498118926\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.4365079365079365,\n \"acc_stderr\": 0.025542846817400496,\n \"\
acc_norm\": 0.4365079365079365,\n \"acc_norm_stderr\": 0.025542846817400496\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4523809523809524,\n\
\ \"acc_stderr\": 0.044518079590553275,\n \"acc_norm\": 0.4523809523809524,\n\
\ \"acc_norm_stderr\": 0.044518079590553275\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621504,\n \
\ \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621504\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7806451612903226,\n\
\ \"acc_stderr\": 0.023540799358723295,\n \"acc_norm\": 0.7806451612903226,\n\
\ \"acc_norm_stderr\": 0.023540799358723295\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.5073891625615764,\n \"acc_stderr\": 0.035176035403610105,\n\
\ \"acc_norm\": 0.5073891625615764,\n \"acc_norm_stderr\": 0.035176035403610105\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.72,\n \"acc_stderr\": 0.04512608598542127,\n \"acc_norm\"\
: 0.72,\n \"acc_norm_stderr\": 0.04512608598542127\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7636363636363637,\n \"acc_stderr\": 0.03317505930009182,\n\
\ \"acc_norm\": 0.7636363636363637,\n \"acc_norm_stderr\": 0.03317505930009182\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7929292929292929,\n \"acc_stderr\": 0.028869778460267045,\n \"\
acc_norm\": 0.7929292929292929,\n \"acc_norm_stderr\": 0.028869778460267045\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8963730569948186,\n \"acc_stderr\": 0.02199531196364424,\n\
\ \"acc_norm\": 0.8963730569948186,\n \"acc_norm_stderr\": 0.02199531196364424\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6717948717948717,\n \"acc_stderr\": 0.02380763319865727,\n \
\ \"acc_norm\": 0.6717948717948717,\n \"acc_norm_stderr\": 0.02380763319865727\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.3592592592592593,\n \"acc_stderr\": 0.029252905927251972,\n \
\ \"acc_norm\": 0.3592592592592593,\n \"acc_norm_stderr\": 0.029252905927251972\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.680672268907563,\n \"acc_stderr\": 0.030283995525884396,\n \
\ \"acc_norm\": 0.680672268907563,\n \"acc_norm_stderr\": 0.030283995525884396\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3708609271523179,\n \"acc_stderr\": 0.03943966699183629,\n \"\
acc_norm\": 0.3708609271523179,\n \"acc_norm_stderr\": 0.03943966699183629\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8366972477064221,\n \"acc_stderr\": 0.015848255806501562,\n \"\
acc_norm\": 0.8366972477064221,\n \"acc_norm_stderr\": 0.015848255806501562\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5277777777777778,\n \"acc_stderr\": 0.0340470532865388,\n \"acc_norm\"\
: 0.5277777777777778,\n \"acc_norm_stderr\": 0.0340470532865388\n },\n\
\ \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8235294117647058,\n\
\ \"acc_stderr\": 0.026756401538078962,\n \"acc_norm\": 0.8235294117647058,\n\
\ \"acc_norm_stderr\": 0.026756401538078962\n },\n \"harness|hendrycksTest-high_school_world_history|5\"\
: {\n \"acc\": 0.8016877637130801,\n \"acc_stderr\": 0.02595502084162113,\n\
\ \"acc_norm\": 0.8016877637130801,\n \"acc_norm_stderr\": 0.02595502084162113\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6860986547085202,\n\
\ \"acc_stderr\": 0.031146796482972465,\n \"acc_norm\": 0.6860986547085202,\n\
\ \"acc_norm_stderr\": 0.031146796482972465\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7938931297709924,\n \"acc_stderr\": 0.03547771004159465,\n\
\ \"acc_norm\": 0.7938931297709924,\n \"acc_norm_stderr\": 0.03547771004159465\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7768595041322314,\n \"acc_stderr\": 0.03800754475228732,\n \"\
acc_norm\": 0.7768595041322314,\n \"acc_norm_stderr\": 0.03800754475228732\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.75,\n\
\ \"acc_stderr\": 0.04186091791394607,\n \"acc_norm\": 0.75,\n \
\ \"acc_norm_stderr\": 0.04186091791394607\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7730061349693251,\n \"acc_stderr\": 0.03291099578615769,\n\
\ \"acc_norm\": 0.7730061349693251,\n \"acc_norm_stderr\": 0.03291099578615769\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4017857142857143,\n\
\ \"acc_stderr\": 0.04653333146973646,\n \"acc_norm\": 0.4017857142857143,\n\
\ \"acc_norm_stderr\": 0.04653333146973646\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7961165048543689,\n \"acc_stderr\": 0.039891398595317706,\n\
\ \"acc_norm\": 0.7961165048543689,\n \"acc_norm_stderr\": 0.039891398595317706\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8803418803418803,\n\
\ \"acc_stderr\": 0.021262719400406974,\n \"acc_norm\": 0.8803418803418803,\n\
\ \"acc_norm_stderr\": 0.021262719400406974\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.73,\n \"acc_stderr\": 0.044619604333847394,\n \
\ \"acc_norm\": 0.73,\n \"acc_norm_stderr\": 0.044619604333847394\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8301404853128991,\n\
\ \"acc_stderr\": 0.013428186370608306,\n \"acc_norm\": 0.8301404853128991,\n\
\ \"acc_norm_stderr\": 0.013428186370608306\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7225433526011561,\n \"acc_stderr\": 0.024105712607754307,\n\
\ \"acc_norm\": 0.7225433526011561,\n \"acc_norm_stderr\": 0.024105712607754307\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.46145251396648046,\n\
\ \"acc_stderr\": 0.01667273126755226,\n \"acc_norm\": 0.46145251396648046,\n\
\ \"acc_norm_stderr\": 0.01667273126755226\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7189542483660131,\n \"acc_stderr\": 0.025738854797818733,\n\
\ \"acc_norm\": 0.7189542483660131,\n \"acc_norm_stderr\": 0.025738854797818733\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7138263665594855,\n\
\ \"acc_stderr\": 0.02567025924218893,\n \"acc_norm\": 0.7138263665594855,\n\
\ \"acc_norm_stderr\": 0.02567025924218893\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7253086419753086,\n \"acc_stderr\": 0.024836057868294677,\n\
\ \"acc_norm\": 0.7253086419753086,\n \"acc_norm_stderr\": 0.024836057868294677\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.5177304964539007,\n \"acc_stderr\": 0.02980873964223777,\n \
\ \"acc_norm\": 0.5177304964539007,\n \"acc_norm_stderr\": 0.02980873964223777\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.47196870925684486,\n\
\ \"acc_stderr\": 0.012750151802922436,\n \"acc_norm\": 0.47196870925684486,\n\
\ \"acc_norm_stderr\": 0.012750151802922436\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6727941176470589,\n \"acc_stderr\": 0.028501452860396556,\n\
\ \"acc_norm\": 0.6727941176470589,\n \"acc_norm_stderr\": 0.028501452860396556\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6813725490196079,\n \"acc_stderr\": 0.01885008469646872,\n \
\ \"acc_norm\": 0.6813725490196079,\n \"acc_norm_stderr\": 0.01885008469646872\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6727272727272727,\n\
\ \"acc_stderr\": 0.0449429086625209,\n \"acc_norm\": 0.6727272727272727,\n\
\ \"acc_norm_stderr\": 0.0449429086625209\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7306122448979592,\n \"acc_stderr\": 0.02840125202902294,\n\
\ \"acc_norm\": 0.7306122448979592,\n \"acc_norm_stderr\": 0.02840125202902294\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.835820895522388,\n\
\ \"acc_stderr\": 0.026193923544454125,\n \"acc_norm\": 0.835820895522388,\n\
\ \"acc_norm_stderr\": 0.026193923544454125\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.84,\n \"acc_stderr\": 0.03684529491774709,\n \
\ \"acc_norm\": 0.84,\n \"acc_norm_stderr\": 0.03684529491774709\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5662650602409639,\n\
\ \"acc_stderr\": 0.03858158940685515,\n \"acc_norm\": 0.5662650602409639,\n\
\ \"acc_norm_stderr\": 0.03858158940685515\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8421052631578947,\n \"acc_stderr\": 0.02796678585916089,\n\
\ \"acc_norm\": 0.8421052631578947,\n \"acc_norm_stderr\": 0.02796678585916089\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.6376988984088128,\n\
\ \"mc1_stderr\": 0.01682664689726226,\n \"mc2\": 0.7873533609473717,\n\
\ \"mc2_stderr\": 0.01360915944260026\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8760852407261247,\n \"acc_stderr\": 0.009260146295063712\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6914329037149356,\n \
\ \"acc_stderr\": 0.012723076049815896\n }\n}\n```"
repo_url: https://huggingface.co/jefferylovely/MoeLovely-13B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|arc:challenge|25_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|gsm8k|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hellaswag|10_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-09T20-15-16.888132.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-09T20-15-16.888132.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- '**/details_harness|winogrande|5_2024-03-09T20-15-16.888132.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-03-09T20-15-16.888132.parquet'
- config_name: results
data_files:
- split: 2024_03_09T20_15_16.888132
path:
- results_2024-03-09T20-15-16.888132.parquet
- split: latest
path:
- results_2024-03-09T20-15-16.888132.parquet
---
# Dataset Card for Evaluation run of jefferylovely/MoeLovely-13B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [jefferylovely/MoeLovely-13B](https://huggingface.co/jefferylovely/MoeLovely-13B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
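Since every run's split is named by its timestamp (e.g. `2024_03_09T20_15_16.888132`), the most recent run can also be identified programmatically. The sketch below shows one way to do this, assuming the split names follow the `%Y_%m_%dT%H_%M_%S.%f` pattern used in this card:

```python
from datetime import datetime

def latest_timestamped_split(split_names):
    """Return the most recent timestamped split name.

    Skips the "latest" alias and parses the remaining names with the
    %Y_%m_%dT%H_%M_%S.%f pattern (e.g. 2024_03_09T20_15_16.888132).
    """
    stamps = [s for s in split_names if s != "latest"]
    return max(stamps, key=lambda s: datetime.strptime(s, "%Y_%m_%dT%H_%M_%S.%f"))

# With a single run, the newest timestamped split is what "latest" points to.
print(latest_timestamped_split(["2024_03_09T20_15_16.888132", "latest"]))
```

In practice the `"latest"` split alias already resolves to the same data, so this helper is only needed when comparing multiple runs directly.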
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_jefferylovely__MoeLovely-13B",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-03-09T20:15:16.888132](https://huggingface.co/datasets/open-llm-leaderboard/details_jefferylovely__MoeLovely-13B/blob/main/results_2024-03-09T20-15-16.888132.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6546902728965655,
"acc_stderr": 0.03211718616098461,
"acc_norm": 0.6535419961890351,
"acc_norm_stderr": 0.03280312459827673,
"mc1": 0.6376988984088128,
"mc1_stderr": 0.01682664689726226,
"mc2": 0.7873533609473717,
"mc2_stderr": 0.01360915944260026
},
"harness|arc:challenge|25": {
"acc": 0.7073378839590444,
"acc_stderr": 0.013295916103619423,
"acc_norm": 0.7372013651877133,
"acc_norm_stderr": 0.012862523175351333
},
"harness|hellaswag|10": {
"acc": 0.7344154550886277,
"acc_stderr": 0.004407413723383404,
"acc_norm": 0.8949412467635929,
"acc_norm_stderr": 0.003060024474796982
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.32,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.32,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6370370370370371,
"acc_stderr": 0.041539484047423976,
"acc_norm": 0.6370370370370371,
"acc_norm_stderr": 0.041539484047423976
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6907894736842105,
"acc_stderr": 0.037610708698674805,
"acc_norm": 0.6907894736842105,
"acc_norm_stderr": 0.037610708698674805
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.64,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.64,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.690566037735849,
"acc_stderr": 0.028450154794118637,
"acc_norm": 0.690566037735849,
"acc_norm_stderr": 0.028450154794118637
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7847222222222222,
"acc_stderr": 0.03437079344106135,
"acc_norm": 0.7847222222222222,
"acc_norm_stderr": 0.03437079344106135
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.48,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.48,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.55,
"acc_stderr": 0.05,
"acc_norm": 0.55,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.32,
"acc_stderr": 0.04688261722621504,
"acc_norm": 0.32,
"acc_norm_stderr": 0.04688261722621504
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6589595375722543,
"acc_stderr": 0.036146654241808254,
"acc_norm": 0.6589595375722543,
"acc_norm_stderr": 0.036146654241808254
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.43137254901960786,
"acc_stderr": 0.04928099597287534,
"acc_norm": 0.43137254901960786,
"acc_norm_stderr": 0.04928099597287534
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.74,
"acc_stderr": 0.04408440022768077,
"acc_norm": 0.74,
"acc_norm_stderr": 0.04408440022768077
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5702127659574469,
"acc_stderr": 0.03236214467715564,
"acc_norm": 0.5702127659574469,
"acc_norm_stderr": 0.03236214467715564
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5,
"acc_stderr": 0.047036043419179864,
"acc_norm": 0.5,
"acc_norm_stderr": 0.047036043419179864
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5793103448275863,
"acc_stderr": 0.0411391498118926,
"acc_norm": 0.5793103448275863,
"acc_norm_stderr": 0.0411391498118926
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4365079365079365,
"acc_stderr": 0.025542846817400496,
"acc_norm": 0.4365079365079365,
"acc_norm_stderr": 0.025542846817400496
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4523809523809524,
"acc_stderr": 0.044518079590553275,
"acc_norm": 0.4523809523809524,
"acc_norm_stderr": 0.044518079590553275
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.32,
"acc_stderr": 0.04688261722621504,
"acc_norm": 0.32,
"acc_norm_stderr": 0.04688261722621504
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7806451612903226,
"acc_stderr": 0.023540799358723295,
"acc_norm": 0.7806451612903226,
"acc_norm_stderr": 0.023540799358723295
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5073891625615764,
"acc_stderr": 0.035176035403610105,
"acc_norm": 0.5073891625615764,
"acc_norm_stderr": 0.035176035403610105
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.72,
"acc_stderr": 0.04512608598542127,
"acc_norm": 0.72,
"acc_norm_stderr": 0.04512608598542127
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7636363636363637,
"acc_stderr": 0.03317505930009182,
"acc_norm": 0.7636363636363637,
"acc_norm_stderr": 0.03317505930009182
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7929292929292929,
"acc_stderr": 0.028869778460267045,
"acc_norm": 0.7929292929292929,
"acc_norm_stderr": 0.028869778460267045
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8963730569948186,
"acc_stderr": 0.02199531196364424,
"acc_norm": 0.8963730569948186,
"acc_norm_stderr": 0.02199531196364424
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6717948717948717,
"acc_stderr": 0.02380763319865727,
"acc_norm": 0.6717948717948717,
"acc_norm_stderr": 0.02380763319865727
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3592592592592593,
"acc_stderr": 0.029252905927251972,
"acc_norm": 0.3592592592592593,
"acc_norm_stderr": 0.029252905927251972
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.680672268907563,
"acc_stderr": 0.030283995525884396,
"acc_norm": 0.680672268907563,
"acc_norm_stderr": 0.030283995525884396
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3708609271523179,
"acc_stderr": 0.03943966699183629,
"acc_norm": 0.3708609271523179,
"acc_norm_stderr": 0.03943966699183629
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8366972477064221,
"acc_stderr": 0.015848255806501562,
"acc_norm": 0.8366972477064221,
"acc_norm_stderr": 0.015848255806501562
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5277777777777778,
"acc_stderr": 0.0340470532865388,
"acc_norm": 0.5277777777777778,
"acc_norm_stderr": 0.0340470532865388
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8235294117647058,
"acc_stderr": 0.026756401538078962,
"acc_norm": 0.8235294117647058,
"acc_norm_stderr": 0.026756401538078962
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8016877637130801,
"acc_stderr": 0.02595502084162113,
"acc_norm": 0.8016877637130801,
"acc_norm_stderr": 0.02595502084162113
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6860986547085202,
"acc_stderr": 0.031146796482972465,
"acc_norm": 0.6860986547085202,
"acc_norm_stderr": 0.031146796482972465
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7938931297709924,
"acc_stderr": 0.03547771004159465,
"acc_norm": 0.7938931297709924,
"acc_norm_stderr": 0.03547771004159465
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7768595041322314,
"acc_stderr": 0.03800754475228732,
"acc_norm": 0.7768595041322314,
"acc_norm_stderr": 0.03800754475228732
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.75,
"acc_stderr": 0.04186091791394607,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04186091791394607
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7730061349693251,
"acc_stderr": 0.03291099578615769,
"acc_norm": 0.7730061349693251,
"acc_norm_stderr": 0.03291099578615769
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4017857142857143,
"acc_stderr": 0.04653333146973646,
"acc_norm": 0.4017857142857143,
"acc_norm_stderr": 0.04653333146973646
},
"harness|hendrycksTest-management|5": {
"acc": 0.7961165048543689,
"acc_stderr": 0.039891398595317706,
"acc_norm": 0.7961165048543689,
"acc_norm_stderr": 0.039891398595317706
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8803418803418803,
"acc_stderr": 0.021262719400406974,
"acc_norm": 0.8803418803418803,
"acc_norm_stderr": 0.021262719400406974
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.73,
"acc_stderr": 0.044619604333847394,
"acc_norm": 0.73,
"acc_norm_stderr": 0.044619604333847394
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8301404853128991,
"acc_stderr": 0.013428186370608306,
"acc_norm": 0.8301404853128991,
"acc_norm_stderr": 0.013428186370608306
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7225433526011561,
"acc_stderr": 0.024105712607754307,
"acc_norm": 0.7225433526011561,
"acc_norm_stderr": 0.024105712607754307
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.46145251396648046,
"acc_stderr": 0.01667273126755226,
"acc_norm": 0.46145251396648046,
"acc_norm_stderr": 0.01667273126755226
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7189542483660131,
"acc_stderr": 0.025738854797818733,
"acc_norm": 0.7189542483660131,
"acc_norm_stderr": 0.025738854797818733
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7138263665594855,
"acc_stderr": 0.02567025924218893,
"acc_norm": 0.7138263665594855,
"acc_norm_stderr": 0.02567025924218893
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7253086419753086,
"acc_stderr": 0.024836057868294677,
"acc_norm": 0.7253086419753086,
"acc_norm_stderr": 0.024836057868294677
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.5177304964539007,
"acc_stderr": 0.02980873964223777,
"acc_norm": 0.5177304964539007,
"acc_norm_stderr": 0.02980873964223777
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.47196870925684486,
"acc_stderr": 0.012750151802922436,
"acc_norm": 0.47196870925684486,
"acc_norm_stderr": 0.012750151802922436
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6727941176470589,
"acc_stderr": 0.028501452860396556,
"acc_norm": 0.6727941176470589,
"acc_norm_stderr": 0.028501452860396556
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6813725490196079,
"acc_stderr": 0.01885008469646872,
"acc_norm": 0.6813725490196079,
"acc_norm_stderr": 0.01885008469646872
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6727272727272727,
"acc_stderr": 0.0449429086625209,
"acc_norm": 0.6727272727272727,
"acc_norm_stderr": 0.0449429086625209
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7306122448979592,
"acc_stderr": 0.02840125202902294,
"acc_norm": 0.7306122448979592,
"acc_norm_stderr": 0.02840125202902294
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.835820895522388,
"acc_stderr": 0.026193923544454125,
"acc_norm": 0.835820895522388,
"acc_norm_stderr": 0.026193923544454125
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.84,
"acc_stderr": 0.03684529491774709,
"acc_norm": 0.84,
"acc_norm_stderr": 0.03684529491774709
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5662650602409639,
"acc_stderr": 0.03858158940685515,
"acc_norm": 0.5662650602409639,
"acc_norm_stderr": 0.03858158940685515
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8421052631578947,
"acc_stderr": 0.02796678585916089,
"acc_norm": 0.8421052631578947,
"acc_norm_stderr": 0.02796678585916089
},
"harness|truthfulqa:mc|0": {
"mc1": 0.6376988984088128,
"mc1_stderr": 0.01682664689726226,
"mc2": 0.7873533609473717,
"mc2_stderr": 0.01360915944260026
},
"harness|winogrande|5": {
"acc": 0.8760852407261247,
"acc_stderr": 0.009260146295063712
},
"harness|gsm8k|5": {
"acc": 0.6914329037149356,
"acc_stderr": 0.012723076049815896
}
}
```
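The aggregate `acc` reported at the top of the card is an unweighted macro-average over the per-task scores. A minimal sketch of that aggregation, using three of the task accuracies shown above (values copied from the JSON; the variable names are our own):

```python
# Macro-average a few of the per-task accuracies listed above.
# The full leaderboard average covers all MMLU subtasks plus the other
# benchmarks; three tasks are enough to illustrate the arithmetic.
task_acc = {
    "college_physics": 0.43137254901960786,
    "computer_security": 0.74,
    "conceptual_physics": 0.5702127659574469,
}

macro_acc = sum(task_acc.values()) / len(task_acc)
print(f"macro acc over {len(task_acc)} tasks: {macro_acc:.4f}")
```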
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
testzhp/llama_7B_merge_model_hf | ---
license: apache-2.0
---
|
liuyanchen1015/MULTI_VALUE_cola_demonstrative_for_definite_articles | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 28072
num_examples: 367
- name: test
num_bytes: 28129
num_examples: 378
- name: train
num_bytes: 251217
num_examples: 3412
download_size: 141473
dataset_size: 307418
---
# Dataset Card for "MULTI_VALUE_cola_demonstrative_for_definite_articles"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
iamnguyen/law-qa | ---
dataset_info:
- config_name: 0-1000
features:
- name: title
dtype: string
- name: question
dtype: string
- name: content
dtype: string
- name: normalize_answer
dtype: string
splits:
- name: train
num_bytes: 11038878
num_examples: 1000
download_size: 3397979
dataset_size: 11038878
- config_name: 1000-2000
features:
- name: title
dtype: string
- name: question
dtype: string
- name: content
dtype: string
- name: normalize_answer
dtype: string
splits:
- name: train
num_bytes: 3519140
num_examples: 1000
download_size: 1066029
dataset_size: 3519140
configs:
- config_name: 0-1000
data_files:
- split: train
path: 0-1000/train-*
- config_name: 1000-2000
data_files:
- split: train
path: 1000-2000/train-*
---
|
DTU54DL/test_small_ | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
splits:
- name: train.360
num_bytes: 859588.0
num_examples: 2
download_size: 862898
dataset_size: 859588.0
---
# Dataset Card for "test_small_"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
thean/THFOOD-50 | ---
license: afl-3.0
pretty_name: Fine-Grained Thai Food Image Classification Datasets.
size_categories:
- 10K<n<100K
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': BitterMelonSoup
'1': BooPadPongali
'2': CurriedFishCake
'3': Dumpling
'4': EggsStewed
'5': FriedChicken
'6': FriedKale
'7': FriedMusselPancakes
'8': GaengJued
'9': GaengKeawWan
'10': GaiYang
'11': GoongObWoonSen
'12': GoongPao
'13': GrilledQquid
'14': HoyKraeng
'15': HoyLaiPrikPao
'16': Joke
'17': KaiJeowMooSaap
'18': KaiThoon
'19': KaoManGai
'20': KaoMooDang
'21': KhanomJeenNamYaKati
'22': KhaoMokGai
'23': KhaoMooTodGratiem
'24': KhaoNiewMaMuang
'25': KkaoKlukKaphi
'26': KorMooYang
'27': KuaKling
'28': KuayJab
'29': KuayTeowReua
'30': LarbMoo
'31': MassamanGai
'32': MooSatay
'33': NamTokMoo
'34': PadPakBung
'35': PadPakRuamMit
'36': PadThai
'37': PadYordMala
'38': PhatKaphrao
'39': PorkStickyNoodles
'40': Roast_duck
'41': Roast_fish
'42': Somtam
'43': SonInLawEggs
'44': StewedPorkLeg
'45': Suki
'46': TomKhaGai
'47': TomYumGoong
'48': YamWoonSen
'49': Yentafo
splits:
- name: train
num_bytes: 1790570028.695
num_examples: 12065
- name: test
num_bytes: 394634675.44
num_examples: 2105
- name: val
num_bytes: 295187724.2
num_examples: 1600
download_size: 3125698089
dataset_size: 2480392428.3349996
---
# THFOOD-50
Fine-Grained Thai Food Image Classification Datasets
THFOOD-50 contains 15,770 images of 50 famous Thai dishes.
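The split sizes in the `dataset_info` metadata above can be sanity-checked against the stated total; a quick sketch (example counts copied from the metadata):

```python
# Split example counts from the dataset_info metadata above.
splits = {"train": 12065, "test": 2105, "val": 1600}

total = sum(splits.values())
shares = {name: n / total for name, n in splits.items()}

print(f"total images: {total}")  # matches the 15,770 stated above
for name, share in shares.items():
    print(f"{name}: {share:.1%}")
```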
## Download:
[THFOOD-50 v1 on Google Drive](https://drive.google.com/file/d/1CuNO2e77ZTk7mDfv3XujYXuUwiMwlUQI/view?usp=sharing)
## License
THFOOD-50 is available for **non-commercial research/educational** use only.
## Citation
If you use the THFOOD-50 dataset in your research, please cite our papers:
@article{termritthikun2017nu,
title="{NU-InNet: Thai food image recognition using convolutional neural networks on smartphone}",
author={Termritthikun, Chakkrit and Muneesawang, Paisarn and Kanprachar, Surachet},
journal={Journal of Telecommunication, Electronic and Computer Engineering (JTEC)},
volume={9},
number={2-6},
pages={63--67},
year={2017}
}
@inproceedings{termritthikun2017accuracy,
title="{Accuracy improvement of Thai food image recognition using deep convolutional neural networks}",
author={Termritthikun, Chakkrit and Kanprachar, Surachet},
booktitle={2017 international electrical engineering congress (IEECON)},
pages={1--4},
year={2017},
organization={IEEE}
}
@article{termritthikun2018nu,
title="{Nu-ResNet: Deep residual networks for Thai food image recognition}",
author={Termritthikun, Chakkrit and Kanprachar, Surachet},
journal={Journal of Telecommunication, Electronic and Computer Engineering (JTEC)},
volume={10},
number={1-4},
pages={29--33},
year={2018}
}
## Paper
1. NU-InNet: Thai food image recognition using convolutional neural networks on smartphone [Paper](https://journal.utem.edu.my/index.php/jtec/article/download/2436/1521) [Code](https://github.com/chakkritte/NU-InNet)
2. Accuracy improvement of Thai food image recognition using deep convolutional neural networks [Paper](https://ieeexplore.ieee.org/abstract/document/8075874/)
3. Nu-ResNet: Deep residual networks for Thai food image recognition [Paper](https://journal.utem.edu.my/index.php/jtec/article/download/3572/2467) [Code](https://github.com/chakkritte/NU-ResNet)
#### Examples of Thai food images in the THFOOD-50 dataset

**NOTE**: I do not own this dataset, but I took the liberty of uploading it for the community. |
sasikumars/sandeepspace | ---
task_categories:
- question-answering
pretty_name: Space Science
size_categories:
- 100K<n<1M
--- |
kpriyanshu256/semeval-task-8-a-mono-v2-test-paraphrase-2 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
- name: model
dtype: string
- name: source
dtype: string
- name: id
dtype: int64
- name: paraphrase
dtype: string
- name: paraphrase2
dtype: string
splits:
- name: test
num_bytes: 23387836
num_examples: 5000
download_size: 13353734
dataset_size: 23387836
---
# Dataset Card for "semeval-task-8-a-mono-v2-test-paraphrase-2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
MartinKu/bookcorpus_SV | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 442719314
num_examples: 22412930
download_size: 284591599
dataset_size: 442719314
---
# Dataset Card for "bookcorpus_SV"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
FaalSa/data3 | ---
dataset_info:
features:
- name: start
dtype: timestamp[s]
- name: target
sequence: float32
- name: item_id
dtype: string
- name: feat_static_cat
sequence: uint64
splits:
- name: train
num_bytes: 17309
num_examples: 1
- name: validation
num_bytes: 17789
num_examples: 1
- name: test
num_bytes: 18269
num_examples: 1
download_size: 20335
dataset_size: 53367
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
Adabs/ai_trader_sample | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 11650805.0
num_examples: 50
download_size: 11553680
dataset_size: 11650805.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Matheyyus/personagem2 | ---
license: openrail
---
|
JayChauhan99/llama2-political-guanaco-small | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 988182
num_examples: 750
download_size: 550394
dataset_size: 988182
---
|
CyberHarem/oberon_fgo | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of oberon_fgo
This is the dataset of oberon_fgo, containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 200 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 425 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 200 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 200 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 200 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 200 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 200 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 425 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 425 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 425 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
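The `stage3-*` variants cap the image's shorter side at a pixel limit. A hedged sketch of that resize rule (a hypothetical helper, not the crawler's actual code):

```python
def cap_shorter_side(width: int, height: int, limit: int = 640) -> tuple[int, int]:
    """Scale (width, height) down so the shorter side does not exceed `limit`.

    Images already within the limit are returned unchanged; the aspect
    ratio is preserved up to rounding.
    """
    shorter = min(width, height)
    if shorter <= limit:
        return width, height
    scale = limit / shorter
    return round(width * scale), round(height * scale)

print(cap_shorter_side(1280, 960))  # shorter side 960 -> scaled down to 640
print(cap_shorter_side(500, 400))   # already within the limit, unchanged
```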
|
Nexdata/Italian_Conversational_Speech_Data_by_Mobile_Phone | ---
task_categories:
- conversational
language:
- it
---
# Dataset Card for Nexdata/Italian_Conversational_Speech_Data_by_Mobile_Phone
## Description
About 700 speakers participated in the recording and conducted face-to-face conversations in a natural way. They held free discussions on a number of given topics spanning a wide range of fields; the speech is natural and fluent, consistent with real dialogue scenarios. The text is transcribed manually, with high accuracy.
For more details, please refer to the link: https://www.nexdata.ai/datasets/1178?source=Huggingface
## Format
16kHz, 16bit, uncompressed wav, mono channel;
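The format above pins down the raw data rate exactly; a quick back-of-the-envelope sketch:

```python
# 16 kHz, 16-bit, mono, uncompressed PCM (wav payload, ignoring headers).
sample_rate = 16_000      # samples per second
bits_per_sample = 16
channels = 1

bytes_per_second = sample_rate * bits_per_sample // 8 * channels
mb_per_minute = bytes_per_second * 60 / 1_000_000

print(f"{bytes_per_second} B/s, ~{mb_per_minute:.2f} MB per minute of audio")
```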
## Recording Environment
quiet indoor environment, without echo;
## Recording content
dozens of topics are specified, and the speakers converse on those topics while being recorded;
## Demographics
About 700 people.
## Annotation
annotated with transcription text, speaker identification, and gender
## Device
Android mobile phone, iPhone;
## Language
Italian
## Application scenarios
speech recognition; voiceprint recognition;
## Accuracy rate
the word accuracy rate is not less than 98%
# Licensing Information
Commercial License
|
estebancrop/estebancrop | ---
license: openrail
---
|
svjack/pokemon-blip-captions-en-zh | ---
license: cc-by-nc-sa-4.0
annotations_creators:
- machine-generated
language:
- en
- zh
language_creators:
- other
multilinguality:
- multilingual
pretty_name: 'Pokémon BLIP captions'
size_categories:
- n<1K
source_datasets:
- huggan/few-shot-pokemon
tags: []
task_categories:
- text-to-image
task_ids: []
---
# Dataset Card for Pokémon BLIP captions with English and Chinese.
Dataset used to train a Pokémon text-to-image model, adding a Chinese column to [Pokémon BLIP captions](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions).
BLIP-generated captions for Pokémon images from the Few Shot Pokémon dataset introduced by *Towards Faster and Stabilized GAN Training for High-fidelity Few-shot Image Synthesis* (FastGAN). The original images were obtained from FastGAN-pytorch and captioned with the pre-trained BLIP model.
Each row of the dataset contains `image`, `en_text` (caption in English), and `zh_text` (caption in Chinese) keys. `image` is a varying-size PIL JPEG, and the text fields are the accompanying captions. Only a train split is provided.
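The row layout described above can be sketched as a plain mapping; a hypothetical example row (caption strings invented for illustration) showing how the two caption columns pair up:

```python
# A hypothetical row with the three keys described above; the real `image`
# value is a PIL JPEG, stubbed here as None.
row = {
    "image": None,
    "en_text": "a drawing of a green pokemon with red eyes",
    "zh_text": "一只红眼睛的绿色宝可梦的画",
}

# Collect the bilingual caption pair, keyed by language code.
captions = {k.split("_")[0]: v for k, v in row.items() if k.endswith("_text")}
print(captions["en"], "/", captions["zh"])
```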
The Chinese captions were translated with [DeepL](https://www.deepl.com/translator) |