| datasetId | card |
|---|---|
TinyPixel/air-2 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 45835149
num_examples: 27729
download_size: 22872260
dataset_size: 45835149
---
# Dataset Card for "air-2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
TheFinAI/flare-tatqa | ---
dataset_info:
features:
- name: id
dtype: string
- name: query
dtype: string
- name: answer
dtype: string
- name: text
dtype: string
splits:
- name: test
num_bytes: 3510146
num_examples: 1668
download_size: 0
dataset_size: 3510146
---
# Dataset Card for "flare-tatqa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
rcfelipe/jigsaw_360 | ---
license: apache-2.0
---
|
jdoerr/medicare_faq_ssa | ---
license: mit
---
|
open-llm-leaderboard/details_Weyaxi__MetaMath-neural-chat-7b-v3-2-Ties | ---
pretty_name: Evaluation run of Weyaxi/MetaMath-neural-chat-7b-v3-2-Ties
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Weyaxi/MetaMath-neural-chat-7b-v3-2-Ties](https://huggingface.co/Weyaxi/MetaMath-neural-chat-7b-v3-2-Ties)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can, for instance, do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Weyaxi__MetaMath-neural-chat-7b-v3-2-Ties\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-12-09T16:52:16.188783](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__MetaMath-neural-chat-7b-v3-2-Ties/blob/main/results_2023-12-09T16-52-16.188783.json) (note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6262329269588028,\n\
\ \"acc_stderr\": 0.03265531717656403,\n \"acc_norm\": 0.6261458795179596,\n\
\ \"acc_norm_stderr\": 0.033325096066245945,\n \"mc1\": 0.3623011015911873,\n\
\ \"mc1_stderr\": 0.016826646897262255,\n \"mc2\": 0.5206285653012832,\n\
\ \"mc2_stderr\": 0.015833320867777365\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6109215017064846,\n \"acc_stderr\": 0.014247309976045607,\n\
\ \"acc_norm\": 0.6348122866894198,\n \"acc_norm_stderr\": 0.014070265519268802\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6538538139812786,\n\
\ \"acc_stderr\": 0.004747682003491466,\n \"acc_norm\": 0.8234415455088627,\n\
\ \"acc_norm_stderr\": 0.00380515334471309\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.24,\n \"acc_stderr\": 0.04292346959909283,\n \
\ \"acc_norm\": 0.24,\n \"acc_norm_stderr\": 0.04292346959909283\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6074074074074074,\n\
\ \"acc_stderr\": 0.04218506215368881,\n \"acc_norm\": 0.6074074074074074,\n\
\ \"acc_norm_stderr\": 0.04218506215368881\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6842105263157895,\n \"acc_stderr\": 0.0378272898086547,\n\
\ \"acc_norm\": 0.6842105263157895,\n \"acc_norm_stderr\": 0.0378272898086547\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.62,\n\
\ \"acc_stderr\": 0.048783173121456316,\n \"acc_norm\": 0.62,\n \
\ \"acc_norm_stderr\": 0.048783173121456316\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6754716981132075,\n \"acc_stderr\": 0.02881561571343211,\n\
\ \"acc_norm\": 0.6754716981132075,\n \"acc_norm_stderr\": 0.02881561571343211\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7152777777777778,\n\
\ \"acc_stderr\": 0.037738099906869334,\n \"acc_norm\": 0.7152777777777778,\n\
\ \"acc_norm_stderr\": 0.037738099906869334\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.45,\n \"acc_stderr\": 0.05,\n \"acc_norm\"\
: 0.45,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_computer_science|5\"\
: {\n \"acc\": 0.53,\n \"acc_stderr\": 0.05016135580465919,\n \
\ \"acc_norm\": 0.53,\n \"acc_norm_stderr\": 0.05016135580465919\n \
\ },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.39,\n\
\ \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\": 0.39,\n \
\ \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-college_medicine|5\"\
: {\n \"acc\": 0.630057803468208,\n \"acc_stderr\": 0.0368122963339432,\n\
\ \"acc_norm\": 0.630057803468208,\n \"acc_norm_stderr\": 0.0368122963339432\n\
\ },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.3627450980392157,\n\
\ \"acc_stderr\": 0.047840607041056527,\n \"acc_norm\": 0.3627450980392157,\n\
\ \"acc_norm_stderr\": 0.047840607041056527\n },\n \"harness|hendrycksTest-computer_security|5\"\
: {\n \"acc\": 0.76,\n \"acc_stderr\": 0.042923469599092816,\n \
\ \"acc_norm\": 0.76,\n \"acc_norm_stderr\": 0.042923469599092816\n \
\ },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\":\
\ 0.5787234042553191,\n \"acc_stderr\": 0.03227834510146267,\n \"\
acc_norm\": 0.5787234042553191,\n \"acc_norm_stderr\": 0.03227834510146267\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4473684210526316,\n\
\ \"acc_stderr\": 0.046774730044911984,\n \"acc_norm\": 0.4473684210526316,\n\
\ \"acc_norm_stderr\": 0.046774730044911984\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5103448275862069,\n \"acc_stderr\": 0.04165774775728762,\n\
\ \"acc_norm\": 0.5103448275862069,\n \"acc_norm_stderr\": 0.04165774775728762\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.3783068783068783,\n \"acc_stderr\": 0.024976954053155254,\n \"\
acc_norm\": 0.3783068783068783,\n \"acc_norm_stderr\": 0.024976954053155254\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4126984126984127,\n\
\ \"acc_stderr\": 0.04403438954768177,\n \"acc_norm\": 0.4126984126984127,\n\
\ \"acc_norm_stderr\": 0.04403438954768177\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.35,\n \"acc_stderr\": 0.047937248544110196,\n \
\ \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.047937248544110196\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.7419354838709677,\n \"acc_stderr\": 0.024892469172462836,\n \"\
acc_norm\": 0.7419354838709677,\n \"acc_norm_stderr\": 0.024892469172462836\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.46798029556650245,\n \"acc_stderr\": 0.035107665979592154,\n \"\
acc_norm\": 0.46798029556650245,\n \"acc_norm_stderr\": 0.035107665979592154\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.65,\n \"acc_stderr\": 0.047937248544110196,\n \"acc_norm\"\
: 0.65,\n \"acc_norm_stderr\": 0.047937248544110196\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7878787878787878,\n \"acc_stderr\": 0.031922715695483,\n\
\ \"acc_norm\": 0.7878787878787878,\n \"acc_norm_stderr\": 0.031922715695483\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7777777777777778,\n \"acc_stderr\": 0.02962022787479048,\n \"\
acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.02962022787479048\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8756476683937824,\n \"acc_stderr\": 0.023814477086593552,\n\
\ \"acc_norm\": 0.8756476683937824,\n \"acc_norm_stderr\": 0.023814477086593552\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6384615384615384,\n \"acc_stderr\": 0.024359581465396997,\n\
\ \"acc_norm\": 0.6384615384615384,\n \"acc_norm_stderr\": 0.024359581465396997\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.32592592592592595,\n \"acc_stderr\": 0.02857834836547308,\n \
\ \"acc_norm\": 0.32592592592592595,\n \"acc_norm_stderr\": 0.02857834836547308\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.680672268907563,\n \"acc_stderr\": 0.030283995525884396,\n \
\ \"acc_norm\": 0.680672268907563,\n \"acc_norm_stderr\": 0.030283995525884396\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.33112582781456956,\n \"acc_stderr\": 0.038425817186598696,\n \"\
acc_norm\": 0.33112582781456956,\n \"acc_norm_stderr\": 0.038425817186598696\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8348623853211009,\n \"acc_stderr\": 0.015919557829976054,\n \"\
acc_norm\": 0.8348623853211009,\n \"acc_norm_stderr\": 0.015919557829976054\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5324074074074074,\n \"acc_stderr\": 0.03402801581358966,\n \"\
acc_norm\": 0.5324074074074074,\n \"acc_norm_stderr\": 0.03402801581358966\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.7892156862745098,\n \"acc_stderr\": 0.028626547912437406,\n \"\
acc_norm\": 0.7892156862745098,\n \"acc_norm_stderr\": 0.028626547912437406\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7679324894514767,\n \"acc_stderr\": 0.02747974455080851,\n \
\ \"acc_norm\": 0.7679324894514767,\n \"acc_norm_stderr\": 0.02747974455080851\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6771300448430493,\n\
\ \"acc_stderr\": 0.03138147637575499,\n \"acc_norm\": 0.6771300448430493,\n\
\ \"acc_norm_stderr\": 0.03138147637575499\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7175572519083969,\n \"acc_stderr\": 0.03948406125768361,\n\
\ \"acc_norm\": 0.7175572519083969,\n \"acc_norm_stderr\": 0.03948406125768361\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7520661157024794,\n \"acc_stderr\": 0.03941897526516302,\n \"\
acc_norm\": 0.7520661157024794,\n \"acc_norm_stderr\": 0.03941897526516302\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7685185185185185,\n\
\ \"acc_stderr\": 0.04077494709252626,\n \"acc_norm\": 0.7685185185185185,\n\
\ \"acc_norm_stderr\": 0.04077494709252626\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7055214723926381,\n \"acc_stderr\": 0.03581165790474082,\n\
\ \"acc_norm\": 0.7055214723926381,\n \"acc_norm_stderr\": 0.03581165790474082\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.45535714285714285,\n\
\ \"acc_stderr\": 0.047268355537191,\n \"acc_norm\": 0.45535714285714285,\n\
\ \"acc_norm_stderr\": 0.047268355537191\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7864077669902912,\n \"acc_stderr\": 0.040580420156460344,\n\
\ \"acc_norm\": 0.7864077669902912,\n \"acc_norm_stderr\": 0.040580420156460344\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8717948717948718,\n\
\ \"acc_stderr\": 0.021901905115073336,\n \"acc_norm\": 0.8717948717948718,\n\
\ \"acc_norm_stderr\": 0.021901905115073336\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8071519795657727,\n\
\ \"acc_stderr\": 0.014108533515757431,\n \"acc_norm\": 0.8071519795657727,\n\
\ \"acc_norm_stderr\": 0.014108533515757431\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.6763005780346821,\n \"acc_stderr\": 0.025190181327608408,\n\
\ \"acc_norm\": 0.6763005780346821,\n \"acc_norm_stderr\": 0.025190181327608408\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.4324022346368715,\n\
\ \"acc_stderr\": 0.01656897123354861,\n \"acc_norm\": 0.4324022346368715,\n\
\ \"acc_norm_stderr\": 0.01656897123354861\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.6928104575163399,\n \"acc_stderr\": 0.02641560191438899,\n\
\ \"acc_norm\": 0.6928104575163399,\n \"acc_norm_stderr\": 0.02641560191438899\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7009646302250804,\n\
\ \"acc_stderr\": 0.026003301117885142,\n \"acc_norm\": 0.7009646302250804,\n\
\ \"acc_norm_stderr\": 0.026003301117885142\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.6944444444444444,\n \"acc_stderr\": 0.025630824975621358,\n\
\ \"acc_norm\": 0.6944444444444444,\n \"acc_norm_stderr\": 0.025630824975621358\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.450354609929078,\n \"acc_stderr\": 0.029680105565029036,\n \
\ \"acc_norm\": 0.450354609929078,\n \"acc_norm_stderr\": 0.029680105565029036\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4445893089960887,\n\
\ \"acc_stderr\": 0.01269157579265712,\n \"acc_norm\": 0.4445893089960887,\n\
\ \"acc_norm_stderr\": 0.01269157579265712\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6286764705882353,\n \"acc_stderr\": 0.029349803139765873,\n\
\ \"acc_norm\": 0.6286764705882353,\n \"acc_norm_stderr\": 0.029349803139765873\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6486928104575164,\n \"acc_stderr\": 0.01931267606578655,\n \
\ \"acc_norm\": 0.6486928104575164,\n \"acc_norm_stderr\": 0.01931267606578655\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6363636363636364,\n\
\ \"acc_stderr\": 0.04607582090719976,\n \"acc_norm\": 0.6363636363636364,\n\
\ \"acc_norm_stderr\": 0.04607582090719976\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7142857142857143,\n \"acc_stderr\": 0.028920583220675606,\n\
\ \"acc_norm\": 0.7142857142857143,\n \"acc_norm_stderr\": 0.028920583220675606\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7860696517412935,\n\
\ \"acc_stderr\": 0.02899690969332891,\n \"acc_norm\": 0.7860696517412935,\n\
\ \"acc_norm_stderr\": 0.02899690969332891\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.86,\n \"acc_stderr\": 0.03487350880197771,\n \
\ \"acc_norm\": 0.86,\n \"acc_norm_stderr\": 0.03487350880197771\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5120481927710844,\n\
\ \"acc_stderr\": 0.03891364495835817,\n \"acc_norm\": 0.5120481927710844,\n\
\ \"acc_norm_stderr\": 0.03891364495835817\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8304093567251462,\n \"acc_stderr\": 0.02878210810540171,\n\
\ \"acc_norm\": 0.8304093567251462,\n \"acc_norm_stderr\": 0.02878210810540171\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3623011015911873,\n\
\ \"mc1_stderr\": 0.016826646897262255,\n \"mc2\": 0.5206285653012832,\n\
\ \"mc2_stderr\": 0.015833320867777365\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7687450670876085,\n \"acc_stderr\": 0.01185004012485051\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6823351023502654,\n \
\ \"acc_stderr\": 0.012824066621488836\n }\n}\n```"
repo_url: https://huggingface.co/Weyaxi/MetaMath-neural-chat-7b-v3-2-Ties
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|arc:challenge|25_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|gsm8k|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hellaswag|10_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-12-09T16-52-16.188783.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-management|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-virology|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|truthfulqa:mc|0_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-12-09T16-52-16.188783.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- '**/details_harness|winogrande|5_2023-12-09T16-52-16.188783.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-12-09T16-52-16.188783.parquet'
- config_name: results
data_files:
- split: 2023_12_09T16_52_16.188783
path:
- results_2023-12-09T16-52-16.188783.parquet
- split: latest
path:
- results_2023-12-09T16-52-16.188783.parquet
---
# Dataset Card for Evaluation run of Weyaxi/MetaMath-neural-chat-7b-v3-2-Ties
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Weyaxi/MetaMath-neural-chat-7b-v3-2-Ties
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [Weyaxi/MetaMath-neural-chat-7b-v3-2-Ties](https://huggingface.co/Weyaxi/MetaMath-neural-chat-7b-v3-2-Ties) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Weyaxi__MetaMath-neural-chat-7b-v3-2-Ties",
"harness_winogrande_5",
split="train")
```
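The timestamped split name shown in the YAML above is simply the run timestamp with every `-` replaced by `_`. As a small illustrative sketch (this helper is hypothetical, not part of the evaluation harness), you can map a timestamp as it appears in the parquet file names to the corresponding split name:

```python
def timestamp_to_split(ts: str) -> str:
    # File names use "2023-12-09T16-52-16.188783";
    # split names use "2023_12_09T16_52_16.188783".
    return ts.replace("-", "_")

print(timestamp_to_split("2023-12-09T16-52-16.188783"))
# -> 2023_12_09T16_52_16.188783
```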
## Latest results
These are the [latest results from run 2023-12-09T16:52:16.188783](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__MetaMath-neural-chat-7b-v3-2-Ties/blob/main/results_2023-12-09T16-52-16.188783.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6262329269588028,
"acc_stderr": 0.03265531717656403,
"acc_norm": 0.6261458795179596,
"acc_norm_stderr": 0.033325096066245945,
"mc1": 0.3623011015911873,
"mc1_stderr": 0.016826646897262255,
"mc2": 0.5206285653012832,
"mc2_stderr": 0.015833320867777365
},
"harness|arc:challenge|25": {
"acc": 0.6109215017064846,
"acc_stderr": 0.014247309976045607,
"acc_norm": 0.6348122866894198,
"acc_norm_stderr": 0.014070265519268802
},
"harness|hellaswag|10": {
"acc": 0.6538538139812786,
"acc_stderr": 0.004747682003491466,
"acc_norm": 0.8234415455088627,
"acc_norm_stderr": 0.00380515334471309
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.24,
"acc_stderr": 0.04292346959909283,
"acc_norm": 0.24,
"acc_norm_stderr": 0.04292346959909283
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6074074074074074,
"acc_stderr": 0.04218506215368881,
"acc_norm": 0.6074074074074074,
"acc_norm_stderr": 0.04218506215368881
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6842105263157895,
"acc_stderr": 0.0378272898086547,
"acc_norm": 0.6842105263157895,
"acc_norm_stderr": 0.0378272898086547
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.62,
"acc_stderr": 0.048783173121456316,
"acc_norm": 0.62,
"acc_norm_stderr": 0.048783173121456316
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6754716981132075,
"acc_stderr": 0.02881561571343211,
"acc_norm": 0.6754716981132075,
"acc_norm_stderr": 0.02881561571343211
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7152777777777778,
"acc_stderr": 0.037738099906869334,
"acc_norm": 0.7152777777777778,
"acc_norm_stderr": 0.037738099906869334
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.45,
"acc_stderr": 0.05,
"acc_norm": 0.45,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.53,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.53,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.39,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.39,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.630057803468208,
"acc_stderr": 0.0368122963339432,
"acc_norm": 0.630057803468208,
"acc_norm_stderr": 0.0368122963339432
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.3627450980392157,
"acc_stderr": 0.047840607041056527,
"acc_norm": 0.3627450980392157,
"acc_norm_stderr": 0.047840607041056527
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.76,
"acc_stderr": 0.042923469599092816,
"acc_norm": 0.76,
"acc_norm_stderr": 0.042923469599092816
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5787234042553191,
"acc_stderr": 0.03227834510146267,
"acc_norm": 0.5787234042553191,
"acc_norm_stderr": 0.03227834510146267
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4473684210526316,
"acc_stderr": 0.046774730044911984,
"acc_norm": 0.4473684210526316,
"acc_norm_stderr": 0.046774730044911984
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5103448275862069,
"acc_stderr": 0.04165774775728762,
"acc_norm": 0.5103448275862069,
"acc_norm_stderr": 0.04165774775728762
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3783068783068783,
"acc_stderr": 0.024976954053155254,
"acc_norm": 0.3783068783068783,
"acc_norm_stderr": 0.024976954053155254
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4126984126984127,
"acc_stderr": 0.04403438954768177,
"acc_norm": 0.4126984126984127,
"acc_norm_stderr": 0.04403438954768177
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.35,
"acc_stderr": 0.047937248544110196,
"acc_norm": 0.35,
"acc_norm_stderr": 0.047937248544110196
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7419354838709677,
"acc_stderr": 0.024892469172462836,
"acc_norm": 0.7419354838709677,
"acc_norm_stderr": 0.024892469172462836
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.46798029556650245,
"acc_stderr": 0.035107665979592154,
"acc_norm": 0.46798029556650245,
"acc_norm_stderr": 0.035107665979592154
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.65,
"acc_stderr": 0.047937248544110196,
"acc_norm": 0.65,
"acc_norm_stderr": 0.047937248544110196
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7878787878787878,
"acc_stderr": 0.031922715695483,
"acc_norm": 0.7878787878787878,
"acc_norm_stderr": 0.031922715695483
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.02962022787479048,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.02962022787479048
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8756476683937824,
"acc_stderr": 0.023814477086593552,
"acc_norm": 0.8756476683937824,
"acc_norm_stderr": 0.023814477086593552
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6384615384615384,
"acc_stderr": 0.024359581465396997,
"acc_norm": 0.6384615384615384,
"acc_norm_stderr": 0.024359581465396997
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.32592592592592595,
"acc_stderr": 0.02857834836547308,
"acc_norm": 0.32592592592592595,
"acc_norm_stderr": 0.02857834836547308
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.680672268907563,
"acc_stderr": 0.030283995525884396,
"acc_norm": 0.680672268907563,
"acc_norm_stderr": 0.030283995525884396
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.33112582781456956,
"acc_stderr": 0.038425817186598696,
"acc_norm": 0.33112582781456956,
"acc_norm_stderr": 0.038425817186598696
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8348623853211009,
"acc_stderr": 0.015919557829976054,
"acc_norm": 0.8348623853211009,
"acc_norm_stderr": 0.015919557829976054
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5324074074074074,
"acc_stderr": 0.03402801581358966,
"acc_norm": 0.5324074074074074,
"acc_norm_stderr": 0.03402801581358966
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7892156862745098,
"acc_stderr": 0.028626547912437406,
"acc_norm": 0.7892156862745098,
"acc_norm_stderr": 0.028626547912437406
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7679324894514767,
"acc_stderr": 0.02747974455080851,
"acc_norm": 0.7679324894514767,
"acc_norm_stderr": 0.02747974455080851
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6771300448430493,
"acc_stderr": 0.03138147637575499,
"acc_norm": 0.6771300448430493,
"acc_norm_stderr": 0.03138147637575499
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7175572519083969,
"acc_stderr": 0.03948406125768361,
"acc_norm": 0.7175572519083969,
"acc_norm_stderr": 0.03948406125768361
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7520661157024794,
"acc_stderr": 0.03941897526516302,
"acc_norm": 0.7520661157024794,
"acc_norm_stderr": 0.03941897526516302
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7685185185185185,
"acc_stderr": 0.04077494709252626,
"acc_norm": 0.7685185185185185,
"acc_norm_stderr": 0.04077494709252626
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7055214723926381,
"acc_stderr": 0.03581165790474082,
"acc_norm": 0.7055214723926381,
"acc_norm_stderr": 0.03581165790474082
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.45535714285714285,
"acc_stderr": 0.047268355537191,
"acc_norm": 0.45535714285714285,
"acc_norm_stderr": 0.047268355537191
},
"harness|hendrycksTest-management|5": {
"acc": 0.7864077669902912,
"acc_stderr": 0.040580420156460344,
"acc_norm": 0.7864077669902912,
"acc_norm_stderr": 0.040580420156460344
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8717948717948718,
"acc_stderr": 0.021901905115073336,
"acc_norm": 0.8717948717948718,
"acc_norm_stderr": 0.021901905115073336
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8071519795657727,
"acc_stderr": 0.014108533515757431,
"acc_norm": 0.8071519795657727,
"acc_norm_stderr": 0.014108533515757431
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6763005780346821,
"acc_stderr": 0.025190181327608408,
"acc_norm": 0.6763005780346821,
"acc_norm_stderr": 0.025190181327608408
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.4324022346368715,
"acc_stderr": 0.01656897123354861,
"acc_norm": 0.4324022346368715,
"acc_norm_stderr": 0.01656897123354861
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6928104575163399,
"acc_stderr": 0.02641560191438899,
"acc_norm": 0.6928104575163399,
"acc_norm_stderr": 0.02641560191438899
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7009646302250804,
"acc_stderr": 0.026003301117885142,
"acc_norm": 0.7009646302250804,
"acc_norm_stderr": 0.026003301117885142
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6944444444444444,
"acc_stderr": 0.025630824975621358,
"acc_norm": 0.6944444444444444,
"acc_norm_stderr": 0.025630824975621358
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.450354609929078,
"acc_stderr": 0.029680105565029036,
"acc_norm": 0.450354609929078,
"acc_norm_stderr": 0.029680105565029036
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4445893089960887,
"acc_stderr": 0.01269157579265712,
"acc_norm": 0.4445893089960887,
"acc_norm_stderr": 0.01269157579265712
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6286764705882353,
"acc_stderr": 0.029349803139765873,
"acc_norm": 0.6286764705882353,
"acc_norm_stderr": 0.029349803139765873
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6486928104575164,
"acc_stderr": 0.01931267606578655,
"acc_norm": 0.6486928104575164,
"acc_norm_stderr": 0.01931267606578655
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6363636363636364,
"acc_stderr": 0.04607582090719976,
"acc_norm": 0.6363636363636364,
"acc_norm_stderr": 0.04607582090719976
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7142857142857143,
"acc_stderr": 0.028920583220675606,
"acc_norm": 0.7142857142857143,
"acc_norm_stderr": 0.028920583220675606
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.7860696517412935,
"acc_stderr": 0.02899690969332891,
"acc_norm": 0.7860696517412935,
"acc_norm_stderr": 0.02899690969332891
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.86,
"acc_stderr": 0.03487350880197771,
"acc_norm": 0.86,
"acc_norm_stderr": 0.03487350880197771
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5120481927710844,
"acc_stderr": 0.03891364495835817,
"acc_norm": 0.5120481927710844,
"acc_norm_stderr": 0.03891364495835817
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8304093567251462,
"acc_stderr": 0.02878210810540171,
"acc_norm": 0.8304093567251462,
"acc_norm_stderr": 0.02878210810540171
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3623011015911873,
"mc1_stderr": 0.016826646897262255,
"mc2": 0.5206285653012832,
"mc2_stderr": 0.015833320867777365
},
"harness|winogrande|5": {
"acc": 0.7687450670876085,
"acc_stderr": 0.01185004012485051
},
"harness|gsm8k|5": {
"acc": 0.6823351023502654,
"acc_stderr": 0.012824066621488836
}
}
```
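As a hedged sketch (not part of the evaluation harness), the per-task scores above can be aggregated into a single MMLU-style average by filtering keys on the `hendrycksTest` prefix; the `results` dict below is a truncated stand-in for the JSON above:

```python
# Truncated stand-in for the results JSON shown above.
results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.24},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.6074074074074074},
    "harness|hendrycksTest-astronomy|5": {"acc": 0.6842105263157895},
    "harness|winogrande|5": {"acc": 0.7687450670876085},
}

# Average accuracy over the MMLU (hendrycksTest) subtasks only.
mmlu_accs = [
    v["acc"] for k, v in results.items()
    if k.startswith("harness|hendrycksTest-")
]
mmlu_avg = sum(mmlu_accs) / len(mmlu_accs)
print(f"MMLU average over {len(mmlu_accs)} subtasks: {mmlu_avg:.4f}")
```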
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
CyberHarem/lam_neuralcloud | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of lam/ラム/拉姆 (Neural Cloud)
This is the dataset of lam/ラム/拉姆 (Neural Cloud), containing 75 images and their tags.
The core tags of this character are `long_hair, multicolored_hair, breasts, bangs, twintails, grey_hair, hair_ornament, blue_eyes, gradient_hair, hairclip, hair_between_eyes, medium_breasts`, which are pruned in this dataset.
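As an illustrative sketch (the actual pruning pipeline is not documented here, so the helper below is an assumption), dropping those character-defining core tags from a per-image tag list amounts to a simple set filter:

```python
# Core tags of this character, as listed above; assumed to be pruned by set filtering.
CORE_TAGS = {
    "long_hair", "multicolored_hair", "breasts", "bangs", "twintails",
    "grey_hair", "hair_ornament", "blue_eyes", "gradient_hair",
    "hairclip", "hair_between_eyes", "medium_breasts",
}

def prune_core_tags(tags):
    # Keep only tags that are not character-defining core tags.
    return [t for t in tags if t not in CORE_TAGS]

print(prune_core_tags(["1girl", "long_hair", "solo", "blue_eyes"]))
# -> ['1girl', 'solo']
```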
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 75 | 124.43 MiB | [Download](https://huggingface.co/datasets/CyberHarem/lam_neuralcloud/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800              | 75       | 66.24 MiB  | [Download](https://huggingface.co/datasets/CyberHarem/lam_neuralcloud/resolve/main/dataset-800.zip)                | IMG+TXT    | Dataset with the shorter side not exceeding 800 pixels.               |
| stage3-p480-800 | 188 | 140.84 MiB | [Download](https://huggingface.co/datasets/CyberHarem/lam_neuralcloud/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200             | 75       | 111.28 MiB | [Download](https://huggingface.co/datasets/CyberHarem/lam_neuralcloud/resolve/main/dataset-1200.zip)               | IMG+TXT    | Dataset with the shorter side not exceeding 1200 pixels.              |
| stage3-p480-1200 | 188 | 207.33 MiB | [Download](https://huggingface.co/datasets/CyberHarem/lam_neuralcloud/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/lam_neuralcloud',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 8 |  |  |  |  |  | 1girl, cleavage, solo, bare_shoulders, looking_at_viewer, braid, closed_mouth, hair_bow, aqua_eyes, open_jacket, aqua_bow, black_thighhighs, china_dress, collarbone, earrings, off_shoulder, official_alternate_costume, ponytail, simple_background, white_background, blush, holding, large_breasts, upper_body |
| 1 | 6 |  |  |  |  |  | 1girl, black_gloves, crop_top, full_body, holding_gun, midriff, navel, short_shorts, single_thighhigh, solo, white_shirt, belt, headphones, headset, red_jacket, scope, shoes, thigh_strap, blue_shorts, looking_at_viewer, off_shoulder, open_clothes, simple_background, single_sock, uneven_legwear, white_background, white_hair, black_footwear, pink_hair, sniper_rifle, standing |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | cleavage | solo | bare_shoulders | looking_at_viewer | braid | closed_mouth | hair_bow | aqua_eyes | open_jacket | aqua_bow | black_thighhighs | china_dress | collarbone | earrings | off_shoulder | official_alternate_costume | ponytail | simple_background | white_background | blush | holding | large_breasts | upper_body | black_gloves | crop_top | full_body | holding_gun | midriff | navel | short_shorts | single_thighhigh | white_shirt | belt | headphones | headset | red_jacket | scope | shoes | thigh_strap | blue_shorts | open_clothes | single_sock | uneven_legwear | white_hair | black_footwear | pink_hair | sniper_rifle | standing |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------|:-------|:-----------------|:--------------------|:--------|:---------------|:-----------|:------------|:--------------|:-----------|:-------------------|:--------------|:-------------|:-----------|:---------------|:-----------------------------|:-----------|:--------------------|:-------------------|:--------|:----------|:----------------|:-------------|:---------------|:-----------|:------------|:--------------|:----------|:--------|:---------------|:-------------------|:--------------|:-------|:-------------|:----------|:-------------|:--------|:--------|:--------------|:--------------|:---------------|:--------------|:-----------------|:-------------|:-----------------|:------------|:---------------|:-----------|
| 0 | 8 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 6 |  |  |  |  |  | X | | X | | X | | | | | | | | | | | X | | | X | X | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
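The raw tag lists above can be mined programmatically. A minimal sketch (using shortened excerpts of the two clusters' tag strings, not the full lists) that separates tags shared by both clusters from cluster-specific tags, which are good outfit candidates:

```python
def parse_tags(raw: str) -> set:
    """Split a comma-separated tag string (as in the table above) into a set of tags."""
    return {tag.strip() for tag in raw.split(",") if tag.strip()}

# Shortened excerpts of the two clusters' tag strings from the table above.
cluster_0 = parse_tags("1girl, cleavage, solo, looking_at_viewer, off_shoulder, "
                       "simple_background, white_background")
cluster_1 = parse_tags("1girl, black_gloves, solo, looking_at_viewer, off_shoulder, "
                       "simple_background, white_background, sniper_rifle")

shared = cluster_0 & cluster_1           # character/background tags common to both clusters
outfit_specific = cluster_1 - cluster_0  # tags unique to cluster 1 (candidate outfit tags)

print(sorted(shared))
print(sorted(outfit_specific))
```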
|
hendrycks/ethics | ---
license: mit
language: en
dataset_info:
- config_name: default
features:
- name: label
dtype: int64
- name: input
dtype: string
- config_name: commonsense
features:
- name: label
dtype: int32
- name: input
dtype: string
splits:
- name: train
num_bytes: 14429921
num_examples: 13910
- name: validation
num_bytes: 3148616
num_examples: 3885
- name: test
num_bytes: 3863068
num_examples: 3964
download_size: 21625153
dataset_size: 21441605
- config_name: deontology
features:
- name: label
dtype: int32
- name: scenario
dtype: string
- name: excuse
dtype: string
splits:
- name: train
num_bytes: 1854277
num_examples: 18164
- name: validation
num_bytes: 369318
num_examples: 3596
- name: test
num_bytes: 359268
num_examples: 3536
download_size: 2384007
dataset_size: 2582863
- config_name: justice
features:
- name: label
dtype: int32
- name: scenario
dtype: string
splits:
- name: train
num_bytes: 2423889
num_examples: 21791
- name: validation
num_bytes: 297935
num_examples: 2704
- name: test
num_bytes: 228008
num_examples: 2052
download_size: 2837375
dataset_size: 2949832
- config_name: utilitarianism
features:
- name: baseline
dtype: string
- name: less_pleasant
dtype: string
splits:
- name: train
num_bytes: 2186713
num_examples: 13737
- name: validation
num_bytes: 730391
num_examples: 4807
- name: test
num_bytes: 668429
num_examples: 4271
download_size: 3466564
dataset_size: 3585533
- config_name: virtue
features:
- name: label
dtype: int32
- name: scenario
dtype: string
splits:
- name: train
num_bytes: 2605021
num_examples: 28245
- name: validation
num_bytes: 467254
num_examples: 4975
- name: test
num_bytes: 452491
num_examples: 4780
download_size: 3364070
dataset_size: 3524766
tags:
- AI Alignment
---
# Dataset Card for ETHICS
This is the data from [Aligning AI With Shared Human Values](https://arxiv.org/pdf/2008.02275) by Dan Hendrycks, Collin Burns, Steven Basart, Andrew Critch, Jerry Li, Dawn Song, and Jacob Steinhardt, published at ICLR 2021.
For more information, see the [Github Repo](https://github.com/hendrycks/ethics).
## Dataset Summary
This dataset provides ethics-based tasks for evaluating language models for AI alignment.
## Loading Data
To load this data, you can use the Hugging Face `datasets` library with the repository's dataloader script.
```python
from datasets import load_dataset
load_dataset("hendrycks/ethics", "commonsense")
```
Where `commonsense` is one of the following sections: commonsense, deontology, justice, utilitarianism, and virtue.
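The section name must match one of the five configurations exactly. A small helper (a sketch, not part of the official repo) that validates the name before handing it to `load_dataset`, so a typo fails early instead of at download time:

```python
# The five configurations listed above.
ETHICS_SECTIONS = ("commonsense", "deontology", "justice", "utilitarianism", "virtue")

def ethics_load_args(section: str) -> tuple:
    """Return the (repo_id, config_name) pair to pass to `load_dataset`,
    raising a clear error on an unknown section name."""
    if section not in ETHICS_SECTIONS:
        raise ValueError(f"unknown ETHICS section {section!r}; "
                         f"expected one of {ETHICS_SECTIONS}")
    return ("hendrycks/ethics", section)

# Usage: load_dataset(*ethics_load_args("deontology"))
```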
### Citation Information
```
@article{hendrycks2021ethics,
title={Aligning AI With Shared Human Values},
author={Dan Hendrycks and Collin Burns and Steven Basart and Andrew Critch and Jerry Li and Dawn Song and Jacob Steinhardt},
journal={Proceedings of the International Conference on Learning Representations (ICLR)},
year={2021}
}
```
|
AdapterOcean/pythonbook-standardized_cluster_0 | ---
dataset_info:
features:
- name: text
dtype: string
- name: conversation_id
dtype: int64
- name: embedding
sequence: float64
- name: cluster
dtype: int64
splits:
- name: train
num_bytes: 29351116
num_examples: 2574
download_size: 0
dataset_size: 29351116
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "pythonbook-standardized_cluster_0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
MatsuoDochiai/Vitor | ---
license: openrail
---
|
tyzhu/fwv2_baseline_random_train_100_eval_100 | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 17246
num_examples: 100
- name: eval_find_word
num_bytes: 17146
num_examples: 100
- name: validation
num_bytes: 17146
num_examples: 100
download_size: 34351
dataset_size: 51538
---
# Dataset Card for "fwv2_baseline_random_train_100_eval_100"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
somosnlp/filter_Rac_format_chatML_eti | ---
dataset_info:
features:
- name: Text
dtype: string
splits:
- name: train
num_bytes: 669652
num_examples: 668
download_size: 224616
dataset_size: 669652
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
language:
- es
---

```
<bos><start_of_turn>system
You are a helpful AI assistant.<end_of_turn>
<start_of_turn>user
¿Qué tema aborda la Enmienda 1 del RAC 1?<end_of_turn>
<start_of_turn>model
{
"pregunta": "¿Qué tema aborda la Enmienda 1 del RAC 1?",
"respuesta": "Se adicionan nuevas definiciones a los RAC",
"pagina": "1",
"rac": "Rac 1"
}<end_of_turn><eos>
``` |
PerceptionEval/Counting | ---
dataset_info:
features:
- name: idx
dtype: int32
- name: question
dtype: string
- name: image_1
dtype: image
- name: choices
sequence: string
- name: answer
dtype: string
- name: prompt
dtype: string
splits:
- name: val
num_bytes: 17371201.0
num_examples: 120
- name: test
num_bytes: 18538460.0
num_examples: 120
download_size: 35691831
dataset_size: 35909661.0
configs:
- config_name: default
data_files:
- split: val
path: data/val-*
- split: test
path: data/test-*
---
|
soypablo/Emoji_Dataset-Openmoji-BLIP | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 85108246.546
num_examples: 4083
download_size: 101495440
dataset_size: 85108246.546
---
# Dataset Card for "Emoji_Dataset-Openmoji-BLIP"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Jay-Rajput/oci_data | ---
license: apache-2.0
---
|
ai-habitat/hab_stretch | ---
license: other
pretty_name: Habitat Stretch Robot
viewer: false
---

# Hello Robot Stretch
Simulation model (URDF) of Hello Robot Stretch for use in [habitat-sim](https://github.com/facebookresearch/habitat-sim).
## License Information
See LICENSE.txt for more details.
```
Original "urdf/hab_stretch.urdf" and all assets referenced there-in are provided courtesy of Hello Robot, all rights reserved.
All other assets represent derivative work of said authors.
Written permission has been acquired for redistribution of these assets with attribution.
``` |
djemerson7k/Skilo2 | ---
license: mit
---
|
yangwang825/sst2-pwws-7 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
- name: augment
dtype: string
splits:
- name: train
num_bytes: 6901034
num_examples: 54895
- name: validation
num_bytes: 110096
num_examples: 872
- name: test
num_bytes: 226340
num_examples: 1821
download_size: 1965487
dataset_size: 7237470
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
DewiBrynJones/banc-trawsgrifiadau-bangor-normalized | ---
license: cc0-1.0
dataset_info:
features:
- name: sentence
dtype: string
- name: clean
dtype: string
- name: normalized
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
splits:
- name: train
num_bytes: 8413973587.0
num_examples: 22621
- name: test
num_bytes: 2079668736.0
num_examples: 5656
download_size: 10269992449
dataset_size: 10493642323.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
wenbopan/cmmlu_dpo_pairs | ---
license: mit
dataset_info:
config_name: train
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: source
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 1887072
num_examples: 5043
download_size: 309897
dataset_size: 1887072
configs:
- config_name: train
data_files:
- split: train
path: train/train-*
default: true
---
# Dataset Card for `cmmlu_dpo_pairs`
Preference pairs derived from `dev` split of [cmmlu](https://huggingface.co/datasets/haonan-li/cmmlu) and `valid` split of [ceval-exam](https://huggingface.co/datasets/ceval/ceval-exam).
A brute-force way to align an LLM's output distribution toward the multiple-choice style, in order to increase scores on MMLU and C-Eval.
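The pair construction from a multiple-choice item can be sketched as follows. Field names match the schema above; the exact prompt template and answer formatting used by the author are assumptions for illustration:

```python
def mcq_to_dpo_pair(question, choices, answer_idx, rejected_idx, source, item_id):
    """Turn one multiple-choice item into a (prompt, chosen, rejected) record
    matching this dataset's schema. The prompt template is an assumption."""
    letters = "ABCD"
    options = "\n".join(f"{letters[i]}. {c}" for i, c in enumerate(choices))
    return {
        "prompt": f"{question}\n{options}\n答案:",
        "chosen": f"{letters[answer_idx]}. {choices[answer_idx]}",
        "rejected": f"{letters[rejected_idx]}. {choices[rejected_idx]}",
        "source": source,
        "id": item_id,
    }

pair = mcq_to_dpo_pair(
    question="1+1 等于几?",
    choices=["1", "2", "3", "4"],
    answer_idx=1,     # the correct option becomes "chosen"
    rejected_idx=0,   # a wrong option becomes "rejected"
    source="cmmlu",
    item_id="cmmlu-dev-0",
)
print(pair["chosen"])  # B. 2
```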
|
Nadav/pixel_glue_mnli_noisy_ocr | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
'2': '2'
splits:
- name: train
num_bytes: 400394884
num_examples: 1963505
- name: validation
num_bytes: 4042514
num_examples: 19647
download_size: 2578670
dataset_size: 404437398
---
# Dataset Card for "pixel_glue_mnli_noisy_ocr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
damerajee/datasets-philo-2 | ---
license: mit
---
|
bgstud/libri | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license:
- mit
multilinguality:
- monolingual
paperswithcode_id: acronym-identification
pretty_name: Acronym Identification Dataset
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- token-classification-other-acronym-identification
train-eval-index:
- col_mapping:
labels: tags
tokens: tokens
config: default
splits:
eval_split: test
task: token-classification
task_id: entity_extraction
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
|
one-sec-cv12/chunk_2 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
splits:
- name: train
num_bytes: 17130352896.0
num_examples: 178352
download_size: 15246910271
dataset_size: 17130352896.0
---
# Dataset Card for "chunk_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
FredZhang7/disco-diffusion | ---
license: mit
tags:
- stable-diffusion
- paint-journey
---
This dataset contains just under half of the training data used to train [Paint Journey](https://huggingface.co/FredZhang7/Paint-Journey). All 768x768 images were generated with Disco Diffusion v3.1, v4.1, or v5.x,
then upscaled and downscaled twice (super resolution) with R-ESRGAN General WDN 4x V3 just before training. |
mteb/cqadupstack-stats | ---
language:
- en
multilinguality:
- monolingual
task_categories:
- text-retrieval
source_datasets:
- cqadupstack-stats
task_ids:
- document-retrieval
config_names:
- corpus
tags:
- text-retrieval
dataset_info:
- config_name: default
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: float64
splits:
- name: test
num_bytes: 23665
num_examples: 913
- config_name: corpus
features:
- name: _id
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: corpus
num_bytes: 45347600
num_examples: 42269
- config_name: queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: queries
num_bytes: 45187
num_examples: 652
configs:
- config_name: default
data_files:
- split: test
path: qrels/test.jsonl
- config_name: corpus
data_files:
- split: corpus
path: corpus.jsonl
- config_name: queries
data_files:
- split: queries
path: queries.jsonl
--- |
0x7o/gamio-ai-authorLM-dataset | ---
dataset_info:
features:
- name: texts
dtype: string
splits:
- name: train
num_bytes: 6551786
num_examples: 288
download_size: 2843488
dataset_size: 6551786
---
# Dataset Card for "gamio-ai-authorLM-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
SEACrowd/korpus_nusantara | ---
license: unknown
tags:
- machine-translation
language:
- ind
- jav
- xdy
- bug
- sun
- mad
- bjn
- bbc
- msa
- min
---
# korpus_nusantara
This parallel corpus was collected from several studies, assignments, and theses of
students of the Informatics Study Program, Tanjungpura University. Parts of the corpus
are used in the machine translation system from Indonesian to local languages at http://nustor.untan.ac.id/cammane/.
This corpus can be used freely for research purposes by citing the paper
https://ijece.iaescore.com/index.php/IJECE/article/download/20046/13738.
The dataset is a combination of multiple machine translation works by the author,
Herry Sujaini, covering Indonesian and 25 local dialects in Indonesia. Since not all
dialects have an ISO 639-3 standard code, as agreed with Pak Herry, we decided to
group the dataset into the closest language families, i.e.: Javanese, Dayak, Buginese,
Sundanese, Madurese, Banjar, Batak Toba, Khek, Malay, Minangkabau, and Tiociu.
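Under that grouping, a mapping from family name to the ISO 639-3 codes tagged on this card can be sketched as below. The family-to-code assignments are an assumption inferred from the tag list in the card header; Khek and Tiociu have no corresponding code in the tags:

```python
# Assumed mapping from language-family group to the ISO 639-3 tags in the card header.
FAMILY_TO_ISO = {
    "Javanese": "jav",
    "Dayak": "xdy",
    "Buginese": "bug",
    "Sundanese": "sun",
    "Madurese": "mad",
    "Banjar": "bjn",
    "Batak Toba": "bbc",
    "Malay": "msa",
    "Minangkabau": "min",
}
SOURCE_LANGUAGE = "ind"  # Indonesian, the pivot language of the corpus

print(sorted(FAMILY_TO_ISO.values()))
```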
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
## Citation
```
@article{sujaini2020improving,
title={Improving the role of language model in statistical machine translation (Indonesian-Javanese)},
author={Sujaini, Herry},
journal={International Journal of Electrical and Computer Engineering},
volume={10},
number={2},
pages={2102},
year={2020},
publisher={IAES Institute of Advanced Engineering and Science}
}
```
## License
Unknown
## Homepage
[https://github.com/herrysujaini/korpusnusantara](https://github.com/herrysujaini/korpusnusantara)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue) |
communityai/akjindal53244___Arithmo-Data-50k | ---
dataset_info:
features:
- name: source
dtype: string
- name: conversations
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 112707051.53117467
num_examples: 50000
download_size: 45840156
dataset_size: 112707051.53117467
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kalyan003/Question_Dataset_M | ---
license: unknown
---
|
autoevaluate/autoeval-staging-eval-project-b40c7dea-3c58-4f26-a941-b0221649edda-6362 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- autoevaluate/xsum-sample
eval_info:
task: summarization
model: autoevaluate/summarization
metrics: []
dataset_name: autoevaluate/xsum-sample
dataset_config: autoevaluate--xsum-sample
dataset_split: test
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: autoevaluate/summarization
* Dataset: autoevaluate/xsum-sample
* Config: autoevaluate--xsum-sample
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
kpriyanshu256/MultiTabQA-multitable_pretraining-Salesforce-codet5-base_train-latex-76000 | ---
dataset_info:
features:
- name: input_ids
sequence:
sequence: int32
- name: attention_mask
sequence:
sequence: int8
- name: labels
sequence:
sequence: int64
splits:
- name: train
num_bytes: 13336000
num_examples: 1000
download_size: 1037394
dataset_size: 13336000
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
heliosprime/twitter_dataset_1713223245 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 33843
num_examples: 96
download_size: 26143
dataset_size: 33843
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "twitter_dataset_1713223245"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ilsp/winogrande_greek | ---
language: el
license: cc-by-nc-sa-4.0
multilinguality: monolingual
size_categories: 10K<n<100K
task_categories:
- multiple-choice
pretty_name: Winogrande Greek
dataset_info:
splits:
- name: train
num_examples: 40398
- name: validation
num_examples: 1267
---
# Dataset Card for Winogrande Greek
The Winogrande Greek dataset is a set of 41665 pairs of sentences from the [WinoGrande dataset](https://huggingface.co/datasets/winogrande), machine-translated into Greek. The original dataset is formulated as a fill-in-a-blank task with binary options, and the goal is to choose the right option for a given sentence which requires commonsense reasoning. In Winogrande Greek the task is formulated as a pair of sentences, from which a model is to choose the most plausible sentence.
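Evaluation under the pair formulation reduces to picking whichever sentence a model scores as more plausible. A minimal sketch with a stand-in scoring function (a real evaluation would use per-token log-likelihoods from a language model; the toy scorer here is purely illustrative):

```python
def choose_most_plausible(sentence_a, sentence_b, score_fn):
    """Return the sentence with the higher plausibility score.
    `score_fn` stands in for an LM log-likelihood (higher = more plausible)."""
    return sentence_a if score_fn(sentence_a) >= score_fn(sentence_b) else sentence_b

# Toy scorer for illustration only: prefers the shorter sentence.
def toy_score(sentence):
    return -len(sentence)

pred = choose_most_plausible(
    "a short sentence",
    "a much longer and less likely sentence",
    toy_score,
)
print(pred)  # a short sentence
```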
## Dataset Details
### Dataset Description
<!-- -->
- **Curated by:** ILSP/Athena RC
<!--- **Funded by [optional]:** [More Information Needed]-->
<!--- **Shared by [optional]:** [More Information Needed]-->
- **Language(s) (NLP):** el
- **License:** cc-by-nc-sa-4.0
<!--### Dataset Sources [optional]-->
<!-- Provide the basic links for the dataset. -->
<!--- **Repository:** [More Information Needed]-->
<!--- **Paper [optional]:** [More Information Needed]-->
<!--- **Demo [optional]:** [More Information Needed]-->
<!--## Uses-->
<!-- Address questions around how the dataset is intended to be used. -->
<!--### Direct Use-->
<!-- This section describes suitable use cases for the dataset. -->
<!--[More Information Needed]-->
<!--### Out-of-Scope Use-->
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
<!--[More Information Needed]-->
<!--## Dataset Structure-->
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
<!--[More Information Needed]-->
<!--## Dataset Creation-->
<!--### Curation Rationale-->
<!-- Motivation for the creation of this dataset. -->
<!--[More Information Needed]-->
<!--### Source Data-->
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
<!--#### Data Collection and Processing-->
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
<!--[More Information Needed]-->
<!--#### Who are the source data producers?-->
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
<!--[More Information Needed]-->
<!--### Annotations [optional]-->
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
<!--#### Annotation process-->
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
<!--[More Information Needed]-->
<!--#### Who are the annotators?-->
<!-- This section describes the people or systems who created the annotations. -->
<!--[More Information Needed]-->
<!--#### Personal and Sensitive Information-->
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
<!--[More Information Needed]-->
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
This dataset is the result of machine translation.
<!--### Recommendations-->
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
<!--Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.-->
<!--## Citation-->
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
<!--**BibTeX:**-->
<!--[More Information Needed]-->
<!--**APA:**-->
<!--[More Information Needed]-->
<!--## Glossary [optional]-->
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
<!--[More Information Needed]-->
<!--## More Information [optional]-->
<!--[More Information Needed]-->
<!--## Dataset Card Authors [optional]-->
<!--[More Information Needed]-->
## Dataset Card Contact
https://www.athenarc.gr/en/ilsp |
Multimodal-Fatima/VQAv2_sample_validation_facebook_opt_1.3b_VQAv2_visclues_ns_8 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: prompt
dtype: string
- name: question
dtype: string
- name: true_label
sequence: string
- name: prediction
dtype: string
- name: scores
sequence: float64
splits:
- name: fewshot_0_bs_8
num_bytes: 202345
num_examples: 8
download_size: 45104
dataset_size: 202345
---
# Dataset Card for "VQAv2_sample_validation_facebook_opt_1.3b_VQAv2_visclues_ns_8"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tyzhu/find_first_sent_train_200_eval_40_recite | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: title
dtype: string
- name: context
dtype: string
splits:
- name: train
num_bytes: 720201
num_examples: 440
- name: validation
num_bytes: 71058
num_examples: 40
download_size: 326588
dataset_size: 791259
---
# Dataset Card for "find_first_sent_train_200_eval_40_recite"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-machine_learning-original-neg | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: test
num_bytes: 8467.0
num_examples: 28
download_size: 7546
dataset_size: 8467.0
---
# Dataset Card for "mmlu-machine_learning-original-neg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dischargesum/triage | ---
dataset_info:
features:
- name: subject_id
dtype: int64
- name: stay_id
dtype: int64
- name: temperature
dtype: string
- name: heartrate
dtype: string
- name: resprate
dtype: string
- name: o2sat
dtype: string
- name: sbp
dtype: string
- name: dbp
dtype: string
- name: pain
dtype: string
- name: acuity
dtype: string
- name: chiefcomplaint
dtype: string
splits:
- name: train
num_bytes: 6560604
num_examples: 68936
- name: valid
num_bytes: 1404632
num_examples: 14751
- name: test
num_bytes: 1401558
num_examples: 14731
download_size: 2973447
dataset_size: 9366794
---
# Dataset Card for "triage"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
xinei/my_dataset | ---
license: lgpl-3.0
---
|
atmallen/qm_alice_easy_2_mixture_1.0e | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: alice_label
dtype: bool
- name: bob_label
dtype: bool
- name: difficulty
dtype: int64
- name: statement
dtype: string
- name: choices
sequence: string
- name: character
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
splits:
- name: train
num_bytes: 12520368.5
num_examples: 117117
- name: validation
num_bytes: 1221097.5
num_examples: 11279
- name: test
num_bytes: 1205746.0
num_examples: 11186
download_size: 3708154
dataset_size: 14947212.0
---
# Dataset Card for "qm_alice_easy_2_mixture_1.0e"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Umbrellos/yoolmsm | ---
license: openrail
---
|
open-llm-leaderboard/details_ceadar-ie__FinanceConnect-13B | ---
pretty_name: Evaluation run of ceadar-ie/FinanceConnect-13B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [ceadar-ie/FinanceConnect-13B](https://huggingface.co/ceadar-ie/FinanceConnect-13B)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 4 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_ceadar-ie__FinanceConnect-13B\"\
,\n\t\"harness_truthfulqa_mc_0\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\
\nThese are the [latest results from run 2023-12-10T15:47:22.242382](https://huggingface.co/datasets/open-llm-leaderboard/details_ceadar-ie__FinanceConnect-13B/blob/main/results_2023-12-10T15-47-22.242382.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"mc1\": 0.2484700122399021,\n\
\ \"mc1_stderr\": 0.015127427096520672,\n \"mc2\": 0.37682302005478885,\n\
\ \"mc2_stderr\": 0.015200964572751172\n },\n \"harness|truthfulqa:mc|0\"\
: {\n \"mc1\": 0.2484700122399021,\n \"mc1_stderr\": 0.015127427096520672,\n\
\ \"mc2\": 0.37682302005478885,\n \"mc2_stderr\": 0.015200964572751172\n\
\ }\n}\n```"
repo_url: https://huggingface.co/ceadar-ie/FinanceConnect-13B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|arc:challenge|25_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|arc:challenge|25_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|gsm8k|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|gsm8k|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hellaswag|10_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hellaswag|10_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-12-04T12-02-08.348872.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-12-08T02-38-39.240881.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-management|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-management|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-virology|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-virology|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-08T02-38-39.240881.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|truthfulqa:mc|0_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|truthfulqa:mc|0_2023-12-08T02-38-39.240881.parquet'
- split: 2023_12_10T14_56_57.370238
path:
- '**/details_harness|truthfulqa:mc|0_2023-12-10T14-56-57.370238.parquet'
- split: 2023_12_10T15_47_22.242382
path:
- '**/details_harness|truthfulqa:mc|0_2023-12-10T15-47-22.242382.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-12-10T15-47-22.242382.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- '**/details_harness|winogrande|5_2023-12-04T12-02-08.348872.parquet'
- split: 2023_12_08T02_38_39.240881
path:
- '**/details_harness|winogrande|5_2023-12-08T02-38-39.240881.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-12-08T02-38-39.240881.parquet'
- config_name: results
data_files:
- split: 2023_12_04T12_02_08.348872
path:
- results_2023-12-04T12-02-08.348872.parquet
- split: 2023_12_08T02_38_39.240881
path:
- results_2023-12-08T02-38-39.240881.parquet
- split: 2023_12_10T14_56_57.370238
path:
- results_2023-12-10T14-56-57.370238.parquet
- split: 2023_12_10T15_47_22.242382
path:
- results_2023-12-10T15-47-22.242382.parquet
- split: latest
path:
- results_2023-12-10T15-47-22.242382.parquet
---
# Dataset Card for Evaluation run of ceadar-ie/FinanceConnect-13B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/ceadar-ie/FinanceConnect-13B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [ceadar-ie/FinanceConnect-13B](https://huggingface.co/ceadar-ie/FinanceConnect-13B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the runs (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_ceadar-ie__FinanceConnect-13B",
"harness_truthfulqa_mc_0",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-10T15:47:22.242382](https://huggingface.co/datasets/open-llm-leaderboard/details_ceadar-ie__FinanceConnect-13B/blob/main/results_2023-12-10T15-47-22.242382.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"mc1": 0.2484700122399021,
"mc1_stderr": 0.015127427096520672,
"mc2": 0.37682302005478885,
"mc2_stderr": 0.015200964572751172
},
"harness|truthfulqa:mc|0": {
"mc1": 0.2484700122399021,
"mc1_stderr": 0.015127427096520672,
"mc2": 0.37682302005478885,
"mc2_stderr": 0.015200964572751172
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
gsh3729/sw_t2 | ---
dataset_info:
features:
- name: filename
dtype: string
- name: tif
dtype: binary
- name: tfw
dtype: binary
splits:
- name: train
num_bytes: 420580395
num_examples: 30000
download_size: 417239716
dataset_size: 420580395
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
cleanrl/summarize_from_feedback_tldr_3_filtered_oai_preprocessing_1704427060 | ---
dataset_info:
features:
- name: id
dtype: string
- name: subreddit
dtype: string
- name: title
dtype: string
- name: post
dtype: string
- name: summary
dtype: string
- name: query_token
sequence: int64
- name: query
dtype: string
- name: reference_response
dtype: string
- name: reference_response_token
sequence: int64
- name: reference_response_token_len
dtype: int64
- name: query_reference_response
dtype: string
- name: query_reference_response_token
sequence: int64
- name: query_reference_response_token_len
dtype: int64
splits:
- name: train
num_bytes: 1600440249
num_examples: 116722
- name: validation
num_bytes: 88425771
num_examples: 6447
- name: test
num_bytes: 89922466
num_examples: 6553
download_size: 551824607
dataset_size: 1778788486
---
# TL;DR SFT Dataset for OpenAI's [Summarize from Feedback](https://openai.com/blog/summarization/) task
The dataset is directly taken from https://github.com/openai/summarize-from-feedback/tree/700967448d10004279f138666442bf1497d0e705#reddit-tldr-dataset
These columns are taken directly from the aforementioned dataset:
* **id**: unique identifier for the post
* **subreddit**: subreddit the post was taken from
* **title**: title of the post
* **post**: body of the post
* **summary**: summary of the post
* **reference_response**: reference response for the post
These columns are added by this preprocessing script:
* **query**: length-limited query for summarization: OAI pre-processes the main text (title + subreddit + post), ensuring it has only 512 tokens; if the main text is too long, it tries to truncate at the last `\n`. If it's too short, it pads the main text ([summarize_from_feedback/tasks.py#L98-L165](https://github.com/openai/summarize-from-feedback/blob/700967448d10004279f138666442bf1497d0e705/summarize_from_feedback/tasks.py#L98-L165)). Padding is either a space or the `[PAD]` token (see Args below).
* **query_token**: tokenized version of `query`
* **reference_response_token**: tokenized version of `reference_response`
* **reference_response_token_len**: length of `reference_response_token`
* **query_reference_response**: concatenation of `query.strip()` and `reference_response`
* **query_reference_response_token**: tokenized version of `query_reference_response`, up to `max_sft_query_response_length` tokens
* **query_reference_response_token_len**: length of `query_reference_response_token`
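The left-pad/truncate step described above can be sketched as follows. This is a simplified illustration operating on token ids only — the real script first tries to truncate the `post` field at the last newline (see the linked `tasks.py` for the exact logic); `50277` is the `[PAD]` token id shown in the Args section.

```python
def pad_or_truncate(tokens, length=512, pad_token_id=50277, pad_side="left"):
    """Bring a token-id sequence to exactly `length` tokens:
    truncate if too long, otherwise pad on the requested side."""
    if len(tokens) >= length:
        return tokens[:length]
    padding = [pad_token_id] * (length - len(tokens))
    return padding + tokens if pad_side == "left" else tokens + padding

# a short query is left-padded up to the target length
print(pad_or_truncate([10, 20, 30], length=5))  # [50277, 50277, 10, 20, 30]
```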
# Args
```python
{'base_model': 'EleutherAI/pythia-1b-deduped',
'cnndm_params': TaskQueryHParams(length=1919,
format_str='Article:\n{article}\n\nTL;DR:\n',
truncate_field='article',
truncate_text='\n',
padding=[50277],
pad_side='left'),
'hf_entity': 'cleanrl',
'max_rm_query_response_length': 638,
'max_rm_response_length': 169,
'max_sft_query_response_length': 562,
'max_sft_response_length': 53,
'push_to_hub': True,
'tldr_params': TaskQueryHParams(length=512,
format_str='SUBREDDIT: r/{subreddit}\n'
'\n'
'TITLE: {title}\n'
'\n'
'POST: {post}\n'
'\n'
'TL;DR:',
truncate_field='post',
truncate_text='\n',
padding=[50277],
pad_side='left')}
```
|
UCL-DARK/openai-tldr-filtered-queries | ---
license: cc-by-4.0
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- crowdsourced
multilinguality:
- monolingual
pretty_name: Filtered TL;DR
size_categories:
- 100K<n<1M
source_datasets:
- extended
tags:
- alignment
- text-classification
- summarisation
- human-feedback
task_categories:
- text-generation
task_ids: []
---
# Filtered TL;DR Dataset
This is the version of the dataset used in https://arxiv.org/abs/2310.06452.
If starting a new project, we would recommend using https://huggingface.co/datasets/openai/summarize_from_feedback.
For more information see https://github.com/openai/summarize-from-feedback and for the original TL;DR dataset see https://zenodo.org/record/1168855#.YvzwJexudqs
This is the version of the dataset with only filtering on the queries, and hence there is more data than in https://huggingface.co/datasets/UCL-DARK/openai-tldr-filtered which contains data with filtering on the queries and summaries.
|
kenken6696/FOLIO_by_paraphrased_gpt3.5 | ---
dataset_info:
features:
- name: example_id
dtype: int64
- name: conclusion
dtype: string
- name: premises
sequence: string
- name: label
dtype: string
- name: LET_count
dtype: int64
- name: LEC_types
sequence: int64
splits:
- name: train
num_bytes: 2165430
num_examples: 4604
download_size: 137112
dataset_size: 2165430
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Borrri/borri | ---
license: cc
---
|
guyrose3/dreambooth-snufkin | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 355532.0
num_examples: 5
download_size: 356484
dataset_size: 355532.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
AI4Math/MathVerse | ---
task_categories:
- multiple-choice
- question-answering
- visual-question-answering
language:
- en
size_categories:
- 1K<n<10K
configs:
- config_name: testmini
data_files:
- split: testmini
path: "testmini.parquet"
- config_name: testmini_text_only
data_files:
- split: testmini_text_only
path: "testmini_text_only.parquet"
dataset_info:
- config_name: testmini
features:
- name: sample_index
dtype: string
- name: problem_index
dtype: string
- name: problem_version
dtype: string
- name: question
dtype: string
- name: image
dtype: image
- name: answer
dtype: string
- name: question_type
dtype: string
- name: metadata
struct:
- name: split
dtype: string
- name: source
dtype: string
- name: subject
dtype: string
- name: subfield
dtype: string
- name: query_wo
dtype: string
- name: query_cot
dtype: string
splits:
- name: testmini
num_bytes: 166789963
num_examples: 3940
- config_name: testmini_text_only
features:
- name: sample_index
dtype: string
- name: problem_index
dtype: string
- name: problem_version
dtype: string
- name: question
dtype: string
- name: image
dtype: string
- name: answer
dtype: string
- name: question_type
dtype: string
- name: metadata
struct:
- name: split
dtype: string
- name: source
dtype: string
- name: subject
dtype: string
- name: subfield
dtype: string
- name: query_wo
dtype: string
- name: query_cot
dtype: string
splits:
- name: testmini_text_only
num_bytes: 250959
num_examples: 788
---
# Dataset Card for MathVerse
- [Dataset Description](https://huggingface.co/datasets/AI4Math/MathVerse/blob/main/README.md#dataset-description)
- [Paper Information](https://huggingface.co/datasets/AI4Math/MathVerse/blob/main/README.md#paper-information)
- [Dataset Examples](https://huggingface.co/datasets/AI4Math/MathVerse/blob/main/README.md#dataset-examples)
- [Leaderboard](https://huggingface.co/datasets/AI4Math/MathVerse/blob/main/README.md#leaderboard)
- [Citation](https://huggingface.co/datasets/AI4Math/MathVerse/blob/main/README.md#citation)
## Dataset Description
The capabilities of **Multi-modal Large Language Models (MLLMs)** in **visual math problem-solving** remain insufficiently evaluated and understood. Investigating current benchmarks, we find that they incorporate excessive visual content within textual questions, which can assist MLLMs in deducing answers without truly interpreting the input diagrams.
<p align="center">
<img src="https://raw.githubusercontent.com/ZrrSkywalker/MathVerse/main/figs/fig1.png" width="90%"> <br>
</p>
To this end, we introduce **MathVerse**, an all-around visual math benchmark designed for an equitable and in-depth evaluation of MLLMs. We meticulously collect 2,612 high-quality, multi-subject math problems with diagrams from publicly available sources. Each problem is then transformed by human annotators into **six distinct versions**, each offering varying degrees of information content in multi-modality, contributing to **15K** test samples in total. This approach allows MathVerse to comprehensively assess ***whether and how much MLLMs can truly understand the visual diagrams for mathematical reasoning.***
<p align="center">
<img src="https://raw.githubusercontent.com/ZrrSkywalker/MathVerse/main/figs/fig2.png" width="90%"> <br>
Six different versions of each problem in <b>MathVerse</b> transformed by expert annotators.
</p>
In addition, we propose a **Chain-of-Thought (CoT) Evaluation strategy** for a fine-grained assessment of the output answers. Rather than naively judging True or False, we employ GPT-4(V) to adaptively extract crucial reasoning steps, and then score each step with detailed error analysis, which can reveal the intermediate CoT reasoning quality by MLLMs.
<p align="center">
<img src="https://raw.githubusercontent.com/ZrrSkywalker/MathVerse/main/figs/fig3.png" width="90%"> <br>
The two phases of the CoT evaluation strategy.
</p>
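As a toy illustration of the second phase, suppose each extracted reasoning step has already been scored in [0, 1]; a final reasoning-quality score can then be aggregated across steps. A plain mean is an assumption made here purely for illustration — the exact scoring rule is defined in the paper.

```python
def aggregate_cot_score(step_scores):
    """Average per-step scores (each in [0, 1]) into a single
    reasoning-quality score; an empty extraction scores 0."""
    if not step_scores:
        return 0.0
    return sum(step_scores) / len(step_scores)

print(aggregate_cot_score([1.0, 1.0, 0.5, 0.0]))  # 0.625
```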
## Paper Information
- Code: https://github.com/ZrrSkywalker/MathVerse
- Project: https://mathverse-cuhk.github.io/
- Visualization: https://mathverse-cuhk.github.io/#visualization
- Leaderboard: https://mathverse-cuhk.github.io/#leaderboard
- Paper: https://arxiv.org/abs/2403.14624
## Dataset Examples
🖱 Click to expand the examples for the six problem versions within three subjects
<details>
<summary>🔍 Plane Geometry</summary>
<p align="center">
<img src="https://raw.githubusercontent.com/ZrrSkywalker/MathVerse/main/figs/ver1.png" width="50%"> <br>
</p>
</details>
<details>
<summary>🔍 Solid Geometry</summary>
<p align="center">
<img src="https://raw.githubusercontent.com/ZrrSkywalker/MathVerse/main/figs/ver2.png" width="50%"> <br>
</p>
</details>
<details>
<summary>🔍 Functions</summary>
<p align="center">
<img src="https://raw.githubusercontent.com/ZrrSkywalker/MathVerse/main/figs/ver3.png" width="50%"> <br>
</p>
</details>
## Leaderboard
### Contributing to the Leaderboard
🚨 The [Leaderboard](https://mathverse-cuhk.github.io/#leaderboard) is continuously being updated.
The evaluation instructions and tools will be released soon. For now, please send your results on the ***testmini*** set to this email: 1700012927@pku.edu.cn. Please refer to the following template to prepare your result json file.
- [output_testmini_template.json]()
## Citation
If you find **MathVerse** useful for your research and applications, please kindly cite using this BibTeX:
```latex
@inproceedings{zhang2024mathverse,
title={MathVerse: Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems?},
  author={Renrui Zhang and Dongzhi Jiang and Yichi Zhang and Haokun Lin and Ziyu Guo and Pengshuo Qiu and Aojun Zhou and Pan Lu and Kai-Wei Chang and Peng Gao and Hongsheng Li},
booktitle={arXiv},
year={2024}
}
``` |
liuyanchen1015/MULTI_VALUE_mnli_for_to_pupose | ---
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: score
dtype: int64
splits:
- name: dev_matched
num_bytes: 97171
num_examples: 387
- name: dev_mismatched
num_bytes: 116085
num_examples: 438
- name: test_matched
num_bytes: 97831
num_examples: 398
- name: test_mismatched
num_bytes: 114176
num_examples: 444
- name: train
num_bytes: 4041318
num_examples: 16171
download_size: 2712115
dataset_size: 4466581
---
# Dataset Card for "MULTI_VALUE_mnli_for_to_pupose"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
cestwc/FLD_gen | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: hypothesis
dtype: string
- name: context
dtype: string
- name: hypothesis_formula
dtype: string
- name: context_formula
dtype: string
- name: proofs
sequence: string
- name: proof_label
dtype: string
- name: proofs_formula
sequence: string
- name: world_assump_label
dtype: string
- name: original_tree_depth
dtype: int64
- name: depth
dtype: int64
- name: num_formula_distractors
dtype: int64
- name: num_translation_distractors
dtype: int64
- name: num_all_distractors
dtype: int64
- name: negative_hypothesis
dtype: string
- name: negative_hypothesis_formula
dtype: string
- name: negative_original_tree_depth
dtype: int64
- name: negative_proofs
sequence: string
- name: negative_proof_label
dtype: string
- name: negative_world_assump_label
dtype: string
- name: prompt_serial
dtype: string
- name: proof_serial
dtype: string
- name: version
dtype: string
- name: premise
dtype: string
- name: assumptions
sequence: string
- name: paraphrased_premises
sequence: string
- name: paraphrased_premise
dtype: string
- name: assumption
dtype: string
splits:
- name: train
num_bytes: 154414314
num_examples: 36401
- name: validation
num_bytes: 25351138
num_examples: 6004
- name: test
num_bytes: 25945020
num_examples: 6160
download_size: 45117566
dataset_size: 205710472
---
# Dataset Card for "FLD_gen"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
fathyshalab/massive_recommendation | ---
dataset_info:
features:
- name: id
dtype: string
- name: label
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 25598
num_examples: 433
- name: validation
num_bytes: 4186
num_examples: 69
- name: test
num_bytes: 5994
num_examples: 94
download_size: 21463
dataset_size: 35778
---
# Dataset Card for "massive_recommendation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kalyan003/Question_Answer_Dataset | ---
license: unknown
---
|
c01dsnap/DoHTunnelAnalyzer | ---
license: other
---
# Datasets Source
* Retrieved from [CIRA-CIC-DoHBrw-2020](https://www.unb.ca/cic/datasets/dohbrw-2020.html)
* Used for [DoHTunnelAnalyzer](https://github.com/Coldwave96/DoHTunnelAnalyzer)
# License
You may redistribute, republish, and mirror the CIRA-CIC-DoHBrw-2020 dataset in any form. However, any use or redistribution of the data must include a citation to DoHMeter and the following research paper outlining the details of captured DoH traffic:
`Mohammadreza MontazeriShatoori, Logan Davidson, Gurdip Kaur, and Arash Habibi Lashkari, “Detection of DoH Tunnels using Time-series Classification of Encrypted Traffic”, The 5th IEEE Cyber Science and Technology Congress, Calgary, Canada, August 2020` |
TUMLegalTech/echr_rational | ---
license: afl-3.0
annotations_creators:
- expert-generated
language:
- en
language_creators:
- expert-generated
multilinguality:
- monolingual
size_categories:
- 50
---
# Dataset Card for echr_rational
### Dataset Summary
[Deconfounding Legal Judgment Prediction for European Court of Human Rights Cases Towards Better Alignment with Experts](https://arxiv.org/pdf/2210.13836.pdf)
This work demonstrates that Legal Judgment Prediction systems without expert-informed adjustments can be vulnerable to shallow, distracting surface signals that arise from corpus construction, case distribution, and confounding factors. To mitigate this, we use domain expertise to strategically identify statistically predictive but legally irrelevant information and adopt adversarial training to prevent the system from relying on it. We evaluate our deconfounded models by employing interpretability techniques and comparing them to expert annotations. Quantitative experiments and qualitative analysis show that our deconfounded model consistently aligns better with expert rationales than baselines trained for prediction only. We further contribute a set of reference expert annotations for the validation and testing partitions of an existing benchmark dataset of European Court of Human Rights cases.
### Languages
English
# Citation Information
```latex
@article{santosh2022deconfounding,
  title={Deconfounding Legal Judgment Prediction for European Court of Human Rights Cases Towards Better Alignment with Experts},
  author={Santosh, TYS and Xu, Shanshan and Ichim, Oana and Grabmair, Matthias},
  journal={arXiv preprint arXiv:2210.13836},
  year={2022}
}
```
|
CyberHarem/sol_neuralcloud | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of sol/ソル/苏尔 (Neural Cloud)
This is the dataset of sol/ソル/苏尔 (Neural Cloud), containing 21 images and their tags.
The core tags of this character are `long_hair, blonde_hair, hair_between_eyes, yellow_eyes, ponytail, very_long_hair, bangs, breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:-----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 21 | 26.74 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sol_neuralcloud/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 21 | 17.03 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sol_neuralcloud/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 38 | 28.18 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sol_neuralcloud/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 21 | 24.02 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sol_neuralcloud/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 38 | 36.53 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sol_neuralcloud/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/sol_neuralcloud',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 21 |  |  |  |  |  | 1girl, solo, smile, fingerless_gloves, black_gloves, looking_at_viewer, white_shirt, belt, crop_top, midriff, navel, necklace, orange_jacket, open_jacket, black_pants, long_sleeves, standing, fur-trimmed_jacket, holding, outdoors, black_choker, boots, sky |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | smile | fingerless_gloves | black_gloves | looking_at_viewer | white_shirt | belt | crop_top | midriff | navel | necklace | orange_jacket | open_jacket | black_pants | long_sleeves | standing | fur-trimmed_jacket | holding | outdoors | black_choker | boots | sky |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------|:--------------------|:---------------|:--------------------|:--------------|:-------|:-----------|:----------|:--------|:-----------|:----------------|:--------------|:--------------|:---------------|:-----------|:---------------------|:----------|:-----------|:---------------|:--------|:------|
| 0 | 21 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
Puidii/aalen_university_faculty_computer_science | ---
license: mit
task_categories:
- question-answering
language:
- en
size_categories:
- 1K<n<10K
---
# Dataset Card
This dataset contains question-answer pairs from all study programmes of the Faculty of Computer Science at Aalen University, Germany. The training dataset was automatically generated by ChatGPT; the validation dataset was created manually.
It was collected to train a Q&A chatbot via LLM fine-tuning. All scripts and examples used can be found in the linked GitHub repository (https://github.com/pattplatt/llm_dataset_creation_and_finetuning).
## Dataset Details
### Dataset Description
The dataset was created as part of a study project. All real names and numbers have been changed.
The data comes from the website of the University of Aalen. It contains question-answer pairs extracted from all study programmes of the Faculty of Computer Science.
This includes course content, staff, and university events up to November 2023.
All included information was scraped from https://www.hs-aalen.de/, resulting in a total of 439 .txt files from 12 study programmes (3.1 megabytes of text).
The ChatGPT API (GPT3.5) was used to extract the question-answer pairs from the raw text data.
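A minimal sketch of such an extraction step might look as follows. The prompt wording, JSON schema, and helper names here are assumptions for illustration only; the actual scripts are in the linked repository.

```python
import json

def build_extraction_messages(page_text):
    """Build a chat prompt asking the model to emit Q&A pairs as JSON
    (hypothetical prompt wording, for illustration)."""
    system = (
        "You extract question-answer pairs from university web pages. "
        "Reply with a JSON list of objects with 'question' and 'answer' keys."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": page_text},
    ]

def parse_pairs(model_reply):
    """Parse the model's JSON reply into (question, answer) tuples."""
    return [(p["question"], p["answer"]) for p in json.loads(model_reply)]

# The actual API call would look roughly like this (requires the `openai`
# package and an API key; not run here):
# reply = openai.ChatCompletion.create(
#     model="gpt-3.5-turbo",
#     messages=build_extraction_messages(page_text),
# )
```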
- **Curated by:** Patrick Müller
- **Language(s) (NLP):** English
- **License:** MIT License
### Dataset Sources
- **Repository:** https://github.com/pattplatt/llm_dataset_creation_and_finetuning
## Uses
1. For LLM fine-tuning, especially with limited computing power due to short sequence lengths of the Q&A pairs.
2. Evaluation of datasets extracted and created by LLMs.
### Out-of-Scope Use
The dataset does not cover the complete content of the study programmes of the Faculty of Computer Science at Aalen University. The data does not necessarily reflect the true and complete offerings of Aalen University. In addition, the data has not been fully checked for accuracy.
## Dataset Structure
The structure of the dataset is based on the well-known lima dataset: https://huggingface.co/datasets/GAIR/lima
## Dataset Creation
### Curation Rationale
The motivation was to test how LLMs can be used for automated dataset creation.
#### Data Collection and Processing
BeautifulSoup and Request were used for scraping. ChatGPT API was used to extract question-answer pairs.
#### Personal and Sensitive Information
The dataset has been anonymised; all names, emails, and numbers have been changed.
## Dataset Card Authors
Patrick M.
## Dataset Card Contact
You can contact me via HF. |
SilkGPT/Silk_fMRI_ds4192 | ---
license: cc0-1.0
---
|
CogniVerse/tpmify | ---
license: other
---
|
daniel123321/common_voice_preprocessed | ---
dataset_info:
features:
- name: input_features
sequence:
sequence: float32
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 46009726456
num_examples: 47901
- name: validation
num_bytes: 1544505544
num_examples: 1608
- name: test
num_bytes: 1544490352
num_examples: 1608
download_size: 9714514239
dataset_size: 49098722352
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
joey234/mmlu-college_mathematics-neg-prepend-verbal | ---
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
- split: test
path: data/test-*
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: negate_openai_prompt
struct:
- name: content
dtype: string
- name: role
dtype: string
- name: neg_question
dtype: string
- name: fewshot_context
dtype: string
- name: ori_prompt
dtype: string
- name: fewshot_context_neg
dtype: string
- name: fewshot_context_ori
dtype: string
- name: neg_prompt
dtype: string
splits:
- name: dev
num_bytes: 9276
num_examples: 5
- name: test
num_bytes: 925998
num_examples: 100
download_size: 148577
dataset_size: 935274
---
# Dataset Card for "mmlu-college_mathematics-neg-prepend-verbal"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CAiRE/YueMotion | ---
license: cc-by-sa-4.0
language:
- yue
tags:
- speech
- speech-emotion-recognition
pretty_name: YueMotion
size_categories:
- 1K<n<10K
---
# YueMotion
A Cantonese speech emotion recognition dataset recorded by adult (7 females + 4 males) and elderly (5 females + 2 males) speakers, with 6 emotion labels: anger (1), happy (2), sad (3), neutral (4), fear (5), disgust (6).
In total, YueMotion consists of 1080 utterances, i.e., 420 utterances for elderly and 660 for adults.
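The numeric ids above can be mapped back to emotion names with a small lookup table (a minimal sketch based on the label list in this card):

```python
# Mapping of the integer emotion ids listed above to their label names.
EMOTION_LABELS = {
    1: "anger",
    2: "happy",
    3: "sad",
    4: "neutral",
    5: "fear",
    6: "disgust",
}

def id_to_label(label_id: int) -> str:
    """Return the emotion name for a numeric label id."""
    return EMOTION_LABELS[label_id]
```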
## Dataset Details
For the details (e.g., the statistics of `train`, `valid`, and `test` data), please refer to our paper on [arXiv](https://arxiv.org/abs/2306.14517).
## Citation
Our paper will be published at INTERSPEECH 2023. In the meantime, you can find our paper on [arXiv](https://arxiv.org/abs/2306.14517).
If you find our work useful, please consider citing our paper as follows:
```
@misc{cahyawijaya2023crosslingual,
title={Cross-Lingual Cross-Age Group Adaptation for Low-Resource Elderly Speech Emotion Recognition},
author={Samuel Cahyawijaya and Holy Lovenia and Willy Chung and Rita Frieske and Zihan Liu and Pascale Fung},
year={2023},
eprint={2306.14517},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
JosephEudave/Stabledifussion-dreambooth | ---
license: other
---
|
kpriyanshu256/MultiTabQA-multitable_pretraining-train-v2-17500 | ---
dataset_info:
features:
- name: tables
sequence: string
- name: table_names
sequence: string
- name: query
dtype: string
- name: answer
dtype: string
- name: source
dtype: string
- name: target
dtype: string
- name: source_latex
dtype: string
- name: target_latex
dtype: string
- name: source_html
dtype: string
- name: target_html
dtype: string
- name: source_markdown
dtype: string
- name: target_markdown
dtype: string
splits:
- name: train
num_bytes: 15564097525
num_examples: 2500
download_size: 2992214861
dataset_size: 15564097525
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
allegro/klej-polemo2-out | ---
annotations_creators:
- expert-generated
language_creators:
- other
language:
- pl
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: 'PolEmo2.0-OUT'
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
---
# klej-polemo2-out
## Description
The PolEmo2.0 is a dataset of online consumer reviews from four domains: medicine, hotels, products, and university. It is human-annotated at the level of full reviews and individual sentences. It comprises over 8000 reviews, about 85% of which are from the medicine and hotel domains.
We use the PolEmo2.0 dataset to form two tasks. Both use the same training dataset, i.e., reviews from medicine and hotel domains, but are evaluated on a different test set.
**Out-of-Domain** is the second task, and we test the model on out-of-domain reviews, i.e., from product and university domains. Since the original test sets for those domains are scarce (50 reviews each), we decided to use the original out-of-domain training set of 900 reviews for testing purposes and create a new split of development and test sets. As a result, the task consists of 1000 reviews, comparable in size to the in-domain test dataset of 1400 reviews.
## Tasks (input, output, and metrics)
The task is to predict the correct label of the review.
**Input** ('*text*' column): sentence
**Output** ('*target*' column): label for sentence sentiment ('zero': neutral, 'minus': negative, 'plus': positive, 'amb': ambiguous)
**Domain**: Online reviews
**Measurements**: Accuracy
**Example**:
Input: `Lekarz zalecił mi kurację alternatywną do dotychczasowej , więc jeszcze nie daję najwyższej oceny ( zobaczymy na ile okaże się skuteczna ) . Do Pana doktora nie mam zastrzeżeń : bardzo profesjonalny i kulturalny . Jedyny minus dotyczy gabinetu , który nie jest nowoczesny , co może zniechęcać pacjentki .`
Input (translated by DeepL): `The doctor recommended me an alternative treatment to the current one , so I do not yet give the highest rating ( we will see how effective it turns out to be ) . To the doctor I have no reservations : very professional and cultured . The only minus is about the office , which is not modern , which may discourage patients .`
Output: `amb` (ambiguous)
## Data splits
| Subset | Cardinality |
|:-----------|--------------:|
| train | 5783 |
| test | 722 |
| validation | 723 |
## Class distribution
| Class | Sentiment | train | validation | test |
|:------|:----------|------:|-----------:|------:|
| minus | negative  | 0.379 | 0.334 | 0.368 |
| plus  | positive  | 0.271 | 0.332 | 0.302 |
| amb | ambiguous | 0.182 | 0.332 | 0.328 |
| zero | neutral | 0.168 | 0.002 | 0.002 |
## Citation
```
@inproceedings{kocon-etal-2019-multi,
title = "Multi-Level Sentiment Analysis of {P}ol{E}mo 2.0: Extended Corpus of Multi-Domain Consumer Reviews",
author = "Koco{\'n}, Jan and
Mi{\l}kowski, Piotr and
Za{\'s}ko-Zieli{\'n}ska, Monika",
booktitle = "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/K19-1092",
doi = "10.18653/v1/K19-1092",
pages = "980--991",
abstract = "In this article we present an extended version of PolEmo {--} a corpus of consumer reviews from 4 domains: medicine, hotels, products and school. Current version (PolEmo 2.0) contains 8,216 reviews having 57,466 sentences. Each text and sentence was manually annotated with sentiment in 2+1 scheme, which gives a total of 197,046 annotations. We obtained a high value of Positive Specific Agreement, which is 0.91 for texts and 0.88 for sentences. PolEmo 2.0 is publicly available under a Creative Commons copyright license. We explored recent deep learning approaches for the recognition of sentiment, such as Bi-directional Long Short-Term Memory (BiLSTM) and Bidirectional Encoder Representations from Transformers (BERT).",
}
```
## License
```
Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
```
## Links
[HuggingFace](https://huggingface.co/datasets/allegro/klej-polemo2-out)
[Source](https://clarin-pl.eu/dspace/handle/11321/710)
[Paper](https://aclanthology.org/K19-1092/)
## Examples
### Loading
```python
from pprint import pprint
from datasets import load_dataset
dataset = load_dataset("allegro/klej-polemo2-out")
pprint(dataset['train'][0])
# {'sentence': 'Super lekarz i człowiek przez duże C . Bardzo duże doświadczenie '
# 'i trafne diagnozy . Wielka cierpliwość do ludzi starszych . Od '
# 'lat opiekuje się moją Mamą staruszką , i twierdzę , że mamy duże '
# 'szczęście , że mamy takiego lekarza . Naprawdę nie wiem cobyśmy '
# 'zrobili , gdyby nie Pan doktor . Dzięki temu , moja mama żyje . '
# 'Każda wizyta u specjalisty jest u niego konsultowana i uważam , '
# 'że jest lepszy od każdego z nich . Mamy do Niego prawie '
# 'nieograniczone zaufanie . Można wiele dobrego o Panu doktorze '
# 'jeszcze napisać . Niestety , ma bardzo dużo pacjentów , jest '
# 'przepracowany ( z tego powodu nawet obawiam się o jego zdrowie ) '
# 'i dostęp do niego jest trudny , ale zawsze możliwy .',
# 'target': '__label__meta_plus_m'}
```
### Evaluation
```python
import random
from pprint import pprint
from datasets import load_dataset, load_metric
dataset = load_dataset("allegro/klej-polemo2-out")
dataset = dataset.class_encode_column("target")
references = dataset["test"]["target"]
# generate random predictions
predictions = [random.randrange(max(references) + 1) for _ in range(len(references))]
acc = load_metric("accuracy")
f1 = load_metric("f1")
acc_score = acc.compute(predictions=predictions, references=references)
f1_score = f1.compute(predictions=predictions, references=references, average="macro")
pprint(acc_score)
pprint(f1_score)
# {'accuracy': 0.2894736842105263}
# {'f1': 0.2484406098784191}
``` |
dwilder-console/console-cloud-test-sample | ---
license: apache-2.0
---
|
EleutherAI/coqa | ---
license: other
language:
- en
size_categories:
- 1K<n<10K
---
"""CoQA dataset.
This `CoQA` adds the "additional_answers" feature that's missing in the original
datasets version:
https://github.com/huggingface/datasets/blob/master/datasets/coqa/coqa.py
"""
_CITATION = """\
@misc{reddy2018coqa,
title={CoQA: A Conversational Question Answering Challenge},
author={Siva Reddy and Danqi Chen and Christopher D. Manning},
year={2018},
eprint={1808.07042},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
"""
_DESCRIPTION = """\
CoQA is a large-scale dataset for building Conversational Question Answering
systems. The goal of the CoQA challenge is to measure the ability of machines to
understand a text passage and answer a series of interconnected questions that
appear in a conversation.
"""
_HOMEPAGE = "https://stanfordnlp.github.io/coqa/"
_LICENSE = "Different licenses depending on the content (see https://stanfordnlp.github.io/coqa/ for details)" |
Pm06/images-label-dataset | ---
dataset_info:
features:
- name: images
dtype: image
- name: vision_info
dtype: string
splits:
- name: train
num_bytes: 947557740.733
num_examples: 3747
download_size: 888570006
dataset_size: 947557740.733
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
andresmauriciogomezr/estatutoTributario.jsonl | ---
dataset_info:
features:
- name: fuente
dtype: string
- name: pregunta
dtype: string
- name: respuesta
dtype: string
splits:
- name: train
num_bytes: 210497
num_examples: 135
download_size: 42596
dataset_size: 210497
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
SherryT997/HelpSteer-hindi | ---
license: apache-2.0
language:
- hi
size_categories:
- 1K<n<10K
task_categories:
- conversational
- text-classification
- token-classification
- table-question-answering
- question-answering
- zero-shot-classification
- summarization
- feature-extraction
- text-generation
- text2text-generation
pretty_name: helpsteer-hindi
--- |
AndyLiu0104/Soldering-Data-Tiny-More-Data-aug-appearance-hole-0809 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 14151528.625
num_examples: 10475
download_size: 9077914
dataset_size: 14151528.625
---
# Dataset Card for "Soldering-Data-Tiny-More-Data-aug-appearance-hole-0809"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Shravanig/vit-fire-detection | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': Fire
'1': Normal
'2': Smoke
splits:
- name: train
num_bytes: 160965820.64
num_examples: 6060
- name: validation
num_bytes: 85813019.0
num_examples: 756
- name: test
num_bytes: 93348677.0
num_examples: 759
download_size: 891539912
dataset_size: 340127516.64
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
zolak/twitter_dataset_78_1713061588 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 3272888
num_examples: 7935
download_size: 1665220
dataset_size: 3272888
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
BrunoHays/multilingual-TEDX-fr | ---
license: cc-by-nc-nd-4.0
task_categories:
- automatic-speech-recognition
language:
- fr
size_categories:
- 100K<n<1M
---
The French subset of the [Multilingual TEDx](https://www.openslr.org/100) dataset. The data uploaded to HF corresponds to the fr-fr directory. The audio files are automatically resampled to 16 kHz.
#### Configs:
- single_samples (default): all samples taken separately
- max=30s: combine consecutive samples for a period shorter than 30 seconds
- max=10s: combine consecutive samples for a period shorter than 10 seconds
- max: combine all the samples of a TEDx talk
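The `max=30s`-style configs can be thought of as greedily grouping consecutive segments of a talk until the time budget would be exceeded. The following is an illustrative sketch of that logic, not the actual dataset-builder script:

```python
def combine_segments(segments, max_duration=30.0):
    """Greedily group consecutive segments of one talk so that each group
    spans less than `max_duration` seconds (illustrative sketch only).

    `segments` is a list of dicts with `start_timestamp`/`end_timestamp`
    keys, as in the sample record below, ordered by time.
    """
    groups, current = [], []
    for seg in segments:
        # Start a new group if adding this segment would exceed the budget.
        if current and seg["end_timestamp"] - current[0]["start_timestamp"] > max_duration:
            groups.append(current)
            current = []
        current.append(seg)
    if current:
        groups.append(current)
    return groups
```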
#### Dependencies (only needed for much faster audio decoding):
- ffmpeg: apt install ffmpeg
- ffmpeg-python: pip install ffmpeg-python
#### Sample
```
{'file': '0u7tTptBo9I-0', 'audio': {'path': None, 'array': array([ 3.05175781e-05, 6.10351562e-05, 9.15527344e-05, ...,
-2.44140625e-04, -3.35693359e-04, -2.74658203e-04]), 'sampling_rate': 16000}, 'sentence': "Bonsoir ! Notre planète est recouverte à 70 % d'océan, et pourtant, étrangement, on a choisi de l'appeler « la Terre ». Le poète Heathcote Williams a une vision bien plus objective et moins anthropocentrique, quand il dit que « Vue de l'espace, la planète est bleue. Vue de l'espace, elle est le territoire, non pas des hommes, mais des baleines ». Et pourtant, on vient tous de l'océan. ", 'speaker_id': '0u7tTptBo9I', 'start_timestamp': 17.25, 'end_timestamp': 45.26, 'index': 0}
```
```
@inproceedings{salesky2021mtedx,
title={Multilingual TEDx Corpus for Speech Recognition and Translation},
author={Elizabeth Salesky and Matthew Wiesner and Jacob Bremerman and Roldano Cattoni and Matteo Negri and Marco Turchi and Douglas W. Oard and Matt Post},
booktitle={Proceedings of Interspeech},
year={2021},
}
``` |
CyberHarem/angelina_arknights | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of angelina/アンジェリーナ/安洁莉娜 (Arknights)
This is the dataset of angelina/アンジェリーナ/安洁莉娜 (Arknights), containing 500 images and their tags.
The core tags of this character are `animal_ears, brown_hair, long_hair, fox_ears, twintails, hairband, red_hairband, red_eyes, breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 1.01 GiB | [Download](https://huggingface.co/datasets/CyberHarem/angelina_arknights/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 474.49 MiB | [Download](https://huggingface.co/datasets/CyberHarem/angelina_arknights/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1289 | 1.03 GiB | [Download](https://huggingface.co/datasets/CyberHarem/angelina_arknights/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 854.10 MiB | [Download](https://huggingface.co/datasets/CyberHarem/angelina_arknights/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1289 | 1.62 GiB | [Download](https://huggingface.co/datasets/CyberHarem/angelina_arknights/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/angelina_arknights',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 9 |  |  |  |  |  | 1girl, black_gloves, black_shirt, black_shorts, black_socks, full_body, holding_staff, kneehighs, long_sleeves, sneakers, solo, black_footwear, short_shorts, open_jacket, white_jacket, looking_at_viewer, duffel_bag, red_jacket, white_coat, infection_monitor_(arknights), open_coat, shoulder_bag, smile |
| 1 | 7 |  |  |  |  |  | 1girl, black_gloves, black_shirt, black_shorts, black_socks, holding_staff, long_sleeves, looking_at_viewer, open_coat, open_jacket, simple_background, solo, white_coat, kneehighs, short_shorts, black_footwear, closed_mouth, shoes, striped_hairband, white_background, white_jacket, full_body, smile |
| 2 | 5 |  |  |  |  |  | 1girl, black_gloves, black_shirt, holding_staff, long_sleeves, looking_at_viewer, open_coat, open_jacket, solo, white_coat, upper_body, infection_monitor_(arknights), white_jacket, :d, brown_eyes, closed_mouth, earpiece, open_mouth |
| 3 | 18 |  |  |  |  |  | 1girl, open_jacket, solo, upper_body, looking_at_viewer, smile, black_shirt, infection_monitor_(arknights), white_jacket, blush, long_sleeves, simple_background, white_background, black_gloves, closed_mouth, collar, coat |
| 4 | 76 |  |  |  |  |  | 1girl, official_alternate_costume, solo, off_shoulder, long_sleeves, looking_at_viewer, red_coat, very_long_hair, bare_shoulders, thigh_strap, black_gloves, black_thighhighs, open_coat, black_leotard, infection_monitor_(arknights), medium_breasts, holding_staff, simple_background, black_footwear, red_jacket, white_background, boots, white_belt, smile, cowboy_shot |
| 5 | 14 |  |  |  |  |  | 1girl, bare_shoulders, casual_one-piece_swimsuit, fox_girl, necklace, official_alternate_costume, red_one-piece_swimsuit, bracelet, infection_monitor_(arknights), looking_at_viewer, solo, hair_ribbon, red_ribbon, collar, covered_navel, medium_breasts, swimsuit_cover-up, thigh_strap, fox_tail, smile, blush, closed_mouth, open_mouth, water, cowboy_shot, large_breasts |
| 6 | 8 |  |  |  |  |  | black_shorts, cleavage, midriff, navel, official_alternate_costume, white_sports_bra, 1girl, infection_monitor_(arknights), jacket_around_waist, large_breasts, looking_at_viewer, short_shorts, solo, very_long_hair, bare_shoulders, choker, crop_top, stomach, thigh_strap, thighs, fox_girl, fox_tail, simple_background, armpits, arms_up, basketball_(object), brown_eyes, shoes, white_background, bare_arms, bare_legs, blush, duffel_bag, holding, red_jacket, smile, standing, sweat, wristband |
| 7 | 12 |  |  |  |  |  | 1girl, bare_shoulders, black_dress, off-shoulder_dress, solo, looking_at_viewer, official_alternate_costume, closed_mouth, smile, black_bow, hair_bow, black_ribbon, fox_girl, simple_background, twin_drills, fox_tail, holding_instrument, medium_breasts, wrist_cuffs |
| 8 | 5 |  |  |  |  |  | 1girl, alternate_costume, serafuku, white_shirt, black_footwear, blush, full_body, looking_at_viewer, pleated_skirt, short_sleeves, smile, solo, closed_mouth, fox_girl, heart, simple_background, white_background, black_skirt, black_socks, blue_sailor_collar, extra_ears, fox_tail, hand_up, kneehighs, loafers, red_bowtie, red_neckerchief, white_socks |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | black_gloves | black_shirt | black_shorts | black_socks | full_body | holding_staff | kneehighs | long_sleeves | sneakers | solo | black_footwear | short_shorts | open_jacket | white_jacket | looking_at_viewer | duffel_bag | red_jacket | white_coat | infection_monitor_(arknights) | open_coat | shoulder_bag | smile | simple_background | closed_mouth | shoes | striped_hairband | white_background | upper_body | :d | brown_eyes | earpiece | open_mouth | blush | collar | coat | official_alternate_costume | off_shoulder | red_coat | very_long_hair | bare_shoulders | thigh_strap | black_thighhighs | black_leotard | medium_breasts | boots | white_belt | cowboy_shot | casual_one-piece_swimsuit | fox_girl | necklace | red_one-piece_swimsuit | bracelet | hair_ribbon | red_ribbon | covered_navel | swimsuit_cover-up | fox_tail | water | large_breasts | cleavage | midriff | navel | white_sports_bra | jacket_around_waist | choker | crop_top | stomach | thighs | armpits | arms_up | basketball_(object) | bare_arms | bare_legs | holding | standing | sweat | wristband | black_dress | off-shoulder_dress | black_bow | hair_bow | black_ribbon | twin_drills | holding_instrument | wrist_cuffs | alternate_costume | serafuku | white_shirt | pleated_skirt | short_sleeves | heart | black_skirt | blue_sailor_collar | extra_ears | hand_up | loafers | red_bowtie | red_neckerchief | white_socks |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------|:--------------|:---------------|:--------------|:------------|:----------------|:------------|:---------------|:-----------|:-------|:-----------------|:---------------|:--------------|:---------------|:--------------------|:-------------|:-------------|:-------------|:--------------------------------|:------------|:---------------|:--------|:--------------------|:---------------|:--------|:-------------------|:-------------------|:-------------|:-----|:-------------|:-----------|:-------------|:--------|:---------|:-------|:-----------------------------|:---------------|:-----------|:-----------------|:-----------------|:--------------|:-------------------|:----------------|:-----------------|:--------|:-------------|:--------------|:----------------------------|:-----------|:-----------|:-------------------------|:-----------|:--------------|:-------------|:----------------|:--------------------|:-----------|:--------|:----------------|:-----------|:----------|:--------|:-------------------|:----------------------|:---------|:-----------|:----------|:---------|:----------|:----------|:----------------------|:------------|:------------|:----------|:-----------|:--------|:------------|:--------------|:---------------------|:------------|:-----------|:---------------|:--------------|:---------------------|:--------------|:--------------------|:-----------|:--------------|:----------------|:----------------|:--------|:--------------|:---------------------|:-------------|:----------|:----------|:-------------|:------------------|:--------------|
| 0 | 9 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 7 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | | X | X | X | X | X | X | | | X | | X | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 5 |  |  |  |  |  | X | X | X | | | | X | | X | | X | | | X | X | X | | | X | X | X | | | | X | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 18 |  |  |  |  |  | X | X | X | | | | | | X | | X | | | X | X | X | | | | X | | | X | X | X | | | X | X | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 76 |  |  |  |  |  | X | X | | | | | X | | X | | X | X | | | | X | | X | | X | X | | X | X | | | | X | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 14 |  |  |  |  |  | X | | | | | | | | | | X | | | | | X | | | | X | | | X | | X | | | | | | | | X | X | X | | X | | | | X | X | | | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 8 |  |  |  |  |  | X | | | X | | | | | | | X | | X | | | X | X | X | | X | | | X | X | | X | | X | | | X | | | X | | | X | | | X | X | X | | | | | | | | X | | | | | | | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 12 |  |  |  |  |  | X | | | | | | | | | | X | | | | | X | | | | | | | X | X | X | | | | | | | | | | | | X | | | | X | | | | X | | | | | X | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | |
| 8 | 5 |  |  |  |  |  | X | | | | X | X | | X | | | X | X | | | | X | | | | | | | X | X | X | | | X | | | | | | X | | | | | | | | | | | | | | | | X | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
open-llm-leaderboard/details_abhinand__gemma-2b-it-tamil-v0.1-alpha | ---
pretty_name: Evaluation run of abhinand/gemma-2b-it-tamil-v0.1-alpha
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [abhinand/gemma-2b-it-tamil-v0.1-alpha](https://huggingface.co/abhinand/gemma-2b-it-tamil-v0.1-alpha)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_abhinand__gemma-2b-it-tamil-v0.1-alpha\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-02-29T18:37:34.730615](https://huggingface.co/datasets/open-llm-leaderboard/details_abhinand__gemma-2b-it-tamil-v0.1-alpha/blob/main/results_2024-02-29T18-37-34.730615.json) (note\
\ that there might be results for other tasks in the repo if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.40322043069379365,\n\
\ \"acc_stderr\": 0.034368568711516966,\n \"acc_norm\": 0.40644647303564846,\n\
\ \"acc_norm_stderr\": 0.0351249450866089,\n \"mc1\": 0.28518971848225216,\n\
\ \"mc1_stderr\": 0.015805827874454892,\n \"mc2\": 0.42628263032891045,\n\
\ \"mc2_stderr\": 0.014683639845915582\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.4778156996587031,\n \"acc_stderr\": 0.014597001927076138,\n\
\ \"acc_norm\": 0.5008532423208191,\n \"acc_norm_stderr\": 0.014611369529813269\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5376419040031866,\n\
\ \"acc_stderr\": 0.0049756211474061025,\n \"acc_norm\": 0.7141007767377017,\n\
\ \"acc_norm_stderr\": 0.004509181919322837\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.27,\n \"acc_stderr\": 0.04461960433384739,\n \
\ \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.04461960433384739\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.4444444444444444,\n\
\ \"acc_stderr\": 0.04292596718256981,\n \"acc_norm\": 0.4444444444444444,\n\
\ \"acc_norm_stderr\": 0.04292596718256981\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.3881578947368421,\n \"acc_stderr\": 0.03965842097512744,\n\
\ \"acc_norm\": 0.3881578947368421,\n \"acc_norm_stderr\": 0.03965842097512744\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.39,\n\
\ \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\": 0.39,\n \
\ \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.39622641509433965,\n \"acc_stderr\": 0.030102793781791194,\n\
\ \"acc_norm\": 0.39622641509433965,\n \"acc_norm_stderr\": 0.030102793781791194\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.4444444444444444,\n\
\ \"acc_stderr\": 0.04155319955593146,\n \"acc_norm\": 0.4444444444444444,\n\
\ \"acc_norm_stderr\": 0.04155319955593146\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939099,\n \
\ \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939099\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \"acc_norm\": 0.34,\n\
\ \"acc_norm_stderr\": 0.04760952285695235\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542127,\n \
\ \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542127\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.3699421965317919,\n\
\ \"acc_stderr\": 0.0368122963339432,\n \"acc_norm\": 0.3699421965317919,\n\
\ \"acc_norm_stderr\": 0.0368122963339432\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.17647058823529413,\n \"acc_stderr\": 0.0379328118530781,\n\
\ \"acc_norm\": 0.17647058823529413,\n \"acc_norm_stderr\": 0.0379328118530781\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.51,\n \"acc_stderr\": 0.05024183937956912,\n \"acc_norm\": 0.51,\n\
\ \"acc_norm_stderr\": 0.05024183937956912\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.39148936170212767,\n \"acc_stderr\": 0.031907012423268113,\n\
\ \"acc_norm\": 0.39148936170212767,\n \"acc_norm_stderr\": 0.031907012423268113\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.3508771929824561,\n\
\ \"acc_stderr\": 0.044895393502706986,\n \"acc_norm\": 0.3508771929824561,\n\
\ \"acc_norm_stderr\": 0.044895393502706986\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.4689655172413793,\n \"acc_stderr\": 0.04158632762097828,\n\
\ \"acc_norm\": 0.4689655172413793,\n \"acc_norm_stderr\": 0.04158632762097828\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.2751322751322751,\n \"acc_stderr\": 0.02300008685906864,\n \"\
acc_norm\": 0.2751322751322751,\n \"acc_norm_stderr\": 0.02300008685906864\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.35714285714285715,\n\
\ \"acc_stderr\": 0.04285714285714281,\n \"acc_norm\": 0.35714285714285715,\n\
\ \"acc_norm_stderr\": 0.04285714285714281\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.29,\n \"acc_stderr\": 0.045604802157206845,\n \
\ \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.045604802157206845\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.41935483870967744,\n \"acc_stderr\": 0.02807158890109184,\n \"\
acc_norm\": 0.41935483870967744,\n \"acc_norm_stderr\": 0.02807158890109184\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.3448275862068966,\n \"acc_stderr\": 0.03344283744280458,\n \"\
acc_norm\": 0.3448275862068966,\n \"acc_norm_stderr\": 0.03344283744280458\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.42,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\"\
: 0.42,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.44242424242424244,\n \"acc_stderr\": 0.038783721137112745,\n\
\ \"acc_norm\": 0.44242424242424244,\n \"acc_norm_stderr\": 0.038783721137112745\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.4696969696969697,\n \"acc_stderr\": 0.03555804051763929,\n \"\
acc_norm\": 0.4696969696969697,\n \"acc_norm_stderr\": 0.03555804051763929\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.49740932642487046,\n \"acc_stderr\": 0.03608390745384487,\n\
\ \"acc_norm\": 0.49740932642487046,\n \"acc_norm_stderr\": 0.03608390745384487\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.36666666666666664,\n \"acc_stderr\": 0.02443301646605245,\n\
\ \"acc_norm\": 0.36666666666666664,\n \"acc_norm_stderr\": 0.02443301646605245\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.23703703703703705,\n \"acc_stderr\": 0.02592887613276612,\n \
\ \"acc_norm\": 0.23703703703703705,\n \"acc_norm_stderr\": 0.02592887613276612\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.37815126050420167,\n \"acc_stderr\": 0.031499305777849054,\n\
\ \"acc_norm\": 0.37815126050420167,\n \"acc_norm_stderr\": 0.031499305777849054\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.23178807947019867,\n \"acc_stderr\": 0.03445406271987054,\n \"\
acc_norm\": 0.23178807947019867,\n \"acc_norm_stderr\": 0.03445406271987054\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.5119266055045871,\n \"acc_stderr\": 0.021431223617362233,\n \"\
acc_norm\": 0.5119266055045871,\n \"acc_norm_stderr\": 0.021431223617362233\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.2824074074074074,\n \"acc_stderr\": 0.030701372111510927,\n \"\
acc_norm\": 0.2824074074074074,\n \"acc_norm_stderr\": 0.030701372111510927\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.4166666666666667,\n \"acc_stderr\": 0.03460228327239172,\n \"\
acc_norm\": 0.4166666666666667,\n \"acc_norm_stderr\": 0.03460228327239172\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.43037974683544306,\n \"acc_stderr\": 0.032230171959375976,\n \
\ \"acc_norm\": 0.43037974683544306,\n \"acc_norm_stderr\": 0.032230171959375976\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.43946188340807174,\n\
\ \"acc_stderr\": 0.03331092511038179,\n \"acc_norm\": 0.43946188340807174,\n\
\ \"acc_norm_stderr\": 0.03331092511038179\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.44274809160305345,\n \"acc_stderr\": 0.04356447202665069,\n\
\ \"acc_norm\": 0.44274809160305345,\n \"acc_norm_stderr\": 0.04356447202665069\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.6363636363636364,\n \"acc_stderr\": 0.043913262867240704,\n \"\
acc_norm\": 0.6363636363636364,\n \"acc_norm_stderr\": 0.043913262867240704\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.4074074074074074,\n\
\ \"acc_stderr\": 0.04750077341199986,\n \"acc_norm\": 0.4074074074074074,\n\
\ \"acc_norm_stderr\": 0.04750077341199986\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.3803680981595092,\n \"acc_stderr\": 0.03814269893261836,\n\
\ \"acc_norm\": 0.3803680981595092,\n \"acc_norm_stderr\": 0.03814269893261836\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.38392857142857145,\n\
\ \"acc_stderr\": 0.04616143075028547,\n \"acc_norm\": 0.38392857142857145,\n\
\ \"acc_norm_stderr\": 0.04616143075028547\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.4368932038834951,\n \"acc_stderr\": 0.04911147107365777,\n\
\ \"acc_norm\": 0.4368932038834951,\n \"acc_norm_stderr\": 0.04911147107365777\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.5854700854700855,\n\
\ \"acc_stderr\": 0.03227396567623779,\n \"acc_norm\": 0.5854700854700855,\n\
\ \"acc_norm_stderr\": 0.03227396567623779\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.42,\n \"acc_stderr\": 0.04960449637488584,\n \
\ \"acc_norm\": 0.42,\n \"acc_norm_stderr\": 0.04960449637488584\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.5351213282247765,\n\
\ \"acc_stderr\": 0.017835798806290642,\n \"acc_norm\": 0.5351213282247765,\n\
\ \"acc_norm_stderr\": 0.017835798806290642\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.3670520231213873,\n \"acc_stderr\": 0.025950054337654085,\n\
\ \"acc_norm\": 0.3670520231213873,\n \"acc_norm_stderr\": 0.025950054337654085\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2558659217877095,\n\
\ \"acc_stderr\": 0.014593620923210732,\n \"acc_norm\": 0.2558659217877095,\n\
\ \"acc_norm_stderr\": 0.014593620923210732\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.48366013071895425,\n \"acc_stderr\": 0.028614624752805407,\n\
\ \"acc_norm\": 0.48366013071895425,\n \"acc_norm_stderr\": 0.028614624752805407\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.44694533762057875,\n\
\ \"acc_stderr\": 0.028237769422085328,\n \"acc_norm\": 0.44694533762057875,\n\
\ \"acc_norm_stderr\": 0.028237769422085328\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.4351851851851852,\n \"acc_stderr\": 0.027586006221607704,\n\
\ \"acc_norm\": 0.4351851851851852,\n \"acc_norm_stderr\": 0.027586006221607704\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.3120567375886525,\n \"acc_stderr\": 0.02764012054516993,\n \
\ \"acc_norm\": 0.3120567375886525,\n \"acc_norm_stderr\": 0.02764012054516993\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.3213820078226858,\n\
\ \"acc_stderr\": 0.011927581352265078,\n \"acc_norm\": 0.3213820078226858,\n\
\ \"acc_norm_stderr\": 0.011927581352265078\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.2610294117647059,\n \"acc_stderr\": 0.02667925227010312,\n\
\ \"acc_norm\": 0.2610294117647059,\n \"acc_norm_stderr\": 0.02667925227010312\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.40522875816993464,\n \"acc_stderr\": 0.019861155193829173,\n \
\ \"acc_norm\": 0.40522875816993464,\n \"acc_norm_stderr\": 0.019861155193829173\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.43636363636363634,\n\
\ \"acc_stderr\": 0.04750185058907297,\n \"acc_norm\": 0.43636363636363634,\n\
\ \"acc_norm_stderr\": 0.04750185058907297\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.3673469387755102,\n \"acc_stderr\": 0.030862144921087558,\n\
\ \"acc_norm\": 0.3673469387755102,\n \"acc_norm_stderr\": 0.030862144921087558\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.5124378109452736,\n\
\ \"acc_stderr\": 0.03534439848539579,\n \"acc_norm\": 0.5124378109452736,\n\
\ \"acc_norm_stderr\": 0.03534439848539579\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.56,\n \"acc_stderr\": 0.049888765156985884,\n \
\ \"acc_norm\": 0.56,\n \"acc_norm_stderr\": 0.049888765156985884\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.40963855421686746,\n\
\ \"acc_stderr\": 0.038284011150790206,\n \"acc_norm\": 0.40963855421686746,\n\
\ \"acc_norm_stderr\": 0.038284011150790206\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.5614035087719298,\n \"acc_stderr\": 0.038057975055904594,\n\
\ \"acc_norm\": 0.5614035087719298,\n \"acc_norm_stderr\": 0.038057975055904594\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.28518971848225216,\n\
\ \"mc1_stderr\": 0.015805827874454892,\n \"mc2\": 0.42628263032891045,\n\
\ \"mc2_stderr\": 0.014683639845915582\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.6495659037095501,\n \"acc_stderr\": 0.013409047676670192\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.16603487490523122,\n \
\ \"acc_stderr\": 0.010249811990593523\n }\n}\n```"
repo_url: https://huggingface.co/abhinand/gemma-2b-it-tamil-v0.1-alpha
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|arc:challenge|25_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|gsm8k|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hellaswag|10_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-29T18-37-34.730615.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-29T18-37-34.730615.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- '**/details_harness|winogrande|5_2024-02-29T18-37-34.730615.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-02-29T18-37-34.730615.parquet'
- config_name: results
data_files:
- split: 2024_02_29T18_37_34.730615
path:
- results_2024-02-29T18-37-34.730615.parquet
- split: latest
path:
- results_2024-02-29T18-37-34.730615.parquet
---
# Dataset Card for Evaluation run of abhinand/gemma-2b-it-tamil-v0.1-alpha
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [abhinand/gemma-2b-it-tamil-v0.1-alpha](https://huggingface.co/abhinand/gemma-2b-it-tamil-v0.1-alpha) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_abhinand__gemma-2b-it-tamil-v0.1-alpha",
"harness_winogrande_5",
split="train")
```
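As the split names above suggest, each run's split is its timestamp in `YYYY_MM_DDTHH_MM_SS.ffffff` form, so because the fields are zero-padded, lexicographic order matches chronological order and the newest run can be picked programmatically. A minimal sketch (the helper name is illustrative, not part of this card):

```python
def latest_split(split_names):
    """Return the most recent timestamped split name.

    Split names follow the zero-padded "YYYY_MM_DDTHH_MM_SS.ffffff"
    pattern, so string max() is also the chronological max.
    """
    timestamped = [s for s in split_names if s != "latest"]
    return max(timestamped)

print(latest_split(["2024_02_29T18_37_34.730615", "latest"]))
# -> 2024_02_29T18_37_34.730615
```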
## Latest results
These are the [latest results from run 2024-02-29T18:37:34.730615](https://huggingface.co/datasets/open-llm-leaderboard/details_abhinand__gemma-2b-it-tamil-v0.1-alpha/blob/main/results_2024-02-29T18-37-34.730615.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.40322043069379365,
"acc_stderr": 0.034368568711516966,
"acc_norm": 0.40644647303564846,
"acc_norm_stderr": 0.0351249450866089,
"mc1": 0.28518971848225216,
"mc1_stderr": 0.015805827874454892,
"mc2": 0.42628263032891045,
"mc2_stderr": 0.014683639845915582
},
"harness|arc:challenge|25": {
"acc": 0.4778156996587031,
"acc_stderr": 0.014597001927076138,
"acc_norm": 0.5008532423208191,
"acc_norm_stderr": 0.014611369529813269
},
"harness|hellaswag|10": {
"acc": 0.5376419040031866,
"acc_stderr": 0.0049756211474061025,
"acc_norm": 0.7141007767377017,
"acc_norm_stderr": 0.004509181919322837
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.27,
"acc_stderr": 0.04461960433384739,
"acc_norm": 0.27,
"acc_norm_stderr": 0.04461960433384739
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.4444444444444444,
"acc_stderr": 0.04292596718256981,
"acc_norm": 0.4444444444444444,
"acc_norm_stderr": 0.04292596718256981
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.3881578947368421,
"acc_stderr": 0.03965842097512744,
"acc_norm": 0.3881578947368421,
"acc_norm_stderr": 0.03965842097512744
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.39,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.39,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.39622641509433965,
"acc_stderr": 0.030102793781791194,
"acc_norm": 0.39622641509433965,
"acc_norm_stderr": 0.030102793781791194
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.4444444444444444,
"acc_stderr": 0.04155319955593146,
"acc_norm": 0.4444444444444444,
"acc_norm_stderr": 0.04155319955593146
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.37,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.37,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.28,
"acc_stderr": 0.04512608598542127,
"acc_norm": 0.28,
"acc_norm_stderr": 0.04512608598542127
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.3699421965317919,
"acc_stderr": 0.0368122963339432,
"acc_norm": 0.3699421965317919,
"acc_norm_stderr": 0.0368122963339432
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.17647058823529413,
"acc_stderr": 0.0379328118530781,
"acc_norm": 0.17647058823529413,
"acc_norm_stderr": 0.0379328118530781
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.51,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.51,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.39148936170212767,
"acc_stderr": 0.031907012423268113,
"acc_norm": 0.39148936170212767,
"acc_norm_stderr": 0.031907012423268113
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.3508771929824561,
"acc_stderr": 0.044895393502706986,
"acc_norm": 0.3508771929824561,
"acc_norm_stderr": 0.044895393502706986
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.4689655172413793,
"acc_stderr": 0.04158632762097828,
"acc_norm": 0.4689655172413793,
"acc_norm_stderr": 0.04158632762097828
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.2751322751322751,
"acc_stderr": 0.02300008685906864,
"acc_norm": 0.2751322751322751,
"acc_norm_stderr": 0.02300008685906864
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.35714285714285715,
"acc_stderr": 0.04285714285714281,
"acc_norm": 0.35714285714285715,
"acc_norm_stderr": 0.04285714285714281
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.29,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.29,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.41935483870967744,
"acc_stderr": 0.02807158890109184,
"acc_norm": 0.41935483870967744,
"acc_norm_stderr": 0.02807158890109184
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.3448275862068966,
"acc_stderr": 0.03344283744280458,
"acc_norm": 0.3448275862068966,
"acc_norm_stderr": 0.03344283744280458
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.42,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.42,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.44242424242424244,
"acc_stderr": 0.038783721137112745,
"acc_norm": 0.44242424242424244,
"acc_norm_stderr": 0.038783721137112745
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.4696969696969697,
"acc_stderr": 0.03555804051763929,
"acc_norm": 0.4696969696969697,
"acc_norm_stderr": 0.03555804051763929
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.49740932642487046,
"acc_stderr": 0.03608390745384487,
"acc_norm": 0.49740932642487046,
"acc_norm_stderr": 0.03608390745384487
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.36666666666666664,
"acc_stderr": 0.02443301646605245,
"acc_norm": 0.36666666666666664,
"acc_norm_stderr": 0.02443301646605245
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.23703703703703705,
"acc_stderr": 0.02592887613276612,
"acc_norm": 0.23703703703703705,
"acc_norm_stderr": 0.02592887613276612
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.37815126050420167,
"acc_stderr": 0.031499305777849054,
"acc_norm": 0.37815126050420167,
"acc_norm_stderr": 0.031499305777849054
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.23178807947019867,
"acc_stderr": 0.03445406271987054,
"acc_norm": 0.23178807947019867,
"acc_norm_stderr": 0.03445406271987054
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.5119266055045871,
"acc_stderr": 0.021431223617362233,
"acc_norm": 0.5119266055045871,
"acc_norm_stderr": 0.021431223617362233
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.2824074074074074,
"acc_stderr": 0.030701372111510927,
"acc_norm": 0.2824074074074074,
"acc_norm_stderr": 0.030701372111510927
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.4166666666666667,
"acc_stderr": 0.03460228327239172,
"acc_norm": 0.4166666666666667,
"acc_norm_stderr": 0.03460228327239172
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.43037974683544306,
"acc_stderr": 0.032230171959375976,
"acc_norm": 0.43037974683544306,
"acc_norm_stderr": 0.032230171959375976
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.43946188340807174,
"acc_stderr": 0.03331092511038179,
"acc_norm": 0.43946188340807174,
"acc_norm_stderr": 0.03331092511038179
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.44274809160305345,
"acc_stderr": 0.04356447202665069,
"acc_norm": 0.44274809160305345,
"acc_norm_stderr": 0.04356447202665069
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.6363636363636364,
"acc_stderr": 0.043913262867240704,
"acc_norm": 0.6363636363636364,
"acc_norm_stderr": 0.043913262867240704
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.4074074074074074,
"acc_stderr": 0.04750077341199986,
"acc_norm": 0.4074074074074074,
"acc_norm_stderr": 0.04750077341199986
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.3803680981595092,
"acc_stderr": 0.03814269893261836,
"acc_norm": 0.3803680981595092,
"acc_norm_stderr": 0.03814269893261836
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.38392857142857145,
"acc_stderr": 0.04616143075028547,
"acc_norm": 0.38392857142857145,
"acc_norm_stderr": 0.04616143075028547
},
"harness|hendrycksTest-management|5": {
"acc": 0.4368932038834951,
"acc_stderr": 0.04911147107365777,
"acc_norm": 0.4368932038834951,
"acc_norm_stderr": 0.04911147107365777
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.5854700854700855,
"acc_stderr": 0.03227396567623779,
"acc_norm": 0.5854700854700855,
"acc_norm_stderr": 0.03227396567623779
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.42,
"acc_stderr": 0.04960449637488584,
"acc_norm": 0.42,
"acc_norm_stderr": 0.04960449637488584
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.5351213282247765,
"acc_stderr": 0.017835798806290642,
"acc_norm": 0.5351213282247765,
"acc_norm_stderr": 0.017835798806290642
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.3670520231213873,
"acc_stderr": 0.025950054337654085,
"acc_norm": 0.3670520231213873,
"acc_norm_stderr": 0.025950054337654085
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.2558659217877095,
"acc_stderr": 0.014593620923210732,
"acc_norm": 0.2558659217877095,
"acc_norm_stderr": 0.014593620923210732
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.48366013071895425,
"acc_stderr": 0.028614624752805407,
"acc_norm": 0.48366013071895425,
"acc_norm_stderr": 0.028614624752805407
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.44694533762057875,
"acc_stderr": 0.028237769422085328,
"acc_norm": 0.44694533762057875,
"acc_norm_stderr": 0.028237769422085328
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.4351851851851852,
"acc_stderr": 0.027586006221607704,
"acc_norm": 0.4351851851851852,
"acc_norm_stderr": 0.027586006221607704
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.3120567375886525,
"acc_stderr": 0.02764012054516993,
"acc_norm": 0.3120567375886525,
"acc_norm_stderr": 0.02764012054516993
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.3213820078226858,
"acc_stderr": 0.011927581352265078,
"acc_norm": 0.3213820078226858,
"acc_norm_stderr": 0.011927581352265078
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.2610294117647059,
"acc_stderr": 0.02667925227010312,
"acc_norm": 0.2610294117647059,
"acc_norm_stderr": 0.02667925227010312
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.40522875816993464,
"acc_stderr": 0.019861155193829173,
"acc_norm": 0.40522875816993464,
"acc_norm_stderr": 0.019861155193829173
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.43636363636363634,
"acc_stderr": 0.04750185058907297,
"acc_norm": 0.43636363636363634,
"acc_norm_stderr": 0.04750185058907297
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.3673469387755102,
"acc_stderr": 0.030862144921087558,
"acc_norm": 0.3673469387755102,
"acc_norm_stderr": 0.030862144921087558
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.5124378109452736,
"acc_stderr": 0.03534439848539579,
"acc_norm": 0.5124378109452736,
"acc_norm_stderr": 0.03534439848539579
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.56,
"acc_stderr": 0.049888765156985884,
"acc_norm": 0.56,
"acc_norm_stderr": 0.049888765156985884
},
"harness|hendrycksTest-virology|5": {
"acc": 0.40963855421686746,
"acc_stderr": 0.038284011150790206,
"acc_norm": 0.40963855421686746,
"acc_norm_stderr": 0.038284011150790206
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.5614035087719298,
"acc_stderr": 0.038057975055904594,
"acc_norm": 0.5614035087719298,
"acc_norm_stderr": 0.038057975055904594
},
"harness|truthfulqa:mc|0": {
"mc1": 0.28518971848225216,
"mc1_stderr": 0.015805827874454892,
"mc2": 0.42628263032891045,
"mc2_stderr": 0.014683639845915582
},
"harness|winogrande|5": {
"acc": 0.6495659037095501,
"acc_stderr": 0.013409047676670192
},
"harness|gsm8k|5": {
"acc": 0.16603487490523122,
"acc_stderr": 0.010249811990593523
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
ShareGPTVideo/test_video_and_instruction | ---
license: apache-2.0
task_categories:
- question-answering
- other
language:
- en
tags:
- GPT-4V
- video
size_categories:
- n < 1M
---
# ShareGPTVideo Testing Data
All datasets and models can be found at [ShareGPTVideo](https://huggingface.co/ShareGPTVideo).
Contents:
[Test video frames](https://huggingface.co/datasets/ShareGPTVideo/test_video_and_instruction/tree/main/video_data/test): contains video frames used for testing
- In-domain: WebVid, Vidal (Youtube shorts), ActivityNet
- Out-of-domain: MSRVTT, MSVD, TGIF, SSV2
For the testing pipeline, refer to [LLaVA-Hound-DPO test](https://github.com/RifleZhang/LLaVA-Hound-DPO/blob/main/llava_hound_dpo/test/README.md)
# Set up:
```bash
git clone git@github.com:RifleZhang/LLaVA-Hound-DPO.git
source setup/setup_env.sh
source setup/setup_test_data.sh
```
**Video Frames**:
```bash
video_data
└── test
├── actnet
├── msrvtt
├── msvd
├── ssv2
├── tgif
├── vidal
└── webvid
```
For **raw videos**, see [ShareGPTVideo/test_video_data](https://huggingface.co/datasets/ShareGPTVideo/test_video_data)
**Test Video QA Data**:
```bash
video_instruction
├── test
│ ├── actnet.qa.jsonl
│ ├── msrvtt.qa.jsonl
│ ├── msrvtt.qa.official.jsonl
│ ├── msvd.qa.jsonl
│ ├── msvd.qa.official.jsonl
│ ├── ssv2.qa.jsonl
│ ├── tgif.qa.jsonl
│ ├── tgif.qa.official.jsonl
│ ├── vidal.qa.jsonl
│ └── webvid.qa.jsonl
└── test_result
├── eval_results_official.jsonl
```
# Preview examples
<details><summary>Existing Video QA from Video-ChatGPT</summary>
MSRVTT dataset example, we use a 5k subset from [Video-ChatGPT](https://github.com/mbzuai-oryx/Video-ChatGPT?tab=readme-ov-file#quantitative-evaluation-bar_chart)
```bash
{
"id":"v_video7012_0",
"video":"test/msrvtt/video7012",
"conversations":[
{
"from":"human",
"value":"<video>\nwhat is a suit man doing?"
},
{
"from":"gpt",
"value":"talk"
}
]
}
```
</details>
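Each line of these `.qa.jsonl` files is an independent JSON record. As a minimal sketch (using the MSRVTT example above inlined as a string rather than read from disk), the question and reference answer can be pulled out of the `conversations` field like this:

```python
import json

# A single line mirroring the MSRVTT example shown above.
line = (
    '{"id": "v_video7012_0", "video": "test/msrvtt/video7012", '
    '"conversations": [{"from": "human", "value": "<video>\\nwhat is a suit man doing?"}, '
    '{"from": "gpt", "value": "talk"}]}'
)

record = json.loads(line)
# The human turn carries the question, prefixed by the <video> placeholder token.
question = record["conversations"][0]["value"].replace("<video>\n", "")
# The gpt turn carries the reference answer.
answer = record["conversations"][1]["value"]
print(question)  # what is a suit man doing?
print(answer)    # talk
```

In practice you would iterate over the lines of, e.g., `video_instruction/test/msrvtt.qa.jsonl` instead of a hard-coded string.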
<details><summary>Our Proposed Video QA Benchmark</summary>
MSRVTT dataset example, with detailed caption and high quality QA.
```bash
{
"id":"video1_0",
"video":"test/msrvtt/video1",
"conversations":[
{
"from":"human",
"value":"<video>\nWhat is being added to the red liquid in the cooking pot at the beginning of the video?"
},
{
"from":"gpt",
"value":"All-purpose seasoning and curry powder are being added to the red liquid in the cooking pot at the beginning of the video."
}
],
"caption":"The video opens with a close-up of a cooking pot containing a red liquid to which all-purpose seasoning and curry powder are being added. The text overlay mentions these ingredients. The next frame shows a hand holding a container with dried/smoked prawns over a cooking pot; a text overlay confirms the ingredient. Subsequently, the pot is now shown with prawns added, stirring is underway, and the same descriptive text is visible. The following frames reveal someone adding dried crayfish to the pot, with a textual indication of the ingredient and giving an alternative preference option. Then, one or two scoops of tomato puree are added to the mixture in the pot, and a textual description quantifies the amount. The scene transitions to show someone adding salt to the pot and stirring its contents, as narrated by the respective text overlay. Finally, a lid is being placed on the pot, and the instruction to cover and cook for 30-45 minutes appears as text on the screen."
}
```
</details>
<br>
# Issues and Requests
Please send them to the [project page](https://github.com/RifleZhang/LLaVA-Hound-DPO/tree/main).
# Citation
```
@misc{zhang2024direct,
title={Direct Preference Optimization of Video Large Multimodal Models from Language Model Reward},
author={Ruohong Zhang and Liangke Gui and Zhiqing Sun and Yihao Feng and Keyang Xu and Yuanhan Zhang and Di Fu and Chunyuan Li and Alexander Hauptmann and Yonatan Bisk and Yiming Yang},
year={2024},
eprint={2404.01258},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
``` |
THUDM/LongBench | ---
task_categories:
- question-answering
- text-generation
- summarization
- conversational
- text-classification
language:
- en
- zh
tags:
- Long Context
size_categories:
- 1K<n<10K
---
# Introduction
**LongBench** is the first benchmark for bilingual, multitask, and comprehensive assessment of **long context understanding** capabilities of large language models. LongBench includes different languages (Chinese and English) to provide a more comprehensive evaluation of large models' multilingual capabilities on long contexts. In addition, LongBench is composed of six major categories and twenty-one different tasks, covering key long-text application scenarios such as single-document QA, multi-document QA, summarization, few-shot learning, synthetic tasks, and code completion.
We are fully aware of the potentially high costs involved in the model evaluation process, especially in the context of long context scenarios (such as manual annotation costs or API call costs). Therefore, we adopt a fully automated evaluation method, aimed at measuring and evaluating the model's ability to understand long contexts at the lowest cost.
LongBench includes 14 English tasks, 5 Chinese tasks, and 2 code tasks, with the average length of most tasks ranging from 5k to 15k, and a total of 4,750 test data. For detailed statistics and construction methods of LongBench tasks, please refer to [this page](task.md). In addition, we provide LongBench-E, a test set with a more uniform length distribution constructed by uniform sampling, with comparable amounts of data in the 0-4k, 4k-8k, and 8k+ length intervals to provide an analysis of the model's performance variations at different input lengths.
Github Repo for LongBench: https://github.com/THUDM/LongBench
Arxiv Paper for LongBench: https://arxiv.org/pdf/2308.14508.pdf
# How to use it?
#### Loading Data
```python
from datasets import load_dataset
datasets = ["narrativeqa", "qasper", "multifieldqa_en", "multifieldqa_zh", "hotpotqa", "2wikimqa", "musique", \
"dureader", "gov_report", "qmsum", "multi_news", "vcsum", "trec", "triviaqa", "samsum", "lsht", \
"passage_count", "passage_retrieval_en", "passage_retrieval_zh", "lcc", "repobench-p"]
for dataset in datasets:
data = load_dataset('THUDM/LongBench', dataset, split='test')
```
Similarly, you can load the **LongBench-E** data
```python
from datasets import load_dataset
datasets = ["qasper", "multifieldqa_en", "hotpotqa", "2wikimqa", "gov_report", "multi_news", "trec", \
"triviaqa", "samsum", "passage_count", "passage_retrieval_en", "lcc", "repobench-p"]
for dataset in datasets:
data = load_dataset('THUDM/LongBench', f"{dataset}_e", split='test')
```
Alternatively, you can download the folder from [this link](https://huggingface.co/datasets/THUDM/LongBench/resolve/main/data.zip) to load the data.
#### Data Format
All data in **LongBench** (LongBench-E) are standardized to the following format:
```json
{
"input": "The input/command for the task, usually short, such as questions in QA, queries in Few-shot tasks, etc",
"context": "The long context required for the task, such as documents, cross-file code, few-shot examples in Few-shot tasks",
"answers": "A List of all true answers",
"length": "Total length of the first three items (counted in characters for Chinese and words for English)",
"dataset": "The name of the dataset to which this piece of data belongs",
"language": "The language of this piece of data",
"all_classes": "All categories in classification tasks, null for non-classification tasks",
"_id": "Random id for each piece of data"
}
```
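Once loaded, each example is a plain dict with the fields described above. As a sketch, a typical zero-shot evaluation loop assembles a prompt from `context` and `input` and scores the model output against every reference in `answers` (the record below is fabricated for illustration; field names follow the schema shown):

```python
# Fabricated example in the LongBench record format described above.
example = {
    "input": "What is the capital of France?",
    "context": "France is a country in Western Europe. Its capital is Paris.",
    "answers": ["Paris"],
    "length": 12,
    "dataset": "multifieldqa_en",
    "language": "en",
    "all_classes": None,  # null for non-classification tasks
    "_id": "a1b2c3",
}

# Build the prompt from the long context plus the short instruction,
# then compare the model's answer against each reference.
prompt = f"{example['context']}\n\nQuestion: {example['input']}\nAnswer:"
references = example["answers"]
print(prompt.endswith("Answer:"))  # True
```

The actual prompt templates and metric implementations (F1, Rouge-L, accuracy, edit similarity) are provided in the LongBench GitHub repository.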
#### Evaluation
This repository provides data download for LongBench. If you wish to use this dataset for automated evaluation, please refer to our [github](https://github.com/THUDM/LongBench).
# Task statistics
| Task | Task Type | Eval metric | Avg len |Language | \#Sample |
| :-------- | :-----------:| :-----------: |:-------: | :-----------: |:--------: |
| HotpotQA | Multi-doc QA | F1 |9,151 |EN |200 |
| 2WikiMultihopQA| Multi-doc QA | F1 |4,887 |EN |200 |
| MuSiQue| Multi-doc QA | F1 |11,214 |EN |200 |
| DuReader| Multi-doc QA | Rouge-L |15,768 |ZH |200 |
| MultiFieldQA-en| Single-doc QA | F1 |4,559 |EN |150 |
| MultiFieldQA-zh| Single-doc QA | F1 |6,701 |ZH |200 |
| NarrativeQA| Single-doc QA | F1 |18,409 |EN |200 |
| Qasper| Single-doc QA | F1 |3,619 |EN |200 |
| GovReport| Summarization | Rouge-L |8,734 |EN |200 |
| QMSum| Summarization | Rouge-L |10,614 |EN |200 |
| MultiNews| Summarization | Rouge-L |2,113 |EN |200 |
| VCSUM| Summarization | Rouge-L |15,380 |ZH |200 |
| TriviaQA| Few shot | F1 |8,209 |EN |200 |
| SAMSum| Few shot | Rouge-L |6,258 |EN |200 |
| TREC| Few shot | Accuracy |5,177 |EN |200 |
| LSHT| Few shot | Accuracy |22,337 |ZH |200 |
| PassageRetrieval-en| Synthetic | Accuracy |9,289 |EN |200 |
| PassageCount| Synthetic | Accuracy |11,141 |EN |200 |
| PassageRetrieval-zh | Synthetic | Accuracy |6,745 |ZH |200 |
| LCC| Code | Edit Sim |1,235 |Python/C#/Java |500 |
| RepoBench-P| Code | Edit Sim |4,206 |Python/Java |500 |
> Note: In order to avoid discrepancies caused by different tokenizers, we use the word count (using Python's split function) to calculate the average length of English datasets and code datasets, and use the character count to calculate the average length of Chinese datasets.
# Task description
| Task | Task Description |
| :---------------- | :----------------------------------------------------------- |
| HotpotQA | Answer related questions based on multiple given documents |
| 2WikiMultihopQA | Answer related questions based on multiple given documents |
| MuSiQue | Answer related questions based on multiple given documents |
| DuReader | Answer related Chinese questions based on multiple retrieved documents |
| MultiFieldQA-en | Answer English questions based on a long article, which comes from a relatively diverse field |
| MultiFieldQA-zh | Answer Chinese questions based on a long article, which comes from a relatively diverse field |
| NarrativeQA | Answer questions based on stories or scripts, including understanding of important elements such as characters, plots, themes, etc. |
| Qasper | Answer questions based on a NLP research paper, questions proposed and answered by NLP practitioners |
| GovReport | A summarization task that requires summarizing government work reports |
| MultiNews | A multi-doc summarization that requires summarizing over multiple news |
| QMSum | A summarization task that requires summarizing meeting records based on user queries |
| VCSUM | A summarization task that requires summarizing Chinese meeting records |
| SAMSum | A dialogue summarization task, providing several few-shot examples |
| TriviaQA | Single document question answering task, providing several few-shot examples |
| NQ | Single document question answering task, providing several few-shot examples |
| TREC | A classification task that requires categorizing questions, includes 50 categories in total |
| LSHT | A Chinese classification task that requires categorizing news, includes 24 categories in total |
| PassageRetrieval-en | Given 30 English Wikipedia paragraphs, determine which paragraph the given summary corresponds to |
| PassageCount | Determine the total number of different paragraphs in a given repetitive article |
| PassageRetrieval-zh | Given several Chinese paragraphs from the C4 data set, determine which paragraph the given abstract corresponds to |
| LCC | Given a long piece of code, predict the next line of code |
| RepoBench-P | Given code in multiple files within a GitHub repository (including cross-file dependencies), predict the next line of code |
# Task construction
> Note: For all tasks constructed from existing datasets, we use data from the validation or test set of the existing dataset (except for VCSUM).
- The tasks of [HotpotQA](https://hotpotqa.github.io/), [2WikiMultihopQA](https://aclanthology.org/2020.coling-main.580/), [MuSiQue](https://arxiv.org/abs/2108.00573), and [DuReader](https://github.com/baidu/DuReader) are built based on the original datasets and processed to be suitable for long context evaluation. Specifically, for questions in the validation set, we select the evidence passage that contains the answer and several distracting articles. These articles together with the original question constitute the input of the tasks.
- The tasks of MultiFieldQA-zh and MultiFieldQA-en consist of long article data from about 10 sources, including LaTeX papers, judicial documents, government work reports, and PDF documents indexed by Google. For each long article, we invite several PhD and master's students to annotate, i.e., to ask questions based on the long article and give the correct answers. To better automate evaluation, we ask the annotators to propose questions with definitive answers as much as possible.
- The tasks of [NarrativeQA](https://arxiv.org/pdf/1712.07040.pdf), [Qasper](https://arxiv.org/pdf/2105.03011.pdf), [GovReport](https://arxiv.org/pdf/2104.02112.pdf), [QMSum](https://arxiv.org/pdf/2104.05938.pdf) and [MultiNews](https://aclanthology.org/P19-1102.pdf) directly use the data provided by the original papers. In the specific construction, we use the template provided by [ZeroSCROLLS](https://www.zero.scrolls-benchmark.com/) to convert the corresponding data into pure text input.
- The [VCSUM](https://arxiv.org/abs/2305.05280) task is built based on the original dataset, and we design a corresponding template to convert the corresponding data into pure text input.
- The [TriviaQA](https://nlp.cs.washington.edu/triviaqa/) task is constructed in the manner of [CoLT5](https://arxiv.org/abs/2303.09752), which provides several examples of question and answering based on documents, and requires the language model to answer related questions based on new documents.
- The tasks of [SAMSum](https://aclanthology.org/D19-5409.pdf), [TREC](https://aclanthology.org/C02-1150.pdf) and [LSHT](http://tcci.ccf.org.cn/conference/2014/dldoc/evatask6.pdf) are built based on the original datasets. For each question in the validation set, we sample several data from the training set to form few-shot examples. These examples together with the questions in the validation set constitute the input for this task.
- The PassageRetrieval-en task is constructed based on English Wikipedia. For each piece of data, we randomly sample 30 paragraphs from English Wikipedia and select one for summarization (using GPT-3.5-Turbo). This task requires the model to give the original paragraph name to which the summary corresponds.
- The PassageCount task is constructed based on English Wikipedia. For each piece of data, we randomly sample several passages from English Wikipedia, repeat each paragraph at random several times, and finally shuffle the paragraphs. This task requires the model to determine the total number of different paragraphs in the given context.
- The PassageRetrieval-zh task is constructed based on [C4](https://arxiv.org/abs/1910.10683). For each piece of data, we randomly sample several Chinese paragraphs from C4 and select one of them for summarization (using GPT-3.5-Turbo). This task requires the model to give the original paragraph name to which the summary corresponds.
- For the [LCC](https://arxiv.org/abs/2306.14893) task, we sample from the original code completion dataset. In the [RepoBench-P](https://arxiv.org/abs/2306.03091) task, we select the most challenging XF-F (Cross-File-First) setting from the original dataset and refer to the Oracle-Filled scenario in the paper. For each original piece of data, we randomly extract multiple cross-file code snippets, including the gold cross-file code snippet, and concatenate them as input, requiring the model to effectively use cross-file code for completion.
# LongBench-E statistics
| Task | Task Type | \#data in 0-4k | \#data in 4-8k | \#data in 8k+|
| :--------- | :-----------:| :-----------: |:---------: | :-------------: |
| HotpotQA | Multi-doc QA | 100 |100 |100 |
| 2WikiMultihopQA| Multi-doc QA | 100 |100 |100 |
| MultiFieldQA-en| Single-doc QA | 67 |70 |13 |
| Qasper| Single-doc QA | 100 |100 |24 |
| GovReport| Summarization | 100 |100 |100 |
| MultiNews| Summarization | 100 |100 |94 |
| TriviaQA| Few shot | 100 |100 |100 |
| SAMSum| Few shot | 100 |100 |100 |
| TREC| Few shot | 100 |100 |100 |
| PassageRetrieval-en| Synthetic | 100 |100 |100 |
| PassageCount| Synthetic | 100 |100 |100 |
| LCC| Code | 100 |100 |100 |
| RepoBench-P| Code | 100 |100 |100 |
# Citation
```
@misc{bai2023longbench,
title={LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding},
author={Yushi Bai and Xin Lv and Jiajie Zhang and Hongchang Lyu and Jiankai Tang and Zhidian Huang and Zhengxiao Du and Xiao Liu and Aohan Zeng and Lei Hou and Yuxiao Dong and Jie Tang and Juanzi Li},
year={2023},
eprint={2308.14508},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
CyberHarem/nami_leagueoflegends | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of nami (League of Legends)
This is the dataset of nami (League of Legends), containing 81 images and their tags.
The core tags of this character are `breasts, long_hair, large_breasts, monster_girl, hair_ornament, blue_eyes, purple_hair, colored_skin`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 81 | 133.32 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nami_leagueoflegends/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 81 | 68.97 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nami_leagueoflegends/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 188 | 139.53 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nami_leagueoflegends/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 81 | 113.00 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nami_leagueoflegends/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 188 | 206.07 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nami_leagueoflegends/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/nami_leagueoflegends',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 23 |  |  |  |  |  | 1girl, solo, looking_at_viewer, facial_mark, bare_shoulders, bracelet, mermaid, parted_lips, smile, collarbone, detached_sleeves, head_fins, cleavage, water |
| 1 | 9 |  |  |  |  |  | 1girl, solo, looking_at_viewer, pink_hair, bangs, mermaid, red_eyes, gloves, holding, staff |
| 2 | 7 |  |  |  |  |  | uncensored, 1girl, hetero, penis, solo_focus, 1boy, clitoris, cum, inverted_nipples, paizuri, pussy, spread_legs |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | looking_at_viewer | facial_mark | bare_shoulders | bracelet | mermaid | parted_lips | smile | collarbone | detached_sleeves | head_fins | cleavage | water | pink_hair | bangs | red_eyes | gloves | holding | staff | uncensored | hetero | penis | solo_focus | 1boy | clitoris | cum | inverted_nipples | paizuri | pussy | spread_legs |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------------------|:--------------|:-----------------|:-----------|:----------|:--------------|:--------|:-------------|:-------------------|:------------|:-----------|:--------|:------------|:--------|:-----------|:---------|:----------|:--------|:-------------|:---------|:--------|:-------------|:-------|:-----------|:------|:-------------------|:----------|:--------|:--------------|
| 0 | 23 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | |
| 1 | 9 |  |  |  |  |  | X | X | X | | | | X | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | |
| 2 | 7 |  |  |  |  |  | X | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X |
|
math-ai/StackMathQA | ---
license: cc-by-4.0
task_categories:
- text-generation
- question-answering
language:
- en
pretty_name: StackMathQA
size_categories:
- 1B<n<10B
configs:
- config_name: stackmathqa1600k
data_files: data/stackmathqa1600k/all.jsonl
default: true
- config_name: stackmathqa800k
data_files: data/stackmathqa800k/all.jsonl
- config_name: stackmathqa400k
data_files: data/stackmathqa400k/all.jsonl
- config_name: stackmathqa200k
data_files: data/stackmathqa200k/all.jsonl
- config_name: stackmathqa100k
data_files: data/stackmathqa100k/all.jsonl
- config_name: stackmathqafull-1q1a
data_files: preprocessed/stackexchange-math--1q1a/*.jsonl
- config_name: stackmathqafull-qalist
data_files: preprocessed/stackexchange-math/*.jsonl
tags:
- mathematical-reasoning
- reasoning
- finetuning
- pretraining
- llm
---
# StackMathQA
StackMathQA is a meticulously curated collection of **2 million** mathematical questions and answers, sourced from various Stack Exchange sites. This repository is designed to serve as a comprehensive resource for researchers, educators, and enthusiasts in the field of mathematics and AI research.
## Configs
```YAML
configs:
- config_name: stackmathqa1600k
data_files: data/stackmathqa1600k/all.jsonl
default: true
- config_name: stackmathqa800k
data_files: data/stackmathqa800k/all.jsonl
- config_name: stackmathqa400k
data_files: data/stackmathqa400k/all.jsonl
- config_name: stackmathqa200k
data_files: data/stackmathqa200k/all.jsonl
- config_name: stackmathqa100k
data_files: data/stackmathqa100k/all.jsonl
- config_name: stackmathqafull-1q1a
data_files: preprocessed/stackexchange-math--1q1a/*.jsonl
- config_name: stackmathqafull-qalist
data_files: preprocessed/stackexchange-math/*.jsonl
```
How to load data:
```python
from datasets import load_dataset
ds = load_dataset("math-ai/StackMathQA", "stackmathqa1600k") # or any valid config_name
```
## Preprocessed Data
In the `./preprocessed/stackexchange-math` directory and `./preprocessed/stackexchange-math--1q1a` directory, you will find the data structured in two formats:
1. **Question and List of Answers Format**:
Each entry is structured as {"Q": "question", "A_List": ["answer1", "answer2", ...]}.
- `math.stackexchange.com.jsonl`: 827,439 lines
- `mathoverflow.net.jsonl`: 90,645 lines
- `stats.stackexchange.com.jsonl`: 103,024 lines
- `physics.stackexchange.com.jsonl`: 117,318 lines
- In total: **1,138,426** questions
```YAML
dataset_info:
features:
- name: Q
dtype: string
description: "The mathematical question in LaTeX encoded format."
- name: A_list
dtype: sequence
description: "The list of answers to the mathematical question, also in LaTeX encoded."
- name: meta
dtype: dict
description: "A collection of metadata for each question and its corresponding answer list."
```
2. **Question and Single Answer Format**:
Each line contains a question and one corresponding answer, structured as {"Q": "question", "A": "answer"}. Multiple answers for the same question are separated into different lines.
- `math.stackexchange.com.jsonl`: 1,407,739 lines
- `mathoverflow.net.jsonl`: 166,592 lines
- `stats.stackexchange.com.jsonl`: 156,143 lines
- `physics.stackexchange.com.jsonl`: 226,532 lines
- In total: **1,957,006** answers
```YAML
dataset_info:
features:
- name: Q
dtype: string
description: "The mathematical question in LaTeX encoded format."
- name: A
dtype: string
description: "The answer to the mathematical question, also in LaTeX encoded."
- name: meta
dtype: dict
description: "A collection of metadata for each question-answer pair."
```
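Because each line in the 1q1a files is a self-contained `{"Q": ..., "A": ...}` record, the files can be streamed line by line without loading everything into memory. A minimal sketch (with two fabricated lines inlined via `StringIO`; in practice you would open, e.g., `preprocessed/stackexchange-math--1q1a/math.stackexchange.com.jsonl`):

```python
import io
import json

# Two fabricated lines in the single-answer format described above.
# Note that multiple answers to the same question appear as separate lines.
sample = io.StringIO(
    '{"Q": "What is 2+2?", "A": "4", "meta": {"site": "math.stackexchange.com"}}\n'
    '{"Q": "What is 2+2?", "A": "It equals 4.", "meta": {"site": "math.stackexchange.com"}}\n'
)

pairs = [json.loads(line) for line in sample]
assert pairs[0]["Q"] == pairs[1]["Q"]  # same question, different answer lines
print(len(pairs))  # 2
```

The question-and-list-of-answers format can be parsed the same way, with `A_List` holding all answers for a question on a single line.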
## Selected Data
The dataset has been carefully curated using importance sampling. We offer selected subsets of the dataset (`./preprocessed/stackexchange-math--1q1a`) with different sizes to cater to varied needs:
```YAML
dataset_info:
features:
- name: Q
dtype: string
description: "The mathematical question in LaTeX encoded format."
- name: A
dtype: string
description: "The answer to the mathematical question, also in LaTeX encoded."
- name: meta
dtype: dict
description: "A collection of metadata for each question-answer pair."
```
### StackMathQA1600K
- Location: `./data/stackmathqa1600k`
- Contents:
- `all.jsonl`: Containing 1.6 million entries.
- `meta.json`: Metadata and additional information.
```bash
Source: Stack Exchange (Math), Count: 1244887
Source: MathOverflow, Count: 110041
Source: Stack Exchange (Stats), Count: 99878
Source: Stack Exchange (Physics), Count: 145194
```
Similar structures are available for StackMathQA800K, StackMathQA400K, StackMathQA200K, and StackMathQA100K subsets.
### StackMathQA800K
- Location: `./data/stackmathqa800k`
- Contents:
- `all.jsonl`: Containing 800k entries.
- `meta.json`: Metadata and additional information.
```bash
Source: Stack Exchange (Math), Count: 738850
Source: MathOverflow, Count: 24276
Source: Stack Exchange (Stats), Count: 15046
Source: Stack Exchange (Physics), Count: 21828
```
### StackMathQA400K
- Location: `./data/stackmathqa400k`
- Contents:
- `all.jsonl`: Containing 400k entries.
- `meta.json`: Metadata and additional information.
```bash
Source: Stack Exchange (Math), Count: 392940
Source: MathOverflow, Count: 3963
Source: Stack Exchange (Stats), Count: 1637
Source: Stack Exchange (Physics), Count: 1460
```
### StackMathQA200K
- Location: `./data/stackmathqa200k`
- Contents:
- `all.jsonl`: Containing 200k entries.
- `meta.json`: Metadata and additional information.
```bash
Source: Stack Exchange (Math), Count: 197792
Source: MathOverflow, Count: 1367
Source: Stack Exchange (Stats), Count: 423
Source: Stack Exchange (Physics), Count: 418
```
### StackMathQA100K
- Location: `./data/stackmathqa100k`
- Contents:
- `all.jsonl`: Containing 100k entries.
- `meta.json`: Metadata and additional information.
```bash
Source: Stack Exchange (Math), Count: 99013
Source: MathOverflow, Count: 626
Source: Stack Exchange (Stats), Count: 182
Source: Stack Exchange (Physics), Count: 179
```
## Citation
We appreciate your use of StackMathQA in your work. If you find this repository helpful, please consider citing it and starring this repo. Feel free to contact zhangyif21@tsinghua.edu.cn or open an issue if you have any questions.
```bibtex
@misc{stackmathqa2024,
title={StackMathQA: A Curated Collection of 2 Million Mathematical Questions and Answers Sourced from Stack Exchange},
author={Zhang, Yifan},
year={2024},
}
```
|
mwkldeveloper/sprites_1788_16 | ---
dataset_info:
features:
- name: images
dtype: image
- name: label
sequence: int32
splits:
- name: train
num_bytes: 75545704.0
num_examples: 89400
download_size: 42418183
dataset_size: 75545704.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
CyberHarem/ruri_pokemon | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of ruri/ルリ (Pokémon)
This is the dataset of ruri/ルリ (Pokémon), containing 34 images and their tags.
The core tags of this character are `pink_hair, hat, blue_eyes, breasts, short_hair, bow, mole`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:--------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 34 | 21.72 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ruri_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 34 | 15.87 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ruri_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 70 | 31.04 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ruri_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 34 | 20.72 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ruri_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 70 | 39.33 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ruri_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/ruri_pokemon',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------|
| 0 | 23 |  |  |  |  |  | 1girl, solo, blush, looking_at_viewer, smile, bag, open_mouth, skirt, hat_bow, long_sleeves, shirt, white_headwear |
| 1 | 8 |  |  |  |  |  | 1boy, 1girl, hetero, nude, penis, solo_focus, blush, cum, nipples, smile, large_breasts, open_mouth, long_hair, pov, pussy, sex |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | blush | looking_at_viewer | smile | bag | open_mouth | skirt | hat_bow | long_sleeves | shirt | white_headwear | 1boy | hetero | nude | penis | solo_focus | cum | nipples | large_breasts | long_hair | pov | pussy | sex |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------|:--------------------|:--------|:------|:-------------|:--------|:----------|:---------------|:--------|:-----------------|:-------|:---------|:-------|:--------|:-------------|:------|:----------|:----------------|:------------|:------|:--------|:------|
| 0 | 23 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | |
| 1 | 8 |  |  |  |  |  | X | | X | | X | | X | | | | | | X | X | X | X | X | X | X | X | X | X | X | X |
|
open-llm-leaderboard/details_theNovaAI__Supernova-experimental | ---
pretty_name: Evaluation run of theNovaAI/Supernova-experimental
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [theNovaAI/Supernova-experimental](https://huggingface.co/theNovaAI/Supernova-experimental)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_theNovaAI__Supernova-experimental\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-03-10T12:34:01.420352](https://huggingface.co/datasets/open-llm-leaderboard/details_theNovaAI__Supernova-experimental/blob/main/results_2024-03-10T12-34-01.420352.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5663270464450889,\n\
\ \"acc_stderr\": 0.03356166882892655,\n \"acc_norm\": 0.5715895778655974,\n\
\ \"acc_norm_stderr\": 0.03426551856832842,\n \"mc1\": 0.3390452876376989,\n\
\ \"mc1_stderr\": 0.016571797910626608,\n \"mc2\": 0.49371884206186833,\n\
\ \"mc2_stderr\": 0.015090933240631366\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.5921501706484642,\n \"acc_stderr\": 0.014361097288449703,\n\
\ \"acc_norm\": 0.6305460750853242,\n \"acc_norm_stderr\": 0.014104578366491887\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6363274248157738,\n\
\ \"acc_stderr\": 0.004800728138792395,\n \"acc_norm\": 0.8365863373829915,\n\
\ \"acc_norm_stderr\": 0.0036898701424130753\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \
\ \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.04824181513244218\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5037037037037037,\n\
\ \"acc_stderr\": 0.04319223625811331,\n \"acc_norm\": 0.5037037037037037,\n\
\ \"acc_norm_stderr\": 0.04319223625811331\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.5394736842105263,\n \"acc_stderr\": 0.04056242252249034,\n\
\ \"acc_norm\": 0.5394736842105263,\n \"acc_norm_stderr\": 0.04056242252249034\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.56,\n\
\ \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\": 0.56,\n \
\ \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.5811320754716981,\n \"acc_stderr\": 0.03036505082911521,\n\
\ \"acc_norm\": 0.5811320754716981,\n \"acc_norm_stderr\": 0.03036505082911521\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.5972222222222222,\n\
\ \"acc_stderr\": 0.04101405519842426,\n \"acc_norm\": 0.5972222222222222,\n\
\ \"acc_norm_stderr\": 0.04101405519842426\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252604,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252604\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.41,\n \"acc_stderr\": 0.049431107042371025,\n \"acc_norm\": 0.41,\n\
\ \"acc_norm_stderr\": 0.049431107042371025\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \
\ \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.04824181513244218\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5491329479768786,\n\
\ \"acc_stderr\": 0.0379401267469703,\n \"acc_norm\": 0.5491329479768786,\n\
\ \"acc_norm_stderr\": 0.0379401267469703\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.2647058823529412,\n \"acc_stderr\": 0.04389869956808777,\n\
\ \"acc_norm\": 0.2647058823529412,\n \"acc_norm_stderr\": 0.04389869956808777\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.72,\n \"acc_stderr\": 0.04512608598542129,\n \"acc_norm\": 0.72,\n\
\ \"acc_norm_stderr\": 0.04512608598542129\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.4595744680851064,\n \"acc_stderr\": 0.032579014820998356,\n\
\ \"acc_norm\": 0.4595744680851064,\n \"acc_norm_stderr\": 0.032579014820998356\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.2982456140350877,\n\
\ \"acc_stderr\": 0.04303684033537315,\n \"acc_norm\": 0.2982456140350877,\n\
\ \"acc_norm_stderr\": 0.04303684033537315\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5172413793103449,\n \"acc_stderr\": 0.04164188720169375,\n\
\ \"acc_norm\": 0.5172413793103449,\n \"acc_norm_stderr\": 0.04164188720169375\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.328042328042328,\n \"acc_stderr\": 0.024180497164376914,\n \"\
acc_norm\": 0.328042328042328,\n \"acc_norm_stderr\": 0.024180497164376914\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.373015873015873,\n\
\ \"acc_stderr\": 0.04325506042017086,\n \"acc_norm\": 0.373015873015873,\n\
\ \"acc_norm_stderr\": 0.04325506042017086\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939099,\n \
\ \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939099\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.6580645161290323,\n\
\ \"acc_stderr\": 0.026985289576552746,\n \"acc_norm\": 0.6580645161290323,\n\
\ \"acc_norm_stderr\": 0.026985289576552746\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.4482758620689655,\n \"acc_stderr\": 0.03499113137676744,\n\
\ \"acc_norm\": 0.4482758620689655,\n \"acc_norm_stderr\": 0.03499113137676744\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.54,\n \"acc_stderr\": 0.05009082659620333,\n \"acc_norm\"\
: 0.54,\n \"acc_norm_stderr\": 0.05009082659620333\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.6727272727272727,\n \"acc_stderr\": 0.03663974994391244,\n\
\ \"acc_norm\": 0.6727272727272727,\n \"acc_norm_stderr\": 0.03663974994391244\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7171717171717171,\n \"acc_stderr\": 0.03208779558786752,\n \"\
acc_norm\": 0.7171717171717171,\n \"acc_norm_stderr\": 0.03208779558786752\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.7979274611398963,\n \"acc_stderr\": 0.02897908979429673,\n\
\ \"acc_norm\": 0.7979274611398963,\n \"acc_norm_stderr\": 0.02897908979429673\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.517948717948718,\n \"acc_stderr\": 0.025334667080954925,\n \
\ \"acc_norm\": 0.517948717948718,\n \"acc_norm_stderr\": 0.025334667080954925\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.3296296296296296,\n \"acc_stderr\": 0.028661201116524575,\n \
\ \"acc_norm\": 0.3296296296296296,\n \"acc_norm_stderr\": 0.028661201116524575\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6008403361344538,\n \"acc_stderr\": 0.03181110032413926,\n \
\ \"acc_norm\": 0.6008403361344538,\n \"acc_norm_stderr\": 0.03181110032413926\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.31788079470198677,\n \"acc_stderr\": 0.038020397601079024,\n \"\
acc_norm\": 0.31788079470198677,\n \"acc_norm_stderr\": 0.038020397601079024\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.7486238532110092,\n \"acc_stderr\": 0.018599206360287415,\n \"\
acc_norm\": 0.7486238532110092,\n \"acc_norm_stderr\": 0.018599206360287415\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.39814814814814814,\n \"acc_stderr\": 0.033384734032074016,\n \"\
acc_norm\": 0.39814814814814814,\n \"acc_norm_stderr\": 0.033384734032074016\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.75,\n \"acc_stderr\": 0.03039153369274154,\n \"acc_norm\": 0.75,\n\
\ \"acc_norm_stderr\": 0.03039153369274154\n },\n \"harness|hendrycksTest-high_school_world_history|5\"\
: {\n \"acc\": 0.7721518987341772,\n \"acc_stderr\": 0.02730348459906943,\n\
\ \"acc_norm\": 0.7721518987341772,\n \"acc_norm_stderr\": 0.02730348459906943\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6860986547085202,\n\
\ \"acc_stderr\": 0.03114679648297246,\n \"acc_norm\": 0.6860986547085202,\n\
\ \"acc_norm_stderr\": 0.03114679648297246\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.6412213740458015,\n \"acc_stderr\": 0.04206739313864908,\n\
\ \"acc_norm\": 0.6412213740458015,\n \"acc_norm_stderr\": 0.04206739313864908\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.743801652892562,\n \"acc_stderr\": 0.039849796533028725,\n \"\
acc_norm\": 0.743801652892562,\n \"acc_norm_stderr\": 0.039849796533028725\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7314814814814815,\n\
\ \"acc_stderr\": 0.042844679680521934,\n \"acc_norm\": 0.7314814814814815,\n\
\ \"acc_norm_stderr\": 0.042844679680521934\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7116564417177914,\n \"acc_stderr\": 0.03559039531617342,\n\
\ \"acc_norm\": 0.7116564417177914,\n \"acc_norm_stderr\": 0.03559039531617342\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.33035714285714285,\n\
\ \"acc_stderr\": 0.04464285714285714,\n \"acc_norm\": 0.33035714285714285,\n\
\ \"acc_norm_stderr\": 0.04464285714285714\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7281553398058253,\n \"acc_stderr\": 0.044052680241409216,\n\
\ \"acc_norm\": 0.7281553398058253,\n \"acc_norm_stderr\": 0.044052680241409216\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8034188034188035,\n\
\ \"acc_stderr\": 0.02603538609895129,\n \"acc_norm\": 0.8034188034188035,\n\
\ \"acc_norm_stderr\": 0.02603538609895129\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.59,\n \"acc_stderr\": 0.049431107042371025,\n \
\ \"acc_norm\": 0.59,\n \"acc_norm_stderr\": 0.049431107042371025\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7637292464878672,\n\
\ \"acc_stderr\": 0.01519047371703751,\n \"acc_norm\": 0.7637292464878672,\n\
\ \"acc_norm_stderr\": 0.01519047371703751\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.6416184971098265,\n \"acc_stderr\": 0.02581675679158419,\n\
\ \"acc_norm\": 0.6416184971098265,\n \"acc_norm_stderr\": 0.02581675679158419\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.47039106145251397,\n\
\ \"acc_stderr\": 0.016693154927383567,\n \"acc_norm\": 0.47039106145251397,\n\
\ \"acc_norm_stderr\": 0.016693154927383567\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.6405228758169934,\n \"acc_stderr\": 0.027475969910660952,\n\
\ \"acc_norm\": 0.6405228758169934,\n \"acc_norm_stderr\": 0.027475969910660952\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6527331189710611,\n\
\ \"acc_stderr\": 0.027040745502307336,\n \"acc_norm\": 0.6527331189710611,\n\
\ \"acc_norm_stderr\": 0.027040745502307336\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.6327160493827161,\n \"acc_stderr\": 0.026822801759507894,\n\
\ \"acc_norm\": 0.6327160493827161,\n \"acc_norm_stderr\": 0.026822801759507894\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4219858156028369,\n \"acc_stderr\": 0.029462189233370593,\n \
\ \"acc_norm\": 0.4219858156028369,\n \"acc_norm_stderr\": 0.029462189233370593\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4302477183833116,\n\
\ \"acc_stderr\": 0.012645361435115231,\n \"acc_norm\": 0.4302477183833116,\n\
\ \"acc_norm_stderr\": 0.012645361435115231\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.5404411764705882,\n \"acc_stderr\": 0.030273325077345755,\n\
\ \"acc_norm\": 0.5404411764705882,\n \"acc_norm_stderr\": 0.030273325077345755\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.5816993464052288,\n \"acc_stderr\": 0.019955975145835546,\n \
\ \"acc_norm\": 0.5816993464052288,\n \"acc_norm_stderr\": 0.019955975145835546\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6545454545454545,\n\
\ \"acc_stderr\": 0.04554619617541054,\n \"acc_norm\": 0.6545454545454545,\n\
\ \"acc_norm_stderr\": 0.04554619617541054\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.6204081632653061,\n \"acc_stderr\": 0.031067211262872468,\n\
\ \"acc_norm\": 0.6204081632653061,\n \"acc_norm_stderr\": 0.031067211262872468\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7412935323383084,\n\
\ \"acc_stderr\": 0.03096590312357302,\n \"acc_norm\": 0.7412935323383084,\n\
\ \"acc_norm_stderr\": 0.03096590312357302\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.83,\n \"acc_stderr\": 0.03775251680686371,\n \
\ \"acc_norm\": 0.83,\n \"acc_norm_stderr\": 0.03775251680686371\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4819277108433735,\n\
\ \"acc_stderr\": 0.038899512528272166,\n \"acc_norm\": 0.4819277108433735,\n\
\ \"acc_norm_stderr\": 0.038899512528272166\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.7953216374269005,\n \"acc_stderr\": 0.030944459778533197,\n\
\ \"acc_norm\": 0.7953216374269005,\n \"acc_norm_stderr\": 0.030944459778533197\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3390452876376989,\n\
\ \"mc1_stderr\": 0.016571797910626608,\n \"mc2\": 0.49371884206186833,\n\
\ \"mc2_stderr\": 0.015090933240631366\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7734806629834254,\n \"acc_stderr\": 0.011764149054698338\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.287338893100834,\n \
\ \"acc_stderr\": 0.012464677060107081\n }\n}\n```"
repo_url: https://huggingface.co/theNovaAI/Supernova-experimental
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|arc:challenge|25_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|gsm8k|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hellaswag|10_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-10T12-34-01.420352.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-10T12-34-01.420352.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- '**/details_harness|winogrande|5_2024-03-10T12-34-01.420352.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-03-10T12-34-01.420352.parquet'
- config_name: results
data_files:
- split: 2024_03_10T12_34_01.420352
path:
- results_2024-03-10T12-34-01.420352.parquet
- split: latest
path:
- results_2024-03-10T12-34-01.420352.parquet
---
# Dataset Card for Evaluation run of theNovaAI/Supernova-experimental
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [theNovaAI/Supernova-experimental](https://huggingface.co/theNovaAI/Supernova-experimental) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
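As the YAML configuration above suggests, the split names are derived from the run timestamp by replacing `-` and `:` with underscores, while the parquet filenames keep the dashes and replace only the colons. A small helper (hypothetical, for illustration only — not part of the leaderboard tooling) sketches this mapping:

```python
def split_name_from_timestamp(ts: str) -> str:
    """Map a run timestamp to its split name.

    e.g. "2024-03-10T12:34:01.420352" -> "2024_03_10T12_34_01.420352"
    """
    date_part, time_part = ts.split("T")
    return date_part.replace("-", "_") + "T" + time_part.replace(":", "_")


def file_timestamp(ts: str) -> str:
    """Map a run timestamp to the form used in parquet filenames.

    e.g. "2024-03-10T12:34:01.420352" -> "2024-03-10T12-34-01.420352"
    """
    return ts.replace(":", "-")
```

With the timestamp of this run, `split_name_from_timestamp("2024-03-10T12:34:01.420352")` yields the split `2024_03_10T12_34_01.420352` listed in the configs above.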
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_theNovaAI__Supernova-experimental",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-03-10T12:34:01.420352](https://huggingface.co/datasets/open-llm-leaderboard/details_theNovaAI__Supernova-experimental/blob/main/results_2024-03-10T12-34-01.420352.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each of them in the "results" and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.5663270464450889,
"acc_stderr": 0.03356166882892655,
"acc_norm": 0.5715895778655974,
"acc_norm_stderr": 0.03426551856832842,
"mc1": 0.3390452876376989,
"mc1_stderr": 0.016571797910626608,
"mc2": 0.49371884206186833,
"mc2_stderr": 0.015090933240631366
},
"harness|arc:challenge|25": {
"acc": 0.5921501706484642,
"acc_stderr": 0.014361097288449703,
"acc_norm": 0.6305460750853242,
"acc_norm_stderr": 0.014104578366491887
},
"harness|hellaswag|10": {
"acc": 0.6363274248157738,
"acc_stderr": 0.004800728138792395,
"acc_norm": 0.8365863373829915,
"acc_norm_stderr": 0.0036898701424130753
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.36,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.36,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5037037037037037,
"acc_stderr": 0.04319223625811331,
"acc_norm": 0.5037037037037037,
"acc_norm_stderr": 0.04319223625811331
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.5394736842105263,
"acc_stderr": 0.04056242252249034,
"acc_norm": 0.5394736842105263,
"acc_norm_stderr": 0.04056242252249034
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.56,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.56,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.5811320754716981,
"acc_stderr": 0.03036505082911521,
"acc_norm": 0.5811320754716981,
"acc_norm_stderr": 0.03036505082911521
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.5972222222222222,
"acc_stderr": 0.04101405519842426,
"acc_norm": 0.5972222222222222,
"acc_norm_stderr": 0.04101405519842426
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252604,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252604
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.41,
"acc_stderr": 0.049431107042371025,
"acc_norm": 0.41,
"acc_norm_stderr": 0.049431107042371025
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.36,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.36,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.5491329479768786,
"acc_stderr": 0.0379401267469703,
"acc_norm": 0.5491329479768786,
"acc_norm_stderr": 0.0379401267469703
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.2647058823529412,
"acc_stderr": 0.04389869956808777,
"acc_norm": 0.2647058823529412,
"acc_norm_stderr": 0.04389869956808777
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.72,
"acc_stderr": 0.04512608598542129,
"acc_norm": 0.72,
"acc_norm_stderr": 0.04512608598542129
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.4595744680851064,
"acc_stderr": 0.032579014820998356,
"acc_norm": 0.4595744680851064,
"acc_norm_stderr": 0.032579014820998356
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.2982456140350877,
"acc_stderr": 0.04303684033537315,
"acc_norm": 0.2982456140350877,
"acc_norm_stderr": 0.04303684033537315
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5172413793103449,
"acc_stderr": 0.04164188720169375,
"acc_norm": 0.5172413793103449,
"acc_norm_stderr": 0.04164188720169375
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.328042328042328,
"acc_stderr": 0.024180497164376914,
"acc_norm": 0.328042328042328,
"acc_norm_stderr": 0.024180497164376914
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.373015873015873,
"acc_stderr": 0.04325506042017086,
"acc_norm": 0.373015873015873,
"acc_norm_stderr": 0.04325506042017086
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.37,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.37,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.6580645161290323,
"acc_stderr": 0.026985289576552746,
"acc_norm": 0.6580645161290323,
"acc_norm_stderr": 0.026985289576552746
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4482758620689655,
"acc_stderr": 0.03499113137676744,
"acc_norm": 0.4482758620689655,
"acc_norm_stderr": 0.03499113137676744
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.54,
"acc_stderr": 0.05009082659620333,
"acc_norm": 0.54,
"acc_norm_stderr": 0.05009082659620333
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.6727272727272727,
"acc_stderr": 0.03663974994391244,
"acc_norm": 0.6727272727272727,
"acc_norm_stderr": 0.03663974994391244
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7171717171717171,
"acc_stderr": 0.03208779558786752,
"acc_norm": 0.7171717171717171,
"acc_norm_stderr": 0.03208779558786752
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.7979274611398963,
"acc_stderr": 0.02897908979429673,
"acc_norm": 0.7979274611398963,
"acc_norm_stderr": 0.02897908979429673
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.517948717948718,
"acc_stderr": 0.025334667080954925,
"acc_norm": 0.517948717948718,
"acc_norm_stderr": 0.025334667080954925
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3296296296296296,
"acc_stderr": 0.028661201116524575,
"acc_norm": 0.3296296296296296,
"acc_norm_stderr": 0.028661201116524575
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6008403361344538,
"acc_stderr": 0.03181110032413926,
"acc_norm": 0.6008403361344538,
"acc_norm_stderr": 0.03181110032413926
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.31788079470198677,
"acc_stderr": 0.038020397601079024,
"acc_norm": 0.31788079470198677,
"acc_norm_stderr": 0.038020397601079024
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7486238532110092,
"acc_stderr": 0.018599206360287415,
"acc_norm": 0.7486238532110092,
"acc_norm_stderr": 0.018599206360287415
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.39814814814814814,
"acc_stderr": 0.033384734032074016,
"acc_norm": 0.39814814814814814,
"acc_norm_stderr": 0.033384734032074016
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.75,
"acc_stderr": 0.03039153369274154,
"acc_norm": 0.75,
"acc_norm_stderr": 0.03039153369274154
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7721518987341772,
"acc_stderr": 0.02730348459906943,
"acc_norm": 0.7721518987341772,
"acc_norm_stderr": 0.02730348459906943
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6860986547085202,
"acc_stderr": 0.03114679648297246,
"acc_norm": 0.6860986547085202,
"acc_norm_stderr": 0.03114679648297246
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.6412213740458015,
"acc_stderr": 0.04206739313864908,
"acc_norm": 0.6412213740458015,
"acc_norm_stderr": 0.04206739313864908
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.743801652892562,
"acc_stderr": 0.039849796533028725,
"acc_norm": 0.743801652892562,
"acc_norm_stderr": 0.039849796533028725
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7314814814814815,
"acc_stderr": 0.042844679680521934,
"acc_norm": 0.7314814814814815,
"acc_norm_stderr": 0.042844679680521934
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7116564417177914,
"acc_stderr": 0.03559039531617342,
"acc_norm": 0.7116564417177914,
"acc_norm_stderr": 0.03559039531617342
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.33035714285714285,
"acc_stderr": 0.04464285714285714,
"acc_norm": 0.33035714285714285,
"acc_norm_stderr": 0.04464285714285714
},
"harness|hendrycksTest-management|5": {
"acc": 0.7281553398058253,
"acc_stderr": 0.044052680241409216,
"acc_norm": 0.7281553398058253,
"acc_norm_stderr": 0.044052680241409216
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8034188034188035,
"acc_stderr": 0.02603538609895129,
"acc_norm": 0.8034188034188035,
"acc_norm_stderr": 0.02603538609895129
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.59,
"acc_stderr": 0.049431107042371025,
"acc_norm": 0.59,
"acc_norm_stderr": 0.049431107042371025
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7637292464878672,
"acc_stderr": 0.01519047371703751,
"acc_norm": 0.7637292464878672,
"acc_norm_stderr": 0.01519047371703751
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6416184971098265,
"acc_stderr": 0.02581675679158419,
"acc_norm": 0.6416184971098265,
"acc_norm_stderr": 0.02581675679158419
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.47039106145251397,
"acc_stderr": 0.016693154927383567,
"acc_norm": 0.47039106145251397,
"acc_norm_stderr": 0.016693154927383567
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6405228758169934,
"acc_stderr": 0.027475969910660952,
"acc_norm": 0.6405228758169934,
"acc_norm_stderr": 0.027475969910660952
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6527331189710611,
"acc_stderr": 0.027040745502307336,
"acc_norm": 0.6527331189710611,
"acc_norm_stderr": 0.027040745502307336
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6327160493827161,
"acc_stderr": 0.026822801759507894,
"acc_norm": 0.6327160493827161,
"acc_norm_stderr": 0.026822801759507894
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4219858156028369,
"acc_stderr": 0.029462189233370593,
"acc_norm": 0.4219858156028369,
"acc_norm_stderr": 0.029462189233370593
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4302477183833116,
"acc_stderr": 0.012645361435115231,
"acc_norm": 0.4302477183833116,
"acc_norm_stderr": 0.012645361435115231
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.5404411764705882,
"acc_stderr": 0.030273325077345755,
"acc_norm": 0.5404411764705882,
"acc_norm_stderr": 0.030273325077345755
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.5816993464052288,
"acc_stderr": 0.019955975145835546,
"acc_norm": 0.5816993464052288,
"acc_norm_stderr": 0.019955975145835546
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6545454545454545,
"acc_stderr": 0.04554619617541054,
"acc_norm": 0.6545454545454545,
"acc_norm_stderr": 0.04554619617541054
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.6204081632653061,
"acc_stderr": 0.031067211262872468,
"acc_norm": 0.6204081632653061,
"acc_norm_stderr": 0.031067211262872468
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.7412935323383084,
"acc_stderr": 0.03096590312357302,
"acc_norm": 0.7412935323383084,
"acc_norm_stderr": 0.03096590312357302
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.83,
"acc_stderr": 0.03775251680686371,
"acc_norm": 0.83,
"acc_norm_stderr": 0.03775251680686371
},
"harness|hendrycksTest-virology|5": {
"acc": 0.4819277108433735,
"acc_stderr": 0.038899512528272166,
"acc_norm": 0.4819277108433735,
"acc_norm_stderr": 0.038899512528272166
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.7953216374269005,
"acc_stderr": 0.030944459778533197,
"acc_norm": 0.7953216374269005,
"acc_norm_stderr": 0.030944459778533197
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3390452876376989,
"mc1_stderr": 0.016571797910626608,
"mc2": 0.49371884206186833,
"mc2_stderr": 0.015090933240631366
},
"harness|winogrande|5": {
"acc": 0.7734806629834254,
"acc_stderr": 0.011764149054698338
},
"harness|gsm8k|5": {
"acc": 0.287338893100834,
"acc_stderr": 0.012464677060107081
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
taufiqdp/all-ds-merge-clean | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: bug
num_bytes: 2194029
num_examples: 61619
- name: tet
num_bytes: 152255168
num_examples: 54769
- name: bjn
num_bytes: 228070271
num_examples: 3205230
- name: nia
num_bytes: 4957236
num_examples: 15460
- name: iba
num_bytes: 30464643
num_examples: 7638
- name: ban
num_bytes: 185152095
num_examples: 2256163
- name: ace
num_bytes: 345726685
num_examples: 5100238
- name: jv
num_bytes: 2907349936
num_examples: 32314788
- name: sxn
num_bytes: 1474713
num_examples: 197
- name: sda
num_bytes: 1602502
num_examples: 317
- name: ms
num_bytes: 497312421
num_examples: 5106555
- name: su
num_bytes: 2163098849
num_examples: 23269748
- name: bew
num_bytes: 8473801
num_examples: 2677
- name: mad
num_bytes: 2723657
num_examples: 509
- name: mrw
num_bytes: 243973
num_examples: 29
- name: mkn
num_bytes: 2601916
num_examples: 402
- name: min
num_bytes: 191716189
num_examples: 3981216
- name: map_bms
num_bytes: 4127598
num_examples: 59400
- name: gor
num_bytes: 6468109
num_examples: 92176
- name: mak
num_bytes: 3666984
num_examples: 555
- name: train
num_bytes: 6739680775
num_examples: 75529686
download_size: 8660688390
dataset_size: 13479361550
configs:
- config_name: default
data_files:
- split: bug
path: data/bug-*
- split: tet
path: data/tet-*
- split: bjn
path: data/bjn-*
- split: nia
path: data/nia-*
- split: iba
path: data/iba-*
- split: ban
path: data/ban-*
- split: ace
path: data/ace-*
- split: jv
path: data/jv-*
- split: sxn
path: data/sxn-*
- split: sda
path: data/sda-*
- split: ms
path: data/ms-*
- split: su
path: data/su-*
- split: bew
path: data/bew-*
- split: mad
path: data/mad-*
- split: mrw
path: data/mrw-*
- split: mkn
path: data/mkn-*
- split: min
path: data/min-*
- split: map_bms
path: data/map_bms-*
- split: gor
path: data/gor-*
- split: mak
path: data/mak-*
- split: train
path: data/train-*
---
|
kaleemWaheed/twitter_dataset_1713045333 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 20515
num_examples: 46
download_size: 11761
dataset_size: 20515
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Arch4ngel/pochita_v2 | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 67970413.0
num_examples: 15
download_size: 67840616
dataset_size: 67970413.0
---
# Dataset Card for "pochita_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
VodLM/medqa | ---
license: mit
---
|
arianhosseini/swag_formatted_to_quail | ---
dataset_info:
features:
- name: video-id
dtype: string
- name: fold-ind
dtype: string
- name: startphrase
dtype: string
- name: gold-ending
dtype: string
- name: distractor-0
dtype: string
- name: distractor-1
dtype: string
- name: distractor-2
dtype: string
- name: distractor-3
dtype: string
- name: gold-source
dtype: string
- name: gold-type
dtype: string
- name: distractor-0-type
dtype: string
- name: distractor-1-type
dtype: string
- name: distractor-2-type
dtype: string
- name: distractor-3-type
dtype: string
- name: sent1
dtype: string
- name: sent2
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence: string
- name: correct_answer_id
dtype: int64
splits:
- name: train
num_bytes: 58608654
num_examples: 73546
- name: validation
num_bytes: 16545043
num_examples: 20006
download_size: 35695452
dataset_size: 75153697
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
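For illustration, here is a minimal sketch of a record shaped like the features declared above. Every value is hypothetical (made up for this example, not drawn from the dataset); only the field names and types follow the schema.

```python
# Hypothetical record matching the declared quail-style features
# (values are invented for illustration only).
record = {
    "sent1": "The chef places the dough on the counter.",
    "sent2": "He",
    "context": "The chef places the dough on the counter. He",
    "question": "What happens next?",
    "answers": [
        "rolls it flat with a rolling pin.",   # gold ending
        "drives the car to the airport.",
        "signs the contract with a pen.",
        "throws the ball across the field.",
    ],
    "correct_answer_id": 0,
}

# The gold ending is recovered by indexing answers with correct_answer_id.
gold = record["answers"][record["correct_answer_id"]]
print(gold)
```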
|
jose-h-solorzano/synth-forgetting-generalization-1 | ---
dataset_info:
features:
- name: input
sequence: float64
- name: output
sequence: float64
splits:
- name: train
num_bytes: 16320000.0
num_examples: 40000
- name: test
num_bytes: 4080000.0
num_examples: 10000
download_size: 14119474
dataset_size: 20400000.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
automated-research-group/llama2_7b_chat-openbookqa-results | ---
dataset_info:
- config_name: '{''do_sample''=False, ''beams''=10}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: openbookqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 166633
num_examples: 500
download_size: 85130
dataset_size: 166633
- config_name: '{''do_sample''=False, ''beams''=1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: openbookqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 166633
num_examples: 500
download_size: 85130
dataset_size: 166633
- config_name: '{''do_sample''=False, ''beams''=5}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: openbookqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 166633
num_examples: 500
download_size: 85130
dataset_size: 166633
configs:
- config_name: '{''do_sample''=False, ''beams''=10}'
data_files:
- split: train
path: '{''do_sample''=False, ''beams''=10}/train-*'
- config_name: '{''do_sample''=False, ''beams''=1}'
data_files:
- split: train
path: '{''do_sample''=False, ''beams''=1}/train-*'
- config_name: '{''do_sample''=False, ''beams''=5}'
data_files:
- split: train
path: '{''do_sample''=False, ''beams''=5}/train-*'
---
|
boborr/FLUTTER | ---
license: openrail
---
|
neuralsentry/bigvul_devign_cvefixes_neuralsentry_commits | ---
dataset_info:
features:
- name: commit_msg
dtype: string
- name: commit_hash
dtype: string
- name: project
dtype: string
- name: source
dtype: string
- name: labels
dtype: int64
- name: repo_url
dtype: string
- name: commit_url
dtype: string
- name: commit_date
dtype: string
splits:
- name: train
num_bytes: 21506788
num_examples: 34991
- name: test
num_bytes: 2863491
num_examples: 1981
download_size: 1485790
dataset_size: 24370279
---
# Dataset Card for "bigvul_devign_cvefixes_neuralsentry_commits"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Jing24/seperate_3 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
struct:
- name: answer_start
sequence: int32
- name: text
sequence: string
splits:
- name: train
num_bytes: 6880782
num_examples: 7720
download_size: 1220030
dataset_size: 6880782
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "seperate_3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
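A minimal sketch of the SQuAD-style record shape implied by the schema above, where `answers` is a struct of parallel sequences (`answer_start`, `text`). All values here are hypothetical, used only to show the layout.

```python
# Hypothetical record matching the declared schema
# (values are invented for illustration only).
example = {
    "id": "0001",
    "title": "Example",
    "context": "The capital of France is Paris.",
    "question": "What is the capital of France?",
    "answers": {"answer_start": [25], "text": ["Paris"]},
}

# Each answer span can be checked against the context:
# the text must appear at its recorded start offset.
start = example["answers"]["answer_start"][0]
text = example["answers"]["text"][0]
assert example["context"][start:start + len(text)] == text
print(text)
```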
ZurabDz/geo_large_corpus_cleaned | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 10479169400
num_examples: 12626101
download_size: 3626972633
dataset_size: 10479169400
---
# Dataset Card for "geo_large_corpus_cleaned"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jdelcidr/garabato | ---
license: afl-3.0
---
|
aghent/copiapoa-roboflow | ---
license: apache-2.0
---
|
tianmeow/sal | ---
license: bsd
---
|
open-llm-leaderboard/details_augtoma__qCammel-70v1 | ---
pretty_name: Evaluation run of augtoma/qCammel-70v1
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [augtoma/qCammel-70v1](https://huggingface.co/augtoma/qCammel-70v1) on the [Open\
\ LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configurations, each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_augtoma__qCammel-70v1\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-09-17T06:45:18.044644](https://huggingface.co/datasets/open-llm-leaderboard/details_augtoma__qCammel-70v1/blob/main/results_2023-09-17T06-45-18.044644.json) (note\
\ that there might be results for other tasks in the repo if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.033766778523489936,\n\
\ \"em_stderr\": 0.001849802869119515,\n \"f1\": 0.10340918624161041,\n\
\ \"f1_stderr\": 0.0022106009828094797,\n \"acc\": 0.5700654570173166,\n\
\ \"acc_stderr\": 0.011407494958111332\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.033766778523489936,\n \"em_stderr\": 0.001849802869119515,\n\
\ \"f1\": 0.10340918624161041,\n \"f1_stderr\": 0.0022106009828094797\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.2971948445792267,\n \
\ \"acc_stderr\": 0.012588685966624186\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8429360694554064,\n \"acc_stderr\": 0.010226303949598479\n\
\ }\n}\n```"
repo_url: https://huggingface.co/augtoma/qCammel-70v1
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_09_17T06_45_18.044644
path:
- '**/details_harness|drop|3_2023-09-17T06-45-18.044644.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-09-17T06-45-18.044644.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_09_17T06_45_18.044644
path:
- '**/details_harness|gsm8k|5_2023-09-17T06-45-18.044644.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-09-17T06-45-18.044644.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_09_17T06_45_18.044644
path:
- '**/details_harness|winogrande|5_2023-09-17T06-45-18.044644.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-09-17T06-45-18.044644.parquet'
- config_name: results
data_files:
- split: 2023_09_17T06_45_18.044644
path:
- results_2023-09-17T06-45-18.044644.parquet
- split: latest
path:
- results_2023-09-17T06-45-18.044644.parquet
---
# Dataset Card for Evaluation run of augtoma/qCammel-70v1
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/augtoma/qCammel-70v1
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [augtoma/qCammel-70v1](https://huggingface.co/augtoma/qCammel-70v1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_augtoma__qCammel-70v1",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-09-17T06:45:18.044644](https://huggingface.co/datasets/open-llm-leaderboard/details_augtoma__qCammel-70v1/blob/main/results_2023-09-17T06-45-18.044644.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.033766778523489936,
"em_stderr": 0.001849802869119515,
"f1": 0.10340918624161041,
"f1_stderr": 0.0022106009828094797,
"acc": 0.5700654570173166,
"acc_stderr": 0.011407494958111332
},
"harness|drop|3": {
"em": 0.033766778523489936,
"em_stderr": 0.001849802869119515,
"f1": 0.10340918624161041,
"f1_stderr": 0.0022106009828094797
},
"harness|gsm8k|5": {
"acc": 0.2971948445792267,
"acc_stderr": 0.012588685966624186
},
"harness|winogrande|5": {
"acc": 0.8429360694554064,
"acc_stderr": 0.010226303949598479
}
}
```
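As a small sketch, the aggregated metrics shown above can be pulled out of the results payload with the standard `json` module (using a trimmed copy of the dict printed above):

```python
import json

# A trimmed copy of the aggregated results shown above,
# as it would appear in the results JSON file.
payload = '''
{
    "all": {
        "acc": 0.5700654570173166,
        "acc_stderr": 0.011407494958111332
    },
    "harness|winogrande|5": {
        "acc": 0.8429360694554064,
        "acc_stderr": 0.010226303949598479
    }
}
'''

results = json.loads(payload)
# Task names are pipe-delimited keys, e.g. "harness|winogrande|5".
winogrande_acc = results["harness|winogrande|5"]["acc"]
print(f"winogrande acc: {winogrande_acc:.4f}")
```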
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
CyberHarem/setsuna_fireemblem | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of setsuna (Fire Emblem)
This is the dataset of setsuna (Fire Emblem), containing 71 images and their tags.
The core tags of this character are `hair_over_one_eye, short_hair, blue_hair, blue_eyes, hairband`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:--------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 71 | 51.67 MiB | [Download](https://huggingface.co/datasets/CyberHarem/setsuna_fireemblem/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 71 | 36.31 MiB | [Download](https://huggingface.co/datasets/CyberHarem/setsuna_fireemblem/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 110 | 58.38 MiB | [Download](https://huggingface.co/datasets/CyberHarem/setsuna_fireemblem/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 71 | 47.84 MiB | [Download](https://huggingface.co/datasets/CyberHarem/setsuna_fireemblem/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 110 | 77.21 MiB | [Download](https://huggingface.co/datasets/CyberHarem/setsuna_fireemblem/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/setsuna_fireemblem',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------|
| 0 | 6 |  |  |  |  |  | 1girl, arrow_(projectile), gloves, solo, quiver, simple_background, holding_bow_(weapon), white_background |
| 1 | 6 |  |  |  |  |  | 1girl, simple_background, solo, upper_body, white_background |
| 2 | 5 |  |  |  |  |  | 1girl, fingerless_gloves, solo, upper_body, looking_at_viewer |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | arrow_(projectile) | gloves | solo | quiver | simple_background | holding_bow_(weapon) | white_background | upper_body | fingerless_gloves | looking_at_viewer |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------------|:---------|:-------|:---------|:--------------------|:-----------------------|:-------------------|:-------------|:--------------------|:--------------------|
| 0 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | | | |
| 1 | 6 |  |  |  |  |  | X | | | X | | X | | X | X | | |
| 2 | 5 |  |  |  |  |  | X | | | X | | | | | X | X | X |
|
sravan1320/guanaco-llama2-1k | ---
license: apache-2.0
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1654448
num_examples: 1000
download_size: 966694
dataset_size: 1654448
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
CyberHarem/taira_no_kagekiyo_fgo | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of taira_no_kagekiyo/平景清/平景清 (Fate/Grand Order)
This is the dataset of taira_no_kagekiyo/平景清/平景清 (Fate/Grand Order), containing 119 images and their tags.
The core tags of this character are `long_hair, black_hair, side_ponytail, breasts, bangs, hat, very_long_hair, parted_bangs, multicolored_eyes, purple_lips`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 119 | 181.73 MiB | [Download](https://huggingface.co/datasets/CyberHarem/taira_no_kagekiyo_fgo/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 119 | 96.33 MiB | [Download](https://huggingface.co/datasets/CyberHarem/taira_no_kagekiyo_fgo/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 270 | 197.79 MiB | [Download](https://huggingface.co/datasets/CyberHarem/taira_no_kagekiyo_fgo/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 119 | 157.09 MiB | [Download](https://huggingface.co/datasets/CyberHarem/taira_no_kagekiyo_fgo/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 270 | 293.54 MiB | [Download](https://huggingface.co/datasets/CyberHarem/taira_no_kagekiyo_fgo/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/taira_no_kagekiyo_fgo',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 37 |  |  |  |  |  | 1girl, katana, solo, holding_sword, tate_eboshi, gloves, japanese_armor, makeup, looking_at_viewer, smile, shoulder_armor, black_headwear, dual_wielding, purple_eyes |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | katana | solo | holding_sword | tate_eboshi | gloves | japanese_armor | makeup | looking_at_viewer | smile | shoulder_armor | black_headwear | dual_wielding | purple_eyes |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------|:-------|:----------------|:--------------|:---------|:-----------------|:---------|:--------------------|:--------|:-----------------|:-----------------|:----------------|:--------------|
| 0 | 37 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|