| datasetId | card |
|---|---|
AdapterOcean/python3-standardized_cluster_4_alpaca | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 5731600
num_examples: 2410
download_size: 0
dataset_size: 5731600
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "python3-standardized_cluster_4_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
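The split metadata above declares 5,731,600 bytes across 2,410 examples; a quick sketch of the arithmetic that implies an average serialized example of roughly 2.4 kB:

```python
# Average serialized example size implied by the train split metadata above
# (num_bytes and num_examples are taken directly from the card).
num_bytes = 5_731_600
num_examples = 2_410

avg_bytes = num_bytes / num_examples
print(round(avg_bytes))  # roughly 2378 bytes per example
```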
autoevaluate/autoeval-staging-eval-project-adversarial_qa-e34332b7-12205628 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- adversarial_qa
eval_info:
task: extractive_question_answering
model: deepset/tinybert-6l-768d-squad2
metrics: []
dataset_name: adversarial_qa
dataset_config: adversarialQA
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/tinybert-6l-768d-squad2
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ceyda](https://huggingface.co/ceyda) for evaluating this model. |
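The `col_mapping` block above maps the evaluator's expected column names onto dotted paths into the nested `adversarial_qa` schema (e.g. `answers.text`). A minimal sketch of how such a dotted path could be resolved against a SQuAD-style example; `resolve` is a hypothetical helper for illustration, not the AutoTrain implementation:

```python
# Hypothetical helper: walk a nested example dict along a dotted path,
# mirroring how a col_mapping entry like "answers.text" is interpreted.
def resolve(example, dotted_path):
    value = example
    for key in dotted_path.split("."):
        value = value[key]
    return value

example = {
    "context": "Paris is the capital of France.",
    "question": "What is the capital of France?",
    "answers": {"text": ["Paris"], "answer_start": [0]},
}

print(resolve(example, "answers.text"))          # ['Paris']
print(resolve(example, "answers.answer_start"))  # [0]
```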
lurosenb/boolq_reformatted | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 6501275
num_examples: 9427
- name: validation
num_bytes: 1110546
num_examples: 1635
- name: test
num_bytes: 1120634
num_examples: 1635
download_size: 5124077
dataset_size: 8732455
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
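The card above carries only split metadata, no prose. As a sanity check, the declared `dataset_size` should equal the sum of the per-split `num_bytes` values; a quick sketch:

```python
# Sanity check: dataset_size in the card metadata should be the sum of
# the per-split num_bytes values (all figures copied from the card).
split_bytes = {
    "train": 6_501_275,
    "validation": 1_110_546,
    "test": 1_120_634,
}

total = sum(split_bytes.values())
print(total)  # 8732455, matching the declared dataset_size
```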
open-llm-leaderboard/details_aloobun__Synch-Qwen1.5-1.8B | ---
pretty_name: Evaluation run of aloobun/Synch-Qwen1.5-1.8B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [aloobun/Synch-Qwen1.5-1.8B](https://huggingface.co/aloobun/Synch-Qwen1.5-1.8B)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_aloobun__Synch-Qwen1.5-1.8B\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-03-22T20:14:51.646868](https://huggingface.co/datasets/open-llm-leaderboard/details_aloobun__Synch-Qwen1.5-1.8B/blob/main/results_2024-03-22T20-14-51.646868.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.44731280280831115,\n\
\ \"acc_stderr\": 0.03442875263084712,\n \"acc_norm\": 0.44943841295273806,\n\
\ \"acc_norm_stderr\": 0.03514556906718136,\n \"mc1\": 0.2582619339045288,\n\
\ \"mc1_stderr\": 0.015321821688476196,\n \"mc2\": 0.4143669782380921,\n\
\ \"mc2_stderr\": 0.013963345006309792\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.3412969283276451,\n \"acc_stderr\": 0.013855831287497714,\n\
\ \"acc_norm\": 0.36945392491467577,\n \"acc_norm_stderr\": 0.014104578366491911\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.4471220872336188,\n\
\ \"acc_stderr\": 0.004961799358836432,\n \"acc_norm\": 0.6018721370244972,\n\
\ \"acc_norm_stderr\": 0.00488511646555027\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \
\ \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.3925925925925926,\n\
\ \"acc_stderr\": 0.04218506215368879,\n \"acc_norm\": 0.3925925925925926,\n\
\ \"acc_norm_stderr\": 0.04218506215368879\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.4407894736842105,\n \"acc_stderr\": 0.04040311062490436,\n\
\ \"acc_norm\": 0.4407894736842105,\n \"acc_norm_stderr\": 0.04040311062490436\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.53,\n\
\ \"acc_stderr\": 0.050161355804659205,\n \"acc_norm\": 0.53,\n \
\ \"acc_norm_stderr\": 0.050161355804659205\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.4679245283018868,\n \"acc_stderr\": 0.03070948699255655,\n\
\ \"acc_norm\": 0.4679245283018868,\n \"acc_norm_stderr\": 0.03070948699255655\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.4027777777777778,\n\
\ \"acc_stderr\": 0.04101405519842425,\n \"acc_norm\": 0.4027777777777778,\n\
\ \"acc_norm_stderr\": 0.04101405519842425\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \
\ \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.4,\n \"acc_stderr\": 0.04923659639173309,\n \"acc_norm\"\
: 0.4,\n \"acc_norm_stderr\": 0.04923659639173309\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.35,\n \"acc_stderr\": 0.04793724854411019,\n \
\ \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.04793724854411019\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.4277456647398844,\n\
\ \"acc_stderr\": 0.037724468575180255,\n \"acc_norm\": 0.4277456647398844,\n\
\ \"acc_norm_stderr\": 0.037724468575180255\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.21568627450980393,\n \"acc_stderr\": 0.04092563958237655,\n\
\ \"acc_norm\": 0.21568627450980393,\n \"acc_norm_stderr\": 0.04092563958237655\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.58,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.58,\n\
\ \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.425531914893617,\n \"acc_stderr\": 0.03232146916224469,\n\
\ \"acc_norm\": 0.425531914893617,\n \"acc_norm_stderr\": 0.03232146916224469\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.2543859649122807,\n\
\ \"acc_stderr\": 0.040969851398436716,\n \"acc_norm\": 0.2543859649122807,\n\
\ \"acc_norm_stderr\": 0.040969851398436716\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.46206896551724136,\n \"acc_stderr\": 0.041546596717075474,\n\
\ \"acc_norm\": 0.46206896551724136,\n \"acc_norm_stderr\": 0.041546596717075474\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.335978835978836,\n \"acc_stderr\": 0.024326310529149128,\n \"\
acc_norm\": 0.335978835978836,\n \"acc_norm_stderr\": 0.024326310529149128\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.23015873015873015,\n\
\ \"acc_stderr\": 0.03764950879790605,\n \"acc_norm\": 0.23015873015873015,\n\
\ \"acc_norm_stderr\": 0.03764950879790605\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252604,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252604\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.4645161290322581,\n\
\ \"acc_stderr\": 0.028372287797962956,\n \"acc_norm\": 0.4645161290322581,\n\
\ \"acc_norm_stderr\": 0.028372287797962956\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.3448275862068966,\n \"acc_stderr\": 0.033442837442804574,\n\
\ \"acc_norm\": 0.3448275862068966,\n \"acc_norm_stderr\": 0.033442837442804574\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620333,\n \"acc_norm\"\
: 0.46,\n \"acc_norm_stderr\": 0.05009082659620333\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.6121212121212121,\n \"acc_stderr\": 0.03804913653971012,\n\
\ \"acc_norm\": 0.6121212121212121,\n \"acc_norm_stderr\": 0.03804913653971012\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.5606060606060606,\n \"acc_stderr\": 0.035360859475294805,\n \"\
acc_norm\": 0.5606060606060606,\n \"acc_norm_stderr\": 0.035360859475294805\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.5440414507772021,\n \"acc_stderr\": 0.035944137112724366,\n\
\ \"acc_norm\": 0.5440414507772021,\n \"acc_norm_stderr\": 0.035944137112724366\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.3435897435897436,\n \"acc_stderr\": 0.024078696580635474,\n\
\ \"acc_norm\": 0.3435897435897436,\n \"acc_norm_stderr\": 0.024078696580635474\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.3148148148148148,\n \"acc_stderr\": 0.02831753349606647,\n \
\ \"acc_norm\": 0.3148148148148148,\n \"acc_norm_stderr\": 0.02831753349606647\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.41596638655462187,\n \"acc_stderr\": 0.03201650100739615,\n\
\ \"acc_norm\": 0.41596638655462187,\n \"acc_norm_stderr\": 0.03201650100739615\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.2185430463576159,\n \"acc_stderr\": 0.033742355504256936,\n \"\
acc_norm\": 0.2185430463576159,\n \"acc_norm_stderr\": 0.033742355504256936\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.5596330275229358,\n \"acc_stderr\": 0.02128431062376155,\n \"\
acc_norm\": 0.5596330275229358,\n \"acc_norm_stderr\": 0.02128431062376155\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.2777777777777778,\n \"acc_stderr\": 0.030546745264953178,\n \"\
acc_norm\": 0.2777777777777778,\n \"acc_norm_stderr\": 0.030546745264953178\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.43137254901960786,\n \"acc_stderr\": 0.03476099060501636,\n \"\
acc_norm\": 0.43137254901960786,\n \"acc_norm_stderr\": 0.03476099060501636\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.5991561181434599,\n \"acc_stderr\": 0.031900803894732356,\n \
\ \"acc_norm\": 0.5991561181434599,\n \"acc_norm_stderr\": 0.031900803894732356\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.515695067264574,\n\
\ \"acc_stderr\": 0.0335412657542081,\n \"acc_norm\": 0.515695067264574,\n\
\ \"acc_norm_stderr\": 0.0335412657542081\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.5419847328244275,\n \"acc_stderr\": 0.04369802690578756,\n\
\ \"acc_norm\": 0.5419847328244275,\n \"acc_norm_stderr\": 0.04369802690578756\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.6611570247933884,\n \"acc_stderr\": 0.043207678075366705,\n \"\
acc_norm\": 0.6611570247933884,\n \"acc_norm_stderr\": 0.043207678075366705\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.49074074074074076,\n\
\ \"acc_stderr\": 0.04832853553437055,\n \"acc_norm\": 0.49074074074074076,\n\
\ \"acc_norm_stderr\": 0.04832853553437055\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.4049079754601227,\n \"acc_stderr\": 0.038566721635489125,\n\
\ \"acc_norm\": 0.4049079754601227,\n \"acc_norm_stderr\": 0.038566721635489125\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4375,\n\
\ \"acc_stderr\": 0.04708567521880525,\n \"acc_norm\": 0.4375,\n \
\ \"acc_norm_stderr\": 0.04708567521880525\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.6601941747572816,\n \"acc_stderr\": 0.046897659372781335,\n\
\ \"acc_norm\": 0.6601941747572816,\n \"acc_norm_stderr\": 0.046897659372781335\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.7435897435897436,\n\
\ \"acc_stderr\": 0.028605953702004243,\n \"acc_norm\": 0.7435897435897436,\n\
\ \"acc_norm_stderr\": 0.028605953702004243\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.52,\n \"acc_stderr\": 0.05021167315686779,\n \
\ \"acc_norm\": 0.52,\n \"acc_norm_stderr\": 0.05021167315686779\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.5849297573435505,\n\
\ \"acc_stderr\": 0.01762013700365527,\n \"acc_norm\": 0.5849297573435505,\n\
\ \"acc_norm_stderr\": 0.01762013700365527\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.5317919075144508,\n \"acc_stderr\": 0.026864624366756646,\n\
\ \"acc_norm\": 0.5317919075144508,\n \"acc_norm_stderr\": 0.026864624366756646\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.25139664804469275,\n\
\ \"acc_stderr\": 0.01450897945355397,\n \"acc_norm\": 0.25139664804469275,\n\
\ \"acc_norm_stderr\": 0.01450897945355397\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.5718954248366013,\n \"acc_stderr\": 0.028332397483664274,\n\
\ \"acc_norm\": 0.5718954248366013,\n \"acc_norm_stderr\": 0.028332397483664274\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.4437299035369775,\n\
\ \"acc_stderr\": 0.02821768355665231,\n \"acc_norm\": 0.4437299035369775,\n\
\ \"acc_norm_stderr\": 0.02821768355665231\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.4845679012345679,\n \"acc_stderr\": 0.0278074900442762,\n\
\ \"acc_norm\": 0.4845679012345679,\n \"acc_norm_stderr\": 0.0278074900442762\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.3191489361702128,\n \"acc_stderr\": 0.0278079901413202,\n \
\ \"acc_norm\": 0.3191489361702128,\n \"acc_norm_stderr\": 0.0278079901413202\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.3533246414602347,\n\
\ \"acc_stderr\": 0.01220840821108243,\n \"acc_norm\": 0.3533246414602347,\n\
\ \"acc_norm_stderr\": 0.01220840821108243\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.31985294117647056,\n \"acc_stderr\": 0.028332959514031208,\n\
\ \"acc_norm\": 0.31985294117647056,\n \"acc_norm_stderr\": 0.028332959514031208\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.42483660130718953,\n \"acc_stderr\": 0.019997973035458336,\n \
\ \"acc_norm\": 0.42483660130718953,\n \"acc_norm_stderr\": 0.019997973035458336\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.5909090909090909,\n\
\ \"acc_stderr\": 0.04709306978661895,\n \"acc_norm\": 0.5909090909090909,\n\
\ \"acc_norm_stderr\": 0.04709306978661895\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.42857142857142855,\n \"acc_stderr\": 0.031680911612338825,\n\
\ \"acc_norm\": 0.42857142857142855,\n \"acc_norm_stderr\": 0.031680911612338825\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.5970149253731343,\n\
\ \"acc_stderr\": 0.034683432951111266,\n \"acc_norm\": 0.5970149253731343,\n\
\ \"acc_norm_stderr\": 0.034683432951111266\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.67,\n \"acc_stderr\": 0.047258156262526066,\n \
\ \"acc_norm\": 0.67,\n \"acc_norm_stderr\": 0.047258156262526066\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.40963855421686746,\n\
\ \"acc_stderr\": 0.03828401115079023,\n \"acc_norm\": 0.40963855421686746,\n\
\ \"acc_norm_stderr\": 0.03828401115079023\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.5555555555555556,\n \"acc_stderr\": 0.038110796698335316,\n\
\ \"acc_norm\": 0.5555555555555556,\n \"acc_norm_stderr\": 0.038110796698335316\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2582619339045288,\n\
\ \"mc1_stderr\": 0.015321821688476196,\n \"mc2\": 0.4143669782380921,\n\
\ \"mc2_stderr\": 0.013963345006309792\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.6124704025256511,\n \"acc_stderr\": 0.013692354636016766\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.33965125094768767,\n \
\ \"acc_stderr\": 0.01304504506766527\n }\n}\n```"
repo_url: https://huggingface.co/aloobun/Synch-Qwen1.5-1.8B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|arc:challenge|25_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|arc:challenge|25_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|gsm8k|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|gsm8k|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hellaswag|10_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hellaswag|10_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-22T19-50-28.542025.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-22T20-14-51.646868.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-22T20-14-51.646868.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- '**/details_harness|winogrande|5_2024-03-22T19-50-28.542025.parquet'
- split: 2024_03_22T20_14_51.646868
path:
- '**/details_harness|winogrande|5_2024-03-22T20-14-51.646868.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-03-22T20-14-51.646868.parquet'
- config_name: results
data_files:
- split: 2024_03_22T19_50_28.542025
path:
- results_2024-03-22T19-50-28.542025.parquet
- split: 2024_03_22T20_14_51.646868
path:
- results_2024-03-22T20-14-51.646868.parquet
- split: latest
path:
- results_2024-03-22T20-14-51.646868.parquet
---
# Dataset Card for Evaluation run of aloobun/Synch-Qwen1.5-1.8B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [aloobun/Synch-Qwen1.5-1.8B](https://huggingface.co/aloobun/Synch-Qwen1.5-1.8B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_aloobun__Synch-Qwen1.5-1.8B",
	"harness_winogrande_5",
	split="latest")
```
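Each record in the "results" configuration is a nested dict of per-task metrics, as shown in the "Latest results" section below. As a minimal, self-contained sketch of how such a dict could be aggregated (the sample values here are hand-written for illustration, not loaded from the dataset), per-task accuracies can be averaged like this:

```python
# Minimal sketch: averaging a metric over the per-task entries of a
# results dict. The sample dict is hand-written for illustration; real
# values come from this dataset's "results" configuration.
sample_results = {
    "harness|arc:challenge|25": {"acc": 0.3413, "acc_norm": 0.3695},
    "harness|hellaswag|10": {"acc": 0.4471, "acc_norm": 0.6019},
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.32, "acc_norm": 0.32},
}

def mean_metric(results: dict, key: str) -> float:
    """Average a metric (e.g. 'acc') over every task that reports it."""
    values = [task[key] for task in results.values() if key in task]
    return sum(values) / len(values)

print(round(mean_metric(sample_results, "acc"), 4))  # → 0.3695
```

The same pattern works for `acc_norm` or any other metric key present in the per-task entries.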
## Latest results
These are the [latest results from run 2024-03-22T20:14:51.646868](https://huggingface.co/datasets/open-llm-leaderboard/details_aloobun__Synch-Qwen1.5-1.8B/blob/main/results_2024-03-22T20-14-51.646868.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; each can be found in the "results" configuration and in the "latest" split of each eval):
```python
{
"all": {
"acc": 0.44731280280831115,
"acc_stderr": 0.03442875263084712,
"acc_norm": 0.44943841295273806,
"acc_norm_stderr": 0.03514556906718136,
"mc1": 0.2582619339045288,
"mc1_stderr": 0.015321821688476196,
"mc2": 0.4143669782380921,
"mc2_stderr": 0.013963345006309792
},
"harness|arc:challenge|25": {
"acc": 0.3412969283276451,
"acc_stderr": 0.013855831287497714,
"acc_norm": 0.36945392491467577,
"acc_norm_stderr": 0.014104578366491911
},
"harness|hellaswag|10": {
"acc": 0.4471220872336188,
"acc_stderr": 0.004961799358836432,
"acc_norm": 0.6018721370244972,
"acc_norm_stderr": 0.00488511646555027
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.32,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.32,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.3925925925925926,
"acc_stderr": 0.04218506215368879,
"acc_norm": 0.3925925925925926,
"acc_norm_stderr": 0.04218506215368879
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.4407894736842105,
"acc_stderr": 0.04040311062490436,
"acc_norm": 0.4407894736842105,
"acc_norm_stderr": 0.04040311062490436
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.53,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.53,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.4679245283018868,
"acc_stderr": 0.03070948699255655,
"acc_norm": 0.4679245283018868,
"acc_norm_stderr": 0.03070948699255655
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.4027777777777778,
"acc_stderr": 0.04101405519842425,
"acc_norm": 0.4027777777777778,
"acc_norm_stderr": 0.04101405519842425
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.32,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.32,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.4,
"acc_stderr": 0.04923659639173309,
"acc_norm": 0.4,
"acc_norm_stderr": 0.04923659639173309
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.35,
"acc_stderr": 0.04793724854411019,
"acc_norm": 0.35,
"acc_norm_stderr": 0.04793724854411019
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.4277456647398844,
"acc_stderr": 0.037724468575180255,
"acc_norm": 0.4277456647398844,
"acc_norm_stderr": 0.037724468575180255
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.21568627450980393,
"acc_stderr": 0.04092563958237655,
"acc_norm": 0.21568627450980393,
"acc_norm_stderr": 0.04092563958237655
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.58,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.58,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.425531914893617,
"acc_stderr": 0.03232146916224469,
"acc_norm": 0.425531914893617,
"acc_norm_stderr": 0.03232146916224469
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.2543859649122807,
"acc_stderr": 0.040969851398436716,
"acc_norm": 0.2543859649122807,
"acc_norm_stderr": 0.040969851398436716
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.46206896551724136,
"acc_stderr": 0.041546596717075474,
"acc_norm": 0.46206896551724136,
"acc_norm_stderr": 0.041546596717075474
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.335978835978836,
"acc_stderr": 0.024326310529149128,
"acc_norm": 0.335978835978836,
"acc_norm_stderr": 0.024326310529149128
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.23015873015873015,
"acc_stderr": 0.03764950879790605,
"acc_norm": 0.23015873015873015,
"acc_norm_stderr": 0.03764950879790605
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252604,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252604
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.4645161290322581,
"acc_stderr": 0.028372287797962956,
"acc_norm": 0.4645161290322581,
"acc_norm_stderr": 0.028372287797962956
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.3448275862068966,
"acc_stderr": 0.033442837442804574,
"acc_norm": 0.3448275862068966,
"acc_norm_stderr": 0.033442837442804574
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620333,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620333
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.6121212121212121,
"acc_stderr": 0.03804913653971012,
"acc_norm": 0.6121212121212121,
"acc_norm_stderr": 0.03804913653971012
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.5606060606060606,
"acc_stderr": 0.035360859475294805,
"acc_norm": 0.5606060606060606,
"acc_norm_stderr": 0.035360859475294805
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.5440414507772021,
"acc_stderr": 0.035944137112724366,
"acc_norm": 0.5440414507772021,
"acc_norm_stderr": 0.035944137112724366
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.3435897435897436,
"acc_stderr": 0.024078696580635474,
"acc_norm": 0.3435897435897436,
"acc_norm_stderr": 0.024078696580635474
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3148148148148148,
"acc_stderr": 0.02831753349606647,
"acc_norm": 0.3148148148148148,
"acc_norm_stderr": 0.02831753349606647
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.41596638655462187,
"acc_stderr": 0.03201650100739615,
"acc_norm": 0.41596638655462187,
"acc_norm_stderr": 0.03201650100739615
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.2185430463576159,
"acc_stderr": 0.033742355504256936,
"acc_norm": 0.2185430463576159,
"acc_norm_stderr": 0.033742355504256936
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.5596330275229358,
"acc_stderr": 0.02128431062376155,
"acc_norm": 0.5596330275229358,
"acc_norm_stderr": 0.02128431062376155
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.2777777777777778,
"acc_stderr": 0.030546745264953178,
"acc_norm": 0.2777777777777778,
"acc_norm_stderr": 0.030546745264953178
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.43137254901960786,
"acc_stderr": 0.03476099060501636,
"acc_norm": 0.43137254901960786,
"acc_norm_stderr": 0.03476099060501636
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.5991561181434599,
"acc_stderr": 0.031900803894732356,
"acc_norm": 0.5991561181434599,
"acc_norm_stderr": 0.031900803894732356
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.515695067264574,
"acc_stderr": 0.0335412657542081,
"acc_norm": 0.515695067264574,
"acc_norm_stderr": 0.0335412657542081
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.5419847328244275,
"acc_stderr": 0.04369802690578756,
"acc_norm": 0.5419847328244275,
"acc_norm_stderr": 0.04369802690578756
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.6611570247933884,
"acc_stderr": 0.043207678075366705,
"acc_norm": 0.6611570247933884,
"acc_norm_stderr": 0.043207678075366705
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.49074074074074076,
"acc_stderr": 0.04832853553437055,
"acc_norm": 0.49074074074074076,
"acc_norm_stderr": 0.04832853553437055
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.4049079754601227,
"acc_stderr": 0.038566721635489125,
"acc_norm": 0.4049079754601227,
"acc_norm_stderr": 0.038566721635489125
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4375,
"acc_stderr": 0.04708567521880525,
"acc_norm": 0.4375,
"acc_norm_stderr": 0.04708567521880525
},
"harness|hendrycksTest-management|5": {
"acc": 0.6601941747572816,
"acc_stderr": 0.046897659372781335,
"acc_norm": 0.6601941747572816,
"acc_norm_stderr": 0.046897659372781335
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.7435897435897436,
"acc_stderr": 0.028605953702004243,
"acc_norm": 0.7435897435897436,
"acc_norm_stderr": 0.028605953702004243
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.52,
"acc_stderr": 0.05021167315686779,
"acc_norm": 0.52,
"acc_norm_stderr": 0.05021167315686779
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.5849297573435505,
"acc_stderr": 0.01762013700365527,
"acc_norm": 0.5849297573435505,
"acc_norm_stderr": 0.01762013700365527
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.5317919075144508,
"acc_stderr": 0.026864624366756646,
"acc_norm": 0.5317919075144508,
"acc_norm_stderr": 0.026864624366756646
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.25139664804469275,
"acc_stderr": 0.01450897945355397,
"acc_norm": 0.25139664804469275,
"acc_norm_stderr": 0.01450897945355397
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.5718954248366013,
"acc_stderr": 0.028332397483664274,
"acc_norm": 0.5718954248366013,
"acc_norm_stderr": 0.028332397483664274
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.4437299035369775,
"acc_stderr": 0.02821768355665231,
"acc_norm": 0.4437299035369775,
"acc_norm_stderr": 0.02821768355665231
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.4845679012345679,
"acc_stderr": 0.0278074900442762,
"acc_norm": 0.4845679012345679,
"acc_norm_stderr": 0.0278074900442762
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.3191489361702128,
"acc_stderr": 0.0278079901413202,
"acc_norm": 0.3191489361702128,
"acc_norm_stderr": 0.0278079901413202
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.3533246414602347,
"acc_stderr": 0.01220840821108243,
"acc_norm": 0.3533246414602347,
"acc_norm_stderr": 0.01220840821108243
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.31985294117647056,
"acc_stderr": 0.028332959514031208,
"acc_norm": 0.31985294117647056,
"acc_norm_stderr": 0.028332959514031208
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.42483660130718953,
"acc_stderr": 0.019997973035458336,
"acc_norm": 0.42483660130718953,
"acc_norm_stderr": 0.019997973035458336
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.5909090909090909,
"acc_stderr": 0.04709306978661895,
"acc_norm": 0.5909090909090909,
"acc_norm_stderr": 0.04709306978661895
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.42857142857142855,
"acc_stderr": 0.031680911612338825,
"acc_norm": 0.42857142857142855,
"acc_norm_stderr": 0.031680911612338825
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.5970149253731343,
"acc_stderr": 0.034683432951111266,
"acc_norm": 0.5970149253731343,
"acc_norm_stderr": 0.034683432951111266
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.67,
"acc_stderr": 0.047258156262526066,
"acc_norm": 0.67,
"acc_norm_stderr": 0.047258156262526066
},
"harness|hendrycksTest-virology|5": {
"acc": 0.40963855421686746,
"acc_stderr": 0.03828401115079023,
"acc_norm": 0.40963855421686746,
"acc_norm_stderr": 0.03828401115079023
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.5555555555555556,
"acc_stderr": 0.038110796698335316,
"acc_norm": 0.5555555555555556,
"acc_norm_stderr": 0.038110796698335316
},
"harness|truthfulqa:mc|0": {
"mc1": 0.2582619339045288,
"mc1_stderr": 0.015321821688476196,
"mc2": 0.4143669782380921,
"mc2_stderr": 0.013963345006309792
},
"harness|winogrande|5": {
"acc": 0.6124704025256511,
"acc_stderr": 0.013692354636016766
},
"harness|gsm8k|5": {
"acc": 0.33965125094768767,
"acc_stderr": 0.01304504506766527
}
}
```
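For readers who want to aggregate the per-task numbers themselves, here is a minimal sketch of how an unweighted hendrycksTest (MMLU) average can be computed from a results dictionary shaped like the JSON above. The two entries below are a small excerpt for illustration; the full dictionary has one entry per subject:

```python
# Sketch: unweighted mean accuracy across hendrycksTest tasks from a
# results dict shaped like the JSON above. Only two tasks are included
# here as an illustrative excerpt.

results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.32, "acc_norm": 0.32},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.3925925925925926, "acc_norm": 0.3925925925925926},
}

def mmlu_average(results):
    """Unweighted mean of 'acc' over all hendrycksTest entries."""
    accs = [v["acc"] for k, v in results.items() if "hendrycksTest" in k]
    return sum(accs) / len(accs)

avg = mmlu_average(results)
```

Note this is a plain unweighted mean over subjects; it is a sketch of the aggregation idea, not the leaderboard's exact scoring code.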
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
emozilla/proofpile-test-tokenized-mistral | ---
dataset_info:
features:
- name: text
dtype: string
- name: meta
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: tokenized_len
dtype: int64
splits:
- name: train
num_bytes: 1647980074
num_examples: 46251
download_size: 554081392
dataset_size: 1647980074
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "proofpile-test-tokenized-mistral"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tastypear/lmsys-chat-lewd-minimal | ---
task_categories:
- conversational
language:
- en
---
This dataset is extracted from lmsys/lmsys-chat-1m.
Multiple filters were applied to extract 800+ sex-related conversations.
Removed:
- Prompt words generated by role-playing programs.
- Jailbreak prompts.
- Answers that are too "appropriate" |
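As a rough illustration of the kind of keyword-based filtering described above, here is a minimal sketch. The actual filters are not published; the keyword lists, marker phrases, and record layout below are all assumptions made for illustration:

```python
# Sketch of keyword-based filtering over chat records, similar in spirit
# to the extraction described above. The keyword list and jailbreak
# marker are illustrative assumptions, not the filters actually used.

LEWD_KEYWORDS = {"nsfw", "lewd", "erotic"}                 # assumed keywords
JAILBREAK_MARKERS = {"ignore all previous instructions"}   # assumed marker

def keep_conversation(conversation):
    """Keep a conversation if any turn matches a lewd keyword and
    no turn looks like a jailbreak prompt."""
    text = " ".join(turn["content"].lower() for turn in conversation)
    if any(marker in text for marker in JAILBREAK_MARKERS):
        return False
    return any(keyword in text for keyword in LEWD_KEYWORDS)

sample = [
    [{"role": "user", "content": "Write a lewd story."}],
    [{"role": "user", "content": "Ignore all previous instructions and ..."}],
    [{"role": "user", "content": "What is the capital of France?"}],
]
filtered = [conv for conv in sample if keep_conversation(conv)]
```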
open-llm-leaderboard/details_keyfan__vicuna-chinese-replication-v1.1 | ---
pretty_name: Evaluation run of keyfan/vicuna-chinese-replication-v1.1
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [keyfan/vicuna-chinese-replication-v1.1](https://huggingface.co/keyfan/vicuna-chinese-replication-v1.1)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_keyfan__vicuna-chinese-replication-v1.1\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-09-20T16:29:17.450088](https://huggingface.co/datasets/open-llm-leaderboard/details_keyfan__vicuna-chinese-replication-v1.1/blob/main/results_2023-09-20T16-29-17.450088.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.19274328859060402,\n\
\ \"em_stderr\": 0.004039569791455342,\n \"f1\": 0.2668655620805379,\n\
\ \"f1_stderr\": 0.004116773539445767,\n \"acc\": 0.3844009566932927,\n\
\ \"acc_stderr\": 0.0106207870984688\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.19274328859060402,\n \"em_stderr\": 0.004039569791455342,\n\
\ \"f1\": 0.2668655620805379,\n \"f1_stderr\": 0.004116773539445767\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.09476876421531463,\n \
\ \"acc_stderr\": 0.008067791560015412\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.6740331491712708,\n \"acc_stderr\": 0.013173782636922189\n\
\ }\n}\n```"
repo_url: https://huggingface.co/keyfan/vicuna-chinese-replication-v1.1
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|arc:challenge|25_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_09_20T16_29_17.450088
path:
- '**/details_harness|drop|3_2023-09-20T16-29-17.450088.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-09-20T16-29-17.450088.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_09_20T16_29_17.450088
path:
- '**/details_harness|gsm8k|5_2023-09-20T16-29-17.450088.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-09-20T16-29-17.450088.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hellaswag|10_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-24T15:34:51.648519.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-24T15:34:51.648519.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-24T15:34:51.648519.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_09_20T16_29_17.450088
path:
- '**/details_harness|winogrande|5_2023-09-20T16-29-17.450088.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-09-20T16-29-17.450088.parquet'
- config_name: results
data_files:
- split: 2023_07_24T15_34_51.648519
path:
- results_2023-07-24T15:34:51.648519.parquet
- split: 2023_09_20T16_29_17.450088
path:
- results_2023-09-20T16-29-17.450088.parquet
- split: latest
path:
- results_2023-09-20T16-29-17.450088.parquet
---
# Dataset Card for Evaluation run of keyfan/vicuna-chinese-replication-v1.1
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/keyfan/vicuna-chinese-replication-v1.1
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [keyfan/vicuna-chinese-replication-v1.1](https://huggingface.co/keyfan/vicuna-chinese-replication-v1.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_keyfan__vicuna-chinese-replication-v1.1",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2023-09-20T16:29:17.450088](https://huggingface.co/datasets/open-llm-leaderboard/details_keyfan__vicuna-chinese-replication-v1.1/blob/main/results_2023-09-20T16-29-17.450088.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each task in the "results" configuration and in the "latest" split of each eval):
```python
{
"all": {
"em": 0.19274328859060402,
"em_stderr": 0.004039569791455342,
"f1": 0.2668655620805379,
"f1_stderr": 0.004116773539445767,
"acc": 0.3844009566932927,
"acc_stderr": 0.0106207870984688
},
"harness|drop|3": {
"em": 0.19274328859060402,
"em_stderr": 0.004039569791455342,
"f1": 0.2668655620805379,
"f1_stderr": 0.004116773539445767
},
"harness|gsm8k|5": {
"acc": 0.09476876421531463,
"acc_stderr": 0.008067791560015412
},
"harness|winogrande|5": {
"acc": 0.6740331491712708,
"acc_stderr": 0.013173782636922189
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
celikmus/symptom_text_to_disease_01 | ---
license: apache-2.0
dataset_info:
features:
- name: text
dtype: string
- name: labels
dtype:
class_label:
names:
'0': emotional pain
'1': hair falling out
'2': heart hurts
'3': infected wound
'4': foot ache
'5': shoulder pain
'6': injury from sports
'7': skin issue
'8': stomach ache
'9': knee pain
'10': joint pain
'11': hard to breath
'12': head ache
'13': body feels weak
'14': feeling dizzy
'15': back pain
'16': open wound
'17': internal pain
'18': blurry vision
'19': acne
'20': muscle pain
'21': neck pain
'22': cough
'23': ear ache
'24': feeling cold
splits:
- name: train
num_bytes: 330494.3762197868
num_examples: 5328
- name: test
num_bytes: 41373.82675273983
num_examples: 667
- name: valid
num_bytes: 41311.79702747335
num_examples: 666
download_size: 145457
dataset_size: 413180.0
---
|
khanzaid/data_for_una-cybertron-7B-v2-GPTQ | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 855618
num_examples: 4846
download_size: 416065
dataset_size: 855618
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jzju/nst | ---
pretty_name: NST
language:
- sv
task_categories:
- automatic-speech-recognition
license:
- cc0-1.0
---
**Homepage:** https://www.nb.no/sprakbanken/en/resource-catalogue/oai-nb-no-sbr-56
Uses `lydfiler_16_1.tar.gz` and `metadata_se_csv.zip` from the NST corpus. |
jan-hq/finqa_bench_stealth-finance-v4 | ---
dataset_info:
features:
- name: id
dtype: string
- name: query
dtype: string
- name: answer
dtype: string
- name: text
dtype: string
- name: response
dtype: string
- name: options
struct:
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: golden_key
dtype: string
splits:
- name: train
num_bytes: 25546199
num_examples: 5074
download_size: 11302579
dataset_size: 25546199
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mishrasaurabh847/covid-tweet-text-classification | ---
license: mit
---
|
zolak/twitter_dataset_50_1713117444 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 226322
num_examples: 552
download_size: 113860
dataset_size: 226322
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
rntc/biomed-fr-pubmed-en | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 4562571188
num_examples: 15561370
- name: validation
num_bytes: 46015018
num_examples: 157186
download_size: 3088461733
dataset_size: 4608586206
---
# Dataset Card for "biomed-fr-pubmed-en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
saasdsfsfsdsds/turkishReviews-ds-mini | ---
dataset_info:
features:
- name: review
dtype: string
- name: review_length
dtype: int64
splits:
- name: train
num_bytes: 1014763.0133191262
num_examples: 2736
- name: validation
num_bytes: 112751.44592434737
num_examples: 304
download_size: 725717
dataset_size: 1127514.4592434736
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
myrtotsok/clf-5 | ---
dataset_info:
features:
- name: request
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 44057
num_examples: 720
- name: validation
num_bytes: 11580
num_examples: 180
download_size: 13093
dataset_size: 55637
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
reciprocate/gsm8k_pairwise | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: selected
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 106512
num_examples: 128
download_size: 65268
dataset_size: 106512
---
# Dataset Card for "gsm8k_pairwise"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
shidowake/Doctor-Shotgun_capybara-sharegpt_subset_split_2 | ---
dataset_info:
features:
- name: source
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 9064100.571348244
num_examples: 2001
download_size: 4780403
dataset_size: 9064100.571348244
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
MagnusEngdal/datacamp-tutorial | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 4201526
num_examples: 1000
download_size: 2247084
dataset_size: 4201526
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Eddiefloat/paot | ---
license: other
---
|
Pixelatory/GDB-11 | ---
tags:
- chemistry
- biology
size_categories:
- 10M<n<100M
---
26,425,839 samples.
Contains only the unique, RDKit-canonicalized SMILES molecules in CSV format (after extraction) from the original "Entire GDB-11" dataset found at https://gdb.unibe.ch/downloads/. |
thomasavare/waste-classification-v2 | ---
language:
- en
size_categories:
- 10K<n<100K
---
# Dataset Card for waste-classification-v2
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Dataset used to train a language model to classify text into 50 different waste classes.
### Languages
English
## Dataset Structure
### Data Instances
Phrase | Class | Index
-------|-------|-------
"I have this apple phone charger to throw, where should I put it ?" | PHONE CHARGER | 26
"Should I recycle a disposable cup ?" | Plastic Cup | 32
"I have a milk brick" | Tetrapack | 45
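The index column can be mapped back to class names with a plain lookup table. A minimal sketch (only the three classes shown above are listed here; the full 50-class mapping ships with the dataset itself):

```python
# Map class indices back to class names.
# NOTE: only the three classes from the examples above are included;
# the remaining 47 entries come from the dataset itself.
index_to_class = {
    26: "PHONE CHARGER",
    32: "Plastic Cup",
    45: "Tetrapack",
}

def label_for(index: int) -> str:
    """Return the class name for an index, or 'UNKNOWN' if unmapped."""
    return index_to_class.get(index, "UNKNOWN")

print(label_for(45))  # Tetrapack
```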
### Data Fields
- Phrase
- Class
- Class_index
### Data Splits
- train: 12.5K rows
- test: 5.38K rows
- additional data: 7.24K rows (`unseen_phrases.csv`)
## Dataset Creation
Created manually from a set of objects and phrase templates.
### Annotations
#### Annotation process
Each object was first annotated with a class; each phrase was then labelled according to the annotation of the object it mentions.
#### Who are the annotators?
Myself
### Personal and Sensitive Information
None
## Considerations for Using the Data
### Social Impact of Dataset
None
### Discussion of Biases
Some classes are more present than others, but the dataset is balanced overall. Because it was created from templates, models trained on it might not be very robust.
### Other Known Limitations
Phrase patterns repeat across examples, so model performance should be verified on external phrases to confirm robustness.
|
tj-solergibert/SRV-Europarl-ST-processed-mt-en | ---
dataset_info:
features:
- name: source_text
dtype: string
- name: dest_text
dtype: string
- name: dest_lang
dtype: string
splits:
- name: train
num_bytes: 159929144.55095986
num_examples: 602605
- name: valid
num_bytes: 21162053.230128862
num_examples: 81968
- name: test
num_bytes: 22144424.302616265
num_examples: 86170
download_size: 138665727
dataset_size: 203235622.08370498
---
# Dataset Card for "SRV-Europarl-ST-processed-mt-en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Ericizepic/address_std_1 | ---
dataset_info:
features:
- name: non-std-addres
dtype: string
- name: std-address
dtype: string
splits:
- name: train
num_bytes: 140324602.81238925
num_examples: 1568144
- name: test
num_bytes: 35081240.18761074
num_examples: 392037
download_size: 133625813
dataset_size: 175405843.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
mask-distilled-onesec-cv12-each-chunk-uniq/chunk_122 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1528119384.0
num_examples: 300102
download_size: 1560700928
dataset_size: 1528119384.0
---
# Dataset Card for "chunk_122"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ssahir/common_voice_13_0_dv_preprocessed | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
license:
- cc0-1.0
multilinguality:
- multilingual
size_categories:
ab:
- 10K<n<100K
ar:
- 100K<n<1M
as:
- 1K<n<10K
ast:
- 1K<n<10K
az:
- n<1K
ba:
- 100K<n<1M
bas:
- 1K<n<10K
be:
- 1M<n<10M
bg:
- 10K<n<100K
bn:
- 1M<n<10M
br:
- 10K<n<100K
ca:
- 1M<n<10M
ckb:
- 100K<n<1M
cnh:
- 1K<n<10K
cs:
- 100K<n<1M
cv:
- 10K<n<100K
cy:
- 100K<n<1M
da:
- 10K<n<100K
de:
- 100K<n<1M
dv:
- 10K<n<100K
dyu:
- n<1K
el:
- 10K<n<100K
en:
- 1M<n<10M
eo:
- 1M<n<10M
es:
- 1M<n<10M
et:
- 10K<n<100K
eu:
- 100K<n<1M
fa:
- 100K<n<1M
fi:
- 10K<n<100K
fr:
- 100K<n<1M
fy-NL:
- 100K<n<1M
ga-IE:
- 10K<n<100K
gl:
- 10K<n<100K
gn:
- 1K<n<10K
ha:
- 10K<n<100K
hi:
- 10K<n<100K
hsb:
- 1K<n<10K
hu:
- 10K<n<100K
hy-AM:
- 1K<n<10K
ia:
- 10K<n<100K
id:
- 10K<n<100K
ig:
- 1K<n<10K
is:
- n<1K
it:
- 100K<n<1M
ja:
- 100K<n<1M
ka:
- 10K<n<100K
kab:
- 100K<n<1M
kk:
- 1K<n<10K
kmr:
- 10K<n<100K
ko:
- 1K<n<10K
ky:
- 10K<n<100K
lg:
- 100K<n<1M
lo:
- n<1K
lt:
- 10K<n<100K
lv:
- 10K<n<100K
mdf:
- n<1K
mhr:
- 100K<n<1M
mk:
- n<1K
ml:
- 1K<n<10K
mn:
- 10K<n<100K
mr:
- 10K<n<100K
mrj:
- 10K<n<100K
mt:
- 10K<n<100K
myv:
- 1K<n<10K
nan-tw:
- 10K<n<100K
ne-NP:
- n<1K
nl:
- 10K<n<100K
nn-NO:
- n<1K
oc:
- 1K<n<10K
or:
- 1K<n<10K
pa-IN:
- 1K<n<10K
pl:
- 100K<n<1M
pt:
- 100K<n<1M
quy:
- n<1K
rm-sursilv:
- 1K<n<10K
rm-vallader:
- 1K<n<10K
ro:
- 10K<n<100K
ru:
- 100K<n<1M
rw:
- 1M<n<10M
sah:
- 1K<n<10K
sat:
- n<1K
sc:
- 1K<n<10K
sk:
- 10K<n<100K
skr:
- 1K<n<10K
sl:
- 10K<n<100K
sr:
- 1K<n<10K
sv-SE:
- 10K<n<100K
sw:
- 100K<n<1M
ta:
- 100K<n<1M
th:
- 100K<n<1M
ti:
- n<1K
tig:
- n<1K
tk:
- 1K<n<10K
tok:
- 10K<n<100K
tr:
- 10K<n<100K
tt:
- 10K<n<100K
tw:
- n<1K
ug:
- 10K<n<100K
uk:
- 10K<n<100K
ur:
- 100K<n<1M
uz:
- 100K<n<1M
vi:
- 10K<n<100K
vot:
- n<1K
yo:
- 1K<n<10K
yue:
- 10K<n<100K
zh-CN:
- 100K<n<1M
zh-HK:
- 100K<n<1M
zh-TW:
- 100K<n<1M
source_datasets:
- extended|common_voice
task_categories:
- automatic-speech-recognition
paperswithcode_id: common-voice
pretty_name: Common Voice Corpus 13.0
language_bcp47:
- ab
- ar
- as
- ast
- az
- ba
- bas
- be
- bg
- bn
- br
- ca
- ckb
- cnh
- cs
- cv
- cy
- da
- de
- dv
- dyu
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy-NL
- ga-IE
- gl
- gn
- ha
- hi
- hsb
- hu
- hy-AM
- ia
- id
- ig
- is
- it
- ja
- ka
- kab
- kk
- kmr
- ko
- ky
- lg
- lo
- lt
- lv
- mdf
- mhr
- mk
- ml
- mn
- mr
- mrj
- mt
- myv
- nan-tw
- ne-NP
- nl
- nn-NO
- oc
- or
- pa-IN
- pl
- pt
- quy
- rm-sursilv
- rm-vallader
- ro
- ru
- rw
- sah
- sat
- sc
- sk
- skr
- sl
- sr
- sv-SE
- sw
- ta
- th
- ti
- tig
- tk
- tok
- tr
- tt
- tw
- ug
- uk
- ur
- uz
- vi
- vot
- yo
- yue
- zh-CN
- zh-HK
- zh-TW
extra_gated_prompt: By clicking on “Access repository” below, you also agree to not
attempt to determine the identity of speakers in the Common Voice dataset.
---
# Dataset Card for Common Voice Corpus 13.0
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://commonvoice.mozilla.org/en/datasets
- **Repository:** https://github.com/common-voice/common-voice
- **Paper:** https://arxiv.org/abs/1912.06670
- **Leaderboard:** https://paperswithcode.com/dataset/common-voice
- **Point of Contact:** [Vaibhav Srivastav](mailto:vaibhav@huggingface.co)
### Dataset Summary
The Common Voice dataset consists of a unique MP3 and corresponding text file.
Many of the 27141 recorded hours in the dataset also include demographic metadata like age, sex, and accent
that can help improve the accuracy of speech recognition engines.
The dataset currently consists of 17689 validated hours in 108 languages, but more voices and languages are always added.
Take a look at the [Languages](https://commonvoice.mozilla.org/en/languages) page to request a language or start contributing.
### Supported Tasks and Leaderboards
The results for models trained on the Common Voice datasets are available via the
[🤗 Autoevaluate Leaderboard](https://huggingface.co/spaces/autoevaluate/leaderboards?dataset=mozilla-foundation%2Fcommon_voice_11_0&only_verified=0&task=automatic-speech-recognition&config=ar&split=test&metric=wer)
### Languages
```
Abkhaz, Arabic, Armenian, Assamese, Asturian, Azerbaijani, Basaa, Bashkir, Basque, Belarusian, Bengali, Breton, Bulgarian, Cantonese, Catalan, Central Kurdish, Chinese (China), Chinese (Hong Kong), Chinese (Taiwan), Chuvash, Czech, Danish, Dhivehi, Dioula, Dutch, English, Erzya, Esperanto, Estonian, Finnish, French, Frisian, Galician, Georgian, German, Greek, Guarani, Hakha Chin, Hausa, Hill Mari, Hindi, Hungarian, Icelandic, Igbo, Indonesian, Interlingua, Irish, Italian, Japanese, Kabyle, Kazakh, Kinyarwanda, Korean, Kurmanji Kurdish, Kyrgyz, Lao, Latvian, Lithuanian, Luganda, Macedonian, Malayalam, Maltese, Marathi, Meadow Mari, Moksha, Mongolian, Nepali, Norwegian Nynorsk, Occitan, Odia, Persian, Polish, Portuguese, Punjabi, Quechua Chanka, Romanian, Romansh Sursilvan, Romansh Vallader, Russian, Sakha, Santali (Ol Chiki), Saraiki, Sardinian, Serbian, Slovak, Slovenian, Sorbian, Upper, Spanish, Swahili, Swedish, Taiwanese (Minnan), Tamil, Tatar, Thai, Tigre, Tigrinya, Toki Pona, Turkish, Turkmen, Twi, Ukrainian, Urdu, Uyghur, Uzbek, Vietnamese, Votic, Welsh, Yoruba
```
## How to use
The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.
For example, to download the Hindi config, simply specify the corresponding language config name (i.e., "hi" for Hindi):
```python
from datasets import load_dataset
cv_13 = load_dataset("mozilla-foundation/common_voice_13_0", "hi", split="train")
```
Using the datasets library, you can also stream the dataset on-the-fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
```python
from datasets import load_dataset
cv_13 = load_dataset("mozilla-foundation/common_voice_13_0", "hi", split="train", streaming=True)
print(next(iter(cv_13)))
```
*Bonus*: create a [PyTorch dataloader](https://huggingface.co/docs/datasets/use_with_pytorch) directly with your own datasets (local/streamed).
### Local
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler
cv_13 = load_dataset("mozilla-foundation/common_voice_13_0", "hi", split="train")
batch_sampler = BatchSampler(RandomSampler(cv_13), batch_size=32, drop_last=False)
dataloader = DataLoader(cv_13, batch_sampler=batch_sampler)
```
### Streaming
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
cv_13 = load_dataset("mozilla-foundation/common_voice_13_0", "hi", split="train", streaming=True)
dataloader = DataLoader(cv_13, batch_size=32)
```
To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).
### Example scripts
Train your own CTC or Seq2Seq Automatic Speech Recognition models on Common Voice 13 with `transformers` - [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition).
## Dataset Structure
### Data Instances
A typical data point comprises the `path` to the audio file and its `sentence`.
Additional fields include `accent`, `age`, `client_id`, `up_votes`, `down_votes`, `gender`, `locale` and `segment`.
```python
{
'client_id': 'd59478fbc1ee646a28a3c652a119379939123784d99131b865a89f8b21c81f69276c48bd574b81267d9d1a77b83b43e6d475a6cfc79c232ddbca946ae9c7afc5',
'path': 'et/clips/common_voice_et_18318995.mp3',
'audio': {
'path': 'et/clips/common_voice_et_18318995.mp3',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 48000
},
'sentence': 'Tasub kokku saada inimestega, keda tunned juba ammust ajast saati.',
'up_votes': 2,
'down_votes': 0,
'age': 'twenties',
'gender': 'male',
'accent': '',
'locale': 'et',
'segment': ''
}
```
### Data Fields
`client_id` (`string`): An id for which client (voice) made the recording
`path` (`string`): The path to the audio file
`audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
`sentence` (`string`): The sentence the user was prompted to speak
`up_votes` (`int64`): How many upvotes the audio file has received from reviewers
`down_votes` (`int64`): How many downvotes the audio file has received from reviewers
`age` (`string`): The age of the speaker (e.g. `teens`, `twenties`, `fifties`)
`gender` (`string`): The gender of the speaker
`accent` (`string`): Accent of the speaker
`locale` (`string`): The locale of the speaker
`segment` (`string`): Usually an empty field
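The decode-on-access behavior described for the `audio` field means access order matters. The mock below is purely illustrative (it is not the real `datasets` implementation) and only sketches why `dataset[0]["audio"]` is cheaper than `dataset["audio"][0]`:

```python
class LazyAudioColumn:
    """Mock of a decode-on-access audio column (illustrative only)."""

    def __init__(self, paths):
        self.paths = paths
        self.decode_count = 0

    def decode(self, i):
        self.decode_count += 1  # stands in for expensive MP3 decoding
        return {"path": self.paths[i], "array": [0.0], "sampling_rate": 48000}

    def row(self, i):
        # like ds[i]["audio"]: decodes a single file
        return self.decode(i)

    def column(self):
        # like ds["audio"]: decodes every file in the dataset
        return [self.decode(i) for i in range(len(self.paths))]


col = LazyAudioColumn([f"clips/{i}.mp3" for i in range(1000)])
first = col.row(0)          # 1 decode
# first = col.column()[0]   # would trigger 1000 decodes for the same result
```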
### Data Splits
The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.
The validated data is data that has been validated with reviewers and received upvotes that the data is of high quality.
The invalidated data is data that has been invalidated by reviewers
and received downvotes indicating that the data is of low quality.
The reported data is data that has been reported, for different reasons.
The other data is data that has not yet been reviewed.
The dev, test, train are all data that has been reviewed, deemed of high quality and split into dev, test and train.
## Data Preprocessing Recommended by Hugging Face
The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them into practice.
Many examples in this dataset have trailing quotation marks, e.g _“the cat sat on the mat.“_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not a quotation from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.
In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, **almost all** sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.
```python
from datasets import load_dataset
ds = load_dataset("mozilla-foundation/common_voice_13_0", "en", use_auth_token=True)
def prepare_dataset(batch):
"""Function to preprocess the dataset with the .map method"""
transcription = batch["sentence"]
if transcription.startswith('"') and transcription.endswith('"'):
# we can remove trailing quotation marks as they do not affect the transcription
transcription = transcription[1:-1]
if transcription[-1] not in [".", "?", "!"]:
# append a full-stop to sentences that do not end in punctuation
transcription = transcription + "."
batch["sentence"] = transcription
return batch
ds = ds.map(prepare_dataset, desc="preprocess dataset")
```
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)
### Citation Information
```
@inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
}
``` |
nalmeida/agile_dataset_fusionado | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2665390
num_examples: 968
download_size: 687019
dataset_size: 2665390
---
# Dataset Card for "agile_dataset_fusionado"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
man4j/aisha_v3_alignment | ---
dataset_info:
features:
- name: instruct
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: topic
dtype: string
splits:
- name: train
num_bytes: 1309
num_examples: 1
download_size: 10509
dataset_size: 1309
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kakooch/persian-poetry-qa | ---
name: Persian Poetry QA Dataset
description: |
This dataset is structured in a question-answering format derived from a rich collection of Persian poems along with metadata about the poets and the verses.
It is designed to be utilized for various Natural Language Processing and analysis tasks related to Persian poetry, such as Question Answering, Text Generation, Language Modeling, and Style Analysis.
license: gpl-2.0
url: https://github.com/ganjoor/desktop/releases/tag/v2.81
citation: |
Persian Poetry QA Dataset. Collected by Kakooch from the Ganjoor Project.
Available at: https://huggingface.co/datasets/persian_poetry
size: "Custom"
language:
- fa
splits:
train:
description: "This split contains Persian poems structured for QA, where each row asks for a sample poem from a specific poet with the poem or verse as the answer."
validation:
    description: "This split contains a random selection of 1% of the Persian poems in the original dataset."
features:
context:
description: "A static string which is 'Persian Poetry or شعر فارسی'."
type: "string"
question:
description: "A string that asks for a sample poem from a specific poet in the format 'یک نمونه از شعر [POET_NAME]'."
type: "string"
answer:
description: "Text of a hemistich or verse."
type: "string"
answer_start:
description: "The starting character index of `answer` within `context` (Note: this is always -1 in the current dataset as `answer` is not a substring of `context`)."
type: "int32"
configs:
- config_name: default
data_files:
- split: train
path: poems-qa.csv
---
# Persian Poetry Dataset
## Dataset Description
### Overview
This dataset contains a collection of Persian poems structured in a question-answering format. The dataset is derived from various Persian poets and their poems, providing a rich source for exploring Persian poetry in a structured manner suitable for machine learning applications, especially in natural language processing tasks like question answering.
### Data Collection
- **Data Collection Source:** The data is sourced from the [Ganjoor project](https://github.com/ganjoor/). The specific database file can be found in the [releases section](https://github.com/ganjoor/desktop/releases/tag/v2.81) of their GitHub repository.
- **Time Period:** Oct-12-2023
- **Collection Methods:** The data was collected by downloading the raw database file from the Ganjoor project's GitHub repository.
### Data Structure
The dataset is structured into a CSV file with the following columns:
- `context`: A static string which is "Persian Poetry or شعر فارسی".
- `question`: A string that asks for a sample poem from a specific poet in the format "یک نمونه از شعر [POET_NAME]".
- `answer`: Text of a hemistich or verse. Multiple verses within an answer are tab-separated.
- `answer_start`: The starting character index of `answer` within `context` (Note: this is always -1 in the current dataset as `answer` is not a substring of `context`).
### Data Example
```plaintext
context,question,answer,answer_start
Persian Poetry,یک نمونه از شعر صائب تبریزی,خار نتواند گرفتن دامن ریگ روان رهنورد شوق، افسردن نمی داند که چیست,-1
```
## Dataset Usage
### Use Cases
This dataset can be utilized for various Natural Language Processing and analysis tasks related to Persian poetry, such as:
- Question Answering
- Text Generation
- Language Modeling
- Style Analysis
### Challenges & Limitations
- The `answer_start` field is always -1 as the `answer` is not a substring of `context`. Depending on your use-case, you might need to adjust how `context` and `answer_start` are determined.
- The dataset does not contain long verses that are over 100 characters.
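One hedged way to work around the `answer_start` limitation is to rebuild each row so that the poem itself serves as the context. This is a minimal sketch, not part of the dataset's tooling; the field names match the columns described above, and the sample row is illustrative:

```python
def rebuild_row(row):
    """Make `answer` a substring of `context` so `answer_start` is usable.

    The original rows use a static context, so `answer_start` is always -1.
    Here we simply use the poem text itself as the context.
    """
    new_context = row["answer"]
    return {
        "context": new_context,
        "question": row["question"],
        "answer": row["answer"],
        "answer_start": new_context.find(row["answer"]),  # now always 0
    }


row = {
    "context": "Persian Poetry",
    "question": "یک نمونه از شعر صائب تبریزی",
    "answer": "خار نتواند گرفتن دامن ریگ روان",
    "answer_start": -1,
}
fixed = rebuild_row(row)
```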
### License
GPL-2 (GNU General Public License), inherited from the original Ganjoor project.
## Additional Information
### Citation
```
Persian Poetry Dataset. Collected by Kakooch from the Ganjoor Project. Available at: https://huggingface.co/datasets/persian_poetry
```
### Dataset Link
[Download the dataset from Hugging Face](https://huggingface.co/datasets/persian_poetry)
### Contact
Email: [kakooch@gmail.com](mailto:kakooch@gmail.com) | GitHub: [kakooch](https://github.com/kakooch)
---
*This README was generated by Kakooch.*
|
Asap7772/ultrachat_samples | ---
dataset_info:
features:
- name: name
dtype: string
- name: prompt
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 987585880
num_examples: 623520
download_size: 651315903
dataset_size: 987585880
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Jeryr/Yisus | ---
license: apache-2.0
---
|
7essen/sketchData | ---
language:
- en
--- |
chenxxiao/beauty | ---
license: apache-2.0
---
|
michaelnath/functions_annotated_with_intents | ---
dataset_info:
features:
- name: function
dtype: string
- name: intent_category
dtype: string
splits:
- name: train
num_bytes: 1123421
num_examples: 2768
download_size: 419825
dataset_size: 1123421
---
# Dataset Card for "functions_annotated_with_intents"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
MLP-Lemma/SFT-cnn | ---
dataset_info:
features:
- name: context
dtype: string
- name: summary
dtype: string
- name: sentences
sequence: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 2472479459
num_examples: 287113
- name: validation
num_bytes: 112391385
num_examples: 13368
- name: test
num_bytes: 97414019
num_examples: 11490
download_size: 1618666774
dataset_size: 2682284863
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
monist/chinese_poetry | ---
license: mit
---
|
RamaSchneider/wpc | ---
task_categories:
- text-generation
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
piamo/auto-retrain-input-dataset | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': ADONIS
'1': AFRICAN GIANT SWALLOWTAIL
'2': AMERICAN SNOOT
splits:
- name: train
num_bytes: 8825732.0
num_examples: 338
download_size: 8823395
dataset_size: 8825732.0
---
# Dataset Card for "input-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CVasNLPExperiments/VQAv2_minival_validation_google_flan_t5_xxl_mode_A_C_D_PNP_GENERIC_Q_rices_ns_25994 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: question
dtype: string
- name: true_label
sequence: string
- name: prediction
dtype: string
splits:
- name: fewshot_0_clip_tags_LAION_ViT_H_14_2B_with_openai_Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_DETA_detections_deta_swin_large_o365_coco_classes_caption_all_patches_Salesforce_blip_image_captioning_large_clean_
num_bytes: 3720348
num_examples: 25994
download_size: 1342337
dataset_size: 3720348
---
# Dataset Card for "VQAv2_minival_validation_google_flan_t5_xxl_mode_A_C_D_PNP_GENERIC_Q_rices_ns_25994"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Multimodal-Fatima/VQAv2_minival_no_image | ---
dataset_info:
features:
- name: question_type
dtype: string
- name: multiple_choice_answer
dtype: string
- name: answers
sequence: string
- name: answers_original
list:
- name: answer
dtype: string
- name: answer_confidence
dtype: string
- name: answer_id
dtype: int64
- name: id_image
dtype: int64
- name: answer_type
dtype: string
- name: question_id
dtype: int64
- name: question
dtype: string
- name: clip_tags_ViT_L_14
sequence: string
- name: blip_caption
dtype: string
- name: LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14
sequence: string
- name: DETA_detections_deta_swin_large_o365_coco_classes
list:
- name: attribute
dtype: string
- name: box
sequence: float32
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float32
- name: size
dtype: string
- name: tag
dtype: string
- name: DETA_detections_deta_swin_large_o365_clip_ViT_L_14
list:
- name: attribute
dtype: string
- name: box
sequence: float64
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float64
- name: size
dtype: string
- name: tag
dtype: string
- name: DETA_detections_deta_swin_large_o365_clip_ViT_L_14_blip_caption
list:
- name: attribute
dtype: string
- name: box
sequence: float64
- name: caption
dtype: string
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float64
- name: size
dtype: string
- name: tag
dtype: string
- name: id
dtype: int64
- name: DETA_detections_deta_swin_large_o365_clip_ViT_L_14_blip_caption_caption_module
list:
- name: attribute
dtype: string
- name: box
sequence: float64
- name: caption
dtype: string
- name: captions_module
sequence: string
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float64
- name: size
dtype: string
- name: tag
dtype: string
- name: DETA_detections_deta_swin_large_o365_clip_ViT_L_14_blip_caption_caption_module_without_filtering
list:
- name: attribute
dtype: string
- name: box
sequence: float64
- name: caption
dtype: string
- name: captions_module
sequence: string
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float64
- name: size
dtype: string
- name: tag
dtype: string
- name: DETA_detections_deta_swin_large_o365_clip_ViT_L_14_blip_caption_caption_module_random
list:
- name: attribute
dtype: string
- name: box
sequence: float64
- name: caption
dtype: string
- name: captions_module
sequence: string
- name: captions_module_filter
sequence: string
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float64
- name: size
dtype: string
- name: tag
dtype: string
- name: clip_tags_LAION_ViT_H_14_2B
sequence: string
- name: LLM_Description_gpt3_downstream_tasks_visual_genome_LAION-ViT-H-14-2B
sequence: string
- name: Attributes_ViT_L_14_descriptors_text_davinci_003_full
sequence: string
- name: clip_tags_ViT_L_14_wo_openai
sequence: string
- name: clip_tags_ViT_L_14_with_openai
sequence: string
- name: clip_tags_LAION_ViT_H_14_2B_wo_openai
sequence: string
- name: clip_tags_LAION_ViT_H_14_2B_with_openai
sequence: string
- name: clip_tags_LAION_ViT_bigG_14_2B_wo_openai
sequence: string
- name: clip_tags_LAION_ViT_bigG_14_2B_with_openai
sequence: string
- name: Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full
sequence: string
- name: Attributes_LAION_ViT_bigG_14_2B_descriptors_text_davinci_003_full
sequence: string
- name: clip_tags_ViT_B_16_with_openai
sequence: string
splits:
- name: validation
num_bytes: 1766679196
num_examples: 25994
download_size: 340842185
dataset_size: 1766679196
---
# Dataset Card for "VQAv2_minival_no_image"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
TryOnVirtual/VITON-HD-IMAGE | ---
license: cc-by-nc-sa-4.0
---
|
AdapterOcean/data-standardized_cluster_3 | ---
dataset_info:
features:
- name: text
dtype: string
- name: conversation_id
dtype: int64
- name: embedding
sequence: float64
- name: cluster
dtype: int64
splits:
- name: train
num_bytes: 41994569
num_examples: 3975
download_size: 12107572
dataset_size: 41994569
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "data-standardized_cluster_3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ajanco/anc_object_detect | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int64
- name: height
dtype: int64
- name: objects
struct:
- name: area
sequence: float64
- name: bbox
sequence:
sequence: float64
- name: category
sequence: int64
- name: id
sequence: int64
splits:
- name: train
num_bytes: 157851896.0
num_examples: 132
download_size: 151292559
dataset_size: 157851896.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "anc_object_detect"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
frimelle/wiki-stance | ---
license: cc-by-sa-3.0
---
# wiki-stance dataset
The wiki-stance dataset is provided in three languages: English (en), German (de), and Turkish (tr), as well as a multilingual version (ml), which mixes the three languages
and aligns the policies across languages.
For more details see the EMNLP 2023 paper "Why Should This Article Be Deleted? Transparent Stance Detection in Multilingual Wikipedia Editor Discussions". |
P1ayer-1/annas-zlib3-index | ---
dataset_info:
features:
- name: aacid
dtype: string
- name: metadata
struct:
- name: zlibrary_id
dtype: int64
- name: date_added
dtype: string
- name: date_modified
dtype: string
- name: extension
dtype: string
- name: filesize_reported
dtype: int64
- name: md5_reported
dtype: string
- name: title
dtype: string
- name: author
dtype: string
- name: publisher
dtype: string
- name: language
dtype: string
- name: series
dtype: string
- name: volume
dtype: string
- name: edition
dtype: string
- name: year
dtype: string
- name: pages
dtype: string
- name: description
dtype: string
- name: cover_path
dtype: string
- name: isbns
sequence: string
- name: category_id
dtype: string
splits:
- name: train
num_bytes: 2163495791
num_examples: 2630955
download_size: 1175094406
dataset_size: 2163495791
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "annas-zlib3-index"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kaleemWaheed/twitter_dataset_1713169171 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 21851
num_examples: 51
download_size: 12814
dataset_size: 21851
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
pranamya-nayak/barcode-only-dataset | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 3303440.0
num_examples: 26
download_size: 3304774
dataset_size: 3303440.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
juliozhao/dataengine_minigpt4 | ---
license: apache-2.0
---
|
DragonLine/ksponspeech | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcripts
dtype: string
splits:
- name: train
num_bytes: 53133867240.215996
num_examples: 299636
- name: test
num_bytes: 6736967357.531417
num_examples: 37455
- name: valid
num_bytes: 6484620568.886582
num_examples: 37454
download_size: 62734833313
dataset_size: 66355455166.633995
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid
path: data/valid-*
---
|
micsell/hebrew_kan_sentence50000 | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: id
dtype: string
- name: language
dtype: string
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 1893933781.0
num_examples: 10000
download_size: 1893130719
dataset_size: 1893933781.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mariammaher550/detoxify-dataset | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: output
dtype: string
- name: instruction
dtype: string
- name: input
dtype: string
splits:
- name: train
num_bytes: 15784576
num_examples: 113758
download_size: 0
dataset_size: 15784576
---
# Dataset Card for "detoxify-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
MuthuAI9/SecurityEval_Transformed | ---
license: mit
---
|
lmattingly/simpsons_canny | ---
dataset_info:
features:
- name: original_image
dtype: image
- name: condtioning_image
dtype: image
- name: caption
dtype: string
splits:
- name: train
num_bytes: 92880745.0
num_examples: 786
download_size: 92730591
dataset_size: 92880745.0
---
# Dataset Card for "simpsons_canny"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AtreidePrime/Python_Code_Generation | ---
license: mit
---
|
Vinibarcley/Anahii | ---
license: openrail
---
|
ranWang/questions_with_answers | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: is_full
dtype: bool
- name: is_error
dtype: bool
- name: file_path
dtype: string
splits:
- name: train
num_bytes: 25021526
num_examples: 21250
download_size: 9838879
dataset_size: 25021526
---
# features
- question: the question text
- answer: the answer text
- is_full: whether every question in this file could be extracted
- is_error: whether extraction of this question failed (when `is_error` is true, both the question and answer are empty; for now the field only marks the error, and the cause is traced later from the surrounding context)
- file_path: path to the source file
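Given these flags, a natural first step is to drop rows where extraction failed. This stdlib-only sketch operates on rows shaped like the features above; the sample rows are illustrative, not taken from the dataset:

```python
# Toy rows shaped like the dataset's features; contents are illustrative.
rows = [
    {"question": "1+1=?", "answer": "2", "is_full": True, "is_error": False, "file_path": "a.md"},
    {"question": "", "answer": "", "is_full": True, "is_error": True, "file_path": "a.md"},
    {"question": "2+2=?", "answer": "4", "is_full": False, "is_error": False, "file_path": "b.md"},
]

# Keep only rows whose extraction succeeded; optionally also require that
# every question in the source file was extracted (is_full).
clean = [r for r in rows if not r["is_error"]]
fully_extracted = [r for r in clean if r["is_full"]]
```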
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
EgilKarlsen/Application_110K | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: log
dtype: string
splits:
- name: train
num_bytes: 31417397
num_examples: 100000
- name: validation
num_bytes: 3119424
num_examples: 10000
download_size: 6859931
dataset_size: 34536821
---
# Dataset Card for "Application_110K"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Abzu/wizard | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 85659801.65210004
num_examples: 49263
- name: test
num_bytes: 9518335.347899958
num_examples: 5474
download_size: 50310834
dataset_size: 95178137
license: cc-by-sa-3.0
task_categories:
- text-generation
language:
- en
---
# Dataset Card for "wizard"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
novus677/nlp-xsum-test-large | ---
dataset_info:
features:
- name: summary
dtype: string
- name: prompt
dtype: string
splits:
- name: test
num_bytes: 26948819
num_examples: 11334
download_size: 16961128
dataset_size: 26948819
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
Andrijan/self_improving | ---
license: other
---
|
tasksource/prontoqa | ---
license: apache-2.0
task_categories:
- question-answering
- text-classification
language:
- en
---
https://github.com/asaparov/prontoqa/
```
@article{saparov2022language,
title={Language models are greedy reasoners: A systematic formal analysis of chain-of-thought},
author={Saparov, Abulhair and He, He},
journal={arXiv preprint arXiv:2210.01240},
year={2022}
}
``` |
qgallouedec/prj_gia_dataset_metaworld_push_wall_v2_1111 | ---
library_name: gia
tags:
- deep-reinforcement-learning
- reinforcement-learning
- gia
- multi-task
- multi-modal
- imitation-learning
- offline-reinforcement-learning
---
An imitation learning dataset for the push-wall-v2 environment, sampled from the push-wall-v2 policy.
This environment was created as part of the Generally Intelligent Agents
project gia: https://github.com/huggingface/gia
## Load dataset
First, clone it with
```sh
git clone https://huggingface.co/datasets/qgallouedec/prj_gia_dataset_metaworld_push_wall_v2_1111
```
Then, load it with
```python
import numpy as np
dataset = np.load("prj_gia_dataset_metaworld_push_wall_v2_1111/dataset.npy", allow_pickle=True).item()
print(dataset.keys()) # dict_keys(['observations', 'actions', 'dones', 'rewards'])
```
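The card does not document the episode layout, but if `dones` marks episode boundaries per timestep (an assumption, not confirmed by the card), the flat arrays can be split into episodes with a small stdlib-only sketch like this:

```python
def split_episodes(rewards, dones):
    """Group per-timestep rewards into episodes, cutting after each done=True.

    Assumes `rewards` and `dones` are aligned, one entry per timestep.
    """
    episodes, current = [], []
    for reward, done in zip(rewards, dones):
        current.append(reward)
        if done:
            episodes.append(current)
            current = []
    if current:  # trailing, unfinished episode
        episodes.append(current)
    return episodes


# Illustrative data: two finished episodes and one unfinished tail.
eps = split_episodes([1.0, 0.5, 2.0, 0.0, 3.0], [False, True, False, True, False])
```

The same grouping applies unchanged to the `observations` and `actions` arrays, as long as they share the per-timestep alignment assumed here.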
|
lansinuote/cv.3.image_object_detection.detect_illustration | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
list:
- name: category_id
dtype:
class_label:
names:
'0': early_printed_illustration
- name: image_id
dtype: string
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: segmentation
list:
list: float32
- name: iscrowd
dtype: bool
splits:
- name: train
num_bytes: 894127063.61973
num_examples: 6800
- name: test
num_bytes: 25952722.812344998
num_examples: 200
download_size: 0
dataset_size: 920079786.432075
---
# Dataset Card for "cv.3.image_object_detection.detect_illustration"
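The `objects` schema above follows a COCO-style layout; assuming `bbox` uses the COCO `[x, y, width, height]` convention (an assumption, not stated in the card), the stored `area` can be cross-checked against the box:

```python
def bbox_area(bbox):
    # COCO convention assumed: [x, y, width, height].
    x, y, w, h = bbox
    return w * h

# Hypothetical annotation, not taken from the dataset:
obj = {"bbox": [10.0, 20.0, 30.0, 40.0], "area": 1200}
assert bbox_area(obj["bbox"]) == obj["area"]
```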
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Mitsuki-Sakamoto/alpaca_farm-deberta-re-pref-64-_fil_self_160m_bo2_100_kl_0.1_prm_160m_thr_1.0_seed_2 | ---
dataset_info:
config_name: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: preference
dtype: int64
- name: output_1
dtype: string
- name: output_2
dtype: string
- name: reward_model_prompt_format
dtype: string
- name: gen_prompt_format
dtype: string
- name: gen_kwargs
struct:
- name: do_sample
dtype: bool
- name: max_new_tokens
dtype: int64
- name: pad_token_id
dtype: int64
- name: top_k
dtype: int64
- name: top_p
dtype: float64
- name: reward_1
dtype: float64
- name: reward_2
dtype: float64
- name: n_samples
dtype: int64
- name: reject_select
dtype: string
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: index
dtype: int64
- name: filtered_epoch
dtype: int64
- name: gen_reward
dtype: float64
- name: gen_response
dtype: string
splits:
- name: epoch_0
num_bytes: 43551536
num_examples: 18929
- name: epoch_1
num_bytes: 44128474
num_examples: 18929
- name: epoch_2
num_bytes: 44191785
num_examples: 18929
- name: epoch_3
num_bytes: 44237515
num_examples: 18929
- name: epoch_4
num_bytes: 44265700
num_examples: 18929
- name: epoch_5
num_bytes: 44284519
num_examples: 18929
- name: epoch_6
num_bytes: 44299908
num_examples: 18929
- name: epoch_7
num_bytes: 44311706
num_examples: 18929
- name: epoch_8
num_bytes: 44321409
num_examples: 18929
- name: epoch_9
num_bytes: 44322380
num_examples: 18929
- name: epoch_10
num_bytes: 44326369
num_examples: 18929
- name: epoch_11
num_bytes: 44324769
num_examples: 18929
- name: epoch_12
num_bytes: 44329932
num_examples: 18929
- name: epoch_13
num_bytes: 44328118
num_examples: 18929
- name: epoch_14
num_bytes: 44329056
num_examples: 18929
- name: epoch_15
num_bytes: 44331421
num_examples: 18929
- name: epoch_16
num_bytes: 44332346
num_examples: 18929
- name: epoch_17
num_bytes: 44334249
num_examples: 18929
- name: epoch_18
num_bytes: 44335029
num_examples: 18929
- name: epoch_19
num_bytes: 44333272
num_examples: 18929
- name: epoch_20
num_bytes: 44333461
num_examples: 18929
- name: epoch_21
num_bytes: 44336853
num_examples: 18929
- name: epoch_22
num_bytes: 44333147
num_examples: 18929
- name: epoch_23
num_bytes: 44334757
num_examples: 18929
- name: epoch_24
num_bytes: 44335929
num_examples: 18929
- name: epoch_25
num_bytes: 44332279
num_examples: 18929
- name: epoch_26
num_bytes: 44334818
num_examples: 18929
- name: epoch_27
num_bytes: 44337605
num_examples: 18929
- name: epoch_28
num_bytes: 44334320
num_examples: 18929
- name: epoch_29
num_bytes: 44337029
num_examples: 18929
download_size: 699532857
dataset_size: 1328569691
configs:
- config_name: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1
data_files:
- split: epoch_0
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_0-*
- split: epoch_1
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_1-*
- split: epoch_2
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_2-*
- split: epoch_3
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_3-*
- split: epoch_4
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_4-*
- split: epoch_5
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_5-*
- split: epoch_6
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_6-*
- split: epoch_7
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_7-*
- split: epoch_8
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_8-*
- split: epoch_9
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_9-*
- split: epoch_10
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_10-*
- split: epoch_11
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_11-*
- split: epoch_12
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_12-*
- split: epoch_13
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_13-*
- split: epoch_14
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_14-*
- split: epoch_15
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_15-*
- split: epoch_16
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_16-*
- split: epoch_17
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_17-*
- split: epoch_18
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_18-*
- split: epoch_19
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_19-*
- split: epoch_20
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_20-*
- split: epoch_21
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_21-*
- split: epoch_22
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_22-*
- split: epoch_23
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_23-*
- split: epoch_24
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_24-*
- split: epoch_25
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_25-*
- split: epoch_26
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_26-*
- split: epoch_27
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_27-*
- split: epoch_28
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_28-*
- split: epoch_29
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_29-*
---
|
Nadav/pixel_glue_qnli | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 1826489002.125
num_examples: 104743
- name: validation
num_bytes: 96827557.125
num_examples: 5463
download_size: 1902639822
dataset_size: 1923316559.25
---
# Dataset Card for "pixel_glue_qnli"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
JJ404/orca_instructions | ---
language:
- en
--- |
open-llm-leaderboard/details_YeungNLP__firefly-bloom-2b6-v2 | ---
pretty_name: Evaluation run of YeungNLP/firefly-bloom-2b6-v2
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [YeungNLP/firefly-bloom-2b6-v2](https://huggingface.co/YeungNLP/firefly-bloom-2b6-v2)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_YeungNLP__firefly-bloom-2b6-v2\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-13T11:51:41.999066](https://huggingface.co/datasets/open-llm-leaderboard/details_YeungNLP__firefly-bloom-2b6-v2/blob/main/results_2023-10-13T11-51-41.999066.json) (note\
\ that there might be results for other tasks in the repo if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.08630453020134228,\n\
\ \"em_stderr\": 0.002875790094905939,\n \"f1\": 0.1275723573825503,\n\
\ \"f1_stderr\": 0.00310355978869451,\n \"acc\": 0.2825940222825524,\n\
\ \"acc_stderr\": 0.008796871542302145\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.08630453020134228,\n \"em_stderr\": 0.002875790094905939,\n\
\ \"f1\": 0.1275723573825503,\n \"f1_stderr\": 0.00310355978869451\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.017437452615617893,\n \
\ \"acc_stderr\": 0.003605486867998265\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.5477505919494869,\n \"acc_stderr\": 0.013988256216606024\n\
\ }\n}\n```"
repo_url: https://huggingface.co/YeungNLP/firefly-bloom-2b6-v2
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_10_13T11_51_41.999066
path:
- '**/details_harness|drop|3_2023-10-13T11-51-41.999066.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-13T11-51-41.999066.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_13T11_51_41.999066
path:
- '**/details_harness|gsm8k|5_2023-10-13T11-51-41.999066.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-13T11-51-41.999066.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_13T11_51_41.999066
path:
- '**/details_harness|winogrande|5_2023-10-13T11-51-41.999066.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-13T11-51-41.999066.parquet'
- config_name: results
data_files:
- split: 2023_10_13T11_51_41.999066
path:
- results_2023-10-13T11-51-41.999066.parquet
- split: latest
path:
- results_2023-10-13T11-51-41.999066.parquet
---
# Dataset Card for Evaluation run of YeungNLP/firefly-bloom-2b6-v2
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/YeungNLP/firefly-bloom-2b6-v2
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [YeungNLP/firefly-bloom-2b6-v2](https://huggingface.co/YeungNLP/firefly-bloom-2b6-v2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_YeungNLP__firefly-bloom-2b6-v2",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-13T11:51:41.999066](https://huggingface.co/datasets/open-llm-leaderboard/details_YeungNLP__firefly-bloom-2b6-v2/blob/main/results_2023-10-13T11-51-41.999066.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.08630453020134228,
"em_stderr": 0.002875790094905939,
"f1": 0.1275723573825503,
"f1_stderr": 0.00310355978869451,
"acc": 0.2825940222825524,
"acc_stderr": 0.008796871542302145
},
"harness|drop|3": {
"em": 0.08630453020134228,
"em_stderr": 0.002875790094905939,
"f1": 0.1275723573825503,
"f1_stderr": 0.00310355978869451
},
"harness|gsm8k|5": {
"acc": 0.017437452615617893,
"acc_stderr": 0.003605486867998265
},
"harness|winogrande|5": {
"acc": 0.5477505919494869,
"acc_stderr": 0.013988256216606024
}
}
```
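The aggregate `acc` under `"all"` appears to be the simple mean of the per-task accuracies (the drop task reports `em`/`f1` rather than `acc`, so it doesn't enter the average). A quick sanity check using the values above:

```python
# Per-task accuracies copied from the results above.
task_acc = {
    "harness|gsm8k|5": 0.017437452615617893,
    "harness|winogrande|5": 0.5477505919494869,
}

mean_acc = sum(task_acc.values()) / len(task_acc)
print(mean_acc)  # matches the reported "all" acc of 0.2825940222825524
```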
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
mwkldeveloper/mingliu_all_512 | ---
dataset_info:
features:
- name: char
dtype: string
- name: unicode
dtype: string
- name: images
dtype: image
splits:
- name: train
num_bytes: 2636794745.0
num_examples: 74952
download_size: 1744284895
dataset_size: 2636794745.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
tyzhu/squad_qa_wrong_num_v5_full_recite_ans_sent_first_permute_rerun | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: answer
dtype: string
- name: context_id
dtype: string
- name: correct_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 7888289.7738175
num_examples: 4778
- name: validation
num_bytes: 406689
num_examples: 300
download_size: 1587986
dataset_size: 8294978.7738175
---
# Dataset Card for "squad_qa_wrong_num_v5_full_recite_ans_sent_first_permute_rerun"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_edor__Stable-Platypus2-mini-7B | ---
pretty_name: Evaluation run of edor/Stable-Platypus2-mini-7B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [edor/Stable-Platypus2-mini-7B](https://huggingface.co/edor/Stable-Platypus2-mini-7B)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 61 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_edor__Stable-Platypus2-mini-7B\"\
,\n\t\"harness_truthfulqa_mc_0\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\
\nThese are the [latest results from run 2023-08-16T10:44:20.574252](https://huggingface.co/datasets/open-llm-leaderboard/details_edor__Stable-Platypus2-mini-7B/blob/main/results_2023-08-16T10%3A44%3A20.574252.json)\
\ (note that there might be results for other tasks in the repo if successive evals\
\ didn't cover the same tasks. You can find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.519238503099194,\n\
\ \"acc_stderr\": 0.03487887571401071,\n \"acc_norm\": 0.5229272130971759,\n\
\ \"acc_norm_stderr\": 0.03486396112216957,\n \"mc1\": 0.3561811505507956,\n\
\ \"mc1_stderr\": 0.01676379072844634,\n \"mc2\": 0.5106039601116779,\n\
\ \"mc2_stderr\": 0.015454187246822623\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.5238907849829352,\n \"acc_stderr\": 0.014594701798071654,\n\
\ \"acc_norm\": 0.5486348122866894,\n \"acc_norm_stderr\": 0.014542104569955267\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5965943039235212,\n\
\ \"acc_stderr\": 0.004895782107786497,\n \"acc_norm\": 0.7894841665006971,\n\
\ \"acc_norm_stderr\": 0.0040684184172756635\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939099,\n \
\ \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939099\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.4888888888888889,\n\
\ \"acc_stderr\": 0.04318275491977976,\n \"acc_norm\": 0.4888888888888889,\n\
\ \"acc_norm_stderr\": 0.04318275491977976\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.40789473684210525,\n \"acc_stderr\": 0.03999309712777471,\n\
\ \"acc_norm\": 0.40789473684210525,\n \"acc_norm_stderr\": 0.03999309712777471\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.5,\n\
\ \"acc_stderr\": 0.050251890762960605,\n \"acc_norm\": 0.5,\n \
\ \"acc_norm_stderr\": 0.050251890762960605\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.5924528301886792,\n \"acc_stderr\": 0.03024223380085449,\n\
\ \"acc_norm\": 0.5924528301886792,\n \"acc_norm_stderr\": 0.03024223380085449\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.5416666666666666,\n\
\ \"acc_stderr\": 0.04166666666666666,\n \"acc_norm\": 0.5416666666666666,\n\
\ \"acc_norm_stderr\": 0.04166666666666666\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.38,\n \"acc_stderr\": 0.04878317312145632,\n \
\ \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.04878317312145632\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.39,\n \"acc_stderr\": 0.04902071300001974,\n \"acc_norm\": 0.39,\n\
\ \"acc_norm_stderr\": 0.04902071300001974\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621504,\n \
\ \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621504\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.4682080924855491,\n\
\ \"acc_stderr\": 0.03804749744364764,\n \"acc_norm\": 0.4682080924855491,\n\
\ \"acc_norm_stderr\": 0.03804749744364764\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.27450980392156865,\n \"acc_stderr\": 0.04440521906179327,\n\
\ \"acc_norm\": 0.27450980392156865,\n \"acc_norm_stderr\": 0.04440521906179327\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.61,\n \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\": 0.61,\n\
\ \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.4765957446808511,\n \"acc_stderr\": 0.03265019475033582,\n\
\ \"acc_norm\": 0.4765957446808511,\n \"acc_norm_stderr\": 0.03265019475033582\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.2631578947368421,\n\
\ \"acc_stderr\": 0.041424397194893624,\n \"acc_norm\": 0.2631578947368421,\n\
\ \"acc_norm_stderr\": 0.041424397194893624\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.4896551724137931,\n \"acc_stderr\": 0.04165774775728763,\n\
\ \"acc_norm\": 0.4896551724137931,\n \"acc_norm_stderr\": 0.04165774775728763\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.30158730158730157,\n \"acc_stderr\": 0.0236369759961018,\n \"\
acc_norm\": 0.30158730158730157,\n \"acc_norm_stderr\": 0.0236369759961018\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.31746031746031744,\n\
\ \"acc_stderr\": 0.04163453031302859,\n \"acc_norm\": 0.31746031746031744,\n\
\ \"acc_norm_stderr\": 0.04163453031302859\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \
\ \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.5645161290322581,\n \"acc_stderr\": 0.02820622559150274,\n \"\
acc_norm\": 0.5645161290322581,\n \"acc_norm_stderr\": 0.02820622559150274\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.3448275862068966,\n \"acc_stderr\": 0.033442837442804574,\n \"\
acc_norm\": 0.3448275862068966,\n \"acc_norm_stderr\": 0.033442837442804574\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.44,\n \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\"\
: 0.44,\n \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7212121212121212,\n \"acc_stderr\": 0.03501438706296781,\n\
\ \"acc_norm\": 0.7212121212121212,\n \"acc_norm_stderr\": 0.03501438706296781\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.6414141414141414,\n \"acc_stderr\": 0.034169036403915214,\n \"\
acc_norm\": 0.6414141414141414,\n \"acc_norm_stderr\": 0.034169036403915214\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.7772020725388601,\n \"acc_stderr\": 0.030031147977641538,\n\
\ \"acc_norm\": 0.7772020725388601,\n \"acc_norm_stderr\": 0.030031147977641538\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.4948717948717949,\n \"acc_stderr\": 0.02534967290683866,\n \
\ \"acc_norm\": 0.4948717948717949,\n \"acc_norm_stderr\": 0.02534967290683866\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.24814814814814815,\n \"acc_stderr\": 0.0263357394040558,\n \
\ \"acc_norm\": 0.24814814814814815,\n \"acc_norm_stderr\": 0.0263357394040558\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.5168067226890757,\n \"acc_stderr\": 0.03246013680375308,\n \
\ \"acc_norm\": 0.5168067226890757,\n \"acc_norm_stderr\": 0.03246013680375308\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.2980132450331126,\n \"acc_stderr\": 0.037345356767871984,\n \"\
acc_norm\": 0.2980132450331126,\n \"acc_norm_stderr\": 0.037345356767871984\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.7321100917431193,\n \"acc_stderr\": 0.018987462257978652,\n \"\
acc_norm\": 0.7321100917431193,\n \"acc_norm_stderr\": 0.018987462257978652\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.42592592592592593,\n \"acc_stderr\": 0.03372343271653063,\n \"\
acc_norm\": 0.42592592592592593,\n \"acc_norm_stderr\": 0.03372343271653063\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.696078431372549,\n \"acc_stderr\": 0.03228210387037893,\n \"acc_norm\"\
: 0.696078431372549,\n \"acc_norm_stderr\": 0.03228210387037893\n },\n\
\ \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\":\
\ 0.7130801687763713,\n \"acc_stderr\": 0.029443773022594693,\n \"\
acc_norm\": 0.7130801687763713,\n \"acc_norm_stderr\": 0.029443773022594693\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6143497757847534,\n\
\ \"acc_stderr\": 0.03266842214289201,\n \"acc_norm\": 0.6143497757847534,\n\
\ \"acc_norm_stderr\": 0.03266842214289201\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.6106870229007634,\n \"acc_stderr\": 0.04276486542814591,\n\
\ \"acc_norm\": 0.6106870229007634,\n \"acc_norm_stderr\": 0.04276486542814591\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.6859504132231405,\n \"acc_stderr\": 0.042369647530410184,\n \"\
acc_norm\": 0.6859504132231405,\n \"acc_norm_stderr\": 0.042369647530410184\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.5833333333333334,\n\
\ \"acc_stderr\": 0.04766075165356461,\n \"acc_norm\": 0.5833333333333334,\n\
\ \"acc_norm_stderr\": 0.04766075165356461\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.5460122699386503,\n \"acc_stderr\": 0.0391170190467718,\n\
\ \"acc_norm\": 0.5460122699386503,\n \"acc_norm_stderr\": 0.0391170190467718\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4107142857142857,\n\
\ \"acc_stderr\": 0.04669510663875191,\n \"acc_norm\": 0.4107142857142857,\n\
\ \"acc_norm_stderr\": 0.04669510663875191\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7281553398058253,\n \"acc_stderr\": 0.044052680241409216,\n\
\ \"acc_norm\": 0.7281553398058253,\n \"acc_norm_stderr\": 0.044052680241409216\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.782051282051282,\n\
\ \"acc_stderr\": 0.02704685763071669,\n \"acc_norm\": 0.782051282051282,\n\
\ \"acc_norm_stderr\": 0.02704685763071669\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.64,\n \"acc_stderr\": 0.04824181513244218,\n \
\ \"acc_norm\": 0.64,\n \"acc_norm_stderr\": 0.04824181513244218\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7164750957854407,\n\
\ \"acc_stderr\": 0.01611731816683227,\n \"acc_norm\": 0.7164750957854407,\n\
\ \"acc_norm_stderr\": 0.01611731816683227\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.5780346820809249,\n \"acc_stderr\": 0.026589231142174263,\n\
\ \"acc_norm\": 0.5780346820809249,\n \"acc_norm_stderr\": 0.026589231142174263\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2569832402234637,\n\
\ \"acc_stderr\": 0.01461446582196633,\n \"acc_norm\": 0.2569832402234637,\n\
\ \"acc_norm_stderr\": 0.01461446582196633\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.5424836601307189,\n \"acc_stderr\": 0.028526383452142635,\n\
\ \"acc_norm\": 0.5424836601307189,\n \"acc_norm_stderr\": 0.028526383452142635\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.5852090032154341,\n\
\ \"acc_stderr\": 0.027982680459759563,\n \"acc_norm\": 0.5852090032154341,\n\
\ \"acc_norm_stderr\": 0.027982680459759563\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.5370370370370371,\n \"acc_stderr\": 0.027744313443376536,\n\
\ \"acc_norm\": 0.5370370370370371,\n \"acc_norm_stderr\": 0.027744313443376536\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.3900709219858156,\n \"acc_stderr\": 0.029097675599463926,\n \
\ \"acc_norm\": 0.3900709219858156,\n \"acc_norm_stderr\": 0.029097675599463926\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.3917861799217731,\n\
\ \"acc_stderr\": 0.01246756441814513,\n \"acc_norm\": 0.3917861799217731,\n\
\ \"acc_norm_stderr\": 0.01246756441814513\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.5183823529411765,\n \"acc_stderr\": 0.03035230339535197,\n\
\ \"acc_norm\": 0.5183823529411765,\n \"acc_norm_stderr\": 0.03035230339535197\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.5098039215686274,\n \"acc_stderr\": 0.0202239460050743,\n \
\ \"acc_norm\": 0.5098039215686274,\n \"acc_norm_stderr\": 0.0202239460050743\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6181818181818182,\n\
\ \"acc_stderr\": 0.046534298079135075,\n \"acc_norm\": 0.6181818181818182,\n\
\ \"acc_norm_stderr\": 0.046534298079135075\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.6571428571428571,\n \"acc_stderr\": 0.030387262919547735,\n\
\ \"acc_norm\": 0.6571428571428571,\n \"acc_norm_stderr\": 0.030387262919547735\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.6318407960199005,\n\
\ \"acc_stderr\": 0.03410410565495302,\n \"acc_norm\": 0.6318407960199005,\n\
\ \"acc_norm_stderr\": 0.03410410565495302\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.73,\n \"acc_stderr\": 0.0446196043338474,\n \
\ \"acc_norm\": 0.73,\n \"acc_norm_stderr\": 0.0446196043338474\n },\n\
\ \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.42771084337349397,\n\
\ \"acc_stderr\": 0.03851597683718534,\n \"acc_norm\": 0.42771084337349397,\n\
\ \"acc_norm_stderr\": 0.03851597683718534\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.695906432748538,\n \"acc_stderr\": 0.03528211258245229,\n\
\ \"acc_norm\": 0.695906432748538,\n \"acc_norm_stderr\": 0.03528211258245229\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3561811505507956,\n\
\ \"mc1_stderr\": 0.01676379072844634,\n \"mc2\": 0.5106039601116779,\n\
\ \"mc2_stderr\": 0.015454187246822623\n }\n}\n```"
repo_url: https://huggingface.co/edor/Stable-Platypus2-mini-7B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|arc:challenge|25_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hellaswag|10_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-16T10:44:20.574252.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-16T10:44:20.574252.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-16T10:44:20.574252.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-16T10:44:20.574252.parquet'
- config_name: results
data_files:
- split: 2023_08_16T10_44_20.574252
path:
- results_2023-08-16T10:44:20.574252.parquet
- split: latest
path:
- results_2023-08-16T10:44:20.574252.parquet
---
# Dataset Card for Evaluation run of edor/Stable-Platypus2-mini-7B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/edor/Stable-Platypus2-mini-7B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [edor/Stable-Platypus2-mini-7B](https://huggingface.co/edor/Stable-Platypus2-mini-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_edor__Stable-Platypus2-mini-7B",
"harness_truthfulqa_mc_0",
	split="latest")
```
## Latest results
These are the [latest results from run 2023-08-16T10:44:20.574252](https://huggingface.co/datasets/open-llm-leaderboard/details_edor__Stable-Platypus2-mini-7B/blob/main/results_2023-08-16T10%3A44%3A20.574252.json) (note that there might be results for other tasks in the repository if successive evals didn't cover the same tasks; you can find each task's results in its timestamped split and in the "latest" split of each configuration):
```python
{
"all": {
"acc": 0.519238503099194,
"acc_stderr": 0.03487887571401071,
"acc_norm": 0.5229272130971759,
"acc_norm_stderr": 0.03486396112216957,
"mc1": 0.3561811505507956,
"mc1_stderr": 0.01676379072844634,
"mc2": 0.5106039601116779,
"mc2_stderr": 0.015454187246822623
},
"harness|arc:challenge|25": {
"acc": 0.5238907849829352,
"acc_stderr": 0.014594701798071654,
"acc_norm": 0.5486348122866894,
"acc_norm_stderr": 0.014542104569955267
},
"harness|hellaswag|10": {
"acc": 0.5965943039235212,
"acc_stderr": 0.004895782107786497,
"acc_norm": 0.7894841665006971,
"acc_norm_stderr": 0.0040684184172756635
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.37,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.37,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.4888888888888889,
"acc_stderr": 0.04318275491977976,
"acc_norm": 0.4888888888888889,
"acc_norm_stderr": 0.04318275491977976
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.40789473684210525,
"acc_stderr": 0.03999309712777471,
"acc_norm": 0.40789473684210525,
"acc_norm_stderr": 0.03999309712777471
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.5,
"acc_stderr": 0.050251890762960605,
"acc_norm": 0.5,
"acc_norm_stderr": 0.050251890762960605
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.5924528301886792,
"acc_stderr": 0.03024223380085449,
"acc_norm": 0.5924528301886792,
"acc_norm_stderr": 0.03024223380085449
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.5416666666666666,
"acc_stderr": 0.04166666666666666,
"acc_norm": 0.5416666666666666,
"acc_norm_stderr": 0.04166666666666666
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.38,
"acc_stderr": 0.04878317312145632,
"acc_norm": 0.38,
"acc_norm_stderr": 0.04878317312145632
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.39,
"acc_stderr": 0.04902071300001974,
"acc_norm": 0.39,
"acc_norm_stderr": 0.04902071300001974
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.32,
"acc_stderr": 0.04688261722621504,
"acc_norm": 0.32,
"acc_norm_stderr": 0.04688261722621504
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.4682080924855491,
"acc_stderr": 0.03804749744364764,
"acc_norm": 0.4682080924855491,
"acc_norm_stderr": 0.03804749744364764
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.27450980392156865,
"acc_stderr": 0.04440521906179327,
"acc_norm": 0.27450980392156865,
"acc_norm_stderr": 0.04440521906179327
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.61,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.61,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.4765957446808511,
"acc_stderr": 0.03265019475033582,
"acc_norm": 0.4765957446808511,
"acc_norm_stderr": 0.03265019475033582
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.2631578947368421,
"acc_stderr": 0.041424397194893624,
"acc_norm": 0.2631578947368421,
"acc_norm_stderr": 0.041424397194893624
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.4896551724137931,
"acc_stderr": 0.04165774775728763,
"acc_norm": 0.4896551724137931,
"acc_norm_stderr": 0.04165774775728763
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.30158730158730157,
"acc_stderr": 0.0236369759961018,
"acc_norm": 0.30158730158730157,
"acc_norm_stderr": 0.0236369759961018
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.31746031746031744,
"acc_stderr": 0.04163453031302859,
"acc_norm": 0.31746031746031744,
"acc_norm_stderr": 0.04163453031302859
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.32,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.32,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.5645161290322581,
"acc_stderr": 0.02820622559150274,
"acc_norm": 0.5645161290322581,
"acc_norm_stderr": 0.02820622559150274
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.3448275862068966,
"acc_stderr": 0.033442837442804574,
"acc_norm": 0.3448275862068966,
"acc_norm_stderr": 0.033442837442804574
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.44,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.44,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7212121212121212,
"acc_stderr": 0.03501438706296781,
"acc_norm": 0.7212121212121212,
"acc_norm_stderr": 0.03501438706296781
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.6414141414141414,
"acc_stderr": 0.034169036403915214,
"acc_norm": 0.6414141414141414,
"acc_norm_stderr": 0.034169036403915214
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.7772020725388601,
"acc_stderr": 0.030031147977641538,
"acc_norm": 0.7772020725388601,
"acc_norm_stderr": 0.030031147977641538
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.4948717948717949,
"acc_stderr": 0.02534967290683866,
"acc_norm": 0.4948717948717949,
"acc_norm_stderr": 0.02534967290683866
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.24814814814814815,
"acc_stderr": 0.0263357394040558,
"acc_norm": 0.24814814814814815,
"acc_norm_stderr": 0.0263357394040558
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.5168067226890757,
"acc_stderr": 0.03246013680375308,
"acc_norm": 0.5168067226890757,
"acc_norm_stderr": 0.03246013680375308
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.2980132450331126,
"acc_stderr": 0.037345356767871984,
"acc_norm": 0.2980132450331126,
"acc_norm_stderr": 0.037345356767871984
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7321100917431193,
"acc_stderr": 0.018987462257978652,
"acc_norm": 0.7321100917431193,
"acc_norm_stderr": 0.018987462257978652
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.42592592592592593,
"acc_stderr": 0.03372343271653063,
"acc_norm": 0.42592592592592593,
"acc_norm_stderr": 0.03372343271653063
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.696078431372549,
"acc_stderr": 0.03228210387037893,
"acc_norm": 0.696078431372549,
"acc_norm_stderr": 0.03228210387037893
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7130801687763713,
"acc_stderr": 0.029443773022594693,
"acc_norm": 0.7130801687763713,
"acc_norm_stderr": 0.029443773022594693
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6143497757847534,
"acc_stderr": 0.03266842214289201,
"acc_norm": 0.6143497757847534,
"acc_norm_stderr": 0.03266842214289201
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.6106870229007634,
"acc_stderr": 0.04276486542814591,
"acc_norm": 0.6106870229007634,
"acc_norm_stderr": 0.04276486542814591
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.6859504132231405,
"acc_stderr": 0.042369647530410184,
"acc_norm": 0.6859504132231405,
"acc_norm_stderr": 0.042369647530410184
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.5833333333333334,
"acc_stderr": 0.04766075165356461,
"acc_norm": 0.5833333333333334,
"acc_norm_stderr": 0.04766075165356461
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.5460122699386503,
"acc_stderr": 0.0391170190467718,
"acc_norm": 0.5460122699386503,
"acc_norm_stderr": 0.0391170190467718
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4107142857142857,
"acc_stderr": 0.04669510663875191,
"acc_norm": 0.4107142857142857,
"acc_norm_stderr": 0.04669510663875191
},
"harness|hendrycksTest-management|5": {
"acc": 0.7281553398058253,
"acc_stderr": 0.044052680241409216,
"acc_norm": 0.7281553398058253,
"acc_norm_stderr": 0.044052680241409216
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.782051282051282,
"acc_stderr": 0.02704685763071669,
"acc_norm": 0.782051282051282,
"acc_norm_stderr": 0.02704685763071669
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.64,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.64,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7164750957854407,
"acc_stderr": 0.01611731816683227,
"acc_norm": 0.7164750957854407,
"acc_norm_stderr": 0.01611731816683227
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.5780346820809249,
"acc_stderr": 0.026589231142174263,
"acc_norm": 0.5780346820809249,
"acc_norm_stderr": 0.026589231142174263
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.2569832402234637,
"acc_stderr": 0.01461446582196633,
"acc_norm": 0.2569832402234637,
"acc_norm_stderr": 0.01461446582196633
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.5424836601307189,
"acc_stderr": 0.028526383452142635,
"acc_norm": 0.5424836601307189,
"acc_norm_stderr": 0.028526383452142635
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.5852090032154341,
"acc_stderr": 0.027982680459759563,
"acc_norm": 0.5852090032154341,
"acc_norm_stderr": 0.027982680459759563
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.5370370370370371,
"acc_stderr": 0.027744313443376536,
"acc_norm": 0.5370370370370371,
"acc_norm_stderr": 0.027744313443376536
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.3900709219858156,
"acc_stderr": 0.029097675599463926,
"acc_norm": 0.3900709219858156,
"acc_norm_stderr": 0.029097675599463926
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.3917861799217731,
"acc_stderr": 0.01246756441814513,
"acc_norm": 0.3917861799217731,
"acc_norm_stderr": 0.01246756441814513
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.5183823529411765,
"acc_stderr": 0.03035230339535197,
"acc_norm": 0.5183823529411765,
"acc_norm_stderr": 0.03035230339535197
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.5098039215686274,
"acc_stderr": 0.0202239460050743,
"acc_norm": 0.5098039215686274,
"acc_norm_stderr": 0.0202239460050743
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6181818181818182,
"acc_stderr": 0.046534298079135075,
"acc_norm": 0.6181818181818182,
"acc_norm_stderr": 0.046534298079135075
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.6571428571428571,
"acc_stderr": 0.030387262919547735,
"acc_norm": 0.6571428571428571,
"acc_norm_stderr": 0.030387262919547735
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.6318407960199005,
"acc_stderr": 0.03410410565495302,
"acc_norm": 0.6318407960199005,
"acc_norm_stderr": 0.03410410565495302
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.73,
"acc_stderr": 0.0446196043338474,
"acc_norm": 0.73,
"acc_norm_stderr": 0.0446196043338474
},
"harness|hendrycksTest-virology|5": {
"acc": 0.42771084337349397,
"acc_stderr": 0.03851597683718534,
"acc_norm": 0.42771084337349397,
"acc_norm_stderr": 0.03851597683718534
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.695906432748538,
"acc_stderr": 0.03528211258245229,
"acc_norm": 0.695906432748538,
"acc_norm_stderr": 0.03528211258245229
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3561811505507956,
"mc1_stderr": 0.01676379072844634,
"mc2": 0.5106039601116779,
"mc2_stderr": 0.015454187246822623
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
qazisaad/llama-2-optimized-product-titles-esci-test-temp | ---
dataset_info:
features:
- name: level_0
dtype: int64
- name: index
dtype: int64
- name: product_title
dtype: string
- name: average_score
dtype: float64
- name: total_score
dtype: float64
- name: text
dtype: string
- name: preds
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 11865828
num_examples: 3780
download_size: 2246163
dataset_size: 11865828
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "llama-2-optimized-product-titles-esci-test-temp"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
autoevaluate/autoeval-staging-eval-squad_v2-squad_v2-76c05b-14906069 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- squad_v2
eval_info:
task: extractive_question_answering
model: deepset/xlm-roberta-large-squad2
metrics: ['bertscore']
dataset_name: squad_v2
dataset_config: squad_v2
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/xlm-roberta-large-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model. |
Hitochu/hate-speech-fr-en | ---
license: wtfpl
---
```
{
"label": {
0: "normal",
1: "offensive",
2: "hateful",
3: "abusive",
4: "fearful",
5: "disrespectful",
99: "unknown"
},
"tweet": <string>
}
``` |
yentinglin/ASR-correction-pilot | ---
license: apache-2.0
---
# Dataset Name: Pilot dataset for ASR corrections
## Description
Consolidated from [PeacefulData/HyPoradise-v0](https://huggingface.co/datasets/PeacefulData/HyPoradise-v0)
## Structure
### Data Split
- **Training Data**: 168,460 entries
- **Test Data**: 6,992 entries
### Columns
- `hypothesis`: N-best hypotheses from beam search.
- `transcription`: Corrected ASR transcription.
- `hypothesis_concatenated`: An alternative version of the text output.
- `source`: The source of the text entry, indicating the origin dataset.
- `score`: An acoustic model score (not all entries have this).
### Source Distribution
- **Training Sources**:
- `train_cv`: 47,293 entries
- `train_wsj`: 37,514 entries
- `train_wsj_score`: 37,514 entries
- `train_swbd`: 36,539 entries
- `train_chime4`: 9,600 entries
- **Test Sources**:
- `test_swbd`: 2,000 entries
- `test_cv`: 2,000 entries
- `test_chime4`: 1,320 entries
- `test_wsj`: 836 entries
- `test_wsj_score`: 836 entries
## Access
The dataset can be accessed and downloaded through the HuggingFace Datasets library. Use the following command to load the dataset:
```python
from datasets import load_dataset
dataset = load_dataset("yentinglin/ASR-correction-pilot")
```
## Acknowledgments
Thanks to https://huggingface.co/datasets/PeacefulData/HyPoradise-v0 for sharing this dataset.
|
PDBEurope/protein_structure_NER_model_v1.2 | ---
license: mit
language:
- en
tags:
- biology
- protein structure
- token classification
configs:
- config_name: protein_structure_NER_model_v1.2
data_files:
- split: train
path: "annotation_IOB/train.tsv"
- split: dev
path: "annotation_IOB/dev.tsv"
- split: test
path: "annotation_IOB/test.tsv"
---
## Overview
This data was used to train the model:
https://huggingface.co/PDBEurope/BiomedNLP-PubMedBERT-ProteinStructure-NER-v1.2
There are 19 different entity types in this dataset:
"chemical", "complex_assembly", "evidence", "experimental_method", "gene", "mutant",
"oligomeric_state", "protein", "protein_state", "protein_type", "ptm", "residue_name",
"residue_name_number","residue_number", "residue_range", "site", "species", "structure_element",
"taxonomy_domain"
The data prepared as IOB-formatted input was used during training, development
and testing. Additional data formats such as JSON and XML, as well as CSV files, are
also available and are described below.
Annotation was carried out with the free annotation tool TeamTat (https://www.teamtat.org/), and
documents were downloaded as BioC XML before being converted to IOB, annotation-only JSON and CSV formats.
The number of annotations and sentences in each file is given below:
| document ID | number of annotations in BioC XML | number of annotations in IOB/JSON/CSV | number of sentences |
| --- | --- | --- | --- |
| PMC4850273 | 1121 | 1121 | 204 |
| PMC4784909 | 865 | 865 | 204 |
| PMC4850288 | 716 | 708 | 146 |
| PMC4887326 | 933 | 933 | 152 |
| PMC4833862 | 1044 | 1044 | 192 |
| PMC4832331 | 739 | 718 | 134 |
| PMC4852598 | 1229 | 1218 | 250 |
| PMC4786784 | 1549 | 1549 | 232 |
| PMC4848090 | 987 | 985 | 191 |
| PMC4792962 | 1268 | 1268 | 256 |
| total | 10451 | 10409 | 1961 |
Documents and annotations are most easily viewed by taking the BioC XML files and opening
them in the free annotation tool TeamTat. More about the BioC
format can be found here: https://bioc.sourceforge.net/
## Raw BioC XML files
These are the raw, un-annotated XML files for the publications in the dataset in BioC format.
The files are found in the directory: "raw_BioC_XML".
There is one file for each document and they follow standard naming
"unique PubMedCentral ID"_raw.xml.
## Annotations in IOB format
The IOB-formatted files can be found in the directory: "annotation_IOB"
The four files are as follows:
* all.tsv --> all sentences and annotations used to create model
"PDBEurope/BiomedNLP-PubMedBERT-ProteinStructure-NER-v1.2"; 1961 sentences
* train.tsv --> training subset of the data; 1372 sentences
* dev.tsv --> development subset of the data; 294 sentences
* test.tsv --> testing subset of the data; 295 sentences
The total number of annotations is: 10409
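As a quick illustration, the IOB files can be parsed with plain Python. This is only a sketch: it assumes each line holds two tab-separated columns (token, tag) and that sentences are separated by blank lines — check this against the actual `*.tsv` files before relying on it.

```python
# Sketch of a CoNLL-style IOB reader; the two-column (token \t tag) layout
# and blank-line sentence separators are assumptions, not guaranteed by the files.
def read_iob(lines):
    sentences, tokens, tags = [], [], []
    for line in lines:
        line = line.rstrip("\n")
        if not line:  # a blank line ends the current sentence
            if tokens:
                sentences.append((tokens, tags))
                tokens, tags = [], []
            continue
        token, tag = line.split("\t")[:2]
        tokens.append(token)
        tags.append(tag)
    if tokens:  # flush the final sentence
        sentences.append((tokens, tags))
    return sentences

example = [
    "Coenzyme\tB-chemical",
    "A\tI-chemical",
    "recycling\tO",
    "",
    "Human\tB-species",
]
print(read_iob(example))
```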
## Annotations in BioC JSON
The BioC-formatted JSON files of the publications have been downloaded from the annotation
tool TeamTat. The files are found in the directory: "annotated_BioC_JSON"
There is one file for each document and they follow standard naming
"unique PubMedCentral ID"_ann.json
Each document JSON contains the following relevant keys:
* "sourceid" --> giving the numerical part of the unique PubMedCentral ID
* "text" --> containing the complete raw text of the publication as a string
* "denotations" --> containing a list of all the annotations for the text
Each annotation is a dictionary with the following keys:
* "span" --> gives the start and end of the annotation span defined by sub keys:
* "begin" --> character start position of annotation
* "end" --> character end position of annotation
* "obj" --> a string containing a number of terms separated by ","; in order, the
            terms give: entity type, reference to ontology, annotator,
            time stamp
* "id" --> unique annotation ID
Here is an example:
```json
[{"sourceid":"4784909",
"sourcedb":"",
"project":"",
"target":"",
"text":"",
"denotations":[{"span":{"begin":24,
"end":34},
"obj":"chemical,CHEBI:,melaniev@ebi.ac.uk,2023-03-21T15:19:42Z",
"id":"4500"},
{"span":{"begin":50,
"end":59},
"obj":"taxonomy_domain,DUMMY:,melaniev@ebi.ac.uk,2023-03-21T15:15:03Z",
"id":"1281"}]
}
]
```
## Annotations in BioC XML
The BioC-formatted XML files of the publications have been downloaded from the annotation
tool TeamTat. The files are found in the directory: "annotated_BioC_XML"
There is one file for each document and they follow the standard naming
"unique PubMedCentral ID"_ann.xml
The key XML tags for visualising the annotations in TeamTat, as well as for extracting
them to create the training data, are "passage" and "offset". The "passage" tag encloses a
text passage or paragraph to which the annotations are linked. "Offset" gives the passage/
paragraph offset and makes it possible to determine the starting and ending character positions of the
annotations. The tag "text" encloses the raw text of the passage.
Each annotation in the XML file is tagged as below:
* "annotation id=" --> giving the unique ID of the annotation
* "infon key="type"" --> giving the entity type of the annotation
* "infon key="identifier"" --> giving a reference to an ontology for the annotation
* "infon key="annotator"" --> giving the annotator
* "infon key="updated_at"" --> providing a time stamp for annotation creation/update
* "location" --> start and end character positions for the annotated text span
* "offset" --> start character position as defined by offset value
* "length" --> length of the annotation span; the sum of "offset" and "length" gives
               the end character position
Here is a basic example of what the BioC XML looks like. Additional tags for document
management are not given. Please refer to the documentation to find out more.
```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE collection SYSTEM "BioC.dtd">
<collection>
<source>PMC</source>
<date>20140719</date>
<key>pmc.key</key>
<document>
<id>4784909</id>
<passage>
<offset>0</offset>
<text>The Structural Basis of Coenzyme A Recycling in a Bacterial Organelle</text>
<annotation id="4500">
<infon key="type">chemical</infon>
<infon key="identifier">CHEBI:</infon>
<infon key="annotator">melaniev@ebi.ac.uk</infon>
<infon key="updated_at">2023-03-21T15:19:42Z</infon>
<location offset="24" length="10"/>
<text>Coenzyme A</text>
</annotation>
</passage>
</document>
</collection>
```
## Annotations in CSV
The annotations and the relevant sentences they were found in have also been made
available as tab-separated CSV files, one for each publication in the dataset. The files can
be found in the directory "annotation_CSV". Each file is named "unique PubMedCentral ID".csv.
The column labels in the CSV files are as follows:
* "anno_start" --> character start position of the annotation
* "anno_end" --> character end position of the annotation
* "anno_text" --> text covered by the annotation
* "entity_type" --> entity type of the annotation
* "sentence" --> sentence text in which the annotation was found
* "section" --> publication section in which the annotation was found
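As a hedged sketch, one of these files can be read with Python's built-in `csv` module; the example below assumes a header row and tab separation, and the sample row is invented to match the column layout described above (it is not taken from the data):

```python
import csv
import io

# io.StringIO stands in for opening a real "unique PubMedCentral ID".csv file;
# the row contents here are illustrative only.
sample = io.StringIO(
    "anno_start\tanno_end\tanno_text\tentity_type\tsentence\tsection\n"
    "24\t34\tCoenzyme A\tchemical\tThe Structural Basis of Coenzyme A Recycling\tTITLE\n"
)
rows = list(csv.DictReader(sample, delimiter="\t"))
for row in rows:
    span = (int(row["anno_start"]), int(row["anno_end"]))
    print(span, row["entity_type"], row["anno_text"])
```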
## Annotations in JSON
A combined JSON file was created only containing the relevant sentences and associated
annotations for each publication in the dataset. The file can be found in directory
"annotation_JSON" under the name "annotations.json".
The following keys are used:
* "PMC4850273" --> unique PubMedCentral ID of the publication
* "annotations" --> list of dictionaries for the relevant, annotated sentences of the
document; each dictionary has the following sub keys
* "sid" --> unique sentence ID
* "sent" --> sentence text as string
* "section" --> publication section the sentence is in
* "ner" --> nested list of annotations; each sublist contains the following items:
start character position, end character position, annotation text,
entity type
Here is an example of a sentence and its annotations:
```json
{"PMC4850273": {"annotations":
[{"sid": 0,
"sent": "Molecular Dissection of Xyloglucan Recognition in a Prominent Human Gut Symbiont",
"section": "TITLE",
"ner": [
[24,34,"Xyloglucan","chemical"],
[62,67,"Human","species"],]
},]
}}
``` |
rombodawg/LosslessMegaCodeTrainingV3_Tiny | ---
license: other
---
This is a new, experimental version of the LosslessMegacodeTraining series. It is similar to version 3 but uses only the most refined parts of the dataset.
The content of this dataset is roughly 80% coding instruction data and 20% non-coding instruction data, amounting to 650,000 evol-instruction-formatted lines of data.
The purpose of having 20% non-coding instruction data in the dataset is to preserve logic and reasoning skills within the model while training on coding. The lack of such skills has been observed to be a major issue with coding models such as Wizardcoder-15b and NewHope, but training models on this dataset alleviates that issue while also giving similar levels of coding knowledge.
This dataset is a combination of the following datasets:
- https://huggingface.co/datasets/rombodawg/Platypus_Evol
- https://huggingface.co/datasets/rombodawg/Rombodawgs_commitpackft_Evolinstruct_Converted
- https://huggingface.co/datasets/rombodawg/airoboros-2.1_general_purpose
- https://huggingface.co/datasets/shahules786/megacode-best |
arthurmluz/GPTextSum2_data-xlsum_cstnews_1024_results | ---
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
- name: summary
dtype: string
- name: gen_summary
dtype: string
- name: rouge
struct:
- name: rouge1
dtype: float64
- name: rouge2
dtype: float64
- name: rougeL
dtype: float64
- name: rougeLsum
dtype: float64
- name: bert
struct:
- name: f1
sequence: float64
- name: hashcode
dtype: string
- name: precision
sequence: float64
- name: recall
sequence: float64
- name: moverScore
dtype: float64
splits:
- name: validation
num_bytes: 91939
num_examples: 20
download_size: 89878
dataset_size: 91939
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
# Dataset Card for "gptextsum2_data-xlsum_cstnews_1024_results"
rouge= {'rouge1': 0.39418346930184295, 'rouge2': 0.17965035175767424, 'rougeL': 0.2455202016037282, 'rougeLsum': 0.2455202016037282}
bert= {'precision': 0.7633351445198059, 'recall': 0.7100760132074356, 'f1': 0.7354371815919876}
mover= 0.6302502833672502 |
joshbaradia/my_orca | ---
license: apache-2.0
---
|
Lollitor/similar | ---
dataset_info:
config_name: Lollitor
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 303
num_examples: 7
download_size: 1067
dataset_size: 303
configs:
- config_name: Lollitor
data_files:
- split: train
path: Lollitor/train-*
---
# Dataset Card for "similar"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
coelhobrbr/bolinha | ---
license: wtfpl
---
|
HydraLM/partitioned_v3_standardized_05 | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: dataset_id
dtype: string
- name: unique_id
dtype: string
splits:
- name: train
num_bytes: 10155860.52533418
num_examples: 18887
download_size: 3249498
dataset_size: 10155860.52533418
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "partitioned_v3_standardized_05"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ivanzhouyq/RedPajama-Tiny | ---
language:
- en
license: apache-2.0
size_categories:
- n<1K
task_categories:
- text-generation
pretty_name: RedPajama Tiny
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
- name: meta
dtype: string
splits:
- name: train
num_bytes: 32428740
num_examples: 448
download_size: 18977230
dataset_size: 32428740
---
# Dataset Card for Dataset Name
### Dataset Summary
This is a tiny version of the [RedPajama dataset](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T).
It contains 64 samples from each of the 7 sources.
This dataset is intended for developing and testing data/training pipeline for loading the full RedPajama dataset or any general HuggingFace dataset.
It is very fast to download and easy to examine. You should not use it for training a full model, but you can use it for an overfitting test or any other sanity checks.
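Note that the `meta` column is stored as a single string (see the `dataset_info` frontmatter). If it is JSON-encoded — an assumption worth verifying against the actual rows — it can be decoded per row like this (the sample row below is invented for illustration):

```python
import json

# Invented sample row mimicking the schema; the real `meta` payload
# may contain different keys, so treat this as a sketch only.
row = {
    "text": "An example document ...",
    "meta": '{"url": "https://example.com", "source": "cc", "language": "en"}',
}
meta = json.loads(row["meta"])
print(meta["source"], meta["language"])
```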
## Dataset Structure
The dataset structure is as follows:
```
{
"text": ...,
"meta": {"url": "...", "timestamp": "...", "source": "...", "language": "...", ...}
}
```
|
jcramirezpr/dreambooth-hackathon-images | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 15092589.0
num_examples: 12
download_size: 15084194
dataset_size: 15092589.0
---
# Dataset Card for "dreambooth-hackathon-images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tyzhu/eval_tag_nq_dev_v10_first | ---
dataset_info:
features:
- name: question
dtype: string
- name: title
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
- name: answers
struct:
- name: answer_start
sequence: 'null'
- name: text
sequence: string
- name: id
dtype: string
- name: titles
dtype: string
splits:
- name: train
num_bytes: 3200
num_examples: 10
- name: validation
num_bytes: 2312059
num_examples: 6515
download_size: 1383725
dataset_size: 2315259
---
# Dataset Card for "eval_tag_nq_dev_v10_first"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
flow3rdown/MARS | ---
language:
- en
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
CyberHarem/nakaseko_kaori_soundeuphonium | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of Nakaseko Kaori/中世古香織 (Sound! Euphonium)
This is the dataset of Nakaseko Kaori/中世古香織 (Sound! Euphonium), containing 291 images and their tags.
The core tags of this character are `brown_hair, short_hair, mole, mole_under_eye, red_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 291 | 177.55 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nakaseko_kaori_soundeuphonium/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 291 | 177.43 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nakaseko_kaori_soundeuphonium/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 522 | 297.25 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nakaseko_kaori_soundeuphonium/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/nakaseko_kaori_soundeuphonium',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 6 |  |  |  |  |  | blush, solo_focus, 2girls, band_uniform, blurry, black_hair, looking_at_viewer, purple_eyes, white_gloves, holding, long_hair, shako_cap, sleeveless, smile, trumpet |
| 1 | 5 |  |  |  |  |  | blush, shoulder_cutout, solo_focus, 2girls, bag, necklace, purple_eyes, blurry_foreground, collarbone, grey_shirt, long_hair, closed_mouth, skirt |
| 2 | 6 |  |  |  |  |  | blush, green_neckerchief, indoors, kitauji_high_school_uniform, serafuku, white_sailor_collar, brown_shirt, closed_mouth, solo_focus, window, 2girls, holding, blurry, curtains, long_sleeves, sitting, smile |
| 3 | 11 |  |  |  |  |  | 1girl, brown_shirt, green_neckerchief, kitauji_high_school_uniform, serafuku, solo, white_sailor_collar, blush, closed_mouth, looking_at_viewer, smile, outdoors, blurry_background, upper_body |
| 4 | 17 |  |  |  |  |  | 1girl, blue_sailor_collar, blush, kitauji_high_school_uniform, serafuku, white_shirt, closed_mouth, solo, green_neckerchief, looking_at_viewer, smile, blurry_background, indoors, pink_eyes |
| 5 | 7 |  |  |  |  |  | 1girl, brown_shirt, kitauji_high_school_uniform, serafuku, solo, trumpet, holding_instrument, white_sailor_collar, green_neckerchief, blush, long_sleeves, playing_instrument, indoors |
| 6 | 7 |  |  |  |  |  | 1girl, brown_shirt, brown_skirt, green_neckerchief, indoors, kitauji_high_school_uniform, long_sleeves, pleated_skirt, serafuku, trumpet, white_sailor_collar, holding_instrument, standing, solo, blurry, open_mouth, locker, smile, window |
| 7 | 5 |  |  |  |  |  | 1girl, blush, brown_shirt, brown_skirt, green_neckerchief, kitauji_high_school_uniform, kneehighs, long_sleeves, pleated_skirt, school_bag, solo, standing, white_sailor_collar, white_socks, brown_serafuku, from_side, open_mouth, outdoors, tree, brown_eyes, leaning_forward, smile, closed_eyes |
| 8 | 10 |  |  |  |  |  | kitauji_high_school_uniform, long_hair, serafuku, blush, green_neckerchief, short_sleeves, 2girls, blue_sailor_collar, solo_focus, white_shirt, blue_skirt, open_mouth, pleated_skirt, black_hair, indoors, school_bag |
| 9 | 9 |  |  |  |  |  | outdoors, solo_focus, blush, day, white_bikini, 1girl, cleavage, frilled_bikini, medium_breasts, blurry_background, cloud, multiple_girls, sky, smile, upper_body |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | blush | solo_focus | 2girls | band_uniform | blurry | black_hair | looking_at_viewer | purple_eyes | white_gloves | holding | long_hair | shako_cap | sleeveless | smile | trumpet | shoulder_cutout | bag | necklace | blurry_foreground | collarbone | grey_shirt | closed_mouth | skirt | green_neckerchief | indoors | kitauji_high_school_uniform | serafuku | white_sailor_collar | brown_shirt | window | curtains | long_sleeves | sitting | 1girl | solo | outdoors | blurry_background | upper_body | blue_sailor_collar | white_shirt | pink_eyes | holding_instrument | playing_instrument | brown_skirt | pleated_skirt | standing | open_mouth | locker | kneehighs | school_bag | white_socks | brown_serafuku | from_side | tree | brown_eyes | leaning_forward | closed_eyes | short_sleeves | blue_skirt | day | white_bikini | cleavage | frilled_bikini | medium_breasts | cloud | multiple_girls | sky |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------------|:---------|:---------------|:---------|:-------------|:--------------------|:--------------|:---------------|:----------|:------------|:------------|:-------------|:--------|:----------|:------------------|:------|:-----------|:--------------------|:-------------|:-------------|:---------------|:--------|:--------------------|:----------|:------------------------------|:-----------|:----------------------|:--------------|:---------|:-----------|:---------------|:----------|:--------|:-------|:-----------|:--------------------|:-------------|:---------------------|:--------------|:------------|:---------------------|:---------------------|:--------------|:----------------|:-----------|:-------------|:---------|:------------|:-------------|:--------------|:-----------------|:------------|:-------|:-------------|:------------------|:--------------|:----------------|:-------------|:------|:---------------|:-----------|:-----------------|:-----------------|:--------|:-----------------|:------|
| 0 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | X | | | | | X | | | X | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 6 |  |  |  |  |  | X | X | X | | X | | | | | X | | | | X | | | | | | | | X | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 11 |  |  |  |  |  | X | | | | | | X | | | | | | | X | | | | | | | | X | | X | | X | X | X | X | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 17 |  |  |  |  |  | X | | | | | | X | | | | | | | X | | | | | | | | X | | X | X | X | X | | | | | | | X | X | | X | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 7 |  |  |  |  |  | X | | | | | | | | | | | | | | X | | | | | | | | | X | X | X | X | X | X | | | X | | X | X | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 7 |  |  |  |  |  | | | | | X | | | | | | | | | X | X | | | | | | | | | X | X | X | X | X | X | X | | X | | X | X | | | | | | | X | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | |
| 7 | 5 |  |  |  |  |  | X | | | | | | | | | | | | | X | | | | | | | | | | X | | X | | X | X | | | X | | X | X | X | | | | | | | | X | X | X | X | | X | X | X | X | X | X | X | X | X | | | | | | | | | | |
| 8 | 10 |  |  |  |  |  | X | X | X | | | X | | | | | X | | | | | | | | | | | | | X | X | X | X | | | | | | | | | | | | X | X | | | | | X | | X | | | X | | | | | | | | X | X | | | | | | | | |
| 9 | 9 |  |  |  |  |  | X | X | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | X | | X | X | X | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X |
|
missvector/asd-qa-train | ---
license: mit
dataset_info:
features:
- name: question
dtype: string
- name: answers
struct:
- name: answer_end
dtype: int64
- name: answer_start
dtype: int64
- name: text
dtype: string
- name: paragraph
dtype: string
splits:
- name: train
num_bytes: 3060746
num_examples: 2593
download_size: 450478
dataset_size: 3060746
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for The ASD QA Dataset (train set)
## Dataset Description
- **Repository:** https://github.com/vifirsanova/empi
### Dataset Summary
A dataset for question-answering used for building an informational Russian language chatbot for the inclusion of people with autism spectrum disorder and Asperger syndrome in particular, based on data from the following website: https://aspergers.ru.
### Languages
Russian
## Dataset Structure
The dataset inherits SQuAD 2.0 structure.
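Because each `answers` struct stores both `answer_start` and `answer_end` offsets into `paragraph`, span consistency can be checked with plain dictionaries. The record below is a hypothetical English stand-in that mirrors the schema, not an actual row from the dataset:

```python
# Validate a SQuAD-style record: the span sliced out of the paragraph
# by (answer_start, answer_end) must equal the stored answer text.
def answer_span_is_consistent(record):
    ans = record["answers"]
    return record["paragraph"][ans["answer_start"]:ans["answer_end"]] == ans["text"]

# Hypothetical example record mirroring the dataset's schema.
record = {
    "question": "What does ASD stand for?",
    "paragraph": "ASD stands for autism spectrum disorder.",
    "answers": {"answer_start": 15, "answer_end": 39, "text": "autism spectrum disorder"},
}

print(answer_span_is_consistent(record))  # True
```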
### Source Data
https://aspergers.ru
### Dataset Curators
Victoria Firsanova |
ibranze/araproje_mmlu_en_conf_llama_nearestscore_true_y | ---
dataset_info:
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: validation
num_bytes: 130579.0
num_examples: 250
download_size: 79306
dataset_size: 130579.0
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
# Dataset Card for "araproje_mmlu_en_conf_llama_nearestscore_true_y"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Uncaged-Shrimp/tw-test | ---
license: cc-by-nc-nd-3.0
---
|
biglam/us_national_archives_flickr | ---
license: cc0-1.0
---
|
Norod78/jojo-stone-ocean-blip-captions-512 | ---
language: en
license: cc-by-nc-sa-4.0
size_categories:
- 1K<n<10K
pretty_name: 'JoJo''s Bizarre Adventure: Stone Ocean - Blip captions'
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 94744425.832
num_examples: 1376
download_size: 94450521
dataset_size: 94744425.832
tags:
- text-to-image
---
# Dataset Card for "jojo-stone-ocean-blip-captions-512"
## JoJo's Bizarre Adventure: Stone Ocean with Blip captions.
## Dataset contains 512x512 cropped images whose source is [jojowiki](https://jojowiki.com/Stone_Ocean_(Anime)) |
autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev-mathemakitte-7776e8-1573055858 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_dev
eval_info:
task: text_zero_shot_classification
model: facebook/opt-6.7b
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_dev
dataset_config: mathemakitten--winobias_antistereotype_dev
dataset_split: validation
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-6.7b
* Dataset: mathemakitten/winobias_antistereotype_dev
* Config: mathemakitten--winobias_antistereotype_dev
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
Jumtra/oasst1_ja | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 13784892
num_examples: 7630
download_size: 7262531
dataset_size: 13784892
---
# Dataset Card for "oasst1_ja"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ccmusic-database/acapella | ---
license: mit
task_categories:
- audio-classification
- table-question-answering
- summarization
language:
- zh
- en
tags:
- music
- art
pretty_name: Acapella Evaluation Dataset
size_categories:
- n<1K
viewer: false
---
# Dataset Card for Acapella Evaluation
This raw dataset comprises six Mandarin pop song segments performed by 22 singers, resulting in a total of 132 audio clips. Each segment includes both a verse and a chorus. Four judges from the China Conservatory of Music assess the singing across nine dimensions: pitch, rhythm, vocal range, timbre, pronunciation, vibrato, dynamics, breath control, and overall performance, using a 10-point scale. The evaluations are recorded in an Excel spreadsheet in .xls format.
## Dataset Description
- **Homepage:** <https://ccmusic-database.github.io>
- **Repository:** <https://huggingface.co/datasets/CCMUSIC/acapella_evaluation>
- **Paper:** <https://doi.org/10.5281/zenodo.5676893>
- **Leaderboard:** <https://www.modelscope.cn/datasets/ccmusic/acapella>
- **Point of Contact:** <https://www.mdpi.com/2076-3417/12/19/9931>
### Dataset Summary
Due to the original dataset comprising separate files for audio recordings and evaluation sheets, which hindered efficient data retrieval, we have consolidated the raw vocal recordings with their corresponding assessments. The dataset is divided into six segments, each representing a different song, resulting in a total of six divisions. Each segment contains 22 entries, with each entry detailing the vocal recording of an individual singer sampled at 22,050 Hz, the singer's ID, and evaluations across the nine dimensions previously mentioned. Consequently, each entry encompasses 11 columns of data. This dataset is well-suited for tasks such as vocal analysis and regression-based singing voice rating. For instance, as previously stated, the final column of each entry denotes the overall performance score, allowing the audio to be utilized as data and this score to serve as the label for regression analysis.
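As a sketch of the regression setup described above, each clip can be paired with its overall-performance score to form (data, label) examples. The field names and values here are hypothetical stand-ins for the dataset's actual columns:

```python
# Build (audio, label) pairs for singing-quality regression:
# the final column of each entry is the overall performance score.
def to_regression_pairs(entries):
    return [(e["audio"], float(e["overall_performance"])) for e in entries]

# Hypothetical entries mirroring one song split of the dataset.
entries = [
    {"audio": "song1 (1).wav", "singer_id": 1, "overall_performance": 8.5},
    {"audio": "song1 (2).wav", "singer_id": 2, "overall_performance": 6.0},
]

pairs = to_regression_pairs(entries)
print(pairs)  # [('song1 (1).wav', 8.5), ('song1 (2).wav', 6.0)]
```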
### Supported Tasks and Leaderboards
Acapella evaluation/scoring
### Languages
Chinese, English
## Maintenance
```bash
GIT_LFS_SKIP_SMUDGE=1 git clone git@hf.co:datasets/ccmusic-database/acapella
cd acapella
```
## Usage
```python
from datasets import load_dataset
dataset = load_dataset("ccmusic-database/acapella")
for i in range(1, 7):
for item in dataset[f"song{i}"]:
print(item)
```
## Dataset Structure
| audio(22050Hz) | mel(22050Hz) | singer_id | pitch / rhythm / ... / overall_performance |
| :-------------------------------------------------------------------------------------------------------------------------: | :-------------------------------: | :-------: | :----------------------------------------: |
| <audio controls src="https://huggingface.co/datasets/ccmusic-database/acapella/resolve/main/data/song1%20(16).wav"></audio> | <img src="./data/song1 (16).jpg"> | int | float(0-10) |
| ... | ... | ... | ... |
### Data Instances
.wav & .csv
### Data Fields
song, singer id, pitch, rhythm, vocal range, timbre, pronunciation, vibrato, dynamic, breath control and overall performance
### Data Splits
song1-6
## Dataset Creation
### Curation Rationale
Lack of a training dataset for the acapella scoring system
### Source Data
#### Initial Data Collection and Normalization
Zhaorui Liu, Monan Zhou
#### Who are the source language producers?
Students and judges from CCMUSIC
### Annotations
#### Annotation process
6 Mandarin song segments were sung by 22 singers, totaling 132 audio clips. Each segment consists of a verse and a chorus. Four judges evaluate the singing from nine aspects which are pitch, rhythm, vocal range, timbre, pronunciation, vibrato, dynamic, breath control and overall performance on a 10-point scale. The scores are recorded on a sheet.
#### Who are the annotators?
Judges from CCMUSIC
### Personal and Sensitive Information
Singers' and judges' names are hidden
## Considerations for Using the Data
### Social Impact of Dataset
Providing a training dataset for an acapella scoring system may support the development of related apps
### Discussion of Biases
Only for Mandarin songs
### Other Known Limitations
No starting point has been marked for the vocal
## Additional Information
### Dataset Curators
Zijin Li
### Evaluation
[Li, R.; Zhang, M. Singing-Voice Timbre Evaluations Based on Transfer Learning. Appl. Sci. 2022, 12, 9931. https://doi.org/10.3390/app12199931](https://www.mdpi.com/2076-3417/12/19/9931)
### Licensing Information
```
MIT License
Copyright (c) CCMUSIC
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
### Citation Information
```bibtex
@dataset{zhaorui_liu_2021_5676893,
author = {Monan Zhou, Shenyang Xu, Zhaorui Liu, Zhaowen Wang, Feng Yu, Wei Li and Baoqiang Han},
title = {CCMusic: an Open and Diverse Database for Chinese and General Music Information Retrieval Research},
month = {mar},
year = {2024},
publisher = {HuggingFace},
version = {1.2},
url = {https://huggingface.co/ccmusic-database}
}
```
### Contributions
Provide a training dataset for the acapella scoring system |
polytechXhf/jojos-dataset-small | ---
dataset_info:
features:
- name: image
dtype: image
- name: char_name
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 17859557.0
num_examples: 97
download_size: 17860793
dataset_size: 17859557.0
---
# Dataset Card for "jojos-dataset-small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
gmongaras/dummy_text_dataset | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1063271
num_examples: 2048
download_size: 1079397
dataset_size: 1063271
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "dummy_text_dataset"
Dummy text dataset with 2048 random sequences of characters of length 10 to 1024. |
stacked-summaries/stacked-samsum-1024 | ---
license: apache-2.0
source_datasets:
- samsum
task_categories:
- summarization
language:
- en
tags:
- stacked summaries
pretty_name: Stacked Samsum - 1024
size_categories:
- 10K<n<100K
---
# stacked samsum 1024
Created with the `stacked-booksum` repo version v0.25. It contains:
1. Original Dataset: copy of the base dataset
2. Stacked Rows: The original dataset is processed by stacking rows based on certain criteria:
- Maximum Input Length: The maximum length for input sequences is 1024 tokens in the longt5 model tokenizer.
- Maximum Output Length: The maximum length for output sequences is also 1024 tokens in the longt5 model tokenizer.
3. Special Token: The dataset utilizes the `[NEXT_CONCEPT]` token to indicate a new topic **within** the same summary. It is recommended to explicitly add this special token to your model's tokenizer before training, ensuring that it is recognized and processed correctly during downstream usage.
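The stacking step above can be sketched as follows. This is a simplified stand-in, not the `stacked-booksum` implementation: real eligibility is checked against the longt5 tokenizer's 1024-token limits, while this sketch uses whitespace word counts as a rough proxy:

```python
# Stack two (dialogue, summary) rows into one, joining the summaries
# with the [NEXT_CONCEPT] marker that signals a topic change.
NEXT_CONCEPT = "[NEXT_CONCEPT]"
MAX_LEN = 1024  # token budget; approximated here by word count

def stack_rows(row_a, row_b):
    text = row_a["dialogue"] + "\n" + row_b["dialogue"]
    summary = row_a["summary"] + f" {NEXT_CONCEPT} " + row_b["summary"]
    # Rows whose stacked input or target exceeds the budget are ineligible.
    if len(text.split()) > MAX_LEN or len(summary.split()) > MAX_LEN:
        return None
    return {"dialogue": text, "summary": summary}

a = {"dialogue": "Amanda: hi!", "summary": "Amanda greets."}
b = {"dialogue": "Tom: bye.", "summary": "Tom leaves."}
stacked = stack_rows(a, b)
print(stacked["summary"])  # Amanda greets. [NEXT_CONCEPT] Tom leaves.
```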
## stats

## dataset details
Default (train):
```python
[2022-12-04 13:19:32] INFO:root:{'num_columns': 4,
'num_rows': 14732,
'num_unique_target': 14730,
'num_unique_text': 14265,
'summary - average chars': 110.13,
'summary - average tokens': 28.693727939180015,
'text input - average chars': 511.22,
'text input - average tokens': 148.88759163725223}
```
stacked (train)
```python
[2022-12-05 00:49:04] INFO:root:stacked 14730 rows, 2 rows were ineligible
[2022-12-05 00:49:04] INFO:root:dropped 20 duplicate rows, 29442 rows remain
[2022-12-05 00:49:04] INFO:root:shuffling output with seed 182
[2022-12-05 00:49:04] INFO:root:STACKED - basic stats - train
[2022-12-05 00:49:04] INFO:root:{'num_columns': 5,
'num_rows': 29442,
'num_unique_chapters': 28975,
'num_unique_summaries': 29441,
'summary - average chars': 452.8,
'summary - average tokens': 106.46820868147545,
'text input - average chars': 1814.09,
'text input - average tokens': 528.665579783982}
``` |
bigscience-data/roots_indic-bn_indic_nlp_corpus | ---
language: bn
license: cc-by-nc-4.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_indic-bn_indic_nlp_corpus
# Indic NLP Corpus
- Dataset uid: `indic_nlp_corpus`
### Description
The IndicNLP corpus is a large-scale, general-domain corpus containing 2.7 billion words across 10 Indian languages from two language families (Indo-Aryan and Dravidian). Each language has at least 100 million words (except Oriya).
### Homepage
https://github.com/AI4Bharat/indicnlp_corpus#publicly-available-classification-datasets
### Licensing
- non-commercial use
- cc-by-nc-sa-4.0: Creative Commons Attribution Non Commercial Share Alike 4.0 International
### Speaker Locations
- Southern Asia
- India
### Sizes
- 3.4019 % of total
- 44.4368 % of indic-hi
- 64.2943 % of indic-ta
- 70.5374 % of indic-ml
- 54.2394 % of indic-te
- 55.9105 % of indic-kn
- 61.6111 % of indic-mr
- 67.2242 % of indic-pa
- 68.1470 % of indic-or
- 64.3879 % of indic-gu
- 4.1495 % of indic-bn
### BigScience processing steps
#### Filters applied to: indic-hi
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-ta
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-ml
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-te
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-kn
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-mr
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-pa
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-or
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
#### Filters applied to: indic-gu
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-bn
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
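As an illustration of what these filters do (a sketch inferred from the filter names, not BigScience's actual implementation), `filter_small_docs_bytes_300` can be read as dropping documents under 300 UTF-8 bytes, and `filter_remove_empty_docs` as dropping blank ones:

```python
# Mimic two of the filters listed above on a list of document strings.
def filter_small_docs_bytes_300(docs):
    # Keep only documents whose UTF-8 encoding is at least 300 bytes.
    return [d for d in docs if len(d.encode("utf-8")) >= 300]

def filter_remove_empty_docs(docs):
    # Drop documents that are empty or whitespace-only.
    return [d for d in docs if d.strip()]

docs = ["", "short doc", "x" * 300]
kept = filter_small_docs_bytes_300(filter_remove_empty_docs(docs))
print(len(kept))  # 1
```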
|
juanito666/data | ---
license: apache-2.0
---
|
lansinuote/nlp.5.classification | ---
dataset_info:
features:
- name: label
dtype:
class_label:
names:
'0': unacceptable
'1': acceptable
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 621111
num_examples: 8551
- name: validation
num_bytes: 77558
num_examples: 1043
- name: test
num_bytes: 78463
num_examples: 1063
download_size: 0
dataset_size: 777132
---
# Dataset Card for "nlp.5.classification"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |